Machine learning (ML) has the potential to transform multiple fields by generating highly accurate, data-driven predictions. ML modeling can help oceanographers map the sea floor, enable energy providers to find ideal locations for wind turbines, or help pharmaceutical companies accelerate drug discovery.

Unfortunately, relatively few organizations or individuals have access to the data science skills, innovative apps and powerful graphics processing unit (GPU)–based infrastructure needed for ML modeling. neurothink wants to change that. The company’s mission is to make ML “radically accessible” by providing ML as a service (MLaaS) to a broad audience.

neurothink built its MLaaS offering on VMware. The company is using VMware Cloud Foundation with VMware vSphere Bitfusion to create a pool of shared, network-accessible GPU resources. VMware Carbon Black Cloud identifies threats, while VMware Aria Operations for Applications (formerly VMware Tanzu Observability) monitors the performance of the service. With this VMware-based infrastructure, neurothink enables more organizations and individuals to tap into powerful ML capabilities.

Improving access to ML

ML and artificial intelligence (AI) capabilities are no longer the sole domain of big universities and well-funded research labs. But these powerful technologies are still not accessible to many businesses, government agencies or other organizations. Some lack the engineering skills to deploy ML applications; others lack the budget to implement the large-scale infrastructure needed to support ML modeling.

The leaders at neurothink believe that democratizing access to ML could have a revolutionary impact in many fields. “The combustion engine was an amazing invention, but innovation needs reach,” says neurothink COO Charles Donly. “It was only with the arrival of Ford’s production line, and the Model T, that the automobile started to revolutionize our lives. Fundamentally, the neurothink platform is about giving everyone—including data scientists—a better experience.” 

To make ML accessible to a broad audience, neurothink needed to build a software-based environment that would equip users with the tools to easily build, train and deploy AI models. The environment had to harness the power of numerous GPUs and enable them to be shared among customers. In addition, it had to provide consistent performance and tight security.

When building this new environment, neurothink also needed to control costs and complexity. That meant finding the right technology partners. “We want to build our name with respected companies in the AI/ML community,” says neurothink founder and CEO Brian Rogers. “It’s important we find the right companies to partner with in the early stages.”

Virtualizing GPUs with VMware

The neurothink leadership team decided to build the company’s MLaaS environment on VMware Cloud Foundation, which provides a set of software-defined services for compute, storage, networking, security, container orchestration and cloud management. neurothink worked with VMware partner Sterling to design and deploy the multifaceted solution.

The ability to use multiple capabilities from a single vendor played a key role in selecting VMware Cloud Foundation. “We knew the platform would require a complex, integrated stack of technologies,” says Akhilesh Miryala, senior cloud architect at neurothink. “We wanted to be able to reach out to a single support channel. This would increase the speed and accuracy of issue resolution.”

Using VMware vSphere Bitfusion, a capability available with VMware Cloud Foundation, neurothink virtualized GPUs, creating a pool of shared, network-accessible resources. “GPU fractionalization is the new frontier,” says Miryala. “VMware vSphere Bitfusion allows us to fractionalize a single GPU for multiple teams. And when a team is done using it, it comes back into the pool. VMware Cloud Foundation creates a key differentiator for us.”
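
To make the idea concrete, the sketch below shows what a customer’s training code might look like when it consumes a fractional, remote GPU through vSphere Bitfusion: the script itself stays an ordinary framework program, because the Bitfusion client intercepts CUDA calls and forwards them to a GPU allocated from the shared pool. The launch command and GPU fraction in the comments are illustrative assumptions, not details taken from neurothink’s environment.

```python
# Minimal PyTorch training step, shown only as an illustration.
# Under vSphere Bitfusion, an unmodified script like this would typically be
# launched through the Bitfusion client, which forwards CUDA calls to a
# remote, fractional GPU from the shared pool. The exact flags and the 0.5
# fraction below are assumptions for illustration:
#   bitfusion run -n 1 -p 0.5 -- python train_step.py

import torch
import torch.nn as nn


def main() -> None:
    # The script is unaware of Bitfusion; it simply sees a CUDA device
    # (the remote GPU fraction) or falls back to CPU when run locally.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    # Synthetic batch standing in for a customer's training data.
    inputs = torch.randn(128, 32, device=device)
    targets = torch.randn(128, 1, device=device)

    for _ in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

    print(f"device={device}, final loss={loss.item():.4f}")


if __name__ == "__main__":
    main()
```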

“Without VMware, it would have been impossible to virtualize and fractionalize our GPUs,” says Rick Rodriguez, the company’s infrastructure manager.

Virtualizing resources helps keep costs down for neurothink and enables the company to offer affordable services. “Traditionally, 85 percent of GPU resources stand idle, and only 20 percent of models make it to production. That is an overall success rate of two percent,” says Donly. “We can get that utilization … to more than 90 percent. That performance means we can bring down the cost of machine learning, and customers can increase their rate of gains in the factory or business operations. That is the benefit of building the entire platform—from hardware to user experience.”

Creating a robust, reliable and more secure service with VMware

neurothink has integrated several additional VMware technologies into the MLaaS environment, each playing a key role in delivering a robust, reliable service. For example, the company uses VMware Tanzu Kubernetes Grid to run container-based workloads, giving users greater flexibility to shift compute resources and speed up workflows.
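
As a rough sketch of what running a containerized training job on a Kubernetes platform such as Tanzu Kubernetes Grid can look like, the example below submits a batch Job through the standard Kubernetes Python client. The image name, namespace and the `nvidia.com/gpu` resource request are illustrative assumptions; the source does not describe how neurothink schedules its Bitfusion-backed GPUs inside Kubernetes.

```python
# Illustrative only: submit a containerized training job to a Kubernetes
# cluster (e.g., one provisioned by Tanzu Kubernetes Grid) using the official
# Kubernetes Python client. Names, image and GPU request are assumptions.

from kubernetes import client, config


def submit_training_job() -> None:
    config.load_kube_config()  # use load_incluster_config() when running in-cluster

    container = client.V1Container(
        name="trainer",
        image="registry.example.com/mlaas/trainer:latest",  # hypothetical image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "ml-train"}),
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="ml-train-job"),
        spec=client.V1JobSpec(template=template, backoff_limit=2),
    )

    # Create the Job; Kubernetes schedules the pod onto a node with GPU capacity.
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)


if __name__ == "__main__":
    submit_training_job()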

To help ensure constant availability and consistent performance for the service, the company uses VMware Aria Operations for Applications. Administrators can now identify issues fast and resolve them before they affect performance. “[VMware Aria Operations for Applications] helps us enormously in how we deal with these issues,” explains Miryala.
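
For readers unfamiliar with how such telemetry reaches the platform, the snippet below shows the general pattern of emitting a custom metric in the Wavefront line format (`<name> <value> <timestamp> source=<source>`) to a Wavefront proxy, which VMware Aria Operations for Applications ingests. The metric name, proxy address and utilization value are assumptions for illustration, not details of neurothink’s deployment.

```python
# Illustrative sketch: push one custom metric to VMware Aria Operations for
# Applications (formerly Tanzu Observability by Wavefront) through a Wavefront
# proxy, using the documented plain-text line format over TCP (default metrics
# port 2878). Metric name, value, source and proxy host are assumptions.

import socket
import time


def send_metric(name: str, value: float, source: str,
                proxy_host: str = "wavefront-proxy", proxy_port: int = 2878) -> None:
    """Send one point in Wavefront line format: <name> <value> <ts> source=<source>."""
    line = f"{name} {value} {int(time.time())} source={source}\n"
    with socket.create_connection((proxy_host, proxy_port), timeout=5) as sock:
        sock.sendall(line.encode("utf-8"))


if __name__ == "__main__":
    # Example: report a hypothetical GPU-pool utilization figure so dashboards
    # and alerts can surface problems before customers notice them.
    send_metric("mlaas.gpu.pool.utilization", 0.87, source="gpu-node-01")
```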

The company also uses VMware Carbon Black Cloud to help administrators monitor threats. “Carbon Black is monumental,” says Rodriguez. “It provides visibility, from endpoint to container, giving us a clear eye on security intrusions.”

Staying focused on delivering ML

By building on VMware, neurothink was able to launch its service rapidly and cost-effectively. Knitting together components from different vendors would have required much more time and money. “The cost would have been two to three times the value of the overall start-up investment,” says Donly. “We’ve been able to offload this integration expertise to VMware.”

Going forward, the neurothink team will be able to stay focused on its MLaaS offering instead of infrastructure. “The combination of VMware technologies enables us to better manage the platform,” says Rogers.

Empowering clients with simple access to ML

The newly launched MLaaS offering from neurothink enables organizations to access cutting-edge ML capabilities without having to buy or maintain their own compute power. They don’t even need to install anything on their own computers to start using ML resources.

So far, neurothink customers have represented a diverse array of fields and have explored a wide range of use cases. Early adopters have included organizations testing autonomous vehicles, searching for cures to cancer and studying manufacturing processes.

neurothink is now working with universities to encourage new ways of adopting AI and ML. “We want to build a community around the subject of data science,” Rogers says. “We believe we can encourage the sharing of ideas, create a library of data sets and demonstrate how users can build on existing models. We are also excited to announce that we recently launched our public beta, which enables a broader audience of data scientists to develop and train their models on our innovative MLaaS platform.”

The neurothink team also looks forward to further enhancing its platform with help from VMware. “VMware is the right partner for us,” says Rogers. “It brings us technology expertise in a fast-changing environment, and it welcomes our feedback. We feel our experience can help influence the next round of VMware innovation.”  ▪