
Run:AI brings virtualization to GPUs running Kubernetes workloads


In the early 2000s, VMware introduced the world to virtual servers that allowed IT to make more efficient use of idle server capacity. Today, Run:AI is introducing that same concept to GPUs running containerized machine learning projects on Kubernetes.

This should give data science teams access to more resources than they would get if they were simply allocated a fixed number of GPUs. Company CEO and co-founder Omri Geller says his company believes that part of the difficulty in getting AI projects to market is static resource allocation holding back data science teams.

“There are many times when those important and expensive compute resources are sitting idle, while at the same time, other users who need more compute power to run more experiments don’t have access to available resources because they are part of a static assignment,” Geller explained.

To solve the problem of static resource allocation, Run:AI virtualizes those GPU resources, whether on premises or in the cloud, and lets IT define by policy how they should be divided.
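For context on the limitation Run:AI is working around: in stock Kubernetes, the NVIDIA device plugin allocates GPUs only as whole devices, and a pod holds its GPU for its entire lifetime. The sketch below is an ordinary Kubernetes pod spec, not Run:AI's API (which the article does not detail); it illustrates the static assignment Geller describes.

```yaml
# Stock Kubernetes pod spec (illustrative, not Run:AI's interface).
# The nvidia.com/gpu resource comes in whole-device units, so this
# pod exclusively holds one GPU until it terminates -- even when the
# GPU sits idle between experiments.
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  containers:
  - name: trainer
    image: tensorflow/tensorflow:latest-gpu
    resources:
      limits:
        nvidia.com/gpu: 1   # whole GPUs only; no fractions, no sharing
```

A virtualization layer like the one described here sits between such requests and the hardware, pooling GPUs so capacity can be reassigned by policy rather than pinned to one workload.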

“There is a need for specific virtualization approaches for AI and actively managed orchestration and scheduling of those GPU resources, while providing visibility and control over those compute resources to IT organizations and AI administrators,” he said.

Run:AI creates a resource pool that allocates GPUs based on need. Image Credits: Run:AI

Run:AI built a solution to bridge this gap between the resources IT is providing to data science teams and what they require to run a given job, while still giving IT some control over defining how that works.

“We really help companies get much more out of their infrastructure, and we do it by really abstracting the hardware from the data science, meaning you can simply run your experiment without thinking about the underlying hardware, and at any moment in time you can consume as much compute power as you need,” he said.

While the company is still in its early stages, and the current economic situation is hitting everyone hard, Geller sees a place for a solution like Run:AI because it gives customers the capacity to make the most out of existing resources, while making data science teams run more efficiently.

He is also taking a realistic long view on customer acquisition during this period. “These are challenging times for everyone,” he said. “We have plans for longer-term partnerships with our customers that are not optimized for short-term revenues.”

Run:AI was founded in 2018. It has raised $13 million, according to Geller. The company is based in Israel with offices in the United States. It currently has 25 employees and a few dozen customers.

