Artificial intelligence startup Run:AI secured $13 million in funding for its software that accelerates the training of deep learning models, the company announced April 3.
Run:AI, which is based in Tel Aviv, Israel, created a high-performance compute virtualization layer for deep learning that speeds up the training of neural network models, according to a release. Today, researchers typically train models by running deep learning workloads on a number of graphics processing units (GPUs), which can run continuously for days or weeks on expensive hardware.
“Traditional computing uses virtualization to help many users or processes share one physical resource efficiently,” Omri Geller, co-founder and CEO of Run:AI, said in the release. “Virtualization tries to be generous. But a deep learning workload is essentially selfish since it requires the opposite—it needs the full computing power of multiple physical resources for a single workload, without holding anything back.
“Traditional computing software just can’t satisfy the resource requirements for deep learning workloads.”
Run:AI’s software, on the other hand, creates a compute abstraction layer that automatically analyzes the computational characteristics of workloads, eliminating bottlenecks and optimizing workloads with graph-based parallel computing algorithms. It automatically allocates and runs workloads, making deep learning experiments run faster and lowering the costs associated with training AI. According to the company, its solution will enable the development of “huge” AI models.
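Run:AI has not published its implementation, but the general idea of automatically packing training workloads onto a shared pool of GPUs can be sketched with a toy greedy scheduler. Everything here (the job names, the GPU-hours cost model, the longest-job-first heuristic) is an illustrative assumption, not Run:AI's actual algorithm:

```python
import heapq

def schedule(workloads, num_gpus):
    """Toy greedy scheduler: assign each workload (name, gpu_hours) to the
    GPU that frees up earliest, approximating how a compute abstraction
    layer might pack jobs onto shared hardware. Returns the total makespan
    and a per-GPU assignment map. Illustrative only."""
    # Each heap entry is (time_gpu_becomes_free, gpu_index).
    pool = [(0.0, g) for g in range(num_gpus)]
    heapq.heapify(pool)
    assignment = {g: [] for g in range(num_gpus)}
    # Scheduling the longest jobs first tends to shorten the overall makespan.
    for name, hours in sorted(workloads, key=lambda w: -w[1]):
        free_at, gpu = heapq.heappop(pool)
        assignment[gpu].append(name)
        heapq.heappush(pool, (free_at + hours, gpu))
    makespan = max(t for t, _ in pool)
    return makespan, assignment

# Hypothetical jobs with estimated GPU-hours.
jobs = [("resnet", 8.0), ("bert", 6.0), ("gan", 4.0), ("lstm", 2.0)]
makespan, plan = schedule(jobs, num_gpus=2)
print(makespan)  # 10.0: resnet+lstm on one GPU, bert+gan on the other
```

A production system would of course weigh far more than estimated runtime; per the company, factors like network bandwidth, data pipeline size, and cost all feed into the allocation decision.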
Run:AI received $3 million from TLV Partners in its seed round and an additional $10 million in a Series A round led by Haim Sadger’s S Capital and TLV Partners.
“Executing deep neural network workloads across multiple machines is a constantly moving target, requiring recalculations for each model and iteration based on availability of resources,” Rona Segev-Gal, managing partner of TLV Partners, said in the release. “Run:AI determines the most efficient and cost-effective way to run a deep learning training workload, taking into account the network bandwidth, compute resources, cost, configurations, and the data pipeline and size. We’ve seen many AI companies in recent years, but Omri, Ronen and Meir’s approach blew our minds.”