How Helm.ai Uses Generative AI for Self-Driving Cars

Self-driving cars were supposed to be in our garages by now, according to the optimistic predictions of just a few years ago. But we may be nearing a few tipping points, with robotaxi adoption going up and consumers getting accustomed to more and more sophisticated driver-assistance systems in their vehicles. One company that’s pushing things forward is the Silicon Valley-based Helm.ai, which develops software for both driver-assistance systems and fully autonomous vehicles.

The company provides foundation models for the intent prediction and path planning that self-driving cars need on the road, and also uses generative AI to create synthetic training data that prepares vehicles for the many, many things that can go wrong out there. IEEE Spectrum spoke with Vladislav Voroninski, founder and CEO of Helm.ai, about the company’s creation of synthetic data to train and validate self-driving car systems.

How is Helm.ai using generative AI to help develop self-driving cars?

Vladislav Voroninski: We’re using generative AI for the purposes of simulation. So given a certain amount of real data that you’ve observed, can you simulate novel situations based on that data? You want to create data that is as realistic as possible while actually offering something new. We can create data from any camera or sensor to increase variety in those data sets and address the corner cases for training and validation.
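The idea of generating novel-but-realistic data from observed data can be illustrated with a toy sketch. Helm.ai's actual simulators are learned generative models; the function below merely perturbs parameters of observed scenes to produce variants, a hypothetical stand-in for sampling from a model fit to real data (all names and parameters here are illustrative, not Helm.ai's).

```python
import random

def synthesize_variants(observed_scenes, n_variants, jitter=0.2, seed=0):
    """Sample novel scene variants by perturbing parameters of observed
    scenes (dicts of numeric values, e.g. speeds and gap distances).
    A learned generative simulator would instead sample from a model
    trained on the observed data; this is only a toy illustration.
    """
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        base = rng.choice(observed_scenes)
        variants.append({k: v * (1 + rng.uniform(-jitter, jitter))
                         for k, v in base.items()})
    return variants

# Two observed scenes, described by hypothetical parameters.
observed = [{"lead_vehicle_speed_mps": 25.0, "gap_m": 30.0},
            {"lead_vehicle_speed_mps": 12.0, "gap_m": 8.0}]
new_scenes = synthesize_variants(observed, n_variants=5)
```

Each variant stays near the distribution of real data while differing from any single observed scene, which is the balance Voroninski describes: realistic, yet offering something new.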

I know you have VidGen to create video data and WorldGen to create other types of sensor data. Are different car companies still relying on different modalities?

Voroninski: There’s definitely interest in multiple modalities from our customers. Not everyone is just trying to do everything with vision only. Cameras are relatively cheap, while lidar systems are more expensive. But we can actually train simulators that take the camera data and simulate what the lidar output would have looked like. That can be a way to save on costs.
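To make the camera-to-lidar idea concrete, here is a minimal geometric sketch, not Helm.ai's method: given a per-pixel depth estimate from camera imagery and pinhole intrinsics, one can back-project pixels into a lidar-like 3D point cloud (the classical "pseudo-lidar" construction). Helm.ai's simulators are learned models, so treat this purely as an illustration of deriving lidar-style output from camera data.

```python
def camera_to_pseudo_lidar(depth_map, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a 3D point cloud in the
    camera coordinate frame.
    depth_map: 2D list of depths in meters (row-major pixel grid).
    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Returns a list of (x, y, z) points.
    """
    points = []
    for v, row in enumerate(depth_map):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid or missing depth
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Toy example: a 2x2 depth map with every pixel 10 m away,
# using made-up intrinsics.
cloud = camera_to_pseudo_lidar([[10.0, 10.0], [10.0, 10.0]],
                               fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

A learned simulator replaces this fixed geometry with a model trained on paired camera and lidar data, which lets it capture effects (reflectivity, occlusion, sensor noise) that simple back-projection cannot.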

And even if it’s just video, there will be some cases that are incredibly rare, or pretty much impossible or too dangerous to capture during real-world driving. And so we can use generative AI to create video data that is very, very high-quality and essentially indistinguishable from real data for those cases. That also is a way to save on data collection costs.

How do you create these unusual edge cases? Do you say, “Now put a kangaroo in the road, now put a zebra on the road”?

Voroninski: There’s a way to query these models to get them to produce unusual situations—it’s really just about incorporating ways to control the simulation models. That can be done with text prompts, images, or various types of geometrical inputs. Those scenarios can be specified explicitly: If an automaker already has a laundry list of situations that they know can occur, they can query these foundation models to produce those situations. You can also do something even more scalable where there’s some process of…
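The "laundry list" workflow Voroninski describes can be sketched as a small data structure plus an expansion step: each known situation becomes a spec, and each spec is expanded into one conditioning prompt per requested sample. All field names and the query format below are hypothetical illustrations, not Helm.ai's API.

```python
from dataclasses import dataclass

@dataclass
class ScenarioSpec:
    """One entry in an automaker's list of known situations.
    Field names here are illustrative only."""
    description: str       # text prompt, e.g. "animal crossing at night"
    weather: str = "clear"
    num_variants: int = 1  # how many distinct samples to request

def build_queries(specs):
    """Expand scenario specs into one text query per requested variant,
    the kind of conditioning input a text-controllable generative
    video model would take."""
    queries = []
    for spec in specs:
        for i in range(spec.num_variants):
            queries.append(
                f"{spec.description}, {spec.weather} weather, variant {i}")
    return queries

queries = build_queries([
    ScenarioSpec("deer entering highway from shoulder",
                 weather="fog", num_variants=2),
    ScenarioSpec("pedestrian emerging between parked cars"),
])
```

In practice the conditioning need not be text alone; as the interview notes, images and geometric inputs (e.g. a specified trajectory or road layout) can constrain the generated scene as well.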

The post “How Helm.ai Uses Generative AI for Self-Driving Cars” by Eliza Strickland was published on 03/18/2025 by spectrum.ieee.org