George Hotz

This Hacker Is Making a Driverless Car in His Garage

George Hotz wants to remake everything from your car to your phone, cheaper and faster than Google or Tesla.



"Self-driving cars are going to be the effort of a lot more than one small startup," George Hotz says. "Download our stuff, build it into your cars, don't give me anything."

Known online as "GeoHot," Hotz became one of the world's most famous hackers at age 17, when he broke into an early iPhone and reconfigured it to be compatible with providers other than AT&T. He was also the first to jailbreak the PlayStation 3. Now 28, the technical wunderkind is going up against Tesla founder Elon Musk and the entire auto industry in a race to build the first fully operational autonomous vehicle.

George Hotz. Photo by Mikkel Aaland.

While bigger companies such as Google develop complex systems that rely on expensive light detection and ranging (LIDAR) sensors, Hotz is trying to bring plug-and-play driverless technology to the masses. Operating with $3.1 million in seed money, his company, Comma.ai, builds products that can hijack modern cars' existing features. The goal: to create a kit that can convert your car to a self-driving model for under $1,000.

Hotz has a history of taking on tech titans—and garnering mixed reactions. After the iPhone jailbreak, Apple co-founder Steve Wozniak sent him a letter of congratulations. When he hacked the PS3, Sony filed a lawsuit against him.

Comma.ai is Hotz's attempt to take on the major players in a new way. The 12-person team makes an app called Chffr that turns your smartphone into a dashcam and monitors its GPS and accelerometers. More recently, they launched Panda, an $88 dongle that connects to the car, providing even more granular Fitbit-like data about its operations.

The company then takes all the information collected and uses it to inform OpenPilot, an open-source computer program that is slowly learning how to drive. Hotz insists that within five years, he'll be able to release a software update "and then boom, all these cars are level-four self-driving."

In August, Reason's Justin Monticello sat down with the hacker to discuss his unusual approach to solving the driverless-car problem and why he believes "we're living in the best time ever" even if privacy is a thing of the past.

Reason: How did you start thinking about self-driving car technology?

George Hotz: All these auto manufacturers are hopelessly clueless when it comes to self-driving. Think about it like this: Ever use a nav system in a car?

Yeah. Google Maps.

Google Maps, but not [the navigation system built into] the car, right?

Right.

You use a phone. You can see some great pictures where people stick the phone holder, the suction cup, right onto the nav screen.

Our cars are released every five years. Phones are released every year. So my question was, How do you build self-driving navigation on a cycle that looks a lot more like the smartphone cycle than the car cycle? A 5-year-old phone is so, so old-looking. It looks a lot like, well, a car navigation system. They're using the chips from 5-year-old phones. It's just the way the car manufacturers think.

So your idea is that you should be able to plug and play your self-driving system into pretty much any car?

It's not going to be any. You need some tailoring to the car. But you take the top 20 cars in America and that's like 50 percent of the cars sold. We want to support most of those, and then don't worry about the long tail.

What led you to start Comma.ai, and what's your vision for the company?

I want to win self-driving cars. The top three cars sold in America are pickup trucks. But of the next seven, six are Hondas and Toyotas. Honda and Toyota aren't going to have self-driving tech anytime soon. These are the cars that I want to support—the cars that Americans are really driving. Toyota Corollas, Honda Civics, RAV4s, CR-Vs.

How does your system work in contrast to systems like Google's, which use expensive equipment like LIDAR?

We ship a camera, and we use the radar that's built into the car and the sensors that are built into the car. The systems that companies like Google are building, they're not feasible for passenger vehicles. There's no market for a $100,000 quasi-self-driving car. It's not even full self-driving. Google's never made a physical product that has shipped.

How about Chromecast?

A $40 TV dongle. Were they the first one to ship a TV dongle? No. Google can only iterate and build on top of what other companies have already done.

Was Chrome the first web browser? No, but it was the best. Was Android the first non-iOS operating system for smartphones? No, but it was way better than Symbian and the BlackBerry OS. Google in some ways is not an innovative company, and when they try to innovate, you get things like Google Glass. Google is the Xerox PARC of the self-driving industry.

And you think your cheaper system will be able to accomplish everything that these other companies are doing with their more expensive LIDAR systems?

Absolutely right. The truth is, even with those LIDAR systems, nobody has the kind of reliability they need to get from level two to level four.

Can you explain what that means?

These are the different levels of self-driving cars. Level two is what ships in cars today. [Tesla's] Autopilot is a level-two system. [My company's] OpenPilot is a level-two system. This means that the human needs to be paying attention, in the loop, and liable at all times. Even if they're not touching the wheel, the gas, or the brake.

Level three means the human occasionally doesn't have to pay attention. Level four means the human is never liable. You could practically remove the steering wheel and pedals from the car. But in order for that to be OK, your system better be a good bit better than humans. Not perfect—it's never, ever going to be perfect—but it's got to be better than humans. No system is there yet.

So in this analogy you would be the Steve Jobs going into the Xerox campus, finding the graphical user interface, and seeing that they're messing it up—

I'm a lot more like the Bill Gates. Elon's the Steve Jobs.

You want to be the Bill Gates?

Yeah, I'll be the Bill Gates.

Wow, nobody ever wants to be the Bill Gates.

I'm not an Apple kind of company. I'm a lot more like a Microsoft kind of company.

What do you mean by that?

You think of Apple as this vertically integrated, consumer-first company. They practically are more art than technology, right? They do have good technology backing it up, but for me it's all about the technology.

Most companies use technology in order to build products. I'm sure that the company that made this chair used some amount of technology to build their product, but clearly a chair company's not a technology company.


We use products to build technology. Technology is the only thing in history that has had this compounding effect on human wealth. We live in houses, they're heated, we have lights. This is all technology, and that's what I want to be a part of, a lot more than, well, art.

So you don't care about making things beautiful?

Yeah, I don't care about making things beautiful. I'm very, very anti-advertising. The idea of advertising, like we're going to manipulate people into buying our product? No, I never want to be that kind of company. It's not about, "Here's the pie. I want my slice to be bigger." It's about, "We can build a really big pie. Let's build."

You gave an infamous interview where you took a Bloomberg reporter on a ride in this self-driving car that you had just gotten working that morning. How did you know that you weren't going to die when you took it out onto the road?

We have this open-source software called OpenPilot, which you can install into your car and it will drive it. But the safety works like this, to a fault: One, the second the user does anything, the system disengages. The second you touch either the gas or the brakes, the system just stops doing anything, so you always have that user override. This is one of our pillars of safety.

But the other thing you need even more is to make sure the car's never going to do anything so quickly that you can't respond, and the way you deal with this is torque limits. So when our car moves the steering wheel, it moves the steering wheel [gradually]. If you don't like what it's doing, just casually reach out and grab it.
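To make those two pillars concrete, here is a minimal Python sketch of a disengage-and-torque-limit check; the function, names, and thresholds are illustrative placeholders, not OpenPilot's actual control code.

    # Illustrative only: a simplified actuation guard, not OpenPilot's real control loop.
    MAX_TORQUE = 150        # placeholder ceiling on steering torque (sensor units)
    MAX_TORQUE_RATE = 10    # placeholder limit on change per control tick

    def safe_steer_command(requested, last_sent, driver_gas, driver_brake):
        # Pillar 1: the moment the driver touches the gas or brake, disengage.
        if driver_gas or driver_brake:
            return 0  # send no steering torque; the human is back in the loop
        # Pillar 2: clamp both the torque and how fast it can change, so the
        # wheel can never move faster than a person can casually grab it.
        limited = max(-MAX_TORQUE, min(MAX_TORQUE, requested))
        limited = max(last_sent - MAX_TORQUE_RATE,
                      min(last_sent + MAX_TORQUE_RATE, limited))
        return limited

    # e.g. safe_steer_command(requested=80, last_sent=0, driver_gas=False, driver_brake=False)
    # returns 10: the request is rate-limited, so the wheel turns only a little per tick.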

So it's not able to jerk your car in front of a truck, or something like that.

Exactly. As long as you know that, you know it's not going to do anything terrible.

Your first entry into the consumer market is a device called Panda. Can you tell us what it is?

Technically we were in the consumer market a bit before that. I have a dash-cam app called Chffr. It's software, not hardware—an app that runs on your phone. You mount it on your car, and it records the accelerometer, records your GPS, lets you review all of your routes. So it's an appified experience. Then all the data that goes to our servers from Chffr is used to train the self-driving cars.

"Six [of the top 10 cars sold in America] are Hondas and Toyotas. Honda and Toyota aren't going to have self-driving tech anytime soon. These are the cars that I want to support—the cars that Americans are really driving."

What Panda does is it actually connects to your car, serving as the bridge between the phone and the car. You can use it to show you your miles per gallon, your RPM, your speed, right there in the Chffr app. You'll also be able to use it to diagnose any errors with your car. As we start to get this data we'll be able to do preventative maintenance—like, "We know your transmission's about to fail because we've seen 10 other cars' transmissions fail after they were showing the same things."

So that's the real consumer experience, but it's also a super powerful car reverse-engineering tool. It competes with products that are $2,000.
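For a sense of the kind of signal a dongle like Panda surfaces, here is a rough Python sketch that queries engine RPM over the car's standard OBD-II/CAN interface using the python-can library; the CAN channel name is an assumption, and this is not Comma.ai's code.

    # Rough sketch: read engine RPM over the OBD-II/CAN bus.
    # Assumes a SocketCAN interface named "can0".
    import can

    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    # Standard OBD-II query: mode 0x01 (current data), PID 0x0C (engine RPM).
    query = can.Message(arbitration_id=0x7DF,
                        data=[0x02, 0x01, 0x0C, 0x00, 0x00, 0x00, 0x00, 0x00],
                        is_extended_id=False)
    bus.send(query)

    reply = bus.recv(timeout=1.0)
    if reply is not None and reply.arbitration_id == 0x7E8:
        a, b = reply.data[3], reply.data[4]
        print("engine rpm:", ((a << 8) + b) / 4.0)  # OBD-II formula: ((A*256)+B)/4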

So just to clarify, you've got Chffr, which is an app that uses the camera on your phone. You've got Panda, which is the dongle that measures everything from your car. And then you've got OpenPilot, which is the software that you can actually use for self-driving purposes.

Yeah. Panda is a universal car interface. When it's used by Chffr, it's read only. But when it's used by OpenPilot, it can actually drive your car.

And are there already cars that have drive-by-wire, brake-by-wire, gas-by-wire that you can tap into?

We support Hondas and Acuras right now. We just bought a Toyota Prius, so we're going to be doing all the Toyotas this year. We also have a bounty program. One of our users has ported [our software] to a Chevy Volt. I'm just going to sequence up the code a little bit and merge it in, and we're going to pay him out $10,000. We have a bounty up for somebody to do the Ford Fusion. We have a bounty for the Tesla Model S, the BMW i3. We want to support them all.

You're basically putting it out there that if you can hack into these cars' systems, we'll give you this money?

I wouldn't think about it like that. You're certainly not hacking into anybody's system, because it's your car, right? You're just looking at how the communication works in your car. You're not changing the firmware; you're not jailbreaking. It's just that every car has a different [application programming interface, or API] to get to the steering wheel, the gas, and the brakes. So it's about finding the APIs in the new cars.
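One common hobbyist way of finding those per-car "APIs" is to log CAN traffic while moving only one control and see which message IDs change; the Python sketch below, again using python-can with placeholder settings, illustrates the idea rather than Comma's actual process.

    # Sketch of the usual reverse-engineering trick: log CAN traffic while moving
    # only the steering wheel, then look at which message IDs varied the most.
    # Those IDs are candidates for the steering "API". Channel name and the
    # 30-second window are placeholders.
    import time
    from collections import defaultdict

    import can

    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    payloads = defaultdict(set)

    end = time.time() + 30          # wiggle the wheel during this window
    while time.time() < end:
        msg = bus.recv(timeout=0.1)
        if msg is not None:
            payloads[msg.arbitration_id].add(bytes(msg.data))

    # The IDs with the most distinct payloads likely carry what you were moving.
    for can_id, seen in sorted(payloads.items(), key=lambda kv: -len(kv[1]))[:10]:
        print(hex(can_id), "distinct payloads:", len(seen))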

But the car makers don't want you to be able to access that, do they?

Car manufacturers sell cars. What do they care?

Then why don't you just call up Chevy and be like, "Hey, can I get access to the Volt?" Why do you have to pay a $10,000 bounty instead?

Because business development is a disaster. If we called up Chevy and said, "Hey, could we get access to this stuff?" Well, it's going to take two months for them to go back and forth with the lawyers.

Let's even say they wanted us to get access to it. They're not opposed to it. I think they probably would rather us do it than not, because at the end of the day, you sell more Chevys. And that's what matters to car companies: the sales. But they gotta get it through their lawyers, and you're talking to the business development guy—and you know, he's a level five, he can't be in the meeting with the level fours—and they're just such bureaucratic organizations that it'll take me less time to reverse engineer it than it will to get legal approval. Then, once you finally do get legal approval, you can be sure it's coming with a 10-page contract that says, "You can do this, this, this, not this, and this, and not this, and definitely not that. Don't even think about that."

People think, "Oh, Comma.ai, they're anti-regulation!" We're not anti-regulation. Not at all. Government asked us not to sell something before it was ready; we didn't sell it. It's simple. I'm not looking to fight. I've got enough fights. I think of myself as at war. We're an army. But the enemy is not people; it's nature. There's a reason self-driving cars don't exist. It's not because there's some evil consortium of people. It's just, this is hard.

You've described Panda as basically like a Fitbit for your car. The user gets all this data. But you're also feeding the information that you crowdsource—because you have all these people who are voluntarily putting this software in their cars—into your A.I., which then is learning how to drive. Why is that approach better than what Waymo's doing, what Tesla's doing, these sort of bigger players?

Let's look at Waymo. When they want to figure out how to build a self-driving car, they sit four engineers down in a room: "We come upon a stop sign. We know that we should stop, and we know that we should stop at this distance—and wait, can you get the DMV handbook out? Let me see. OK, we have to signal at 15 meters." That's not what driving is.

This is the same failure that computer vision had for years. When people wanted to build a detector to see if there was a chair in an image, they would write out the definition of a chair. They would say, "Well, a chair has a back, and it has four legs, and it has—wait, what about a bar stool? Is that a chair? Well, I don't know. We're going to investigate bar stools." This is how a lot of people are thinking about this problem, and it's absurd. You want to figure out if there's a chair in an image or not? You get a million images of chairs, and you use machine learning to train a classifier: That's a chair or no chair. Right?
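In the spirit of that example, here is a toy Python sketch of the learn-from-examples approach using PyTorch; the dataset path, model choice, and training settings are placeholders for illustration, not anyone's production pipeline.

    # Toy version of the "million images of chairs" idea: instead of hand-writing
    # rules about legs and backs, fit a classifier to labeled examples.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    data = datasets.ImageFolder("chairs_vs_not/", transform=tfm)  # folders: chair/, no_chair/
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 2)   # two outputs: chair or no chair
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):                          # a few passes, purely for illustration
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()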

So that's the same approach we take to driving. There is no rigid specification or definition of driving; driving is just what people do when they drive. In order to really get access to the full diverse spectrum of what driving is, you need a huge crowdsourced database, and that's what we're building. We'll just learn what it means to drive from people who actually do it.

Because you've got to operate with humans. There's this leftist utopian fantasy: "We're going to wipe the roads of all the cars, and we're just going to have electric cars that interact." It's not what's going to happen. What's actually going to happen is there's going to be a bunch of self-driving cars, a bunch of human-operated cars, and they've all got to interoperate. And the humans ain't changing to match the self-driving spec. A lot of people get this wrong. You build the technology to adapt to humans. Changing people is much harder than messing with machines.

But don't other companies also use A.I. and deep learning techniques?

Well, everybody is moving towards A.I. and deep learning, and obviously Google is going to get to level four before we will. Nobody doubts this. But the reason Google is going to lose is they're going to get to level four with a $100,000 system, and then they have to deal with, "OK, how do we actually get this in the hands of people?"

If we get the A.I. problem solved a year after Google, Google's still going to be sitting there thinking, "Well, maybe we could finance it to people." For us, we're just going to push the software update out. Elon knows this too. We're going to push a button and then boom, all these cars are level-four self-driving. We get the insurance company to underwrite the policy, and drivers don't have to pay attention anymore. Done. We're not going to wait until everything is perfect and then hypothetically ship it out. Not a single product ever has worked like that.

If Tesla can just push out a software update, how do you compete?

Well, not every car's a Tesla. And not every car will be a Tesla for a while. Just like not every phone's an iPhone. We think Tesla has a very bright future. I think Tesla's definitely going to be a big player in self-driving—they're just going to be an Apple kind of player. They're going to own the high-end 20–25 percent of the market. But the truth is, most cars are not high-end cars. The 25–30 percent high end we're not going to capture. That's fine. What about that other 70 percent? Most smartphones in the world are Androids. Most self-driving cars in the world, in five years, will be Comma.ai.

There are people out there who are already using your products and actually posting videos on YouTube.

There's 99 people right now running OpenPilot. But we have thousands on the Chffr network, so we'll see 10x growth in the next year. I'm not sure how many cars Waymo has running—they might be the second-largest network, but I think we're pretty close. And then Tesla obviously has tens of thousands of these things, so we've got quite a ways to go before we beat them. It's just like Android and iOS. In 2008, there were tons of iPhones in the world, and like 10 Androids. Who bought the T-Mobile G1? Nobody. But Android slowly took over the world because they were cheap and they were mass market. And they work pretty good.

You're collecting a lot of personally identifiable information—things that would be attractive to stalkers, hackers, private investigators, the National Security Agency [NSA]. Being a super hacker yourself, as the media described you, you know that this information is vulnerable, so how do you view privacy issues?

Here's the thing about privacy in general, and at the end of the day people might yell at me for this, but this is how you have to look at the world: The reason the NSA is a big problem is because they have privacy and you do not. I don't like this. I'd much rather the world be the other way around—I have privacy and they do not—but I don't see that happening. It would be nice if we both had privacy, but I don't see that happening either. So maybe the real solution is: We don't have privacy, but they don't have privacy either. Maybe privacy is not the best thing to think about going forward.

"Humans ain't changing to match the self-driving spec. A lot of people get this wrong. You build the technology to adapt to humans. Changing people is much harder than messing with machines."

In your own life, would you be upset if privacy went away?

Well, let's talk about personally identifiable information. With Chffr, we do not record either the microphone or the front-facing camera. I don't want pictures of your face. I don't want to know your name. I don't want to know your age. I don't want to know your gender. This is not your data that I'm interested in. This is your car's data that I'm interested in. The camera faces outward into the public world.

I take your point, but the NSA makes a similar argument: "We're not listening to your phone calls. We just use metadata." But if you look at somebody's metadata, you can find out where they work, if they have kids, where they go to school, what their religion is…

I don't think these things are necessarily bad, I just don't like the idea that one organization has a monopoly on it. What if we could open our data up more, and think about it not as "Facebook owns this data, Google owns this data," but we all collectively own the data and you're contributing to a big collective pool of data. All the data combined is a whole lot more powerful than any piece of the data alone, and I think we can do incredible things with these sorts of data sets.

That's a pretty radical view, isn't it? I don't think very many people would be comfortable with all of their data being public.

In terms of what you're saying on a phone call [that may be true]. But where you drive? That's already public, and we have to accept this. I could hire a P.I. to follow you around and that doesn't really violate any expectation of privacy. I mean, Google did it all with Street View. And you're outraged about Street View? Come on. Google spends all this money to do this, and then provides it as a free service for everybody. You gotta think about whether the benefits outweigh the costs. Let's stop saying "Reclaim our privacy!" The NSA is going to beat you there. I want to beat the NSA. I like winning.


But isn't it a problem that someone can then hack you?

Again, that's a people problem. I think I'm much more concerned with problems with nature, problems of technology. I was looking today at atomic clocks. Do you know these things drift nanoseconds per day? We can build a better atomic clock. That's a much more interesting problem to me.

Is that next?

I'm not going to do it. I want somebody to, though. I want to see better atomic clocks, better GPS, better everything.

Talk about the fact that your stuff is all open-source.

It just kind of became obvious while I was working on this that self-driving cars are going to be the effort of a lot more than one small startup. So there's a few ways you can play this. You can say, "I'm going to own one small horizontal and then try and license this to other companies." But I just couldn't stomach the meetings with the business development people in these companies.

We're an open, vertical company. You'll be able to play anywhere on the vertical. You want to start a company in the self-driving space? Literally rip us off. Please. Just download our stuff, build it into your cars, don't give me anything. Because the battle is not between these companies, it's between open and closed. It's between self-driving cars and no self-driving cars.

Comma.ai had to stop work on an earlier product after getting a concerned letter from the National Highway Traffic Safety Administration [NHTSA]. What regulatory issues do you see having to tackle as you move forward into a semi-self-driving or a full self-driving system?

I've seen this NHTSA regime be a lot better than the last NHTSA regime. [Obama's last highways chief] talked about how his dad died in a motorcycle accident, and how this is a personal crusade for him, and how, "if the Tesla Autopilot has two more accidents, we're pulling it off the road." Stop. Thirty thousand people are dying a year [in car accidents]. Stop with that rhetoric. Let's use statistics. Let's use math. Let's use engineering, and let's solve these problems.

Earlier you mentioned Elon Musk. You guys got into a little bit of a public spat. He says that you wanted to bet that you could outperform his system, but then you ultimately didn't want to agree to the bet.

Well, the question is how the bet gets resolved. For something to be a contract, the bet needs to be resolved by external, third-party criteria. And that's what I argued for. He argued for the bet to get resolved by him deciding at the end of the day whether I did a good enough job. So I'm gonna bet with you on a coin flip, but you're going to decide whether the coin landed heads or tails? "Oh look, it's heads." "That looked kind of tails-like to me. Give me $20."

But I have a ton of respect for Elon Musk. I really do. He is doing so much good for the world when it comes down to it. At the end of the day, Tesla's not our competition. We would love to see Tesla succeed. I would personally just love to see Tesla and SpaceX succeed, because that's the kind of world I want to live in. It comes back to the pie argument: I'd rather Elon Musk have a much bigger share of the pie than me, as long as the pie tastes awesome.

He said in an interview that you were underestimating the technical challenges involved in self-driving cars. He said, "It's not a one-guy-in-three-months problem. It's more like thousands of people for two years." Well, it's two years later, so who's closer to making this happen?

Tesla's always going to be a little bit ahead of us, because they started earlier. It depends what "it" was referring to there. If it was referring to the whole self-driving problem, I never would have said that'd be solved by one guy in three months. But I'm [also] not going to say it's a thousand-person problem. Comma.ai is a 12-person company. It's been about two years. I think it's going to be at least another three in order to fully solve the problem.

You also bet him, last year, that your system could successfully navigate the Golden Gate Bridge before his could.

That one was played up by the media a whole lot. We can both drive across the Golden Gate Bridge without intervention, let's just say that.

So you both win?

We both win. Everyone wins.

Musk is an alarmist about artificial intelligence. He wants international bodies to regulate it, and he thinks it's going to destroy humanity if we don't. What do you think?

In some ways, A.I. is a very, very powerful weapon. I'm not sure it presents any particular threat that previous weapons have not. Some people think that the A.I. is going to get out of control—there's going to be a Skynet/Terminator scenario. This to me seems highly unlikely. And a lot of researchers in the field will agree. What does seem likely to me is humans will get their hands on the A.I. weapon and do what humans have done when they've gotten their hands on any other weapon in the past: "We don't like those people. How can we use it against them?" It's not about the technology, it's just about how people are going to use the technology.

Now, if Elon Musk is calling for an international arms control regulation, he should look at history and see how well that has worked. I certainly think it's incredibly premature to talk about any sort of government-level A.I. regulation.

So you're not an alarmist about artificial intelligence in that sense. But you have said in the past that A.I. is going to take everybody's jobs.

Absolutely.

And you think this is a good thing, right?

Of course.

People have been warning that technology was coming for everyone's jobs since before the Industrial Revolution. What's different now?

Very few jobs today, at least in First World countries, are people doing work with their muscles. You even think of traditionally muscular work, like mining. It's not people with pickaxes; it's people operating hydraulic machines. So the Industrial Revolution replaced man's muscles.

The car really did replace all the jobs for the horses. There aren't that many horses left. So I think it's the same thing with humans and their jobs. The last refuge is man's mind, and once you start building minds that are superhuman, well, there are no more jobs for humans.

This is a great world. Isn't that the endgame of technology in general? Didn't we sit there and be like, "Man, I'm really sick of hoeing this field. Wouldn't it be great if there was some mechanized hoe?"

You don't see a dystopian future?

No. I'm a complete optimist about the future. I don't know why people are so outraged about things. We're living in the best time ever. We are living in paradise, practically. When's the last time you had to deal with, "Oh man, I want to go to the coffee shop, but there might be lions. Gotta watch out for the lions." The miracle of modern society: no lions. No, not society—from a society perspective, we can do a whole lot better. But technology has been incredible.

This interview has been condensed and edited for style and clarity. For a video version, visit reason.com.