Tesla, Autopilot and Data Collection



If I were going to build a self-driving car, I’d want to have a large corpus of field data. What situations come up most commonly? How do humans handle them? How might our computer react in a similar situation?

Google’s self-driving car efforts are well known, and they’ve spoken publicly about how all the miles their autonomous cars drive are carefully logged and analyzed. When a human takes over, presumably something has gone wrong or is at risk of going wrong, and those situations are carefully scrutinized. Unfortunately, this data is not what I’d call a clean sample of real human behavior. Everyone driving one of Google’s cars is either a Google employee or closely affiliated, and most importantly, they know they’re driving one of Google’s special and expensive cars. No doubt they’ve signed a bunch of confidentiality and other forms. So even when they’re driving themselves, they’re likely to be highly cautious. Those cars can only be used in certain controlled circumstances, and the data Google can collect will be constrained accordingly.

When Tesla announced the autopilot features, it struck me that the hardware installed seems much more capable than what the features they’re offering actually require. Radar for adaptive cruise control? Seems like overkill. But hundreds of thousands of Tesla cars with these sensors, all collecting data on their drivers and the situations they encounter, would be an amazing opportunity to build a corpus of real-world situations encountered by human drivers and how they respond. We already know that Tesla has the ability to connect to their cars remotely. What if Tesla is already deploying its self-driving software to those cars and running it in a mode where it simply isn’t hooked up to the car’s actuators? At every moment the software could be simulating what it would do in the present situation and logging whenever its choices diverge from what the human actually does. Tesla engineers could then analyze those logs, adjust the software, and re-simulate the car encountering that situation. Eventually the only places where human and computer diverge will be the ones where they’re convinced the computer is making the better choice. At that point, ship it! (Modulo lots of regulatory and insurance concerns.)
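To make the idea concrete, here’s a minimal sketch of what one tick of such a “shadow mode” loop might look like. Everything here is an assumption for illustration: the field names, the planner stub, and the divergence thresholds are invented, not anything Tesla has described.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

# Purely illustrative thresholds for deciding the computer and human disagreed.
STEER_TOLERANCE_DEG = 2.0
SPEED_TOLERANCE_MPH = 1.5

def plan_control(sensor_frame):
    """Placeholder for the real planner; returns (steer_deg, speed_mph).

    A real system would fuse radar/camera data here; this stub just echoes
    hypothetical pre-computed fields from the sensor frame.
    """
    return sensor_frame["lane_center_steer"], sensor_frame["safe_speed"]

def shadow_tick(sensor_frame, human_steer_deg, human_speed_mph):
    """Compute a control decision but never actuate it; log divergences."""
    auto_steer, auto_speed = plan_control(sensor_frame)
    diverged = (abs(auto_steer - human_steer_deg) > STEER_TOLERANCE_DEG
                or abs(auto_speed - human_speed_mph) > SPEED_TOLERANCE_MPH)
    if diverged:
        # Log enough context that engineers can replay the situation offline.
        log.info("divergence: auto=(%.1f deg, %.1f mph) human=(%.1f deg, %.1f mph)",
                 auto_steer, auto_speed, human_steer_deg, human_speed_mph)
    return diverged

# Example tick: the planner wants 0 deg / 45 mph, the human swerves to 10 deg / 38 mph.
frame = {"lane_center_steer": 0.0, "safe_speed": 45.0}
print(shadow_tick(frame, human_steer_deg=10.0, human_speed_mph=38.0))  # → True
```

The key property is that the output of `plan_control` never reaches an actuator; only the disagreements get logged for later analysis and re-simulation.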

As an engineer, that sounds exciting and cool. For me, the ideal version of this would be that all the car data gets uploaded to HQ, where I could analyze it indefinitely. Of course, I’m sure customers and law enforcement would be interested to know if a full sensor download from every Tesla were being stored at Tesla HQ indefinitely. So it’s possible that the data would be anonymized somehow before being uploaded.

Does any Tesla owner out there want to share the privacy policy or the text of any opt-ins for the autopilot features on the new cars?
