My longtime blogging friend Roland Tanglao recently posted something about the horizon for Tesla to reach full self driving, and how it keeps being a decade away.

It was the same a decade ago. In 2015 I posted a little rant about the false ethical dilemmas involved, and the blind spot they result from.

We’re still in the same spot, despite a decade of advances.

The faulty assumption is that ‘self driving’ means the car needs to do all the work autonomously, whereas ‘self driving’ only means that the human driver no longer has to do any of the work. Everything else is assumption.

The car is not the sole locus of sensing; everything around it is a far more relevant locus of sensing.

The car is not the sole source of data; it is more likely the smallest source of data, contributing only its own current behaviour and intentions in order to broadcast those to everything else.

The car is not the sole unit of decision making; more likely it is mostly a recipient of instructions from other decision-making nodes outside it.
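To make those three points a bit more tangible, here is a minimal sketch in Python of a car as a small node in a larger system: it broadcasts its own state and intent, and mostly acts on guidance arriving from outside. The message types, class names and the fake network functions are entirely hypothetical illustrations, not any real V2X protocol or vendor API.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """The small slice of data the car itself contributes: its own behaviour and intent."""
    position: tuple[float, float]
    speed_kmh: float
    intent: str  # e.g. "turn_left_at_next_junction"

@dataclass
class ExternalGuidance:
    """Instructions produced by decision-making nodes outside the car
    (road infrastructure, traffic management, other vehicles)."""
    target_speed_kmh: float
    lane: int
    reason: str

class GuidedCar:
    """A car that broadcasts its state and follows external guidance:
    the 'tracks of data' live outside the vehicle."""

    def __init__(self, broadcast, receive):
        self.broadcast = broadcast   # callable sending VehicleState to the network
        self.receive = receive       # callable returning the latest ExternalGuidance

    def step(self, state: VehicleState) -> ExternalGuidance:
        self.broadcast(state)        # the car is a small data source...
        return self.receive()        # ...and mainly a recipient of decisions

# Usage sketch: the 'network' is faked with plain functions for illustration.
if __name__ == "__main__":
    def fake_broadcast(state: VehicleState) -> None:
        print(f"car announces: {state.intent} at {state.speed_kmh} km/h")

    def fake_receive() -> ExternalGuidance:
        return ExternalGuidance(target_speed_kmh=30.0, lane=2,
                                reason="roadworks reported by infrastructure sensors")

    car = GuidedCar(fake_broadcast, fake_receive)
    guidance = car.step(VehicleState(position=(52.09, 5.12), speed_kmh=48.0,
                                     intent="turn_left_at_next_junction"))
    print(f"car follows: {guidance.target_speed_kmh} km/h in lane {guidance.lane} ({guidance.reason})")
```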

A self driving car is not autonomous; it runs guided on tracks of data, and those tracks are external to the car.
Yet all involved attempt to make the car do all the work.
That’s the blind spot that I think ensures the self driving time horizon keeps receding as fast as the projects progress, as it has done for at least a decade.
