My longtime blogging friend Roland Tanglao recently posted something about the horizon for Tesla to reach full self driving, and how it keeps being a decade away.
It was the same a decade ago. In 2015 I posted a little rant about the false ethical dilemmas involved, and the blind spot they result from.
We’re still in the same spot, despite a decade of advances.
The faulty assumption is that ‘self driving’ means that the car needs to do all the work autonomously.
Whereas ‘self driving’ only means that the human driver no longer has to do any of the work. Everything else is an assumption.
The car is not the sole locus of sensing; everything around it is a far more relevant locus of sensing.
The car is not the sole source of data; it’s more likely the smallest source, contributing only its current behaviour and intentions so it can broadcast those to everything else.
The car is not the sole unit of decision making; more likely it needs to be the recipient of instructions from other, mostly external, decision-making nodes.
A self driving car is not autonomous; it runs guided on tracks of data, and those tracks are external to the car.
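The inversion described in the points above can be sketched in a few lines of code. This is only an illustration of the idea, not any real vehicle protocol, and every name in it (IntentBroadcast, ExternalInstruction, Vehicle) is hypothetical: the car broadcasts nothing but its own behaviour and intentions, while driving decisions arrive from nodes outside it.

```python
# Hypothetical sketch: the car as a small node in a larger network,
# not an autonomous decision maker. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class IntentBroadcast:
    vehicle_id: str
    position: tuple   # (x, y), the car's only sensing contribution
    intent: str

@dataclass
class ExternalInstruction:
    issuer: str       # e.g. an intersection controller, another node
    command: str

@dataclass
class Vehicle:
    vehicle_id: str
    position: tuple = (0.0, 0.0)
    intent: str = "hold"
    received: list = field(default_factory=list)

    def broadcast(self) -> IntentBroadcast:
        # The car's sole data output: its own behaviour and intentions.
        return IntentBroadcast(self.vehicle_id, self.position, self.intent)

    def receive(self, instruction: ExternalInstruction) -> None:
        # Decisions come from outside; the car is their recipient.
        self.received.append(instruction)
        self.intent = instruction.command

car = Vehicle("car-1")
car.receive(ExternalInstruction(issuer="intersection-controller", command="yield"))
print(car.broadcast().intent)  # -> yield
```

In this framing the hard problems move out of the vehicle and into the data tracks around it, which is exactly the point the post is making.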
Yet all involved attempt to make the car do all the work.
That’s the blind spot which, I think, ensures the self driving time horizon keeps receding as fast as the projects progress, as it has for at least a decade.