odysseus2000 wrote: There is no question in my mind that Tesla fsd is getting better but I do not know how much better it can get and as of now it isn't good enough, but with exponential phenomenon massive improvements can happen very quickly.
Development like FSD is the opposite of exponential - or rather the exponential is working against progress rather than helping.
I mean, take for example that video I linked where the Tesla wanted to pull out in front of an approaching car.
The driver tried to explain it by saying the approaching road isn't straight, so the cars are coming from a little further left than normal.
To be honest, I'm doubtful that was really the reason in that case, but it serves to illustrate my point...
Something about that junction was subtly different, which meant that what normally works, this time didn't.
But stop and think about that for a while... just think how subtle the difference was, and then try to think how many other variations you might encounter... not just slight differences in the approach angle of the road, but also lighting conditions, weather conditions, what line approaching cars are taking, and what type and size of vehicle they might be.
This is the issue - the more you try to account for these less common situations, the more you realise that there are bleeping loads of them!
And this is what I mean about the 'exponential' working against you. Basic driving - e.g. simple, nice straight give-ways, etc., with regular cars - is probably relatively 'easy' to manage. And that might get you 90% of the way there. But getting the next 5% of the way now requires a huge increase in development effort compared to the prior 90%. And then even just the next 2% probably requires substantially more again.
Rather than being exponential, progress is more likely to follow a logarithmic pattern...
https://en.wikipedia.org/wiki/File:Logarithm_plots.png
And intuitively that fits as well, because clearly no-one expects cars to reach a point where they race to the FSD finish... they're never going to be perfect... as I think everyone accepts, what matters most is getting them better than people. If progress were exponential, the closer you got to perfecting FSD, the quicker your progress would become, and you'd be slam dunk home with a perfect driver. Clearly that isn't anything like realistic!
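Just to put some rough shape on that (with completely made-up numbers, purely to illustrate the diminishing returns I'm describing): suppose each time you want to handle half of the remaining awkward situations, it costs you several times the effort of the previous halving. Capability then climbs quickly at first and then crawls - exactly that logarithmic shape:

```python
# Toy model with invented numbers, purely to illustrate diminishing returns:
# assume handling half of the *remaining* situations costs k times more
# effort than the previous halving did.

def capability(effort, k=3.0, first_slice_cost=1.0):
    """Fraction of situations handled after spending `effort` units."""
    handled = 0.0
    remaining = 1.0
    slice_cost = first_slice_cost
    while effort >= slice_cost and remaining > 1e-9:
        effort -= slice_cost
        handled += remaining / 2      # solve half of what's left...
        remaining /= 2
        slice_cost *= k               # ...but the next half costs k times more
    return handled

for e in [1, 4, 13, 40, 121, 364]:    # cumulative cost of each extra halving
    print(f"effort {e:>4}: ~{capability(e):.1%} of situations handled")
```

Each step in that list costs roughly three times the effort of the one before, yet buys a smaller and smaller improvement - which is why the 'race to the finish' exponential picture just doesn't apply here.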
Which does raise an interesting point from here on in with self driving cars....
How do you decide that it's 'ready'? The FSD 10.5 beta testers seem quite happy, and you seem to be impressed.
But let's just consider the collection of videos that we've posted between us in this thread on 10.5. There are, what, 5 or 6 videos, covering roughly the same number of journeys.
And in those videos, if the driver hadn't been there, there was the potential in one incident for the car to come off at a corner, likely damaging it quite badly (though the occupant would probably be OK, just a little shaken), and in another incident the car would likely have moved into the path of an approaching vehicle and been hit quite hard - a rather more serious accident, resulting in some degree of injury or worse.
Let's imagine... if Tesla decided this 10.5 was good to go and rolled it out to however many Teslas there are out there, and told owners that they could leave the driver's seat unoccupied, sit in the back and go to sleep.
If every Tesla driver uses their car for more than just a couple of miles a couple of times a week, then within possibly 1 to 2 months almost every Tesla out there would, as a bare minimum, likely have had a bump of some sort requiring fixing in a garage. There'd be unlikely to be any undamaged Teslas left.
And a not insignificant proportion of owners would probably have suffered injury or even death as a result of the FSD.
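Just to show why I say 'almost every Tesla' - a quick back-of-envelope, using figures I'm assuming from that handful of videos (so very rough, not real accident statistics): call it one accident-worthy moment per 3 unattended journeys, and say an owner makes 10 journeys a week.

```python
# Back-of-envelope with assumed figures (NOT real accident data):
# roughly one accident-worthy moment per 3 unattended FSD journeys,
# and an owner making about 10 journeys per week.

p_incident_per_journey = 1 / 3     # assumed from the handful of 10.5 videos
journeys_per_week = 10             # assumed typical usage
weeks = 6                          # roughly 1 to 2 months

p_clean = (1 - p_incident_per_journey) ** (journeys_per_week * weeks)
print(f"Chance of one car going {weeks} weeks with no incident: {p_clean:.1e}")
# ~2.7e-11 - i.e. effectively every car has at least one incident
```

Even if my assumed incident rate is ten times too pessimistic, the great majority of cars would still have had at least one bump in that period.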
When you think about it like this, it becomes clear why Tesla are only releasing the FSD beta as a 'driver assistance' aid, and only to a select number of owners.
But back to my point...
With the progress being more logarithmic than exponential, how do Tesla get from here to releasing proper self-driving - 'hands off the wheel' - as per Waymo?
Clearly, any self driving car is still going to need some interaction, even if just to ask the riders where they want the car to park - e.g. which driveway, etc. In those cases the car can still determine what's safe, and it could just ask a passenger without a driver's licence which of the safe options they want. So this isn't what I'm talking about when it comes to deciding whether to release an FSD.
I'm talking about the criteria for deciding whether the software can be trusted enough to roll out to millions of owners, such that they could go to sleep on the journey.
With the increments in functionality getting smaller and smaller each time - coupled with the risk that changes could break prior functionality - what's the road map from here?
On the Tesla videos on YouTube, it's still a case of disengagements (or driver inputs) per journey, rather than journeys before a disengagement.
And this is just a select few showing only their journeys.
As already mentioned in my previous posts, I've used the analogy of building a skyscraper, and I believe that Waymo have targeted a much taller skyscraper, so to speak, with their approach and architecture. So I believe Waymo are less likely to suffer from the ever-diminishing returns before they hit something 'acceptable' - they'll still be on a steeper, upward part of the curve when they reach 'acceptable'. In fact, that's probably how they already got to geofenced operation without anyone behind the wheel: because they targeted well beyond what was needed for the geofenced area, so while they're still aiming much further ('higher') elsewhere, it was probably pretty clear to Waymo that it was already 'good enough' to release on the nice, dry, large roads and low traffic volumes of Phoenix.
But for Tesla... presumably when Tesla releases the FSD 'final release', it will be expected to go to a large number of owners (as to how many that is, you'll know the number better than I do!), and presumably the expectation will be in the region of less than 1 disengagement (which, without a driver, would mean an accident!) per millions of journeys!
But just think about it... think about all those enthusiasts' videos on YouTube... how many excited YouTube Tesla owners will need to be pumping out video after video after video, proclaiming "hey, no disengagements!"... before you could have confidence that Teslas wouldn't have any disengagements in millions of journeys?
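To give a feel for the sheer amount of clean evidence needed: statistically (I'm leaning on the standard 'rule of three' here, just as a rough illustration), to be ~95% confident the true rate is below 1 disengagement per N journeys, you need roughly 3×N consecutive journeys with zero disengagements:

```python
# Rough illustration using the statistical "rule of three":
# to be ~95% confident the true failure rate is below 1-in-N,
# you need roughly 3*N consecutive trials with zero failures.
import math

def clean_journeys_needed(target_rate, confidence=0.95):
    """Zero-disengagement journeys needed to bound the disengagement
    rate below `target_rate` at the given confidence level."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - target_rate))

for n in (1_000, 100_000, 1_000_000):
    print(f"< 1 per {n:>9,} journeys: ~{clean_journeys_needed(1 / n):,} "
          "clean journeys of evidence needed")
```

So 'confidence over millions of journeys' literally means millions of consecutive disengagement-free journeys of evidence - a handful of happy YouTube videos doesn't even register.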
There are going to be a lot of bored, frustrated Tesla FSD beta testers putting out YouTube videos complaining, "come on Tesla, surely this is ready to release now!?"
I mean, Tesla aren't going to release the FSD final release just because one guy on YouTube had his first drive without any disengagements!
And when you think about it like this, it really does give an idea how far Tesla still have to go yet, before they could realistically consider releasing FSD properly.
But to Tesla's credit, I will say one thing...
"I think its clear that the Tesla's system works best on well marked roads"
I actually disagree with this.
I think Tesla's 'gung-ho' approach to self driving, which I don't much like as a general principle, does give them one advantage.
I actually think Tesla will probably 'cope' 'better' with obscured roads and non-existent road markings. Things like roads covered in leaves, or roads covered in snow and such like. At least in terms of deciding where the road should be.
In some of the Tesla videos, owners have already tested them in these scenarios, and even though the car (particularly in snow) really doesn't seem to have been developed to handle snow yet - it sure as hell ain't adjusting its speed to suit! - to give the Tesla some credit, it does still seem able to infer where the edge of the road is likely to be.
I mean, true, just like a human driver in such conditions, it's probably winging it to a large extent!
But Waymo on the other hand seem to be more focussed on robust recognition of what's around the vehicle. Really making sure, with the lidar, etc, that it is confident in what it's 'seeing'.
Waymo's approach is great for normal driving conditions, where the road surface is visible.
But my expectation is that Waymo will probably find obscured road surfaces, and poor road markings or poor road surfaces, more of a problem than Tesla would - at least in terms of deciding what route the car should take - though I suspect Waymo will be better at knowing how to drive in those situations (e.g. recognising snow and the need to slow down, etc.), even if Tesla would likely be better at deciding where the edge of the road is.
That's not to say that Waymo, by targeting a stronger foundation for a higher skyscraper, won't be able to deal with these situations.
Similarly, it's quite plausible that the very thing I believe is working to Tesla's advantage in these situations is what's hobbling Tesla in normal situations.
I mean, in my view Tesla might be trying to be too general - I think this is why Musk refuses to consider geofenced areas, etc. - and this is perhaps why it's struggling, e.g. to get into the correct lane for turns. You can tell from the Tesla visualisations that it's often unsure where the edge of the road is, even in clear situations that Waymo has absolutely no problem with. And that, I believe, is because Tesla is trying to be quite broad and general in its road edge (and lane) determination, and that's undermining it in some situations where, if you took Waymo's more thorough approach of detecting what's around the vehicle, detecting the edge and lanes should be easy.
In essence, in my view, Tesla is more of a jack of all trades when it comes to detecting the edge of the road, but a master of none.
In other words, when the road is clear, and the markings are clear, then I believe that Waymo has a clear lead vs Tesla.
But once the markings are almost gone, or substantially obscured by leaves or washed over with mud, etc., I suspect Waymo's capability at the moment will probably drop off quite rapidly (I mean when it gets extreme), whereas Tesla might still manage an 'adequate' job of doing something 'acceptable' - though whether it would moderate its behaviour and drive more slowly to adapt to the risk of things (stones, rocks, potholes, etc.) potentially being hidden, or to the poorer traction of such a surface, I very much have my doubts from the Tesla videos I've seen so far!