odysseus2000 wrote:
BobbyD
Because you just claimed that
odysseus2000 wrote:
Tesla score is backed up by accident statistics.
It is an inference from observed behaviour.
If the accident statistics were terrible, the beta trials would have been made illegal, but they haven't hence the accident statistics must be acceptable to the authorities.
In investment and trading one often does not have specific information and one has to deduce it from what one knows.
Regards,
Tesla seems to have taken the approach that their FSD is just a 'driving aid', and therefore requires the driver to remain in control of the vehicle at all times. So I suspect that has removed much (though perhaps not all) of the need to apply for licences for 'self driving' car testing on public roads. It's officially not a self-driving car - not even when the FSD beta is engaged.
As for their accident statistics, it's getting too late in the evening to go searching for the references now, but I seem to recall the National Transportation Safety Board has quite a lengthy list of incidents involving Teslas that they're investigating. From what I gather, the regulators have already forced Tesla to add more warnings about the driver needing to be in control, plus software checks that the hands are on the steering wheel, and such like.
YouTube has just recommended this video to me...
Needless to say, the Tesla owner who produced the video is praising how quickly Tesla pushed out a fix.
Personally, I think that speed of deployment is something of concern, not something to be praised.
The self driving stack is quite complex, and there is always the possibility that different layers might interact with each other in unexpected ways in response to even small tweaks.
So I would hope that every (software) build that they produce would undergo an awful lot of formal testing before being rolled out, even to beta testers.
Let's get real here - this software is being used in scenarios where people really could die if it doesn't work appropriately.
I know that when I write software for relatively small-scale applications in comparison, even the builds my team produces take a while to complete, with all the unit tests etc.
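To make the release-gating point concrete, here's a toy sketch of the kind of regression gate I'd expect a build to pass before rollout. To be clear, this is purely illustrative - the scenario names, the PerceptionStack class and the pass/fail behaviour are all invented, and say nothing about Tesla's actual process:

```python
# Toy sketch of a release gate: a build only ships if every scenario in a
# regression suite still produces the expected action. All names and
# behaviours here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    expected_action: str  # what a correct stack should do in this scenario

class PerceptionStack:
    """Stand-in for one software build; maps scenarios to chosen actions."""
    def __init__(self, version: str, behaviour: dict):
        self.version = version
        self.behaviour = behaviour

    def decide(self, scenario: Scenario) -> str:
        return self.behaviour.get(scenario.name, "unknown")

def release_gate(build: PerceptionStack, suite: list) -> bool:
    """Every scenario must pass; a single regression blocks the release."""
    failures = [s.name for s in suite
                if build.decide(s) != s.expected_action]
    if failures:
        print(f"build {build.version} BLOCKED, regressions: {failures}")
        return False
    print(f"build {build.version} cleared for rollout")
    return True

suite = [
    Scenario("pedestrian_at_kerb", "slow_and_yield"),
    Scenario("stationary_truck_ahead", "brake"),
]

# A tweak to one layer silently breaks a seemingly unrelated behaviour:
old = PerceptionStack("v1.0", {"pedestrian_at_kerb": "slow_and_yield",
                               "stationary_truck_ahead": "brake"})
new = PerceptionStack("v1.1", {"pedestrian_at_kerb": "slow_and_yield",
                               "stationary_truck_ahead": "proceed"})

release_gate(old, suite)  # cleared for rollout
release_gate(new, suite)  # blocked: stationary_truck_ahead regressed
```

The point of the sketch is simply that a suite like this takes time to run across many builds and many scenarios - which is exactly why a 24-hour turnaround makes me wonder how much of it could actually have been exercised.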
So for Tesla to release that update "just 24hrs later" raises alarm bells for me.
Surely putting out a buggy release is a concern in itself, and calls the testing into question even then ... but then to respond to a buggy release by even more rapidly releasing yet another new version to supposedly 'fix' the issue...
It really is beginning to feel like they're winging it.
Was this just a one off mistake?
Or a symptom that they could now be feeling the pressure of a stack that isn't up to the job? Are attempts to squeeze more out of the stack now causing existing functionality to break because the hardware isn't up to handling it all? (The skyscraper foundation analogy that I've used before.)
Or worse, is this perhaps an indication that they've lost architectural / engineering integrity in their stack development - are changes in some places breaking existing functionality in others? If they're rushing to change the architecture - e.g. to introduce temporal tracking of things, etc - irrespective of whether the hardware foundation could in theory cope with it, are they now taking too many short cuts, and losing control as a result?
OK, so far this has only happened once, so perhaps I'm prematurely jumping to possibilities.
Though I seem to recall that on some of the previous Tesla videos, the commentators have said they felt some aspects went backwards when new FSD beta releases came out.
BTW, if anyone's interested, it looks like the Waymo team have re-started putting out technical talks / videos about their self-driving efforts... (they used to do this early on, and it was very interesting, but then things went all quiet .. happily it looks like they're doing them again)...
https://youtu.be/oJ96bgmSaW0?list=PLcvM ... B6dJU&t=77 Very interesting video... comparing what I've seen in that video to what I've seen from Tesla presentations, I still have the distinct impression that Waymo are aiming for a skyscraper orders of magnitude taller than Tesla's.
The things that Waymo are considering still seem to me to go way beyond what Tesla seem to be considering.
In my view, Tesla really are taking a very big gamble that they've pitched their stack appropriately.
If Tesla are right, then the additional costs for Waymo will make it no contest - Tesla would clearly have the cost advantage.
But only if they can make it work - and I really do have doubts (as I've bored people with enough already) that Tesla's current stack is up to the job.
Just look at the level of detail that the Waymo lidar is picking up in objects!
https://youtu.be/oJ96bgmSaW0?list=PLcvM ... 6dJU&t=490 Compare that to Tesla's fairly low-resolution video being used to estimate depth!
That extra level of detail gives Waymo a huge lead in being able to predict behaviours.
I mean, from the videos I've seen on YouTube, Tesla's visualisations seem to barely detect people half the time; and even when they do detect people, they seem to be displayed as almost fixed 'sprites' in terms of arm and leg positions, with the sprites jumping 90 degrees seemingly at random at times, and not always matching the direction the person is facing.
Whereas Waymo's detection of people includes detailed, stable data from the lidar as to the position and motion of the limbs, etc. If you want to predict whether a person walking close to the edge of the road is likely to step out into it, that extra detail can make the difference between reliable prediction and guesswork.
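To illustrate why that extra detail matters, here's a deliberately crude sketch. Everything in it is invented - the features, the weights and the thresholds bear no relation to what Waymo or Tesla actually compute - but it shows the basic point: with only a coarse bounding box, two pedestrians at the same distance from the kerb look identical, while pose-level cues (gait, lean, facing) let you tell them apart:

```python
# Toy illustration (not anyone's real method): a crude step-out risk score
# built from pose-level cues. All features, weights and thresholds are
# invented for the sake of the example.

from dataclasses import dataclass

@dataclass
class PedestrianTrack:
    dist_from_kerb_m: float           # lateral distance to the road edge
    velocity_toward_road_mps: float   # from frame-to-frame tracking
    torso_lean_deg: float             # from pose estimation; 0 = upright
    facing_road: bool                 # head/torso orientation

def step_out_risk(p: PedestrianTrack) -> float:
    """Crude risk score in [0, 1]; higher = more likely to step out."""
    if p.dist_from_kerb_m > 2.0:      # too far from the kerb to matter
        return 0.0
    risk = 0.0
    risk += 0.4 * max(0.0, min(1.0, p.velocity_toward_road_mps / 1.5))
    risk += 0.3 * max(0.0, min(1.0, p.torso_lean_deg / 20.0))
    risk += 0.3 * (1.0 if p.facing_road else 0.0)
    return min(1.0, risk)

# Two pedestrians at the same distance from the kerb: box-level data alone
# can't separate them, pose-level data can.
idle = PedestrianTrack(0.5, 0.0, 2.0, False)       # standing, facing away
committed = PedestrianTrack(0.5, 1.2, 15.0, True)  # leaning, moving roadward

print(step_out_risk(idle))       # low risk
print(step_out_risk(committed))  # high risk
```

A box-only detector sees both pedestrians as "person, 0.5 m from the kerb"; it's the limb and orientation data that supplies the inputs this kind of prediction actually needs.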
The Waymo presentation goes into quite some depth on predicting the behaviour of the things around the vehicle...
https://youtu.be/oJ96bgmSaW0?list=PLcvM ... dJU&t=1096 I don't recall seeing anything similar from the Tesla technical videos, though if anyone has any references to any that do show it, I'd definitely be interested to see them.