odysseus2000 wrote:Humans have been driving cars for over 100 years using two optical sensors called eyes. Clearly they are not that good at it, but one imagines that 6 eyes coupled with much faster reflexes and non-distractible would likely be better.
Computers certainly have some advantages, as you mention, non-distractibility probably being the most significant.
But where they are seriously hindered is in their cognitive (in)abilities. They lack the capability to reason or make inferences (but see [1]). They don't understand what the things they see actually are. At best, all they have is typical behaviour patterns for the predetermined categories of entities they've been developed to identify.
And that is a huge disadvantage for self-driving cars versus humans. They can't judge, for example, whether a load on a lorry you're following is poorly tied down and at serious risk of falling off. They're unlikely to predict a falling tree, or telegraph lines at risk of coming down in a storm. They're unlikely to infer that a road might be undermined by flood water washing underneath it, or that a bridge is unsafe when part of its wall has started to fall into the river below. Or that a cyclist is about to hit an object in the road (a small pipe, say) that might suddenly throw them off balance.
Don't get me wrong... the technology is amazing, and I do believe it will do better than humans overall, but I don't agree that computer eyes are de facto much better than human ones. In most cases the extra processing speed isn't all that beneficial. That's not to say there won't be some situations where it's quite literally a life saver, but these are likely to be infrequent edge cases. I suspect the lack of inference ability will be more of a hindrance.
Most important is being able to recognise what's around the car - as a minimum, to recognise solid things that could be hit. Fast reactions are useless if you don't spot the person standing there at night wearing black on a black road. Or picture a group of school kids huddled together looking at something, faces hidden, all wearing the same colour uniform so that their outlines merge, standing in front of a vehicle the same colour as their uniforms: the vision system may not even register that there's an 'object' there that it needs to avoid, irrespective of what it is.
Using cameras will always be susceptible to optical effects - camouflage (intentional or accidental), reflections, optical illusions and the like.
Even just establishing depth from 2D images is not particularly accurate. I can't help feeling this is why the FSD display on Teslas tends to be somewhat jittery and prone to changing its mind quite frequently in Tesla videos on YouTube, compared with the equivalent Waymo videos. (Just watch the video I link to below... it's quite nauseating watching the cars on the visualisation jumping around... some even flip 90 degrees in an instant. Other times, as a car becomes unobstructed, it's very apparent that the distance judgement is way out, as the (in reality stationary) car moves several metres over a second or two as it comes into view... you wouldn't have that problem with lidar!)
Establishing depth from images is also highly costly in processing terms (think both power consumption and computational capacity).
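To give a feel for why camera-derived depth can be jittery, here's a toy sketch of the standard stereo-disparity relationship (the focal length and baseline below are illustrative values I've picked, not Tesla's actual camera parameters). The key point: at range, a one-pixel error in matching features between images shifts the depth estimate by many metres.

```python
# Minimal sketch of stereo depth estimation (illustrative numbers only).
#
# Depth from stereo: Z = f * B / d
#   f = focal length in pixels
#   B = baseline between the two cameras (metres)
#   d = disparity in pixels (how far a feature shifts between the images)

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return estimated depth in metres for a given pixel disparity."""
    return focal_px * baseline_m / disparity_px

f, B = 1000.0, 0.3  # hypothetical: 1000 px focal length, 30 cm baseline

# At ~50 m, a one-pixel disparity error moves the estimate by ~10 m:
z_estimate = depth_from_disparity(f, B, 6.0)       # 50.0 m
z_one_px_off = depth_from_disparity(f, B, 5.0)     # 60.0 m
print(z_estimate, z_one_px_off)
```

Because depth goes as 1/disparity, the error grows rapidly with distance, which is consistent with distant cars on the visualisation appearing to slide around. (This is just the textbook stereo relationship; Tesla's actual pipeline infers depth with neural networks, which is costlier still.)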
When safety is so important, why wouldn't you use a dedicated sensor to give you distance to the things around you, a sensor which also gives you redundancy as well as an alternative channel which can continue providing valuable information when other channels are out of their comfort zone?
As for cost, Google / Waymo and others are putting significant investment into lidar development, and it looks highly likely that when mass produced, the costs will be significantly reduced.
odysseus2000 wrote:There are huge problems with lidar and radar as clearly outlined by the Tesla's Andrej Karpathy (senior director of their robotic driving AI team) in videos I have linked to on this site, even before one considers the cost, complexity and ugliness of having such additional systems on a car.
Everything depends on what Tesla can get to work, but so far Tesla, at least imho, after I have studied a lot of systems, is very close to having a viable system but most of the others are not.
Regards,
https://asia.nikkei.com/Business/Automo ... omous-cars
"But technological innovation has significantly undermined the assumption underlying Musk's argument.
Velodyne CEO Anand Gopalan told Nikkei that Musk's view of lidar technology is "five, six years out of date."
(... )
Velodyne has sharply reduced lidar production costs by developing solid-state lidars. This new approach has helped shrink the size of the sensors, eliminate moving parts in the optical mechanisms and enable the kind of mass manufacturing that has brought costs down.
(...)
Luminar Technologies, a U.S. vehicle sensor and software startup, has also developed low-priced lidar sensors priced at $500 to $1,000.
Luminar's high-performance lidar sensors can accurately detect objects ahead of the car out to 250 meters away and grasp the situation around the vehicle with a perception accuracy of several centimeters. The sensors can also detect dark objects, like black debris or a person wearing black clothes, even on roads with minimum reflectivity."
Much is made of Tesla's data gathering in the real world. But without lidar on their vehicles, how can they know whether the image recognition did indeed spot all the objects around the vehicle?
If, in a couple of years, Waymo turned round and said "you know what, we've been using lidar all this time, and in the past 24 months none of our cars has ever failed to spot something optically that the lidar detected, so we're confident we can get rid of the lidar"... then I'd think: OK, fine, there's no ego behind the decision and they've got the data, so I'd trust them that it's safe to remove the lidar.
But when it comes to Tesla, headed by an ego that dismissed lidar from the outset, and which therefore isn't even gathering real-world data to inform the decision... what confidence can we have that they'd actually admit their mistake if they found vision lacking? Would they even know? If they aren't using lidar at all, how would they know their vehicles wouldn't be better with it?
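The validation point can be made concrete: with lidar on board, you can measure how often the vision stack misses something the lidar saw. The following is a hypothetical sketch (the function, the nearest-neighbour matching and the sample positions are all illustrative, not any real Waymo or Tesla pipeline):

```python
# Hypothetical sketch: use lidar detections as a reference to measure
# how often the vision stack misses an object.

def vision_miss_rate(lidar_objects, vision_objects, match_radius=1.0):
    """Fraction of lidar-detected objects with no vision detection nearby.

    Objects are (x, y) positions in metres; a lidar object counts as
    'seen' if any vision detection lies within match_radius of it.
    """
    def matched(obj):
        ox, oy = obj
        return any((ox - vx) ** 2 + (oy - vy) ** 2 <= match_radius ** 2
                   for vx, vy in vision_objects)

    misses = [obj for obj in lidar_objects if not matched(obj)]
    return len(misses) / len(lidar_objects) if lidar_objects else 0.0

lidar = [(5.0, 0.0), (20.0, 1.0), (40.0, -2.0)]
vision = [(5.2, 0.1), (19.8, 1.1)]  # vision missed the object at 40 m

print(vision_miss_rate(lidar, vision))  # 1 of 3 lidar objects unseen
```

Without an independent channel like this, "vision missed an object" only shows up when something actually goes wrong on the road, which is exactly the worrying gap in a vision-only data-gathering fleet.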
By the way, even Tesla engineers have admitted to California authorities that they're a long way from getting past Level 2 autonomy.
Waymo are already offering services in the real world with no safety driver in the driving seat.
Waymo - keep your hands off the wheel
Tesla - keep your hands on the wheel
I don't see what grounds at all you have to think that "[Tesla] is very close to having a viable system but most of the others are not"
Prompted by writing this, I've just checked on YouTube to see what the latest is, and by pure coincidence this Tesla video, just posted, seems very enlightening...
https://www.youtube.com/watch?v=NmMxH8j ... el=AIDRIVR
It starts by detailing some new features in the beta.
And you know what, I think it's ridiculous that they're calling it a beta... given the level of new technology they're only just partially introducing, come off it... this is prototype stuff, not beta. Beta testing is about identifying bugs in a product that is otherwise believed to be pretty much finished. But what's being released in these FSD 'betas' isn't just bug fixes... it's whole new predictive models and the like.
And omg... @ 7 minutes, it pulls out right in front of an approaching vehicle, causing that vehicle to have to brake sharply! I haven't seen earlier versions do that!
From what I'm seeing of Tesla, it really does look like they're potentially going to struggle to advance from here. Some of the things in that video seem to undo good behaviour from previous versions. Even the guy narrating the video admits that its behaviour can be very variable. And you can also see the jitteriness in the planned route... it's like watching a plasma sphere.
Those are really bad signs that it's nowhere near production ready yet. If it were near production ready, its behaviour should be quite 'settled'. That it's still jittery suggests to me that it's pushing right at the limits of what "it" is capable of. By "it" I mean the whole stack - cameras, vision, software architecture, etc.
I think I've used the skyscraper analogy before... when you build a skyscraper, you can't just keep adding floors until you reach the height you want... you need the right foundation and the right structure in the floors below to support your target height.
If you misjudge and find you need to go higher than you built your foundations to support... well, let's put it this way... it's always been a matter of pride and competition between building designers and countries, etc, to get the worlds highest skyscrapers.
Pop quiz... when one country has leapt ahead of another, how many times has the previous record holder just gone back and added a few more floors on top of their old record-holding building to make it the new record holder once again? I don't know for sure, but I don't believe it's many, if ever.
Almost invariably, when someone builds a new world-record skyscraper, it's a new dedicated building, built from scratch, specifically designed to reach the new record height.
For Tesla shareholders' sake, let's hope the jitteriness and inconsistent behaviour in otherwise similar situations, still present in the FSD beta, isn't a sign that the Tesla FSD skyscraper has reached the limit of what its foundations can support.
--
[1] I'm well aware that there have been, over the past several decades, numerous attempts at developing computers that 'reason', but in the context of this post, to all practical intents and purposes these are not relevant; these aspects of AI have not yet had their 'revolution moment' the way AI image processing did roughly a decade ago. So while simple, limited 'laboratory' reasoning is possible, its scope is so limited that it is not relevant to the point I'm making in the body of my post.