
Musk endeavours

The Big Picture Place
odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#459517

Postby odysseus2000 » November 20th, 2021, 10:52 am

This is a super interesting 10 min 10 second video on what killed the first electric cars, well researched & accurate:

https://youtu.be/Xzu2EuaLOCY

Regards,

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#460084

Postby odysseus2000 » November 22nd, 2021, 12:41 pm

Alex Potter of Piper Sandler gives his view on Tesla (1.5 hours); well worth watching imho:

https://youtu.be/ln2czFNI6mk

Regards,

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#460269

Postby odysseus2000 » November 23rd, 2021, 10:36 am

From 2022, under six weeks away, all new buildings in the UK will be required to have electric charging points:

https://cleantechnica.com/2021/11/22/uk ... g-in-2022/

Not sure if this includes private homes.

The article also mentions a big hydrogen production facility, suggesting its backer had some good lobbyists!

Regards,

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#460304

Postby odysseus2000 » November 23rd, 2021, 1:07 pm

Third age of aviation, Rolls Royce electric plane breaks records:

https://youtu.be/kd-RDX1IjuM

More customers for Tesla batteries?

Regards,

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#460309

Postby odysseus2000 » November 23rd, 2021, 1:41 pm

Japan to double EV incentives to match Europe and US:

https://asia.nikkei.com/Business/Automo ... and-Europe

All good for Tesla cars!

Regards,

BobbyD
Lemon Half
Posts: 7187
Joined: January 22nd, 2017, 2:29 pm
Has thanked: 289 times
Been thanked: 679 times

Re: Musk endeavours

#460413

Postby BobbyD » November 23rd, 2021, 9:48 pm

BERLIN, Nov 23 (Reuters) - Employees at Tesla's (TSLA.O) huge new factory near Berlin will elect a works council to represent their interests, a German trade union said on Tuesday.

The IG Metall trade union said seven employees had taken the first step towards setting up a works council, planning to choose an election committee on Nov. 29.


- https://www.reuters.com/markets/europe/ ... 021-11-23/

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#460490

Postby odysseus2000 » November 24th, 2021, 9:13 am

BobbyD wrote:
BERLIN, Nov 23 (Reuters) - Employees at Tesla's (TSLA.O) huge new factory near Berlin will elect a works council to represent their interests, a German trade union said on Tuesday.

The IG Metall trade union said seven employees had taken the first step towards setting up a works council, planning to choose an election committee on Nov. 29.


- https://www.reuters.com/markets/europe/ ... 021-11-23/


Does this mean the employees will start to do some work making cars?

As things are a works council sounds like an oxymoron.

Regards,

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#460701

Postby odysseus2000 » November 24th, 2021, 9:51 pm

10.5 looks to be better:

https://youtu.be/4Jb-YJWFN-w

Regards,

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#460746

Postby odysseus2000 » November 25th, 2021, 8:38 am

This is an epic Munro video on what he sees as the future (46 mins) for BEVs, legacy auto and geopolitics. If he is anything like right (and I believe he is) we are going into some extremely difficult times:

https://youtu.be/g63SJwFdGTQ

Regards,

onthemove
Lemon Slice
Posts: 534
Joined: June 24th, 2017, 4:03 pm
Has thanked: 726 times
Been thanked: 471 times

Re: Musk endeavours

#460962

Postby onthemove » November 25th, 2021, 9:57 pm

odysseus2000 wrote:10.5 looks to be better:

https://youtu.be/4Jb-YJWFN-w

Regards,


Still a very long way to go though... (these videos should all start at the (ahem) 'interesting' points)

https://youtu.be/vqDOYq51AzE?t=112

https://youtu.be/vqDOYq51AzE?t=734
"That was terrible ... that was the worst thing I've ever seen in my life"

https://youtu.be/OcRZrAlLjNU?t=554

And, imv, the worst...
https://youtu.be/OcRZrAlLjNU?t=898
Twice it starts to pull out in front of approaching cars. The driver gives up with the FSD and decides to take the junction manually.

The problem the Tesla still clearly has is that it can take the same junction perfectly a number of times, then, out of the blue, completely fluff it - sometimes dangerously so. Or, in the case of that last video, there's clearly some subtle difference about that junction versus others that is causing the FSD serious problems - pulling-out-in-front-of-moving-vehicles level of serious. As Elon Musk says, at its most basic, driving is about not crashing... so how on earth is the Tesla at beta 10.5 not recognising that pulling out in front of those cars would almost certainly result in a real and proper crash!?

Tesla clearly have got a very long way to go before the car can confidently plot a good path. By confident, I don't just mean being 'assertive', I mean plotting a good path and confidently (and appropriately!) sticking to it. I'm talking here about situations where a human driver would be totally at ease and fully confident that the planned route is good, stable and totally unambiguous - i.e. where the road edges and markings are reasonably clear, and there are few if any potential curveballs (pedestrians or cars likely to pull out).

It's just way, way, wayyyyy too jumpy, and this, imv, explains why it can take the same junction fine many times and then out of the blue completely fluff it up.

The Tesla clearly doesn't have a robust 'awareness' of the environment around it. Small changes in light, or marginal changes in position, etc, that wouldn't make any material difference to a human's interpretation of the scene, clearly make big differences to how the Tesla interprets what's around it, even when a human seeing the same view would be totally relaxed and confident in their understanding of the world around the car.

I must admit, it's fascinating watching these videos... I seem to be hooked on a few of the Tesla owners' channels now, and YouTube seems to have recognised this, putting their videos at the very top of my home page each time they release a new one :)

But let's be realistic - these have a long way to go yet before Tesla could consider releasing what anyone would reasonably call a 'self driving' update to the general public.

A couple of elephants in the room that get overlooked in the videos are the two 'interventions' that Tesla has very cleverly got everyone not counting as interventions...

Specifically the need to tap the accelerator to tell the car it's safe to go. In the videos, the drivers seem to do this frequently where the car isn't confident enough to move (or move assertively enough), but the drivers never seem to view these as 'interventions'. Yet if the driver weren't in the driving seat, where would that leave the car!?

I mean, just think about that for a minute... we can see situations where the drivers are having to intervene because the car is doing something dangerous or illegal, yet at other times, the car is too unsure whether to go or not. Just think about that ... on the one hand, the car is doing dangerous things some times, and at other times it doesn't have the confidence to take what should be easy and safe manoeuvres. To me this is just reinforcing that the car doesn't have a good 'model' of the world around the vehicle and how to drive in that model world.

The other one being the speed... it still seems like the drivers are 'dialling' in the 'cruising' speed because the car often wouldn't be going the speed the driver thinks is appropriate (sometimes faster, sometimes slower). But let's get real here... driving is just two things ... speed and direction... if the driver is having to provide input as to the speed, then that needs considering as an intervention. You can't have a full self driving car needing someone to keep adjusting the appropriate cruising speed - the car needs to decide that! (To be fair, I don't consider the choice of driving style, that you set for the whole ride, to be in that category)

When evaluating how well FSD is doing, any input at all that is required, needs to be viewed as an intervention, even if it doesn't result in the blue 'self driving' icon considering it a disengagement.

One final one that you might be interested in...

Waymo vs Tesla (10.4)!

https://youtu.be/1BsWFzgUBQY

Same start location and destination.

My observations would be...

- the Waymo had to deal with emergency services dealing with an accident, which the Tesla didn't encounter. And the Waymo correctly identified the emergency vehicles and their lights, etc.

- the Tesla driver admitted they had to press the accelerator (near the end) to make the Tesla go - it wasn't moving of its own accord. To me, that should have been a fail - there was no-one in the Waymo to press the accelerator!

The video seems to compare the times, but I think this is rather unfair. The Waymo doesn't have any safety driver at all monitoring continuously ready to press the brakes if it does something dangerous (as was needed in a couple of the Tesla videos above!). The Waymos do have the ability to contact base, but Waymo base aren't monitoring them real time ready to hit the brakes if it does something stupid - the car has to come safely to a stop all by itself before the Waymo team can intervene remotely - and as the JJ Ricks videos showed, when base do intervene they call up the passengers to tell them that they are doing so (iirc I've seen two such instances where the Waymo required base to intervene), and they take a little time to evaluate the situation of the car before deciding how to proceed. They're definitely not 'in the (remote) driving seat' so to speak!

I mention this, because clearly the Tesla can be running far more assertively, and be taking far more risks, because at all times, if it stuffs up, there's a human there ready to hit the brakes.

So the presenter's stated aim to test 'which gets there quicker' is a little naïve and unfair in my view. If you want to make it fair, take the Tesla driver out of the driving seat. With the driver in the driving seat, all you are testing is Tesla FSD as a driver assistance technology - where it can take more risks and be more assertive - not a self driving technology. So of course the Tesla is likely to get there quicker.

It really can't be stressed enough that the Waymo's geofencing isn't a restriction or limitation compared to Tesla...

Absolutely the opposite ... Waymo's geofencing is allowing it to do something - operate without a safety driver - that Tesla cannot do anywhere.

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#461004

Postby odysseus2000 » November 26th, 2021, 12:27 am

Hi onthemove,

I concur.

Having to do something in a Tesla is an intervention.

onthemove wrote:It really can't be stressed enough that the Waymo's geofencing isn't a restriction or limitation compared to Tesla...

Absolutely the opposite ... Waymo's geofencing is allowing it to do something - operate without a safety driver - that Tesla cannot do anywhere.


Yes, excellent point.

My observation from many Tesla videos is that Tesla self driving works best when the road markings are good. This was a point made a long time ago on this board that at the time I dismissed; Munro has also reiterated it. I think it's clear that Tesla's system works best on well marked roads. I can remember, when I was learning to drive, that I much preferred good lane markings and didn't like tarmac that had been freshly laid and was waiting for its markings. With more experience, lane markings mattered less, but at the stage Tesla is at, lane markings clearly matter to it: with good markings it tends to progress sensibly, but when they are poor or non-existent it struggles.

However, this is only part of Tesla's problem. The inability to find unambiguous route solutions, and having similar weights for the candidate routes it does find, leads to the bad behaviour seen in the videos. If Tesla are to run robotaxis then they need software that is a lot better than humans.

According to this:

https://www.bankrate.com/insurance/car/ ... tatistics/

in 2020 people in the US drove 2,830 billion miles and there were 42,060 fatal car crashes, or 42,060/2,830e9 ≈ 15e-9 fatal crashes per mile. To be 10x better, a robotic system would need about 4,206 fatal crashes over the same 2,830 billion miles.

In 2019 there were about 21,000 fatalities involving alcohol, speeding or pedestrians, out of a total of 36,835. If alcohol, speeding and pedestrian deaths could be avoided, that would reduce the figure to 36,835 - 21,000 = 15,835, still well shy of the roughly 3,684 needed to get the number down by 90%, but still worth having.
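As a quick back-of-the-envelope check of that arithmetic, a minimal Python sketch (the figures are simply the ones quoted above, not independently verified):

Code:
# Rough check of the fatality arithmetic above.
# Figures are those quoted in the post, not independently verified.

miles_driven_2020 = 2830e9        # total US vehicle miles, 2020
fatal_crashes_2020 = 42060        # fatal crashes, 2020

rate = fatal_crashes_2020 / miles_driven_2020
print(f"fatal crashes per mile: {rate:.2e}")                  # ~1.49e-08

# "10x better" means roughly a tenth of the crashes over the same mileage
print(f"10x-better target: {fatal_crashes_2020 / 10:.0f}")    # 4206

# 2019 figures quoted above
total_2019, avoidable_2019 = 36835, 21000
print(f"residual after alcohol/speed/pedestrian deaths: {total_2019 - avoidable_2019}")  # 15835
print(f"90% reduction target for 2019: {total_2019 / 10:.0f}")                           # ~3684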

My guess is that many of the 15,835 deaths were due to lack of attention which a working AI should be able to fix if it can be made to avoid the "lack of attention" problems that are clear in many of the Tesla videos.

As of now, from these videos, it is clear that Waymo are ahead of Tesla, that the Tesla weighting algorithm is not sufficiently separating safe routes from unsafe ones, and that it is currently worse than a human driver.

My guess is that the software will exceed human capabilities before too long, but I do not know whether the software currently just needs refinement or whether the sensor suite is not good enough. If it is the software, then perhaps neuromorphic chips, with their mimicking of the human brain and low power consumption, may be needed:

https://youtu.be/BDrrjLB7lgE

There is no question in my mind that Tesla FSD is getting better, but I do not know how much better it can get, and as of now it isn't good enough. With exponential phenomena, though, massive improvements can happen very quickly. We live in amazing times.

Regards,

BobbyD
Lemon Half
Posts: 7187
Joined: January 22nd, 2017, 2:29 pm
Has thanked: 289 times
Been thanked: 679 times

Re: Musk endeavours

#461176

Postby BobbyD » November 26th, 2021, 2:11 pm

BERLIN, Nov 26 (Reuters) - Tesla (TSLA.O) has withdrawn its application for state funding for its planned battery factory near Berlin, the electric vehicle maker said on Friday, adding that construction plans were unchanged.

The European Union in January approved a plan that included giving state aid to Tesla, BMW (BMWG.DE) and others to support production of electric vehicle batteries and help the bloc to reduce imports from industry leader China.

Tesla was expected to receive 1.14 billion euros ($1.28 billion) in EU funding for its battery plant in Gruenheide, Brandenburg under the plan, with a final decision likely by the end of the year.

The U.S. carmaker did not say why it had withdrawn its application for funding. The company is itself investing 5 billion euros in the battery plant, according to German economy ministry estimates.


- https://www.reuters.com/business/autos- ... 021-11-26/

onthemove
Lemon Slice
Posts: 534
Joined: June 24th, 2017, 4:03 pm
Has thanked: 726 times
Been thanked: 471 times

Re: Musk endeavours

#461409

Postby onthemove » November 27th, 2021, 2:09 pm

odysseus2000 wrote:There is no question in my mind that Tesla FSD is getting better, but I do not know how much better it can get, and as of now it isn't good enough. With exponential phenomena, though, massive improvements can happen very quickly.


Development like FSD is the opposite of exponential - or rather the exponential is working against progress rather than helping.

I mean, take for example that video I linked where the Tesla wanted to pull out in front of an approaching car.

The driver tried to explain it by saying the approaching road isn't straight, so the cars are coming from a little further left than normal.

To be honest, I'm doubtful that that was simply the reason in that case, but it's good to illustrate my point...

Something about that junction was subtly different which has meant that what works normally, this time didn't.

But stop and think about that for a while... just think how subtle the difference was, and then try to think how many other variations you might encounter... not just in slight differences in the approach angle of the road, but also lighting conditions, weather conditions, what line approaching cars are taking and also what type and size of cars they might be.

This is the issue - the more you try to account for these less common situations, the more you realise that there are bleeping loads of them!

And this is what I mean about the 'exponential' working against you. Basic driving - e.g. simple, nice straight give-ways, etc, with regular cars - is probably relatively 'easy' to manage, and that might get you 90% of the way there. But getting the next 5% of the way there requires a huge increase in development effort compared to the prior 90%. And then even just the next 2% probably requires substantially more again.

Rather than being exponential, progress is more likely to follow a logarithmic pattern...

https://en.wikipedia.org/wiki/File:Logarithm_plots.png

And intuitively that fits as well, because clearly no-one expects cars to reach a point where they race to the FSD finish... they're never going to be perfect... as I think everyone accepts, what is most important is getting them better than people. If the progress were exponential, the closer you get to perfecting FSD the quicker your progress would be, and you'd be slam dunk home with a perfect driver. Clearly that isn't anything like realistic!
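To make the diminishing-returns point concrete, here is a toy model in Python. The assumptions are purely illustrative (a made-up number of distinct driving scenarios with Zipf-like frequencies); nothing here is real driving data:

Code:
import numpy as np

# Toy model of the long tail of driving scenarios (illustration only).
# Assume scenario i occurs with frequency proportional to 1/i (Zipf-like),
# and that "handling" the k most common scenarios gives coverage equal to
# their combined frequency.

N = 1_000_000                        # hypothetical number of distinct scenarios
freqs = 1.0 / np.arange(1, N + 1)    # Zipf-like frequencies
freqs /= freqs.sum()
coverage = np.cumsum(freqs)          # coverage after handling the k most common

for target in (0.90, 0.95, 0.99):
    k = int(np.searchsorted(coverage, target)) + 1
    print(f"{target:.0%} coverage needs the {k:,} most common scenarios handled")

In a toy model like that, going from 90% to 95% coverage roughly doubles the number of scenarios you need to handle, and 99% nearly doubles it again - and the scenarios being added are each individually much rarer, so gathering real-world examples of them is disproportionately expensive. That's exactly the long-tail problem described above.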

Which does raise an interesting point from here on in with self driving cars....

How do you decide that it's 'ready'?

The FSD 10.5 beta testers seem quite happy, and you seem to be impressed.

But let's just consider the collection of videos that we've posted between us in this thread on 10.5. There's what 5 or 6 videos, covering roughly the same amount of journeys.

And in those videos, if the driver hadn't been there, there was the potential for at least one car to come off at a corner, likely damaging the car quite badly (though the occupant would probably be OK, just a little shaken), and another incident where the car would likely have moved into the path of an approaching vehicle and been hit quite hard - quite a bit more serious an accident, resulting in some degree of injury or worse.

Let's imagine... if Tesla decided this 10.5 was good to go and rolled it out to however many Teslas there are out there. And told owners that they could leave the driver's seat unoccupied and sit in the back and go to sleep.

If every Tesla driver uses their car for more than just a couple of miles a couple of times a week, then within possibly 1 to 2 months almost every Tesla out there would likely, as a bare minimum, have had a bump of some sort requiring fixing in a garage. There'd be unlikely to be any undamaged Teslas left.

And a not insignificant proportion of owners would probably have suffered injury or even death as a result of the FSD.

When you think about it like this, it becomes clear why Tesla are only even releasing FSD beta as a 'driver assistance' aid to only a select number of owners.

But back to my point...

With the progress being more logarithmic than exponential, how do Tesla get from here, to releasing a proper self driving - 'hands off the wheel' - as per Waymo?

Clearly, any self driving car, is still going to need some interaction, even if just to ask the riders where they want the car to park - e.g. which driveway, etc. In these cases the car can still determine what's safe, and it could just ask a passenger without a driver's licence which of the safe options the passenger wants. So this isn't what I'm talking about in deciding whether to release an FSD.

I'm talking about the criteria for deciding whether the software can be trusted enough to roll out to millions of owners, such that they could go to sleep on the journey.

With the smaller and smaller each time increments in functionality - coupled with the risk that changes could break prior functionality - what's the road map from here?

In the Tesla videos on YouTube, it's still a case of disengagements (or driver inputs) per journey, rather than journeys between disengagements.

And this is just a select few showing only their journeys.

As already mentioned in my previous posts, I've used the analogy of building a skyscraper, and I believe that Waymo have targeted a much taller skyscraper, so to speak, with their approach and architecture. So I believe Waymo are less likely to suffer from the ever diminishing returns before they hit something 'acceptable' - they'll still be on a steeper, upward part of the curve when they reach 'acceptable'. In fact that's probably how they already got to geofenced operation without anyone behind the wheel... because they targeted well beyond what was needed for the geofenced area, so while they're still aiming much further ('higher') elsewhere, it was probably pretty clear to Waymo it was 'good enough' to release in nice, dry, large-road, low-traffic Phoenix.

But as for Tesla... presumably when Tesla releases FSD 'final release', it will be expected to go to a large number of owners (as to how many that is, you'll know the number better than I do! :D ), and presumably the expectation will be in the region of less than 1 disengagement (which, without a driver, would mean an accident!) per millions of journeys!

But just think about it... think about all those enthusiasts' videos on YouTube... how many excited YouTube Tesla owners would need to be pumping out video after video after video, proclaiming "hey, no disengagements!"... before you could have confidence that Teslas wouldn't have any disengagements in millions of journeys?
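For a rough sense of scale, the statisticians' "rule of three" gives an answer (a sketch only; this has nothing to do with how Tesla actually validate anything): after n consecutive journeys with zero disengagements, the 95% upper bound on the true per-journey disengagement rate is roughly 3/n.

Code:
import math

# "Rule of three": after n consecutive failure-free trials, the 95% upper
# confidence bound on the per-trial failure rate is roughly 3/n (exact form
# below). A rough sketch of how much clean evidence a disengagement-rate
# claim would need.

def clean_journeys_needed(target_rate, confidence=0.95):
    """Zero-disengagement journeys needed before we can claim, with the given
    confidence, that the true per-journey disengagement rate is below target."""
    return math.log(1 - confidence) / math.log(1 - target_rate)

for target in (1e-3, 1e-4, 1e-6):    # 1 per 1,000 / 10,000 / 1,000,000 journeys
    print(f"rate < {target:g}: ~{clean_journeys_needed(target):,.0f} clean journeys")

On that reckoning, a handful of enthusiast videos tells you essentially nothing: supporting a "less than one disengagement per million journeys" claim needs something like three million documented clean journeys.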

There's going to be a lot of bored, frustrated Tesla FSD beta testers putting out youtube videos complaining "come on Tesla, surely this is ready to release now!?"

I mean, Tesla aren't going to release FSD final release just because one guy on youtube had his first drive without any disengagements!

And when you think about it like this, it really does give an idea how far Tesla still have to go yet, before they could realistically consider releasing FSD properly.

But to Tesla's credit, I will say one thing...

"I think its clear that the Tesla's system works best on well marked roads"


I actually disagree with this.

I think Tesla's 'gung-ho' approach to self driving, which I generally don't like as a general principle, does give them one advantage.

I actually think Tesla will probably 'cope' 'better' with obscured roads and non-existent road markings. Things like roads covered in leaves, or roads covered in snow and such like. At least in terms of deciding where the road should be.

In some of the Tesla videos, owners have already tested the cars in these scenarios, and even though the car (particularly in snow) really doesn't seem to have been developed to handle snow yet - it sure as hell ain't adjusting its speed to suit! - to give the Tesla some credit, it does still seem able to infer where the edge of the road is likely to be.

I mean, true, just like a human driver in such conditions, it's probably winging it to a large extent!

But Waymo on the other hand seem to be more focussed on robust recognition of what's around the vehicle. Really making sure, with the lidar, etc, that it is confident in what it's 'seeing'.

Waymo's approach is great for normal driving conditions, where the road surface is visible.

But my expectation is that Waymo will probably find obscured road surfaces, and poor road markings / poor road surfaces, more of a problem than Tesla would - at least in terms of deciding what route the car is expected to take - though I suspect Waymo will be better at knowing how to drive in those situations (e.g. recognising snow and the need to slow down, etc), even though Tesla would likely be better at deciding where the edge of the road is.

That's not to say that Waymo, by targeting a stronger foundation for a higher skyscraper, won't be able to deal with these situations.

Similarly it's quite plausible that the thing that I believe is working to Tesla's advantage in these situations is perhaps what's hobbling Tesla in normal situations.

I mean, in my view Tesla might are trying to be too general - I think this is why Musk refuses to consider geofenced areas, etc - and this is perhaps why it's struggling, e.g. to get into the correct lane for turns, etc. I mean, you can tell from the Tesla visualisations that it often is unsure where the edge of the road is, even in clear situations that Waymo has absolutely no problem with. And that I believe is because Tesla is trying to be quite broad and general in it's road edge (and lane) determination, and that's undermining it in some situations where if you took Waymo's more thorough approach of detecting what's around the vehicle, detecting the edge and lanes should be easy.

In essence, in my view, Tesla is more of a jack of all trades when it comes to detecting the edge of the road, but a master of none.

In other words, when the road is clear, and the markings are clear, then I believe that Waymo has a clear lead vs Tesla.

But once the markings are almost gone, or substantially obstructed by leaves or washed over with mud, etc, I suspect Waymo's capability will probably drop off quite rapidly (I mean when it gets extreme), whereas Tesla might still manage an 'adequate' job of doing something 'acceptable' - though whether it would moderate its behaviour and drive more slowly to allow for things (stones, rocks, potholes, etc) potentially being hidden, or for the poorer traction of such a surface, I very much doubt from the Tesla videos I've seen so far!

Howard
Lemon Quarter
Posts: 1781
Joined: November 4th, 2016, 8:26 pm
Has thanked: 631 times
Been thanked: 777 times

Re: Musk endeavours

#461417

Postby Howard » November 27th, 2021, 3:25 pm

I drove to central London and back yesterday. During the tube strike. Park Lane, Hyde Park Corner, Hammersmith and the M4.

It is inconceivable to me that a Tesla on FSD would ever handle that sort of traffic: cars muscling in from both sides, taxis doing U-turns in Knightsbridge, bus lanes with faded white line markings, pedestrians nipping across the road from all angles, and a plethora of motor scooters and motorbikes weaving in and out. Selecting the correct lane in advance needed knowledge built up from knowing the road.

The Model 3 I have driven had difficulty in correctly registering one incident on a clear motorway.

No video of a Tesla on FSD has ever got close to recording a drive round Hyde Park corner in yesterday's conditions.

regards

Howard

BobbyD
Lemon Half
Posts: 7187
Joined: January 22nd, 2017, 2:29 pm
Has thanked: 289 times
Been thanked: 679 times

Re: Musk endeavours

#461420

Postby BobbyD » November 27th, 2021, 3:59 pm

onthemove wrote:I mean, take for example that video I linked where the Tesla wanted to pull out in front of an approaching car.

The driver tried to explain it by saying the approaching road isn't straight, so the cars are coming from a little further left than normal.

To be honest, I'm doubtful that that was simply the reason in that case, but it's good to illustrate my point...


If I recall correctly from reading FSD threads, there's an issue with Tesla's angle of view for cross traffic. The diagram is on their website but not directly linkable, so via Electrek.

[Image: Tesla Autopilot camera coverage diagram]

- https://electrek.co/2016/10/20/tesla-new-autopilot-hardware-suite-camera-nvidia-tesla-vision/

- https://www.tesla.com/autopilot

ISTR that the fov of that camera is 150°, which would mean it is very possible for a car to be approaching from 'too far left', in the area covered only by short range ultrasonics.
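A trivial bit of geometry illustrates the point (a simplified sketch; the 150° figure is as recalled above, and the layout is idealised rather than Tesla's actual camera placement):

Code:
# A forward-facing camera with a 150 degree horizontal FOV covers bearings up
# to +/-75 degrees from straight ahead; cross traffic approaching at close to
# 90 degrees off-axis sits outside that cone until the car has crept well into
# the junction. (Simplified layout, not Tesla's actual camera placement.)

FOV_DEG = 150.0
HALF_FOV = FOV_DEG / 2.0            # 75 degrees either side of straight ahead

def in_view(bearing_deg):
    """True if an object at this bearing (0 = straight ahead) is in the cone."""
    return abs(bearing_deg) <= HALF_FOV

for bearing in (60, 75, 85, 90):
    status = "visible" if in_view(bearing) else "outside the FOV"
    print(f"cross traffic at {bearing} degrees off-axis: {status}")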


onthemove wrote:
How do you decide that it's 'ready'?


The answer is when the lawyers tell you that you aren't going to have to refund the vast majority of your FSD takings on the grounds that it doesn't fulfil the requirements under which it was sold, but a better question might be: do you ever decide that it is ready...?

onthemove wrote:With the progress being more logarithmic than exponential, how do Tesla get from here, to releasing a proper self driving - 'hands off the wheel' - as per Waymo?


When they license a working system designed from the top down as AD (autonomous driving) from a competitor and quietly retire their precocious cruise control.

onthemove wrote:So this isn't what I'm talking about in deciding whether to release an FSD.

I'm talking about the criteria for deciding whether the software can be trusted enough to roll out to millions of owners, such that they could go to sleep on the journey.


There's long been a disagreement on this board about whether statistical safety will be enough to clear regulatory hurdles. If Tesla were the only company to make it to FSD that might have a chance, but alongside systems which have been built with regulation and oversight in mind from the beginning - in contrast to Musk's cavalier attitude to regulation - they are going to have real problems, even if they can produce a functional product which their lawyers and insurers will let them release...

onthemove wrote:
I mean, Tesla aren't going to release FSD final release just because one guy on youtube had his first drive without any disengagements!


It wouldn't surprise me. Whilst looking for info on a question addressed below I came across another video on the Tesla website of a Tesla car driving on public roads with the disclaimer that the driver was only present for legal reasons emblazoned across the screen... It's here: https://www.tesla.com/autopilot

onthemove wrote:I actually think Tesla will probably 'cope' 'better' with obscured roads and non-existent road markings. Things like roads covered in leaves, or roads covered in snow and such like. At least in terms of deciding where the road should be.


Surely the advantage here lies with the much maligned mapping used by the likes of Waymo? Since they already know what the road should look like extrapolating what they can't see from what they can see should be easy enough.

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#461451

Postby odysseus2000 » November 27th, 2021, 7:18 pm

There are two aspects to a successful robotic car: dealing with the asymptotic approach to acceptable performance caused by the long tail of edge cases, and the rate of improvement of the software needed to reach the point where it can drive better than a human.

One can argue that the former makes for logarithmic progress, but the experience of AlphaGo in becoming world Go champion suggests that the learning function is capable of exponential improvement. Prior to AlphaGo it was believed that the rate of increase in the capability of the neural net would not allow it to quickly overcome the logarithmic nature of each decade of improvement being 10x more difficult to get through. This was why, as I understand it, there were so many predictions that computers would not reach human standard at Go for a very long time, but we know that didn't happen. We also know that neural nets easily defeat conventional computational algorithms at chess, requiring less power and less computation than the probabilistic approaches.

The neural net approach gives results very like what one sees when humans try to learn something new. At the beginning progress is terrible, but with repeated practice the student gets better and better and finally achieves what is needed, their intelligence defeating the 'each decade gets 10x harder' problem. If one believes in evolution, one can clearly see how nature uses natural selection to allow small changes, from whatever source, to give one creature an advantage over its peers, and over multiple iterations the genes that gave this advantage become dominant. As I understand the neural nets, they do the same: when one computation of the weights of all the probable actions begins to be more successful, it becomes more influential in determining the weights. This is all handwaving, but what we do know is that humans are capable of getting good at things and can learn to drive, albeit with a probability of an accident that leads to 3,000 dead or seriously injured per year. In my view the important point here is that humans, with all their limits, create a private car system that works well enough for insurers to take on the market.

One of the arguments that has raged is whether Waymo, with its radars and lidars, is better than vision-only Tesla. Given the potential of these to see through obstructions I would still think that Waymo ought to do better even when the lines are gone, but Tesla vision may be good enough, and better with good lines, as there are no lidar and radar overheads.

Regards,

onthemove
Lemon Slice
Posts: 534
Joined: June 24th, 2017, 4:03 pm
Has thanked: 726 times
Been thanked: 471 times

Re: Musk endeavours

#461501

Postby onthemove » November 28th, 2021, 12:43 am

odysseus2000 wrote:One can argue that the former makes for logarithmic progress, but the experience of AlphaGo in becoming world Go champion suggests that the learning function is capable of exponential improvement. Prior to AlphaGo it was believed that the rate of increase in the capability of the neural net would not allow it to quickly overcome the logarithmic nature of each decade of improvement being 10x more difficult to get through. This was why, as I understand it, there were so many predictions that computers would not reach human standard at Go for a very long time, but we know that didn't happen. We also know that neural nets easily defeat conventional computational algorithms at chess, requiring less power and less computation than the probabilistic approaches.


I haven't looked in depth at how they did AlphaGo, although I have just done a quick Google, and from a quick skim it looks like the main advantage the AlphaGo effort had was reinforcement learning through self-play.

That's something that you can do with games (clearly defined rules as to what a legal move is, narrow 'world' in which it operates, and a clearly defined 'win'), but doesn't work so well (understatement) with self driving cars.

I won't dwell too much on this specific topic as I'm not intimately familiar with the details of alphago, but I'm reasonably confident that the alphago team weren't getting 'exponential' improvement, at least nothing like in the way that I think you have in mind from what you're describing.

I believe the main thing that results like these (and earlier ones with other games) really demonstrate is how much computing power the teams behind them have available.

For the alphago, it looks like it was (in part) a combination of being able to learn from 'self play' coupled with a huge amount of computing resources.

odysseus2000 wrote:The neural net approach gives results very like what one sees when humans try to learn something new. At the beginning progress is terrible, but with repeated practice the student gets better and better and finally achieves what is needed, their intelligence defeating the 'each decade gets 10x harder' problem. If one believes in evolution, one can clearly see how nature uses natural selection to allow small changes, from whatever source, to give one creature an advantage over its peers, and over multiple iterations the genes that gave this advantage become dominant. As I understand the neural nets, they do the same: when one computation of the weights of all the probable actions begins to be more successful, it becomes more influential in determining the weights. This is all handwaving, but what we do know is that humans are capable of getting good at things and can learn to drive, albeit with a probability of an accident that leads to 3,000 dead or seriously injured per year. In my view the important point here is that humans, with all their limits, create a private car system that works well enough for insurers to take on the market.


Hmmm... this could be tricky to respond to... a lot of what you say there, on the face of it, I totally agree with it - it could be considered a good description of what is going on. At least the sentences taken in isolation.

But on the other hand, I think there is also a misunderstanding buried in there, or at least a crucial aspect that isn't mentioned.

It's true that neural nets are inspired by biological neurons and work in a manner believed to be (crudely) similar to biological neural networks. ... It sounds like you're on the right sort of track with this ... "When one computation of the weights of all the probable actions begins to be more successful it becomes more influential in determining weights" ... although it's perhaps thinking about it the wrong way around. Neural network training works (as you say) by applying the network, but then the 'error' in the result is 'back-propagated' through the network, and the weights that contributed to the error are adjusted a little to lessen that error... and this process is repeated many, many times with different training examples, so that eventually the network settles on a set of weights that (hopefully) reduces the error in all cases as much as possible. So in a way you're right when you say the more successful becomes more influential, but it's more a case of reducing error than enhancing success. Really, though, it just depends how you think about it.
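For what it's worth, here is a minimal toy example of that error-driven weight adjustment in Python/numpy - a single-layer network fitting a made-up linear task, just to show the "apply, measure error, nudge weights, repeat" loop. It bears no resemblance to a real driving stack:

Code:
import numpy as np

# Toy illustration of error-driven weight updates in their simplest
# single-layer form: the output error is used to nudge the weights that
# contributed to it, repeated over many examples.

rng = np.random.default_rng(0)

# Hypothetical task: learn y = 2*x1 - 3*x2 + 1 from noisy examples.
X = rng.normal(size=(1000, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + 1 + rng.normal(scale=0.1, size=1000)

w = np.zeros(2)      # weights, start knowing nothing
b = 0.0              # bias
lr = 0.05            # learning rate: how big each nudge is

for epoch in range(200):
    pred = X @ w + b                 # apply the network
    error = pred - y                 # how wrong it was
    # Adjust each weight in proportion to how much it contributed to the
    # error (the gradient of the squared error).
    w -= lr * (X.T @ error) / len(y)
    b -= lr * error.mean()

print("learned weights:", w.round(2), "bias:", round(b, 2))   # ~[2, -3], ~1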

But then there's the critical aspect that you haven't mentioned, and this is the more important one...

The similarity with how people learn only goes so far.

As this is where the massive divergence comes about, that you don't mention above.

Artificial neural networks (ANNs, CNNs, etc) all do the low level stuff very well, much as you describe as above.

But it's well recognised by most AI researchers, and I believe neuroscientists, etc, that there is something more going on in the human brain.

In fact, when I studied AI at university, the course was actually more oriented to what was (and probably still is) called 'good old fashioned AI'. This is in contrast to 'Connectionist AI' which deals with neural networks and such like.

And the difference here is crucial... 'Good old fashioned AI' is / was based on the recognition that people 'think' at quite a 'high' level... things like logical reasoning, inference, and so on. And there have been a whole host of techniques proposed over many decades now, that attempt to create AI at that sort of level. But while there may be a few useful things in a few narrow areas, there never has been any outstanding success. This area of AI is still very much in the domain of the nerds with the promise of useful results sometime never.

But I don't think it's particularly contentious to say that everyone recognises that humans do very successfully use reasoning and (what some at least take to be) logic (even though, when analysed formally, what many people think is 'logical' doesn't actually turn out to be; something that AI textbooks raise - should AI be properly Spock-like logical or human-like 'flawed logical'... but I digress).

The key point is that when you're learning to drive, you have this higher level understanding of the world. For sure, your neural networks - particularly from the retina, through the various optic nerve pathways, to the visual cortex at the back of the brain - are doing processing very much on a par with the processing being done by the neural networks in (I would expect) all the self driving car projects. (The textbooks I had at uni detailed the experiments done in the middle of the last century sticking electrodes into live monkeys' brains and observing how the neurons respond to different patterns of light or motion, vertical lines, horizontal lines, etc, in images presented to the monkeys ... :shock: ... and the observations tally quite well with what we see in at least the first layers of convolutional neural networks: they tend to respond to similar types of low level features as are observed in the optic nerve and the pathways through to the visual cortex, etc.)

But once we reach the visual cortex, that's where we hit a blank. As far as I'm aware, scientists aren't really any significantly closer to understanding how those signals that have been processed by the 'low level' neural networks, then get aggregated and processed by higher level 'reasoning and logic'. And none of the advancements in AI (or neuroscience) that I'm aware of, yet show any promise of remotely bridging that gap (it's still more like a big dark chasm).

So for now, and for the foreseeable future, self driving car development doesn't have the luxury that a human learner driver does, of being able to be told something once, and then use reasoning to apply that more generally.

For now, we're stuck with throwing a very large number of training examples at a neural network and hoping that the process you describe above - of adjusting the weights, etc - eventually settles on something desirable, rather than, for example, learning to read the weather in photographs of tanks instead of recognising the presence of a tank, which is what you are really interested in... You could tell a human in a few seconds that the tank is what you're interested in, but the only way to tell a neural network is to give it so many pictures that hopefully it eventually recognises that that is what you are after!

In a way, this is why I'm very surprised that Tesla, and also Waymo from what I've seen in some of the later Waymo technical videos, seem to be pushing to use AI all the way up the stack.

To be honest, I would have anticipated that the higher you get up the stack - i.e. the more processed the raw data becomes as it's being analysed - I would have expected it would transition to more traditional programming.

If nothing else, you would expect that they are going to want to be able to easily adapt these cars to different rules in different countries, and such like, and to be able to quickly and reliably make changes when the rules change in a country where they already have cars on the road... and to be quite honest, I would have thought that programming the 'top level' rules via more regular software engineering would be the preferred method to achieve that. Sure, perhaps using weightings and inputs from the lower level neural networks, etc, but I wouldn't think they'd 'implement' the rules of the road via neural networks. I'd have thought the rules of the road would surely be integrated in a more symbolic, easily configurable manner than a trained neural network.
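To make the contrast concrete, here is the sort of thing I mean by a symbolic, configurable rules layer sitting on top of the neural-net perception outputs (all names and structure invented for illustration; no claim that Tesla or Waymo actually structure their stacks this way):

Code:
from dataclasses import dataclass
from typing import Optional

# Entirely hypothetical sketch of a per-country "rules of the road" layer
# consuming the outputs of lower-level neural-net perception. Illustrative only.

@dataclass
class CountryRules:
    drive_on_left: bool
    overtake_across_solid_line: bool
    default_rural_limit_kmh: int

UK = CountryRules(drive_on_left=True,
                  overtake_across_solid_line=False,
                  default_rural_limit_kmh=96)          # 60 mph

@dataclass
class Perception:
    # Assumed outputs from the lower-level neural nets.
    centre_line_is_solid: bool
    oncoming_gap_seconds: float
    posted_limit_kmh: Optional[int]

def overtake_allowed(rules: CountryRules, scene: Perception) -> bool:
    """Symbolic check layered on top of the learned perception outputs."""
    if scene.centre_line_is_solid and not rules.overtake_across_solid_line:
        return False                                    # a rule, not a learned weight
    return scene.oncoming_gap_seconds > 10.0            # crude safety margin

print(overtake_allowed(UK, Perception(True, 20.0, None)))    # False: solid centre line
print(overtake_allowed(UK, Perception(False, 20.0, None)))   # True

The point being that a rule change in one country would then be a one-line configuration edit, rather than a retraining exercise.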


odysseus2000 wrote:One of the arguments that has raged is whether Waymo, with its radars and lidars, is better than vision-only Tesla. Given the potential of these to see through obstructions I would still think that Waymo ought to do better even when the lines are gone, but Tesla vision may be good enough, and better with good lines, as there are no lidar and radar overheads.

Regards,


The thing is, I get the impression from watching the Tesla videos that the Tesla is focused more on identifying where it thinks the edge of the road is, and it looks like they have trained some quite generalised neural nets to do this. That (I believe) is why from what I've seen in some of the earlier Tesla videos, the Tesla still seems to be able to identify the edge of the road when it is just a step change in the level of the snow - it seems to be seeing a shadow roughly where it would expect the edge of the road, and seeing that shadow extend along where it would expect the edge to roughly be, so it seems to be thinking, OK that's probably the edge of the road.

I just get the impression from watching how the Waymo behaves in the JJ Ricks videos that Waymo are very much more focused on identifying the line markings - perhaps not unreasonably; after all, the line markings generally 'label' the rules of how we should behave... which lane we should be in, whether we are permitted to cross the centre line and overtake, etc. I just get the impression that Waymo are focused more heavily on that aspect.

I guess the impression I have, is that the Tesla approach seems to be more a case of identifying where it physically could drive, whereas Waymo seems (to me) to be focussed on more where it should drive.

And I guess that's why I'm left with the feeling that Waymo might struggle more, if the identifiers of where it should drive are obscured, leaving only an evaluation of where the car could physically drive to guide it.

Though as a previous poster said, Waymo does have the detailed maps.

Yup, that's true, but as I've mentioned before, they are only good up to a point. Yes, they're great for knowing if you're going to need to be in a different lane up ahead to go the route you want, and such like - something the Teslas really could have done with in a few of those 10.5 videos!

But let's be realistic, an updated high definition map could tell you where a pot hole is - if something's been that route before and seen it. So you could know to avoid it if driving through water, snow or leaves that might be obstructing it at this time.

But, let's be realistic ... things change... there might now be a big stone hidden under those leaves, or a new pothole. Or there might be road accident ahead that means you no longer want to be in the normal lane that you thought you needed to be in.

Sure the Waymo could use a prior map of the road to help guide it, but road layouts do change. There is always the possibility that the road markings / road layout have been changed since any Waymo car was last there ...

...tell me about it... one day several years ago when going to work I nearly got caught out because overnight, without any notice, workmen had been out and shifted the curb a few inches into the road, narrowing the road just a little and making the pavement just a little wider, to make it easier for pedestrians to cross. There were no signs to say they'd done this, and I very nearly hit the curb! It took several takes looking back before I realised what had happened.

If a Waymo and Telsa both drove down that same bit of road, after that change but when it was now covered in snow, I suspect the Tesla would handle it better than the Waymo.

Realistically the Waymo cannot just assume that the road layout is as per its map. It absolutely needs some degree of confirmation from the sensors (video, lidar, etc) at the time it is driving along. There's absolutely no way a self driving car can drive 'blind' - i.e. solely on the basis of internal and potentially out of date mapping.

There are sometimes situations - particularly in snow - where the normal lanes change and drivers create their own lanes. For example you sometimes find on motorways, when they get covered in snow, that drivers might create two lanes in the snow that don't quite align with the 3 lanes marked underneath, for example to give a bit more separation between vehicles in case anyone loses traction and slides. The Tesla would probably recognise these. Would Waymo? Or would Waymo try to stay in one of the 3 lanes that it knows from its map are marked underneath the snow?

Just to be clear, on the whole I'm far more impressed by the Waymo project than Tesla (though Tesla is still impressive, just Waymo more so :D ). I was just saying that in this one case I get the feeling that Tesla's more gung-ho (almost what feels at times like a 'best guess') approach could work in its favour in degraded road conditions. The only concern is whether it could decide that the road situation is too degraded and it would be more appropriate not to proceed at all, or whether it would just carry on gung-ho regardless with its best guess.

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#461601

Postby odysseus2000 » November 28th, 2021, 1:22 pm

Some excellent posts with some of the best information and discussion that I have seen anywhere.

There are several themes which I will attempt to summarise and comment on.

No way can a robot ever manage central London traffic.

Yes, at the moment that is true, but a more interesting question is: is it impossible, from what we know, that a robot will ever be able to drive in London traffic? As of now the answer seems to be that there is no fundamental reason why it can't be done. Yes, it is practically difficult and seems impossible, but is it forbidden by some fundamental limitation of science? No.

The vision field of the Tesla system isn’t good enough.

Maybe, but humans do drive with a much reduced view of the road. Might more or better cameras improve Tesla driving while keeping the processing of those cameras to an acceptable overhead? No idea, but the folk inside the Tesla FSD team don't seem to think so.

Waymo will be more confused by leaves, snow etc obscuring lane markings than will Tesla which will adapt and attempt to work it out on the fly as humans do.

I have seen various videos of systems using lidar to look through foliage to detect hidden structure, often for archaeological investigations or military purposes, but I am unclear whether the Waymo system has similar capabilities, or whether the lane markings would only become visible if something like iron oxide were mixed into the paint. However, truly general robotic driving should not need lane markings at all, which is what the Tesla system is designed for.

The human brain is much more than a neural net

This I believe is the fundamental issue. Sir Roger Penrose argues that the brain is more than computation, that there are other things going on, which he suggests are quantum mechanical. Others have said that the neural net part is easy, but that beyond it there is something far more complex and impossible to recreate with current technology. I liked this argument a lot and was pretty convinced that humans would remain well ahead of machines for a long time, maybe forever, but when AlphaGo won at Go I had to ditch that idea. Clearly, in at least this small subset of endeavours, the human brain is inferior to a machine. There are many other examples, such as a pocket calculator being able to do sums super quickly that an unaided human brain would take much longer over. One can cite very many other things, but can a machine think and reason better than a human? I would like to think not, to believe that humans will forever remain superior to machines in the higher functions, but I am continually drawn back to AlphaGo. This is not the mindless banter of talking heads, but a data point. The AlphaGo that became world champion is now a long way back in this rapidly evolving technology, with neuromorphic chips now mimicking the human brain and doing complicated tasks with low power consumption, just as the human brain does. This technology has awesome potential, not all of it good, and I am reminded of Churchill's warning:

“If to these tremendous and awful powers is added the pitiless subhuman wickedness which we now see embodied in one of the most powerful reigning governments, who shall say that the world itself will not be wrecked, or indeed that it ought not to be wrecked? There are nightmares of the future from which a fortunate collision with some wandering star, reducing the earth to incandescent gas, might be a merciful deliverance.”

https://winstonchurchill.org/publicatio ... t-suicide/

Tesla FSD is the canary for a lot of this. If it can be got to work then Tesla longs will make a fortune, but whether we will be able to enjoy the money I don't know.

To my mind we can speculate as much as we want, and it is fun to do so, but it is the experimental results that matter. As of now, Tesla FSD, Waymo and probably some of the Chinese versions (though I am far from clear how good some of the claimed Chinese level 5 systems are) are telling me that humans are still better at driving than machines. The experimental results also tell me that machines are better than humans at rules-governed, well paid professional work such as that of GPs and lawyers, but not yet better at manual tasks like, say, those of a dentist, gardener or assembly worker…

As I see things, the progress of Tesla FSD will tell us far more than most humans expect and is the canary we need to stay focused on.

Regards,

BobbyD
Lemon Half
Posts: 7187
Joined: January 22nd, 2017, 2:29 pm
Has thanked: 289 times
Been thanked: 679 times

Re: Musk endeavours

#461648

Postby BobbyD » November 28th, 2021, 5:08 pm

Volkswagen has hired two high-profile managers for its battery programme. One of them is said to be Apple’s head of battery development Soonho Ahn, while solid-state cell expert Jörg Hoffmann is also to join VW from BMW.

Soonho Ahn had been working for Apple since 2019, before that the South Korean manager had worked for the cell manufacturers Samsung SDI and LG Energy Systems (formerly LG Chem). Initially, Manager Magazin had reported on the personnel matter, but in the meantime the report has been confirmed by Volkswagen.


- https://www.electrive.com/2021/11/26/vw ... and-apple/

Soonho Ahn

CTO, Battery Division, VW Group Components


- https://de.linkedin.com/in/soonho-ahn-2 ... earch-card

odysseus2000
Lemon Quarter
Posts: 4314
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1163 times
Been thanked: 580 times

Re: Musk endeavours

#461884

Postby odysseus2000 » November 29th, 2021, 5:16 pm

VW's share price has now retraced to where it closed in January:

https://twitter.com/0_ody/status/146536 ... 68326?s=20

Whatever VW may say, investors are not showing much confidence in the company.

Regards,

