odysseus2000 wrote:One can argue that the former is a logarithmic progress, but the experience of AlphaGo in becoming world Go champion suggests that the learning function is capable of exponential improvement. It was believed, prior to AlphaGo, that the rate of increase of the capability of the neural net would not allow it to quickly overcome the logarithmic nature of each decade being 10x more difficult to get through. This was why, as I understand it, there were so many predictions that computers would not reach human standard in Go for a very long time, but we know that didn’t happen. We also know that neural nets easily defeat computational algorithms in the game of chess, requiring less power and less computation to defeat the probabilistic approaches.
I haven't looked in depth at how they did AlphaGo, although I have just done a quick Google, and from a quick skim it looks like the main advantage that the AlphaGo effort had was reinforcement learning through self-play.
That's something that you can do with games (clearly defined rules as to what a legal move is, a narrow 'world' in which it operates, and a clearly defined 'win'), but it doesn't work so well (understatement) with self-driving cars.
I won't dwell too much on this specific topic as I'm not intimately familiar with the details of AlphaGo, but I'm reasonably confident that the AlphaGo team weren't getting 'exponential' improvement, at least nothing like what I think you have in mind from your description.
I believe the main thing that results like these (and earlier ones with other games) really demonstrate is how much computing power the teams behind them have available.
For AlphaGo, it looks like it was (in part) a combination of being able to learn from 'self-play', coupled with a huge amount of computing resources.
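Just to illustrate the self-play idea (and only the idea - AlphaGo itself combined deep neural networks with Monte Carlo tree search, nothing like this toy): here's a minimal sketch in Python where an agent plays single-pile Nim against itself, both 'players' sharing and improving one value table, so that winning lines of play gradually reinforce themselves:

```python
import random

def train(episodes=30000, alpha=0.1, eps=0.2, start=10):
    """Self-play Q-learning for single-pile Nim: take 1-3 stones per
    turn, whoever takes the last stone wins. Both sides of the game
    share one Q table, so the agent is effectively its own opponent."""
    Q = {}
    for _ in range(episodes):
        pile, history = start, []
        while pile > 0:
            moves = [m for m in (1, 2, 3) if m <= pile]
            if random.random() < eps:          # occasionally explore
                move = random.choice(moves)
            else:                              # otherwise exploit what's learned
                move = max(moves, key=lambda m: Q.get((pile, m), 0.0))
            history.append((pile, move))
            pile -= move
        # Whoever moved last took the final stone and won (+1); the
        # reward alternates sign as we walk back through the moves,
        # since the players alternated.
        reward = 1.0
        for state in reversed(history):
            Q[state] = Q.get(state, 0.0) + alpha * (reward - Q.get(state, 0.0))
            reward = -reward
    return Q

def best_move(Q, pile):
    return max((m for m in (1, 2, 3) if m <= pile),
               key=lambda m: Q.get((pile, m), 0.0))
```

After enough self-play the agent discovers the known optimal strategy (always leave the opponent a multiple of 4). The point is that, because the game has clearly defined legal moves and a clearly defined 'win', the agent can generate unlimited training experience just by playing itself - which is exactly the luxury a self-driving car doesn't have.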
odysseus2000 wrote:The neural net approach results are very like what one sees when humans try to learn something new. At the beginning progress is terrible, but with repeated practice the student gets better and better and finally achieves what is needed, their intelligence defeating the 'each decade gets 10x harder' problem. If one believes in evolution one can clearly see how nature uses natural selection to allow small changes from whatever source to give one creature an advantage over its peers, and over multiple iterations the genes that gave this advantage become dominant. As I understand the neural nets they do the same. When one computation of the weights of all the probable actions begins to be more successful it becomes more influential in determining weights. This is all handwaving, but what we do know is that humans are capable of getting good at things and can learn to drive, albeit with a probability of an accident that leads to 3000 dead or seriously injured per year. In my view the important point here is that humans with all their limits create a private car system that works well enough for insurers to take the market.
Hmmm... this could be tricky to respond to... a lot of what you say there, on the face of it, I totally agree with - it could be considered a good description of what is going on. At least the sentences taken in isolation.
But on the other hand, I think there is also a misunderstanding buried in there, or at least an absence of a crucial aspect that isn't mentioned.
It's true that neural nets are inspired by biological neurons, and work in a manner believed to be (crudely) similar to biological neural networks. ... It sounds like you're on the right sort of track with this ...
"When one computation of the weights of all the probable actions begins to be more successful it becomes more influential in determining weights." .. although it's perhaps thinking about it the wrong way around... neural network training works (as you say) by applying the network, but then the 'error' in the result is 'back-propagated' through the network, and the weights in the network that contributed to the error are adjusted a little to lessen the amount of error... and this process is repeated many, many times with different training examples, so that eventually the network settles on a set of weights that (hopefully) reduces the error in all cases as much as possible. So in a way, you're right when you say the more successful becomes more influential, but it works by reducing error rather than enhancing success. But really it just depends how you think about it.
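To make that concrete, here's a minimal sketch - a single sigmoid 'neuron' rather than a real multi-layer network, so the back-propagation step collapses to one line - learning logical AND by repeatedly nudging its weights to reduce the error:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One sigmoid neuron learning logical AND. Each step computes the
# output, measures the error, and adjusts the weights that contributed
# to that error so it shrinks a little - repeated many, many times.
random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

lr = 0.5
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target            # the 'error' in the result
        grad = err * out * (1 - out)  # back-propagated through the sigmoid
        w1 -= lr * grad * x1          # each weight adjusted a little,
        w2 -= lr * grad * x2          # in proportion to its contribution
        b -= lr * grad
```

After training, the neuron outputs close to 1 only for the (1, 1) input - not because anything 'enhanced success' directly, but because thousands of tiny error-reducing nudges left the weights somewhere that happens to get all four cases right.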
But then there's the critical aspect that you haven't mentioned, and this is the more important one...
The similarity with how people learn only goes so far, and this is where the massive divergence - the one you don't mention above - comes about.
Artificial neural networks (ANNs, CNNs, etc.) all do the low-level stuff very well, much as you describe above.
But it's well recognised by most AI researchers, and I believe neuroscientists, etc, that there is something more going on in the human brain.
In fact, when I studied AI at university, the course was actually more oriented to what was (and probably still is) called 'good old fashioned AI'. This is in contrast to 'Connectionist AI' which deals with neural networks and such like.
And the difference here is crucial... 'Good old fashioned AI' is / was based on the recognition that people 'think' at quite a 'high' level... things like logical reasoning, inference, and so on. And there have been a whole host of techniques proposed over many decades now, that attempt to create AI at that sort of level. But while there may be a few useful things in a few narrow areas, there never has been any outstanding success. This area of AI is still very much in the domain of the nerds with the promise of useful results sometime never.
But I don't think it's particularly contentious to say that everyone recognises that humans do very successfully use reasoning and (what some at least think to be) logic (even though, when analysed formally, what many people think is 'logical' doesn't actually turn out to be valid; something that AI textbooks raise - should AI be properly Spock-like logical, or human-like 'flawed logical'? .. but I digress).
The key point is that when you're learning to drive, you have this higher-level understanding of the world. For sure, your neural networks - particularly from the retina, through the various optic nerve pathways, to the visual cortex at the back of the brain - are doing processing very, very much on a par with the processing being done by the neural networks in (I would expect) all the self-driving car projects. (The textbooks I had at uni detailed the experiments that were done in the middle of the last century sticking electrodes into live monkeys' brains and observing how the neurons respond to different patterns of light or motion - vertical lines, horizontal lines, etc - in images presented to the monkeys ...
... and the observations tally quite well with what we see in the layers in at least the first layers in convolutional neural networks.. they tend to respond to similar types of low level features as is observed in the optic nerve, and pathways through to the visual cortex, etc)
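For illustration, this is roughly what one of those low-level feature detectors amounts to - a small convolution filter that responds strongly to vertical edges and not at all to horizontal ones (the kernel values here are a hand-picked example, not weights from any real network):

```python
# A 3x3 'vertical edge' filter - a hand-picked example of the sort of
# low-level feature detector observed both in early CNN layers and in
# the early visual pathways.
KERNEL = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

def convolve3x3(image, kernel):
    """Slide the 3x3 kernel over the image, summing the element-wise
    products at each position (no padding, so output shrinks by 2)."""
    h, w = len(image), len(image[0])
    return [[sum(kernel[di][dj] * image[i + di][j + dj]
                 for di in range(3) for dj in range(3))
             for j in range(w - 2)]
            for i in range(h - 2)]

# A vertical bright edge produces a strong response everywhere...
vertical = [[0, 0, 1, 1]] * 4
# ...while a horizontal edge produces no response at all.
horizontal = [[0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1]]
```

A trained CNN's first layer typically ends up with a whole bank of filters like this, tuned to different orientations - much like the orientation-selective cells found in those electrode experiments.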
But once we reach the visual cortex, that's where we hit a blank. As far as I'm aware, scientists aren't really any significantly closer to understanding how those signals that have been processed by the 'low level' neural networks, then get aggregated and processed by higher level 'reasoning and logic'. And none of the advancements in AI (or neuroscience) that I'm aware of, yet show any promise of remotely bridging that gap (it's still more like a big dark chasm).
So for now, and for the foreseeable future, self driving car development doesn't have the luxury that a human learner driver does, of being able to be told something once, and then use reasoning to apply that more generally.
For now, we're stuck with just throwing a very large number of training examples at a neural network, and hoping that the process you describe above - of adjusting the weights, etc - does eventually settle on something that is desirable, rather than, for example, learning to read the weather in photographs of tanks instead of recognising the presence of a tank, which is what you are really interested in. You could tell a human in a few seconds that the tank is what you're interested in... but the only way to tell a neural network is to give it so many pictures that hopefully it eventually recognises that that is what you are after!
In a way, this is why I'm very surprised that Tesla, and also Waymo from what I've seen in some of the later Waymo technical videos, seem to be pushing to use AI all the way up the stack.
To be honest, I would have anticipated that the higher you get up the stack - i.e. the more processed the raw data becomes as it's being analysed - the more it would transition to more traditional programming.
If nothing else, you would expect that they are going to want to be able to easily adapt these cars to different rules in different countries, and to be able to quickly and reliably make changes when the rules change in a country in which they might already have cars on the road... and to be quite honest, I would have thought that programming the 'top level' rules via more regular software engineering would be the preferred way to achieve that. Sure, perhaps using weightings and inputs from the lower-level neural networks, etc, but I wouldn't think they'd 'implement' the rules of the road via neural networks. I'd have thought the rules of the road would surely be integrated in a more symbolic / easily configurable manner than a trained neural network.
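As a hypothetical sketch of what I mean by 'symbolic / easily configurable' (all the names, fields and values here are invented for illustration, not from any real system): the country-specific rules sit in plain data that ordinary code consults, so changing a rule is a one-line config edit rather than a retraining exercise:

```python
# Hypothetical per-country rule configuration - the names, fields and
# values are invented for illustration, not from any real system.
COUNTRY_RULES = {
    "UK": {"drive_on": "left",  "speed_unit": "mph",  "motorway_limit": 70},
    "DE": {"drive_on": "right", "speed_unit": "km/h", "motorway_limit": None},
}

def motorway_speed_ok(country, speed):
    """Check a speed against the configured motorway limit for a
    country; None means no general limit (as on parts of the Autobahn)."""
    limit = COUNTRY_RULES[country]["motorway_limit"]
    return limit is None or speed <= limit
```

If a country changed its motorway limit tomorrow, you'd edit one value and redeploy - no need to gather a new training set and hope the network picks the change up.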
odysseus2000 wrote:One of the arguments that has raged is whether Waymo with its radars and lidars is better than vision only Tesla. Given the potential of these to see through obstructions I would still think that waymo ought to do better even when the lines are gone, but that Tesla vision may be good enough and better with good lines as there are no lidar and radar overheads.
The thing is, I get the impression from watching the Tesla videos that Tesla is focused more on identifying where it thinks the edge of the road is, and it looks like they have trained some quite generalised neural nets to do this. That (I believe) is why, in some of the earlier Tesla videos, the Tesla still seems to be able to identify the edge of the road when it is just a step change in the level of the snow - it seems to be seeing a shadow roughly where it would expect the edge of the road to be, and seeing that shadow extend along where it would expect the edge to roughly run, so it seems to be thinking: OK, that's probably the edge of the road.
I just get the impression from watching how the Waymo behaves in the JJ Ricks videos that the Waymos are very much more focused on identifying the line markings - perhaps not unreasonably; after all, the line markings generally 'label' the rules of how we should behave... which lane we should be in, whether we are permitted to cross the central line and overtake, etc. Waymo just seems to be focussed more heavily on that aspect.
I guess the impression I have is that the Tesla approach seems to be more a case of identifying where it physically 'could' drive, whereas Waymo seems (to me) to be focussed more on where it 'should' drive.
And I guess that's why I'm left with the feeling that Waymo might struggle more if the identifiers of where it 'should' drive are obscured, leaving only an evaluation of where the car 'could' physically drive to guide it.
Though as a previous poster said, Waymo does have the detailed maps.
Yup, that's true, but as I've mentioned before, they are only good up to a point. Yes, they're great for knowing if you're going to need to be in a different lane up ahead to go the route you want, and such like - something the Teslas really could have done with in a few of those 10.5 videos!
To be fair, an up-to-date high-definition map could tell you where a pothole is - if something's been that route before and seen it. So you could know to avoid it if driving through water, snow or leaves that might be obscuring it at the time.
But let's be realistic... things change... there might now be a big stone hidden under those leaves, or a new pothole. Or there might be a road accident ahead that means you no longer want to be in the normal lane that you thought you needed to be in.
Sure the Waymo could use a prior map of the road to help guide it, but road layouts do change. There is always the possibility that the road markings / road layout have been changed since any Waymo car was last there ...
...tell me about it... one day several years ago, when going to work, I nearly got caught out because overnight, without any notice, workmen had been out and shifted the kerb a few inches into the road - narrowing the road just a little and making the pavement just a little wider, to make it easier for pedestrians to cross. There were no signs to say they'd done this, and I very nearly hit the kerb! It took several looks back before I realised what had happened.
If a Waymo and a Tesla both drove down that same bit of road after that change, but when it was now covered in snow, I suspect the Tesla would handle it better than the Waymo.
Realistically the Waymo cannot just assume that the road layout is as per its map. It absolutely needs some degree of confirmation from the sensors (video, lidar, etc) at the time it is driving along it. There's absolutely no way a self-driving car can drive 'blind' - i.e. solely on the basis of internal and potentially out-of-date mapping.
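As a very hand-wavy sketch of what I mean (the weighting scheme here is invented purely for illustration, not from any real system): the map can only ever be a prior, and live sensing has to be able to override it:

```python
def lane_centre_estimate(map_offset, sensed_offset, sensor_confidence):
    """Blend a prior (from the stored map) with what the sensors see
    right now, leaning on the sensors as confidence in them rises.
    A confidence of 0 trusts the map alone; 1 trusts the sensors alone."""
    w = max(0.0, min(1.0, sensor_confidence))
    return (1 - w) * map_offset + w * sensed_offset
```

So if the kerb has moved since the map was made, high-confidence sensor readings pull the estimate towards reality; the map only dominates when the sensors genuinely can't see anything (at which point, arguably, the car shouldn't be proceeding at all).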
There are sometimes situations - particularly in snow - where the normal lanes change and drivers create their own lanes. For example, you sometimes find on motorways, when they get covered in snow, that drivers might create two lanes in the snow that don't quite align with the 3 lanes marked underneath - for example, to give a bit more separation between vehicles in case anyone loses traction and slides. The Tesla would probably recognise these. Would Waymo? Or would Waymo try to stay in one of the 3 lanes that it knows from its map are marked underneath the snow?
Just to be clear, on the whole, I'm far more impressed by the Waymo project than Tesla (though Tesla is still impressive, just Waymo more so). I was just saying that in this one case, I get the feeling that Tesla's more gung-ho (almost what feels at times like a 'best guess') approach could work in its favour in degraded road conditions. The only concern is whether it could decide that the road situation is too degraded and it would be more appropriate not to proceed at all, or whether it would just carry on gung-ho regardless with its best guess.