
AI endeavours

The Big Picture Place
Clitheroekid
Lemon Quarter
Posts: 2858
Joined: November 6th, 2016, 9:58 pm
Has thanked: 1385 times
Been thanked: 3771 times

Re: AI endeavours

#105190

Postby Clitheroekid » December 19th, 2017, 8:17 pm

Many thanks for an informative and enlightening post.

I've been in and out of a company called Blue Prism (PRSM) over the past few months, whose share price has been volatile to put it mildly. As my knowledge of AI is pretty much limited to what I've just read I'd be very interested to hear your views on them as a company.

Whilst I read a lot about how successful they are, when the usual metrics are applied the market valuation seems completely insane to me. I suspect that, like cryptocurrencies, there are many 'investors' who haven't a clue what it is they're buying and are simply doing so because AI is fashionable and, if everyone else is buying, it must be a good company.

It also seems to me that it's a very competitive field and that PRSM are simply one amongst many.

Incidentally, I appreciate that you may know nothing at all about them, in which case feel free to ignore the question! ;)

onthemove
Lemon Slice
Posts: 540
Joined: June 24th, 2017, 4:03 pm
Has thanked: 722 times
Been thanked: 471 times

Re: AI endeavours

#105222

Postby onthemove » December 19th, 2017, 9:53 pm

Clitheroekid wrote:I've been in and out of a company called Blue Prism (PRSM) over the past few months, whose share price has been volatile to put it mildly. As my knowledge of AI is pretty much limited to what I've just read I'd be very interested to hear your views on them as a company.


I knew nothing about them :^)

But, still curious, I've just looked at their website. Robotic process automation, and probably a 30 minute drive from where I work - I was surprised I hadn't heard of them.

I hope I've found the right company...
https://www.blueprism.com/whatwedo

What follows is purely my reaction to what is publicly available on their website without registering. I acknowledge that the marketing departments who write content for such websites are sometimes completely separate from the technical teams, so this might not be a completely fair appraisal - but it's all I've got to go on.

Firstly, their use of the term 'robotics' is stretching it. All their 'robotics' amounts to repetitive software tasks. There's no hardware that I can see. And they seem to go straight on the defensive to explain they're talking about software-only 'robotics'. They know they're stretching the term.

From what I can tell it just seems to be a marketing gimmick for automated scripts.

There doesn't actually seem to be all that much AI in there. In fact the impression I am left with is that their AI functionality is merely that they are a front end for Microsoft Azure AI services.

Their product seems to be aimed at automating user interaction. It just seems to be a (moderately) fancy variant of automated testing tools that software engineers usually use to regression test user interface functionality. But in this case they seem to be using it to automate mundane data entry, or other similar tasks.

According to their videos they're only 'looking at' implementing 'non-deterministic' things ... which rather implies they are basically just automatically scripting clear, straightforward, deterministic tasks - yup, just a fancy automated testing tool being applied in a production rather than a test environment.

Their dismissive reference in one video to "...unproven AI technologies at the other [end of the spectrum]" leads me to feel they don't feel comfortable with state of the art AI.

If "AI" is the reason you are investing, this company seems to be at best a v. cautious consumer of (other companies') AI, not an AI innovator or developer themselves.

All in all, I personally wouldn't invest.

The whole business - solely in my own personal opinion, from what I can infer from a very brief visit to their website - seems to be about automating the usually human-performed interaction between different, disparate software applications.

There may (or may not, I've no idea) be a big demand for that.

I suppose their market could go one of two ways.

Possibility 1:
The original application developers start to recognise where companies like this are automating things, and instead provide the automation directly within their applications .... after all, it's expensive and time consuming to go to all the trouble of developing a user interface for humans if, at the end of the day, it's only going to be used by a 'software robot'! Far more cost effective, less development, etc, to bring the disparate functionality into a cohesive single suite that can function as a single entity. Then demand for this company's type of product / service disappears.

Possibility 2:
I guess if there genuinely is a need for huge flexibility, application developers could develop 'modular' functionality, and then companies could use this 'process automation' to tie that functionality together into a cohesive business process. And then as the business changes, this 'process automation' can be used to adapt and change with it. But I'm not sure I see any real evidence of this. On the contrary, software engineering is forever progressing to 'higher' and 'higher' levels. The sort of things these tools are doing with 'process automation' should be relatively cheap, quick, simple tasks for software engineers these days. And such modularity / flexibility, if it becomes the norm, shouldn't require scripting techniques that seem designed to cope with human interfaces.

In summary ...

They may have found a current gap in the market. They may be making good money out of it. But, in my view, it seems to be providing 3rd party glue for gaps - or 'process paths' - that first party developers may subsequently recognise and ultimately close off.

In terms of AI, (from what they say on their website) I wouldn't consider them an AI company.

There was only a single glimmer of substance that might change my mind ... they did mention in one video looking for anomalies in financial transaction data. This is potentially something that AI could be useful for - but they didn't elaborate, and from everything else I've read, I suspect any such AI use is potentially just shipped off to Microsoft Azure, with Microsoft providing the AI itself.

Though it never ceases to amaze me that, no matter how much we identify disparate software as an issue where I work, managers keep creating more and more disparate software without trying to bring everything cohesively together. If other places are like that, there may be plenty of business for this company in the years to come.

And they may be able to use AI in delivering on that business ... but their dismissal of "unproven AI technologies" leaves me with the feeling that they are a little cynical and cautious of AI technologies ... not great if you're looking to profit from an AI boom.

All purely my initial reaction to reading the website I've linked to above. Absolutely nothing more.

BTW - I've now just looked at their numbers...

According to Hargreaves Lansdown ...
http://www.hl.co.uk/shares/shares-searc ... nd-reports

Market Cap : £775 million
Revenue 2016 : £9.6 million

Net Assets : £3.8 million
Profit / (Loss) : (£5 million) .. 5x higher loss than the year before!

Clitheroekid wrote: the market valuation seems completely insane to me


I wouldn't disagree. Though I'm usually more of a HYP type investor, so any price for any company that is making a loss and not paying any dividend is expensive in my view. :^)

But this company had revenue of £10 million and made a loss of £5 million.

I haven't read any further into their accounts to see what's gone on there!

But I think I'll leave it here.

johnhemming
Lemon Quarter
Posts: 3858
Joined: November 8th, 2016, 7:13 pm
Has thanked: 9 times
Been thanked: 609 times

Re: AI endeavours

#105224

Postby johnhemming » December 19th, 2017, 10:00 pm

I have been using Google's streaming voice recognition on phone conversations and it works reasonably well, but still has quite a few errors. Phone conversations have the disadvantage of a sampling rate of 8 kHz rather than 16 kHz, where the quality of speech recognition is greater; accuracy tends to peak at that. Also it is generally 8-bit rather than 16-bit encoding.

I have put up a test system which runs a conference and then puts the transcriptions into a chat (as well as doing TTS from the chat into the conference). If anyone is interested I can give a link.
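
For anyone curious what driving the streaming recogniser at telephone rates looks like, a minimal sketch is below. It assumes the google-cloud-speech Python client and an 8 kHz mu-law recording; the file name and chunk size are invented for illustration, and this isn't the actual code behind my test system.

```python
# Rough sketch only: streams an 8 kHz mu-law telephone recording to Google's
# streaming recogniser. Assumes the google-cloud-speech client and credentials
# are already set up; "call.raw" and the chunk size are invented.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.MULAW,  # 8-bit phone audio
    sample_rate_hertz=8000,                                 # 8K, not the 16K it prefers
    language_code="en-GB",
)
streaming_config = speech.StreamingRecognitionConfig(config=config, interim_results=True)

def requests(path, chunk_bytes=3200):            # roughly 0.4 s of audio per chunk
    with open(path, "rb") as audio:
        while chunk := audio.read(chunk_bytes):
            yield speech.StreamingRecognizeRequest(audio_content=chunk)

for response in client.streaming_recognize(streaming_config, requests("call.raw")):
    for result in response.results:
        if result.is_final:                      # only print settled transcriptions
            print(result.alternatives[0].transcript)
```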

TUK020
Lemon Quarter
Posts: 2039
Joined: November 5th, 2016, 7:41 am
Has thanked: 762 times
Been thanked: 1175 times

Re: AI endeavours

#105226

Postby TUK020 » December 19th, 2017, 10:08 pm

Onthemove,
Thank you for a brilliant post. Much appreciate the insight.
Feels more like the introduction of electricity - productivity gains occurred over decades, because that was how long it took folks to reorganize things to properly take advantage of the new capabilities

Clitheroekid
Lemon Quarter
Posts: 2858
Joined: November 6th, 2016, 9:58 pm
Has thanked: 1385 times
Been thanked: 3771 times

Re: AI endeavours

#105229

Postby Clitheroekid » December 19th, 2017, 10:28 pm

Many thanks, onthemove, for taking so much time to give me such a comprehensive answer - I'm very grateful. Whilst I may continue to dip in and out, I can't see it becoming a core holding any time soon!

odysseus2000
Lemon Half
Posts: 6364
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1530 times
Been thanked: 959 times

Re: AI endeavours

#106241

Postby odysseus2000 » December 26th, 2017, 9:50 pm

This is kind of interesting, not for the politics the author goes on about, but for the people involved, raising the question of what exactly needs such skill sets to oversee it. Almost as if they have some kind of Manhattan-like program on the go:

https://mrtopstep.com/why-is-alphabet-c ... f-defense/

Regards,

stewamax
Lemon Quarter
Posts: 2417
Joined: November 7th, 2016, 2:40 pm
Has thanked: 83 times
Been thanked: 782 times

Re: AI endeavours

#108025

Postby stewamax » January 4th, 2018, 10:03 pm

The real challenge for AI systems is not to use the speed of a special purpose computer:
- to execute rules in order to predict the outcome of a very large number of possible moves in order to ‘win a game’ (Deep Thought Deep Blue et al)
nor
- to play against itself a very large number of times and thus tune a deep neural net (e.g. AlphaGo)

but...
...when it has developed a successful system (neural net or whatever), to explain any underlying strategy it has ‘developed’

ReformedCharacter
Lemon Quarter
Posts: 3120
Joined: November 4th, 2016, 11:12 am
Has thanked: 3591 times
Been thanked: 1509 times

Re: AI endeavours

#108031

Postby ReformedCharacter » January 5th, 2018, 12:04 am

stewamax wrote:The real challenge for AI systems is not to use the speed of a special purpose computer:
- to execute rules in order to predict the outcome of a very large number of possible moves in order to ‘win a game’ (Deep Thought Deep Blue et al)
nor
- to play against itself a very large number of times and thus tune a deep neural net (e.g. AlphaGo)

but...
...when it has developed a successful system (neural net or whatever), to explain any underlying strategy it has ‘developed’


This article suggests the same:

https://www.nytimes.com/2017/11/21/maga ... tself.html

RC

odysseus2000
Lemon Half
Posts: 6364
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1530 times
Been thanked: 959 times

Re: AI endeavours

#108263

Postby odysseus2000 » January 5th, 2018, 11:27 pm

Long overdue reply to onthemove's article about AI - sorry to be so slow, but I've just been very busy.

The point made about how academia got AI wrong is worth noting, as it's just one of very many examples of the conservatism in academic research which makes much of it worthless. The future will be determined by the entrepreneurs and inventors who create it; whatever academia predicts will mostly be wrong. I write that as someone with a PhD and many years of academic research, and I could cite many examples but don't want to bore.

The current AI revolution, in my humble opinion, has grown from the computer vision example cited by onthemove, combined with vastly more powerful processors being churned out by Nvidia, vastly greater storage, a vastly increased speed of idea dissemination, plus the powerful use of AI by major corporations such as Amazon. Many of the leading companies in all industries now have substantial AI programs to direct sales and the logistics of delivery, control inventory and direct research efforts. The effects have been substantial. In the recent Christmas period Amazon were able to ship a vast amount of stuff very promptly. Of the things that I and various friends ordered, everything arrived early and in perfect condition, with no late arrivals. This in a country where many folk say the infrastructure is broken and obsolete. Christmas 2017 said something very different.

Since we are only at the beginning of this revolution, with robots (the first probably being machine-driven cars) likely to have a transformative effect greater than any previous industrial revolution, I believe it is impossible to predict what will happen in a year's time, let alone five. Tim Berners-Lee has argued that the internet is potentially the nervous system of a supercomputer, with the brains being just a few lines of code on top. The argument that AI will need to disseminate what it learns to humans who will then use it seems far too slow given how fast computers operate, and that they do so 24/7, 365.25 days per year. It may be that there is some fundamental limit that stops machines from advancing too much and requires that there will always be a need for human control, but for now I don't see it. Indeed, the ability of AI to look at complicated stuff like Go and develop new, previously unrealised approaches gives an indication that AI will probably find ways of doing things that humans have missed or academic conservatism has weighed against. I suspect before too long AI will deal with all medical diagnostic data, having robotically collected it.

If we consider mice, we have animals that are far less capable than we are, but which are nonetheless a considerable problem: self-replicating, able to live in our dwellings, difficult to clear out, and major pests in agriculture. Clearly current AI is much less sophisticated than a mouse, but in specific applications, such as driving cars, AI can compete with humans - something impossible for a mouse. This has all happened in about 6 years and the rate of AI innovation is exponential.

In terms of investment, I would guess that most of the AI companies that have been born on this bandwagon of excitement will perish. There will likely be opportunities to make serious money within the current boom, but also opportunities to lose serious money, like the 1998 to 2000 period. AI looks to be difficult to protect with patents and such, so I expect a scenario very like 3D printing: a technology that was dismissed but which is now vital to many businesses, yet with few winners, as the printers are commodities sold at commodity margins - great for buyers (I just ordered one) but bad for maker margins. Generally I do not like 'picks and shovels' investments because most of the ones I have studied have not worked well. For example, several years ago I looked at many of Apple's suppliers and the clear result was that the better bet was Apple, as most of the suppliers, who generally create commodity products, underperformed. However, for now Nvidia looks to be the best potential picks and shovels play, and has the other advantage that they are heavily into bitcoin mining. Other potential winners look to be Amazon, Google, Walmart and Netflix. Some like MSFT too, but I feel their core software business is doomed, which could hit their cloud operations, and moreover they don't have the moats of Amazon et al. Tesla remains an enigma, loathed by many fund managers, but Musk has shown himself capable of extraordinary insight and business skill, so I personally like Tesla.

What I expect to see shortly (in the next few years) are military AI robots capable of defeating human soldiers, and Putin's remark that drones will fight the next war doesn't seem too daft at all. Atlas and other robots from Boston Dynamics (loads of videos on the net) look quite primitive, but each iteration gets a little better, and so, like the first military planes that were of little use but soon gave way to killing machines that won battles, I expect military AI to advance. If, as we see with machine-driven cars, there is a clear objective and way of proceeding, it is difficult for me to see how humans will be able to resist machines that can be protected with armour and deployed in ways too violent or too exposed for humans. The argument that machines will never be allowed to kill humans of their own accord seems weak to me. It was only a little while back that the US apparently bombed the Médecins Sans Frontières hospital in the Afghanistan conflict, and anyhow cruise missiles and drones now target and kill humans, albeit human-directed - but does the general told to deal with some troublemakers care about anything other than killing the enemy?

None of what I think might actually happen - I am, after all, academically trained (see beginning) - but the current industrial revolution is like nothing I have studied before. The scale of the internet far exceeds what I thought possible when it began, the speed and power of personal computers, mobile phones etc far exceed what I thought could be done, and the more I see of humans the more I feel they are prone to doing daft things.

Regards,

onthemove
Lemon Slice
Posts: 540
Joined: June 24th, 2017, 4:03 pm
Has thanked: 722 times
Been thanked: 471 times

Re: AI endeavours

#108425

Postby onthemove » January 6th, 2018, 10:02 pm

odysseus2000 wrote:...for now Nvidia looks to be the best potential picks...


I'd probably agree on that. I nearly wrote a follow-up post to my initial post suggesting them. Where AI has been trained without the help of cloud computing, a lot of the time the number crunching performed by the libraries now tends to get shipped off onto GPUs, and usually nVidia ones.

While the 'application' end - as opposed to the 'training' end - of the process doesn't need such significant computing power, there is likely to be a market for dedicated, 'programmable' convolutional neural network ('deep learning') chips, i.e. "AI chips", which can apply a pre-trained network very power-efficiently.

I could imagine in future that one or two manufacturers - and nvidia seems well placed at the moment to be one of them - could become the standard, much like Intel has been for CPUs, etc.
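
As a rough illustration of what 'shipping the number crunching off onto the GPU' means in practice, here's a sketch using PyTorch as a stand-in (any of the major deep learning libraries look much the same); the tiny model and random input are placeholders, not anything from a real product:

```python
import torch
import torch.nn as nn

# A stand-in for a pre-trained convolutional network; in reality you would
# load trained weights rather than use randomly initialised ones.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

# The library routes the heavy arithmetic to an nVidia GPU if one is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

with torch.no_grad():                          # pure 'application' (inference), no training
    frame = torch.randn(1, 3, 224, 224, device=device)
    print(model(frame).shape)                  # torch.Size([1, 10])
```

A dedicated inference chip would do the same fixed mapping, just at a fraction of the power.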

"It may be that there is some fundamental limit that stops machines from advancing too much and requires that there will always be a need for human control, but for now I don’t see it. Indeed the ability of AI to look at complicated stuff like GO and develop new, previously un-realised, approaches gives an indication that AI will probably find ways of doing things that humans have missed or academic conservatism has weighed against."


The fundamental limit is engineering :^)

To be clear on the Go example, Go is very constrained. Although there may be a lot of strategies, the rules are clear and the domain very limited and fixed. There is a finite set of clearly quantised board positions. There is also a very clear turn-taking methodology.

All this allows for an 'adversarial' approach to learning strategies. In effect the trainers have the networks play each other. But all the time, they are playing fully constrained in a very 'logical', clearly demarcated world. All the time, there is a 'controlling' program ensuring that they are playing to the rules of Go.

And that's the key point. To be useful, a deep learning network (or any other AI for that matter), needs to have a clear aim, a clear context. Just like any worker in a job - they need to understand what they are expected to do.

For example, if you took 20 arbitrary people, bought an empty factory building and just put those people inside without any instructions, it would be unlikely that you'd end up with a functioning business.

The people that developed that Go algorithm didn't use a deep learning algorithm that had learned to understand English and was then told "Go and learn to play Go". They provided an awful lot of 'scaffolding' in which the network was placed, so that it just learned to play Go. The network probably doesn't even know that it has learned to play Go. All it has done is learn a mapping of inputs to outputs. It (the AI) doesn't understand what that mapping is for. It just knows how to do the mapping it has been trained to do. A mapping that is better than any human has managed. But still just a mapping nonetheless.
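
To make the 'scaffolding' point concrete, here's a toy sketch of the adversarial set-up - nothing like AlphaGo's actual code, with a trivial take-1-or-2 counting game standing in for Go. Notice that the rules and the notion of winning live entirely outside the policies; a policy only ever maps a state and a list of legal moves to a move:

```python
import random

# The "scaffolding": the environment enforces the rules and declares the winner.
def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def random_policy(pile, moves):
    # a learned network would slot in here; it still only sees state + legal moves
    return random.choice(moves)

def self_play(policy_a, policy_b, pile=10):
    players = (policy_a, policy_b)
    turn = 0
    while pile > 0:
        move = players[turn % 2](pile, legal_moves(pile))  # policy: state -> move
        pile -= move                                       # rules applied outside the policy
        turn += 1
    return (turn - 1) % 2        # index of the player who took the last counters

if __name__ == "__main__":
    wins = [0, 0]
    for _ in range(1000):
        wins[self_play(random_policy, random_policy)] += 1
    print("wins per player:", wins)
```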

And this is where engineering comes in with AI.

The AI is providing a few more modules - functions - which are available to engineers to build into more complex systems.

For example, with self driving cars, it will be impossible in practice to just take the appropriate sensors, feed them into a deep learning network, and then hey presto, magically that network does everything - your car will drive you from A to B.

That won't happen. Admittedly someone did try that, using neural networks, in the 1990s, with disastrous results. (I can't find the link now - with all the current buzz about self driving cars, it's getting lost amongst all the current chatter ... but from what I recall, the car managed to stay on the road OK until it reached a bridge ... it turned out the network had learned something like grass indicating the constraint of the edge of the road, and when it came to a bridge without grass there, well... they called it quits.)

In reality, the AI is engineered into multiple layers - in traditional engineering style.

Google (now Waymo) provide an excellent talk here, which gives some good details on 'how a driverless car sees the road' ... hopefully this link should start at the most interesting part.... https://youtu.be/tiwVMrTLUWg?t=470

What you can see from that (for example at 8:17 into it) is how the 'system' has identified the entities around it - and classified them into categories. That is where the main function of the (new) deep learning AI has come in. The deep learning algorithms have enabled the engineers to identify the entities around the vehicle, with a reliability that now matches humans.

[What follows is me reading between the lines of what I can see in the video, and from other sources, combined with my AI background, and software engineering background, based on how I would approach the problem ... and which for the most part seems to be - in general terms at least - how google engineers are approaching the problem]

Then effectively there is a break in the AI. So you have a small - but very clever - layer identifying the entities, but that is a fixed layer, with a clear, demonstrable remit. You can show it millions and millions of pictures of things you might find on the roads, and you can then judge how effectively, how reliably, etc, it can do that job. This gives you a module/layer. And that is the layer you are seeing in the video at that point.

At this stage, there is no planning. At this stage there isn't necessarily any motion consideration. Just first identify what the sensors and cameras can see.
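
In code terms, I'd picture that layer looking something like the sketch below - entirely my own illustration, not Google's design - a detector whose only remit is 'what is in this frame', plus a harness that measures how reliably it does that one job against labelled examples:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    category: str        # "car", "pedestrian", "cyclist", "sign", ...
    confidence: float

def toy_detector(image):
    # placeholder for the trained deep-learning layer; its only remit is
    # "what is in this frame?" - no planning, no motion, no rules of the road
    return [Detection("car", 0.97)]

def evaluate(detector, labelled_frames):
    # a narrow, fixed remit means the layer can be measured in isolation
    correct = 0
    for image, expected_category in labelled_frames:
        found = {d.category for d in detector(image)}
        correct += expected_category in found
    return correct / len(labelled_frames)

# with real data this would be millions of labelled road images
print(evaluate(toy_detector, [("frame0.png", "car"), ("frame1.png", "pedestrian")]))  # 0.5
```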

The next level could then add simple physics motion to each entity - potentially from the lidar sensors, or by comparing sequences of video frames, etc. This would be regular collision avoidance of moving bodies - basic Newton's laws of physics, taught in every secondary school.

Basically, without any further intelligent input ('behaviour') from any of the identified entities, is anything on a collision course?

At this point, if there is a stationary car in front, and you are heading straight for it and about to run out of braking distance, the rest of the system detailed below can be short circuited, and the emergency brake applied at this moment. I would expect any self driving car to have a lot of shortcuts of this type, that constantly monitor if a direct simple collision is likely, and that don't allow the car to knowingly put itself into a position that requires positive action from someone else to avoid a collision. (i.e. very defensive driving).
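
That sort of short-circuit is just secondary-school kinematics. A minimal sketch, with illustrative numbers rather than anything from a real system:

```python
def emergency_brake_needed(gap_m, speed_mps, decel_mps2=6.0, reaction_s=0.2):
    # reaction distance + v^2 / (2a): if we cannot stop within the measured gap
    # to a stationary obstacle, brake now and bypass the higher planning layers
    stopping_distance = speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)
    return stopping_distance >= gap_m

# e.g. ~45 mph (20 m/s) with 30 m to a stationary car ahead
print(emergency_brake_needed(gap_m=30.0, speed_mps=20.0))   # True -> brake immediately
```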

Then I believe (from what I've seen of the google videos) there are then separate deep learning algorithms that predict the behaviour of the entities the first layer has identified.

From an engineering point of view - i.e. the ability to be able to develop and test the system - it is probably unlikely that google (or any other serious player) would merge the two together into a single network and just hope the AI will sort it out.

Each layer needs to have a clear remit, with testable scope. And the way the presentation in the video builds on each layer, I believe, is reflecting how their systems are probably actually doing it in the real world.

Separately, a parallel layer would involve analysing the static entities - the traffic lights, the road signs, etc - and then pulling information from them. Where it has identified a road sign, what limit is it indicating? Where it has identified a traffic light, what colour is currently showing? Where it has identified lane markings, what are these markings - effectively categorising them according to the markings in the Highway Code.

This would be engineered as a separate layer to the 'moving' entity analysis layer.

From an engineering perspective, although an AI algorithm might read the status of a traffic light, might identify a give-way road marking, etc, I believe that these would then be fed to a more traditionally engineered algorithm, which can provide a clear, demonstrable output of the system's understanding of the road and associated rules around it. And I believe this is the kind of information that you can see in that - and other similar - videos, where they show the world around the car with annotations indicating what is going on.
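
As a sketch of what I mean by that hand-off (again, my own illustration), the vision layer's detections get reduced to a plain, inspectable state that ordinary deterministic logic - and ordinary unit tests - can work with:

```python
# Detections from the vision layer arrive as plain records; this layer turns
# them into an explicit "road rules" state and applies traditional logic to it.
def rules_state(detections):
    state = {"speed_limit_mph": None, "traffic_light": None, "give_way": False}
    for d in detections:
        if d["type"] == "speed_limit_sign":
            state["speed_limit_mph"] = d["value"]
        elif d["type"] == "traffic_light":
            state["traffic_light"] = d["value"]        # "red" / "amber" / "green"
        elif d["type"] == "give_way_marking":
            state["give_way"] = True
    return state

def may_proceed(state):
    return state["traffic_light"] != "red" and not state["give_way"]

print(may_proceed(rules_state([{"type": "traffic_light", "value": "red"}])))  # False
```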

As an aside, a more recent video I saw, showed how google were looking at presenting this information to the occupants of the car in a way that helps give the occupants confidence in the capability of the car - as a passenger, you can see for yourself that the car is able to see and identify all the things around it that you can see, and it can show you with an arrow the path it intends to take, the status of lights, etc.... the talk that I saw, said this was very important because if the car stops, such a display will allow the user to understand _why_ the car has decided to stop.

From an engineering point of view, having this as a traditional (non-AI) layer, is probably going to be critical to enabling cars to be updated quickly and reliably if the rules of the road ever change - which they will!

Once the rules have been analysed, the permitted paths for the car can be identified. Once the possibilities are determined, then a separate AI can perform path planning as to which of the permitted routes to take. This may be using deep learning algorithms, or could simply now use more traditional planning algorithms, or other custom algorithms. The point is, because the whole system is engineered into layers, this path planning layer, can use completely separate techniques to the deep learning algorithms used to identify other cars, pedestrians etc.
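
Continuing the same illustration, the planning layer then only ever sees the permitted options, so it can be swapped between a learned model and something as dull as a hand-written cost function without touching the perception layers:

```python
# Hypothetical final layer: the rules layer has already produced the set of
# permitted paths, so this layer only scores and picks one of them.
def choose_path(permitted_paths):
    def cost(path):
        return path["length_m"] + 10.0 * path["predicted_conflicts"]
    return min(permitted_paths, key=cost)

paths = [
    {"name": "stay_in_lane", "length_m": 120.0, "predicted_conflicts": 0},
    {"name": "overtake",     "length_m": 118.0, "predicted_conflicts": 1},
]
print(choose_path(paths)["name"])   # stay_in_lane
```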

--

I suppose what I'm really trying to emphasize is that in the real world - to make things that you can sell to consumers with a 12 month guarantee, or to which they will entrust their life, or trust with the operation of your business - inherently they are going to have to be constrained, with a clearly defined and testable remit.

And that requires engineered modularisation.

To take your example of Amazon and others providing deliveries ... they may use some AI for scheduling the deliveries. But that isn't opaque. The planning aspect is still monitored by people. People can overrule it if felt necessary. The AI isn't top to bottom running the business excluding people from it.

Sure, the top level business might use deep learning to predict likely workload for planning staffing. That might consider the weather, the news, the economy, and so forth. And managers might trust the output of such algorithms and commit to bringing in enough staff to cope with the predicted demand.
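
For instance - a toy sketch of my own, with invented numbers - the staffing model could be as unglamorous as a regression over a few external signals, trusted by managers but sitting alongside them rather than running the business:

```python
from sklearn.ensemble import GradientBoostingRegressor

# Invented history: [temperature_c, is_weekend, days_to_christmas] -> parcels shipped
X = [[5, 0, 30], [7, 1, 29], [2, 0, 10], [1, 1, 9], [12, 0, 200], [15, 1, 199]]
y = [18000, 15000, 42000, 38000, 9000, 8000]

model = GradientBoostingRegressor().fit(X, y)

# Predicted volume a week before Christmas; a manager decides staffing from it.
print(int(model.predict([[3, 0, 7]])[0]))
```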

There may be AI used in looking for suspicious behaviours in workers - looking for potential criminal behaviour in video surveillance, or looking at individuals' CV histories to spot anomalies, etc.

There may be AI in the sat navs of drivers, listening for their instructions to turn the volume up, or telling the unit the road is blocked.

There may be AI monitoring the cars to detect when they may need a service before they breakdown and are unable to finish their delivery.

But these are all separate AI components, each doing a discrete task, and only glued together into an 'organisation' by people managing those systems.

Those people may also use AI to help them manage those systems, to monitor all the different levels, but ultimately it will be a partnership.

Just like with self driving cars, all businesses need to adhere to regulations. If you tried to run your business with a single deep learning AI (or other general AI) that learned your whole business as a single black box, you'd be unlikely able to respond to regulation changes - even simple things like working time directives, etc.

And from a cost perspective, it is prohibitive to build general purpose humanoid replacements. If you want an AI to monitor the quality of your product coming off a production line, you won't go to a vendor selling a general purpose two-armed, two-legged humanoid android, Terminator-style, that could decide to take over the world.

"This has all happened in about 6 years and the rate of AI innovation is exponential."


While the current AI boom is exciting, be careful not to take it out of context.

Ultimately, all it is doing is showing that computers can now do some more tasks as well as or better than humans.

But that isn't something new to the past 6 years.

Ever since the invention of the computer, computers have been able to perform calculations much faster, with bigger numbers, etc, than humans ever could.

Spreadsheets then allowed businesses to organise and manage accounts better than they could with pen and paper.

Computers could draw charts of that data faster and more efficiently than humans ever could.

Regarding the rate of AI innovation, I don't agree it's exponential. Quite the opposite. I'd say there has been a substantial step change but, like GPS (global positioning system), the current change will be transformative yet limited.

Just like GPS 20 years ago, which was going to be everywhere - even your toaster would know where it was - the current buzz with AI is a substantial step in a particular technique.

Sure, there's currently a rush to apply this anywhere and everywhere and that will change the way we live and do business. But it is limited in scope. Once all the nooks and crannies have deep learning in them, we're back to waiting for the next advance - and they don't happen often!

And yes, I'm sure the current deep learning AI will somehow even make it into your toaster!

But your toaster isn't going to turn into the next Adolf Hitler taking over the world...

... but you could end up with one like this ... https://www.youtube.com/watch?v=LRq_SAuQDec ... but only because someone specifically designed it like that for a laugh.

At the end of the day, all this AI buzz is really just engineering.

Very exciting and interesting engineering.

But it is just engineering.

Could be used for good or for evil (war or peace) - but that will be down to how the engineers, engineer it.

onthemove
Lemon Slice
Posts: 540
Joined: June 24th, 2017, 4:03 pm
Has thanked: 722 times
Been thanked: 471 times

Re: AI endeavours

#108431

Postby onthemove » January 6th, 2018, 10:44 pm

I should have clicked 'Up Next' on the video I linked to in my previous post, before posting :)

Which would have shown me this video...

https://www.youtube.com/watch?v=URmxzxYlmtg

It's a video explaining - in detail - nVidia's self driving car platform.

It shows very well how the system is broken into modular components. And he even shows each module operating separately ... lane detection, 'safe to drive' area detection, and so on.

I hadn't realised they (nVidia) had already developed this, although to be honest I suspect there will be many more iterations before it finally appears in real world cars. But it does show that they are really trying to put themselves at the forefront of self driving cars.

Though how this will compete with Waymo, etc, I'm not sure. I believe that in terms of the sensing and computation, Waymo are building all their components from scratch, in house, so won't likely be using this platform.

odysseus2000
Lemon Half
Posts: 6364
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1530 times
Been thanked: 959 times

Re: AI endeavours

#108432

Postby odysseus2000 » January 6th, 2018, 11:50 pm

Hi Onthemove,

Thank you for your most interesting post.

I agree that AI is currently set up to do specific jobs with well defined boundary conditions, as in the Go example. Where I would diverge is in considering this to be different to what humans do. E.g. an accountant works in a job with very well defined boundary conditions, which can change due to legislation, but that just creates another well defined boundary. Similarly for very many other jobs - and the more sophisticated the job, the more tightly defined the boundary conditions. E.g. if you're a fighter pilot you have very tight rules of engagement; if you're a physicist you have very tight fundamental laws that you have to work within. So rather than AI, as touted, being likely to hit unskilled and semi-skilled jobs, I suspect it will hit highly skilled jobs first.

Sure, computers since their invention have been much faster than humans; the difference now is that at some level they can think, or perhaps more correctly mimic thought. That is new. By exponential I meant in terms of application growth, in that as soon as business X has success with AI, business Y notices and begins its own AI effort, etc.

Regarding the Google self-drive system, it is not clear to me if that is practical. Using radar and its other sophisticated sensors may be better, but is it the Betamax against the VHS of Tesla's system, which is much simpler, relying on cameras, not radar, and which, by all the feedback from the hundreds of thousands of Tesla cars on the roads, is building its own database covering all the driving experiences these cars see, and which is being integrated into Tesla cars to do other jobs? The most recent being how the AI turns on the wipers when needed without any need for rain or other sensors. Tesla also has, or will shortly have according to Musk, the ability to make its own AI chips rather than relying on Nvidia. It has already ended its collaboration with the Israeli company Mobileye to go it alone.

Whether all of the AI stuff is hype without any real substance is, at least for the moment, reasonably clear, in that much of what has been suggested is hype. One sees in one's own computers and software how challenged they are: fine following human instructions, otherwise useless. But as I see the companies that are using it effectively, such as Amazon, Google, Facebook, Apple, Walmart, Netflix… there is a clear separation between them and their less AI-powered competitors, such that at some level AI seems to be doing something powerful and new. Sure, this is still run by people, but it is the AI that is guiding them, and as they are working in a tight rule-based environment it seems, at least to me, not impossible that many of the managers could before too long be relegated to the role of checking the AI, and later to no role at all.

I want to believe that AI will be nothing new, that humans will still be needed, but extrapolating forwards I am not so sure. Still I have been wrong many times and so it will be interesting to see what happens.

Regards,

ReformedCharacter
Lemon Quarter
Posts: 3120
Joined: November 4th, 2016, 11:12 am
Has thanked: 3591 times
Been thanked: 1509 times

Re: AI endeavours

#108433

Postby ReformedCharacter » January 7th, 2018, 12:33 am

odysseus2000 wrote:
Regarding the Google self drive system it is not clear to me if that is practical. Using radar and its other sophisticated sensors may be better, but is it the betamax against the VHS of Tesla’s system which is much simpler relying on cameras, not radar...

Regards,

No:

To make sense of all of this data, a new onboard computer with over 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software.


https://www.tesla.com/en_GB/autopilot

RC

odysseus2000
Lemon Half
Posts: 6364
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1530 times
Been thanked: 959 times

Re: AI endeavours

#108434

Postby odysseus2000 » January 7th, 2018, 12:50 am

Reformed Character


No:

To make sense of all of this data, a new onboard computer with over 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software.


https://www.tesla.com/en_GB/autopilot

RC


Thank you for the correction. I wasn't aware of the forward radar, or the number of ultrasonic sensors. Although it is still, as I understand it, a lot simpler than the Google system.

Regards,

Itsallaguess
Lemon Half
Posts: 9129
Joined: November 4th, 2016, 1:16 pm
Has thanked: 4140 times
Been thanked: 10023 times

Re: AI endeavours

#108440

Postby Itsallaguess » January 7th, 2018, 6:49 am

onthemove wrote:
From an engineering perspective, although an AI algorithm might read the status of a traffic light, might identify a give-way road marking, etc, I believe that these would then be fed to a more traditionally engineered algorithm, which can provide a clear, demonstrable output of the system's understanding of the road and associated rules around it.


I'm not sure what sort of road-infrastructure the current level of self-driving-car testing has been carried out on, but given that around 98% of the roads around my area look to have had their road-markings painted in chalk around the turn of the 19th century, I fail to see how a high-level roll out of this technology can ever be achieved in the UK without a very expensive country-wide programme of road-improvements first being carried out. This issue can't be unique to this country either....

I agree that nVidia looks prime-placed to benefit from the technology side of things (although prime-movers are well-known to drop by the wayside quite dramatically in the tech arena...), but if we're talking about pick-and-shovel makers then I think the companies upgrading and maintaining the required road-infrastructure will be clear beneficiaries of any wide-scale roll out of this technology.

Interesting thread, thanks for taking the time guys, although I've got to say that I think Ody is over-egging the pudding somewhat, and forgetting that whilst any given technological AI solutions might become 'possible and available', there then, ultimately, comes the decidedly thorny issue of public-acceptance....

Google glasses, anyone?

Cheers,

Itsallaguess

tjh290633
Lemon Half
Posts: 8209
Joined: November 4th, 2016, 11:20 am
Has thanked: 913 times
Been thanked: 4097 times

Re: AI endeavours

#108452

Postby tjh290633 » January 7th, 2018, 9:46 am

What happens when two self guided vehicles meet in a narrow lane? Which one backs up to the nearest passing place? What about any other vehicles behind, which may also be self guided?

It would never work in our lane.

TJH

onthemove
Lemon Slice
Posts: 540
Joined: June 24th, 2017, 4:03 pm
Has thanked: 722 times
Been thanked: 471 times

Re: AI endeavours

#108453

Postby onthemove » January 7th, 2018, 9:48 am

Itsallaguess wrote:
I'm not sure what sort of road-infrastructure the current level of self-driving-car testing has been carried out on, but given that around 98% of the roads around my area look to have had their road-markings painted in chalk around the turn of the 19th century, I fail to see how a high-level roll out of this technology can ever be achieved in the UK without a very expensive country-wide programme of road-improvements first being carried out. This issue can't be unique to this country either....



Hopefully this will start at the right points...

https://youtu.be/URmxzxYlmtg?t=884

https://youtu.be/URmxzxYlmtg?t=946

https://youtu.be/URmxzxYlmtg?t=413

These parts of the video are demonstrating how the car knows where it is safe to drive... including a country lane with no lane markings, even at night, and even going onto rough ground off-road when directed by roadwork cones.

I will acknowledge that the video is demonstrating nVidia's platform, and is probably not as advanced as Waymo's. In other words, I think what is being shown in this video is more of a prototype aimed at testing the individual component modules. Waymo seem to be focussing on demonstrating their combined systems when fully functional.

Realistically, self driving cars will have to handle faded - and non-existent - road markings. For example, for the final stage of parking onto a drive. Or a rough, potholed car park. Or the many thousands of miles of single track road in the UK (similar to the one the car in the above video is driving on).

You can't have a self driving car just stop because the lane markings are worn.

In one video (not the above) they say that actually navigating in snow isn't really an issue (handling in snow, however, is a completely different issue - in practice that will be a while yet).

I suspect that is because, in essence, the first layer of autonomy is simply understanding what objects are around you, and knowing to avoid big solid things. You can do that in snow (road handling notwithstanding). And in rain, where the reflections on the wet road obscure the road markings.

It all comes back round to identifying where it's physically safe to drive, independent of what lane markings tell you (see third video link above). After all, just because a lane is marked doesn't mean it's safe to drive in it - a sinkhole may have opened up and not yet been coned off. Or a tree might have fallen across it. Or there may be a spilled load from a lorry all over it.

So even where a lane is marked, you may have to actually deviate from it to avoid an obstruction.

Those abilities to handle issues within lanes are, in essence, the same abilities used to decide where it is safe to drive when the lane markings are worn or non-existent.
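
As a toy illustration of 'markings are a hint, drivable space decides' (my own sketch, not how any particular system is built):

```python
def usable_path(lane_centre_points, is_drivable):
    # lane markings propose the path; the drivable-space check disposes
    return [p for p in lane_centre_points if is_drivable(p)]

def is_drivable(point):
    blocked = {(4, 0)}                  # e.g. a fallen tree detected at x = 4
    return point not in blocked

lane = [(x, 0) for x in range(8)]       # points along the marked (or assumed) lane
print(usable_path(lane, is_drivable))   # the blocked point is dropped; plan around it
```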

TUK020
Lemon Quarter
Posts: 2039
Joined: November 5th, 2016, 7:41 am
Has thanked: 762 times
Been thanked: 1175 times

Re: AI endeavours

#108455

Postby TUK020 » January 7th, 2018, 9:58 am

The description from onthemove (another great post, by the way) of multiple layers of AI reminds me a lot of the Multiple Drafts theory of consciousness and how the human brain works. Except the human brain can decide if it needs new layers, and invent them.

It appears that this all means that AI/engineered systems have become much more capable in certain defined activities (eg vision recognition) in recent years.

In addition to self-driving cars, in what other fields of economic activity will this have a profound transformative effect?

onthemove
Lemon Slice
Posts: 540
Joined: June 24th, 2017, 4:03 pm
Has thanked: 722 times
Been thanked: 471 times

Re: AI endeavours

#108459

Postby onthemove » January 7th, 2018, 10:06 am

tjh290633 wrote:What happens when two self guided vehicles meet in a narrow lane? Which one backs up to the nearest passing place? What about any other vehicles behind, which may also be self guided?


I suspect a self driving car will probably be far better at reversing than a normal human driver.

I suspect that a self driving car will also be faster at applying the brakes when meeting an oncoming car on a single track lane.

It is then simply a higher level navigation issue as to how to deal with the deadlock.

There are already deadlock detection strategies in computer programming. For example (sketched a little further down): wait a random time and, if the obstruction hasn't moved, then at that point invoke a reverse.

Deciding whether the obstruction is likely to be just another car that wants to pass, or a permanent obstruction like an accident which means you need to find another route entirely, is probably slightly more challenging - but not insurmountable. Clearly, any obstruction where there is width to pass would indicate a permanent obstruction. So it should be easy to identify if it is on a narrow lane that might require passing places, etc. Also, clearly, if as soon as you start reversing the oncoming car moves closer towards you, it is probably an issue of wanting to pass.
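
A toy sketch of that randomised back-off; the callables stand in for the real vehicle interfaces and all the names are invented:

```python
import random
import time

def resolve_standoff(oncoming_has_moved, reverse_to_passing_place):
    # Wait a random interval; if the oncoming vehicle still hasn't moved, reverse.
    # The randomness makes it unlikely that two autonomous cars both sit waiting,
    # or both reverse, indefinitely.
    time.sleep(random.uniform(0.5, 3.0))
    if not oncoming_has_moved():
        reverse_to_passing_place()

# e.g. with stubbed-out vehicle actions:
resolve_standoff(lambda: False, lambda: print("reversing to the nearest passing place"))
```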

With the deadlock timeout above, it would probably be beneficial for all cars to be autonomous - autonomous cars won't stubbornly dig their heels in and have a stand-off demanding the other moves back. You are more likely to find a queue of 10 or 20 autonomous cars all reversing comfortably backwards than you are to find 10 to 20 human drivers all willing to reverse together any distance - and having the driving skill for them all to do that without reversing into a hedge!

johnhemming
Lemon Quarter
Posts: 3858
Joined: November 8th, 2016, 7:13 pm
Has thanked: 9 times
Been thanked: 609 times

Re: AI endeavours

#108468

Postby johnhemming » January 7th, 2018, 11:09 am

TUK020 wrote:The description from onthemove (another great post, by the way) of multiple layers of AI reminds me a lot of the Multiple Drafts theory of consciousness and how the human brain works. Except the human brain can decide if it needs new layers, and invent them.

It appears that this all means that AI/engineered systems have become much more capable in certain defined activities (eg vision recognition) in recent years.

In addition to self-driving cars, in what other fields of economic activity will this have a profound transformative effect?


Speech recognition is now a lot better. That is already resulting in people buying speech interfaces for their homes (Alexa, Google Home etc). Speech interfaces to computers will have an impact in the call centre area.

Image analysis is already relevant in the medical world and textual analysis is being used in complex legal cases.

In other areas it is more a question of further development of the use of tech rather than AI per se.

