Clitheroekid wrote:I've read various articles about how the US, Europe and the UK are concerned about the progress of AI, and their proposals to regulate it. However, countries such as Russia and China will obviously not be bound by any such restrictions, so as I have very little understanding of the subject I'd be interested to know how risky it is for Western countries to control development of AI if no such controls are being applied elsewhere?
On the face of it this would allow 'bad actors' to develop systems that could potentially be more powerful than those in the West, but is this a realistic concern? Would they be held back by the fact that their level of technology is inferior, and if so could they overcome this - ironically with the assistance of AI?
Sorry if these questions seem naive, but it seems to me that the weaponisation of AI might pose a greater threat to us than conventional military weapons.
There is a general international political consensus that AI is dangerous, and most nations are at least paying lip service to the idea of controls on commercial systems. Probably all national militaries are developing strategic AI, whether for last-resort use or for a first strike, much as nuclear weapons were quickly acquired by the East after the West developed them, which led to the doctrine of Mutually Assured Destruction should anyone start a nuclear war.
It is a complicated subject in that the technology offers immense capability for both good and ill, so no nation dares ignore it. The US has begun to restrict sales of compute chips to China, but these kinds of embargoes hardly ever work, and China is developing its own semiconductor industry.
There is also the question of alignment. ChatGPT (GPT stands for Generative Pre-trained Transformer) is a large model, with billions of parameters, that is tuned with something of a woke bias, whereas Grok (from xAI) is more centrist. But the main point is that AI can be aligned with political thought, so should it come to a conflict situation one could have, say, capitalist-aligned AI versus communist-aligned AI.
Currently the primary winner in this technology space is Nvidia, whose CEO (Jensen Huang) is apparently worth over $43bn:
https://www.forbes.com/profile/jensen-h ... b1bc333a6c
As of now the models are relatively primitive compared to what is likely coming, although some already score an IQ of over 150. Most of them are not stable enough for critical work: they can get many things right and then go off the rails. AI bears many resemblances to young, intelligent children who have not yet learned how to concentrate and focus on problems, but, as with children, this seems to be improving, albeit slowly.
The range of what AI can do is extraordinary, and the more knowledge a person has, the easier it is for AI to replace them. For now dynamic processes like non-geofenced car driving are not yet solved, although I expect it is close and will happen, but in games like chess and Go AI is supreme, able to defeat multiple grandmasters simultaneously. AI has also predicted most known protein structures and discovered many previously unknown crystal structures.
It is possible that AI may reach some kind of plateau and remain less capable than humans, but there is, as far as I know, no evidence or even suggestion that AI is so limited, and the possibility of superhuman capabilities evolving currently seems very high.
How humans will react is still unknown, as most people have no idea what AI can do, and those who have played with chatbots regularly cite their unreliability. But it is, imho, very like the early internet, when most thought it a fad; now it is omnipresent in most business and social activities.
If current trends continue, it is likely that many human jobs will not be economically viable. E.g. a GP costs something like £200k per year for 40 hours a week. An AI with similar or better capabilities can operate 168 hours per week and cost a fraction of what a GP costs. For now the only jobs AI can't tackle are manual ones, whether low-skilled like labouring or high-skilled like brain surgery, but those jobs are unlikely to be beyond AI for long.
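To put rough numbers on that comparison: the £200k salary and 40-hour week are the figures from the post above; the 52 working weeks a year is a simplifying assumption of mine, and no actual AI running cost is claimed, only the difference in available hours.

```python
# Back-of-envelope: implied hourly cost of a GP vs. the extra hours
# an always-on AI could cover. Figures are illustrative, not real data.

GP_ANNUAL_COST = 200_000      # £ per year, from the post
GP_HOURS_PER_WEEK = 40        # from the post
AI_HOURS_PER_WEEK = 168       # 24 hours x 7 days
WEEKS_PER_YEAR = 52           # simplifying assumption

gp_cost_per_hour = GP_ANNUAL_COST / (GP_HOURS_PER_WEEK * WEEKS_PER_YEAR)
capacity_ratio = AI_HOURS_PER_WEEK / GP_HOURS_PER_WEEK

print(f"GP implied cost per hour: £{gp_cost_per_hour:.2f}")
print(f"AI available hours vs GP: {capacity_ratio:.1f}x")
```

So even before any difference in per-hour running cost, the AI offers 4.2 times the availability against an implied human cost of roughly £96 per consulting hour.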
If all of this happens, the entire structure behind most economic models falls apart, and some kind of universal basic income will be needed for most folk. This may be a time of unprecedented quality of life, or of unprecedented despair when folk realise they can offer much less than AI can.
Interesting times!
Regards,