Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371
https://www.youtube.com/watch?v=VcVfceTsD0A
RC
odysseus2000 wrote:mc2fool
Me: And which of those applies to you?
ChatGPT: As an AI language model, I possess aspects of intelligence, but I do not possess consciousness or sentience. While I can understand and generate human-like responses, analyze information, and perform tasks that require cognitive abilities, I do not have subjective experiences, emotions, or self-awareness. I am a sophisticated tool designed to assist users in generating human-like text based on patterns and knowledge learned from training on a large dataset.
It looks to be simply repeating what it has been told to say.
I do not see how these AI systems can be kept from developing sentience & consciousness.
There was once a time when animals were considered to have no consciousness and no self-awareness, and were simply unfeeling creatures to be used as humans wished, without any concern for their well-being.
Clearly some farm animals are still treated like that, but mostly, especially in the UK, we treat animals as conscious living creatures with intelligence & feelings.
It will be very interesting if AI turns out not to have all the attributes of life: sentience, consciousness et al.
At the moment people are in the disbelief phase as regards AI, somehow wanting to remain superior & for AI to remain like every previous machine: slaves without rights. Remember that, despite overwhelming evidence to the contrary, the human slave trade was official government policy all over the planet, often based on arguments that slaves were inferior creatures.
Regards,
ReformedCharacter wrote:OpenAI \ ChatGPT is lousy at chess, so far
RC
ChatGPT Broke Chess
ReformedCharacter wrote:ReformedCharacter wrote:OpenAI \ ChatGPT is lousy at chess, so far
RC
It seems to be getting much better: ChatGPT Broke Chess
Slight exaggeration there.
https://www.youtube.com/watch?v=9LDaY7X2qGk
RC
mc2fool wrote:ReformedCharacter wrote:It seems to be getting much better:
Slight exaggeration there.
https://www.youtube.com/watch?v=9LDaY7X2qGk
RC
Wow! Very entertaining, and quite hilarious at times! Thanks!
An early computer program built in the 1960s has beaten the viral AI chatbot ChatGPT at the Turing test, designed to differentiate humans from artificial intelligence.
Researchers from UC San Diego in the US tested the early chatbot ELIZA, created in the mid-1960s by MIT scientist Joseph Weizenbaum, against modern versions of the technology.
They found that ELIZA outperformed OpenAI’s GPT-3.5 AI, which powers the company’s free version of ChatGPT.
The Turing test has been the benchmark for determining a machine’s ability to imitate human conversation ever since it was first conceived in 1950 by British computer scientist Alan Turing.
The latest study required 652 human participants to judge whether they were talking to another human or an AI chatbot over the internet.
...
ELIZA was able to pass itself off as a human 27 per cent of the time, while GPT-3.5 had a success rate of just 14 per cent.
AI expert Gary Marcus described the success of ELIZA as “embarrassing” for modern tech companies working on AI chatbots; however, other academics argued that ChatGPT was not designed to perform well in the Turing test.
ursaminortaur wrote:Embarrassing - Mid 1960's ELIZA program beats free version of ChatGPT in Turing test.
https://www.independent.co.uk/tech/chatgpt-turing-test-failed-ai-b2459930.html
kempiejon wrote:27 percent of people can't spot ELIZA. That's not as embarrassing for computer science as it is for society.
One of the reasons noted in the study for participants mistaking ELIZA for a human was that it was “too bad” to be a current AI model, and therefore “was more likely to be a human intentionally being uncooperative”.
ursaminortaur wrote:The article mentions one possible somewhat ironic explanation
kempiejon wrote:ursaminortaur wrote:The article mentions one possible somewhat ironic explanation
That's excellent: humans think it could be a bluff by a human pretending to be a crappy AI. I have now read the article.
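For context on why ELIZA's style reads so differently from a modern model: it is not a learned model at all, just a small set of hand-written pattern-to-template rules plus pronoun "reflection". A toy sketch of the technique in Python — the rules below are illustrative, not Weizenbaum's original DOCTOR script:

```python
import re

# First/second-person swaps so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Ordered (pattern, template) rules; the first match wins.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"(.*)\bmother\b(.*)", re.I), "Tell me more about your family."),
]
DEFAULT = "Please, go on."

def reflect(fragment: str) -> str:
    # Swap pronouns word by word: "my job" -> "your job".
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return DEFAULT
```

For example, `respond("I need a holiday")` returns "Why do you need a holiday?", and anything unmatched falls through to the stock "Please, go on." — exactly the kind of evasive non-answer a judge might read as a deliberately uncooperative human.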
ChatGPT appears to have broken, providing users with rambling responses of gibberish.
In recent hours, the artificial intelligence tool appears to be answering queries with long and nonsensical messages, talking Spanglish without prompting – as well as worrying users, by suggesting that it is in the room with them.
There is no clear indication of why the issue happened, but its creators said they were aware of the problem and are monitoring the situation.
ursaminortaur wrote:Has ChatGPT had a mental breakdown ?
https://www.independent.co.uk/tech/chatgpt-status-reddit-down-gibberish-messages-latest-b2499816.html
ReformedCharacter wrote:Creating video from text:
Sora is an AI model that can create realistic and imaginative scenes from text instructions.
https://openai.com/sora
RC