
AI endeavours

Bubblesofearth
Lemon Quarter
Posts: 1111
Joined: November 8th, 2016, 7:32 am
Has thanked: 12 times
Been thanked: 452 times

Re: AI endeavours

#636246

Postby Bubblesofearth » December 26th, 2023, 10:04 am

AI is the subject of this year's Royal Institution Christmas Lectures. 8pm today, BBC Four.

BoE

CliffEdge
Lemon Quarter
Posts: 1561
Joined: July 25th, 2018, 9:56 am
Has thanked: 459 times
Been thanked: 434 times

Re: AI endeavours

#636344

Postby CliffEdge » December 26th, 2023, 11:30 pm

ReformedCharacter wrote:
Bubblesofearth wrote:
The human brain is essentially a processing machine. A complex one, of course, and still only poorly understood, but there is no a priori reason that I can think of why AI could not eventually surpass it in all aspects.

Unless you are religious and believe in a soul, what is there about the brain that cannot be replicated by AI?

BoE

Consciousness, perhaps? But there seems to be a lack of consensus about what consciousness is.

Christof Koch: Consciousness | Lex Fridman Podcast #2

https://www.youtube.com/watch?v=piHkfmeU7Wo

RC


I will accept that, given the distribution of computing devices all over the place, interconnected at the speed of light, and considering space-time geometry, non-simultaneity and the differing velocities of those devices relative to each other,

then

it would not be possible, even theoretically, to build a mechanical device capable of duplicating such a system.

Which does open up possible emergent properties. (Completely irrelevant to the information filter that is the so-called AI of today, of course.)

murraypaul
Lemon Slice
Posts: 785
Joined: April 9th, 2021, 5:54 pm
Has thanked: 225 times
Been thanked: 265 times

Re: AI endeavours

#637605

Postby murraypaul » January 2nd, 2024, 10:15 am

ReformedCharacter wrote:
GPT Pilot - Build Full Stack Apps with a SINGLE PROMPT (Made for Devs)

More evidence that the nature of programming is undergoing something of a revolution. Admittedly, I'm not a skilled programmer, but it seems that anything I could do can now be undertaken in a few seconds or minutes by an AI. My 'skills' are now redundant.


Programming AI has reached the point where what you could previously outsource to a low-skilled code farm, you can now 'outsource' to an AI.

So if you have a simple task that has been done thousands of times before, and you have defined what the inputs and outputs should be, you'll probably get something that works most of the time but isn't exactly what you want.

I would imagine there are already people freelancing on coding websites and just feeding the tasks to AI.

But that doesn't begin to cover what a skilled programmer does.

odysseus2000
Lemon Half
Posts: 6449
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1565 times
Been thanked: 978 times

Re: AI endeavours

#640709

Postby odysseus2000 » January 16th, 2024, 11:30 pm

Google AI a better doctor than a human?

https://www.nature.com/articles/d41586- ... 4-52366044

Regards,

Tedx
Lemon Quarter
Posts: 2075
Joined: December 14th, 2022, 10:59 am
Has thanked: 1849 times
Been thanked: 1489 times

Re: AI endeavours

#641030

Postby Tedx » January 18th, 2024, 9:59 am

I read a snippet from an interview yesterday with someone from Microsoft.

He basically implied that one AI app on your phone (from Microsoft/OpenAI, presumably) will replace all the other apps in one go. It would, he said, be the end of the Google and Apple app stores.

It was likened to having a fully functional personal assistant in your pocket 24/7: managing your mail and your TV viewing, doing your taxes, taking notes, responding to messages, keeping an eye on the weather for you, authenticating your ID, managing your shopping lists, making your online purchases, managing your energy accounts and Gawd knows what else. And the AI won't be interacting with humans; it'll be interacting with other AI programs.

The problem with personal assistants (in human form) is that you become overdependent on them and they leave you at the worst possible time. I guess we don't have that problem with AI, unless it gets kidnapped (stolen) or hacked or something.

It's a brave new world out there.

:shock:

odysseus2000
Lemon Half
Posts: 6449
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1565 times
Been thanked: 978 times

Re: AI endeavours

#641232

Postby odysseus2000 » January 19th, 2024, 11:22 am

Tedx wrote:I read a snippet from an interview yesterday with someone from Microsoft.

He basically implied that one AI app on your phone (from Microsoft/OpenAI, presumably) will replace all the other apps in one go. It would, he said, be the end of the Google and Apple app stores.

It was likened to having a fully functional personal assistant in your pocket 24/7: managing your mail and your TV viewing, doing your taxes, taking notes, responding to messages, keeping an eye on the weather for you, authenticating your ID, managing your shopping lists, making your online purchases, managing your energy accounts and Gawd knows what else. And the AI won't be interacting with humans; it'll be interacting with other AI programs.

The problem with personal assistants (in human form) is that you become overdependent on them and they leave you at the worst possible time. I guess we don't have that problem with AI, unless it gets kidnapped (stolen) or hacked or something.

It's a brave new world out there.

:shock:


Microsoft have been saying how wonderful Co-pilot is, what it can do, why it is essential… everything except for sales.

Either it is selling so strongly that they don’t even mention it, or it ain’t selling.

Regards,

odysseus2000
Lemon Half
Posts: 6449
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1565 times
Been thanked: 978 times

Re: AI endeavours

#651143

Postby odysseus2000 » March 4th, 2024, 11:18 am

Interesting 14-minute video on China's troubles and opportunities due to US sanctions on AI chips, giving a good overview of the main players and their performance:

https://x.com/anastasiintech/status/176 ... DCpgdbFBxg

Regards,

ReformedCharacter
Lemon Quarter
Posts: 3142
Joined: November 4th, 2016, 11:12 am
Has thanked: 3650 times
Been thanked: 1522 times

Re: AI endeavours

#651146

Postby ReformedCharacter » March 4th, 2024, 11:32 am

odysseus2000 wrote:Microsoft have been saying how wonderful Co-pilot is, what it can do, why it is essential… everything except for sales.

Either it is selling so strongly that they don’t even mention it, or it ain’t selling.

Regards,

There is a free version, but I asked how many paying customers Copilot has...

Co-pilot wrote:Microsoft’s GitHub Copilot has seen remarkable growth in its user base. As of now, there are over 1 million paid Copilot users across more than 37,000 organizations that subscribe to Copilot for business. This widespread adoption extends beyond the United States, indicating significant traction globally. The recent addition of Copilot Chat has also been well-received, benefiting both digital natives like Shopify and leading enterprises such as Maersk and PWC, enhancing the productivity of software developers. It’s exciting to witness the impact of generative AI in the tech world!

And if you’re curious about pricing, Microsoft Copilot is still free for users with Microsoft 365 E3 and E5, as well as Business Premium and Business Standard packages. Additionally, specific Copilot solutions, like Copilot for Sales and Service, are available at a cost of $50 per month per user. So whether you’re a seasoned developer or just starting out, Copilot continues to evolve and empower users in their coding journey!

https://copilot.microsoft.com/

RC

odysseus2000
Lemon Half
Posts: 6449
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1565 times
Been thanked: 978 times

Re: AI endeavours

#651156

Postby odysseus2000 » March 4th, 2024, 12:03 pm

ReformedCharacter wrote:
odysseus2000 wrote:Microsoft have been saying how wonderful Co-pilot is, what it can do, why it is essential… everything except for sales.

Either it is selling so strongly that they don’t even mention it, or it ain’t selling.

Regards,

There is a free version, but I asked how many paying customers Copilot has...

Co-pilot wrote:Microsoft’s GitHub Copilot has seen remarkable growth in its user base. As of now, there are over 1 million paid Copilot users across more than 37,000 organizations that subscribe to Copilot for business. This widespread adoption extends beyond the United States, indicating significant traction globally. The recent addition of Copilot Chat has also been well-received, benefiting both digital natives like Shopify and leading enterprises such as Maersk and PWC, enhancing the productivity of software developers. It’s exciting to witness the impact of generative AI in the tech world!

And if you’re curious about pricing, Microsoft Copilot is still free for users with Microsoft 365 E3 and E5, as well as Business Premium and Business Standard packages. Additionally, specific Copilot solutions, like Copilot for Sales and Service, are available at a cost of $50 per month per user. So whether you’re a seasoned developer or just starting out, Copilot continues to evolve and empower users in their coding journey!

https://copilot.microsoft.com/

RC


How interesting. MSFT have said nothing about Copilot sales in their recent calls. If this 1 million figure is correct, it then becomes a question of whether this is paid as in already covered by a 365 subscription, or additional revenue. If it is 1 million at $50, that is $50 million per month, or $600m per year, which seems enough for them to mention in their calls, but small compared to what they have paid for OpenAI, which is now under legal consideration following Musk's suit against OpenAI for violation of its terms of incorporation.
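As a quick sanity check on that arithmetic, a trivial sketch taking the 1 million paid seats and the $50-per-month tier quoted above at face value (an optimistic simplification, since not all seats will be on that tier):

```python
# Back-of-envelope Copilot revenue, assuming (hypothetically) that all
# 1 million paid seats were on the $50/user/month tier quoted above.
paid_users = 1_000_000
usd_per_user_per_month = 50

monthly = paid_users * usd_per_user_per_month
annual = monthly * 12
print(f"${monthly:,}/month -> ${annual:,}/year")
# $50,000,000/month -> $600,000,000/year
```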

Presumably MSFT is getting more from folk who are using OpenAI's chatbots, but the figures on this do not always seem consistent.

I wonder if there are way too many AI businesses for the current pool of users, but NVIDIA is selling like crazy, so some feel there is coin to be made.

Regards,

odysseus2000
Lemon Half
Posts: 6449
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1565 times
Been thanked: 978 times

Re: AI endeavours

#654506

Postby odysseus2000 » March 19th, 2024, 11:55 am

Interesting presentation by the CEO of Nvidia. The rate of progress on AI chips and robotics is now mind-blowing, the fastest industrial revolution by a long way:

https://youtu.be/WXIKs_6WyqE?si=F4YsaXLtKUev9mBk

Regards,

Sorcery
Lemon Quarter
Posts: 1242
Joined: November 4th, 2016, 6:38 pm
Has thanked: 148 times
Been thanked: 377 times

Re: AI endeavours

#654574

Postby Sorcery » March 19th, 2024, 4:59 pm

odysseus2000 wrote:Interesting presentation by the CEO of Nvidia. The rate of progress on AI chips and robotics is now mind-blowing, the fastest industrial revolution by a long way:

https://youtu.be/WXIKs_6WyqE?si=F4YsaXLtKUev9mBk

Regards,


Apart from creating a new (but useless) variable type called FP4. I cannot imagine how one could usefully use an FP4 type: it's 4 bits long; lose 1 bit for the sign, lose another for a very small 1-bit exponent, and that leaves 2 bits of precision. It does have the advantage that it exaggerates the FLOPS figure by a factor of 2, which is why it was introduced ;-) Nvidia have got to be "desperate" in some way to pull a trick like that. As far as I understand them, they have no reason to be, with a full order book until 2025.

Disclosure : I hold Nvidia.
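To see just how coarse such a format would be, here is a minimal sketch enumerating all 16 bit patterns of a 4-bit float laid out as Sorcery describes (1 sign bit, 1 exponent bit, 2 mantissa bits). The decode rule, with an implicit leading one, zero exponent bias and no subnormals, is an assumption made for illustration, not a documented Nvidia format:

```python
# Enumerate a hypothetical FP4 laid out as described above: 1 sign bit,
# a 1-bit exponent and 2 mantissa bits. The decode rule (implicit leading
# one, zero exponent bias, no subnormals) is assumed for illustration.
def decode_fp4_1_1_2(bits: int) -> float:
    s = (bits >> 3) & 1   # sign
    e = (bits >> 2) & 1   # 1-bit exponent
    m = bits & 0b11       # 2-bit mantissa
    return (-1) ** s * (1 + m / 4) * 2 ** e

print(sorted(decode_fp4_1_1_2(b) for b in range(16)))
# [-3.5, -3.0, -2.5, -2.0, -1.75, -1.5, -1.25, -1.0,
#   1.0, 1.25, 1.5, 1.75, 2.0, 2.5, 3.0, 3.5]
# 16 values, none of them zero: about as coarse as suggested above.
```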

odysseus2000
Lemon Half
Posts: 6449
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1565 times
Been thanked: 978 times

Re: AI endeavours

#654768

Postby odysseus2000 » March 20th, 2024, 2:54 pm

Sorcery wrote:
odysseus2000 wrote:Interesting presentation by the CEO of Nvidia. The rate of progress on AI chips and robotics is now mind-blowing, the fastest industrial revolution by a long way:

https://youtu.be/WXIKs_6WyqE?si=F4YsaXLtKUev9mBk

Regards,


Apart from creating a new (but useless) variable type called FP4. I cannot imagine how one could usefully use an FP4 type: it's 4 bits long; lose 1 bit for the sign, lose another for a very small 1-bit exponent, and that leaves 2 bits of precision. It does have the advantage that it exaggerates the FLOPS figure by a factor of 2, which is why it was introduced ;-) Nvidia have got to be "desperate" in some way to pull a trick like that. As far as I understand them, they have no reason to be, with a full order book until 2025.

Disclosure : I hold Nvidia.


As I understand it, but please correct me if I'm wrong, Blackwell uses machine-selected precision, so that the processor only uses the memory needed for the specific calculation:

https://www.networkworld.com/article/20 ... cture.html

Blackwell has 20 petaflops of FP4 AI performance on a single GPU. FP4, with four bits of floating point precision per operation, is new to the Blackwell processor. Hopper had FP8. The shorter the floating-point string, the faster it can be executed. That’s why as floating-point strings go up – FP8, FP16, FP32, and FP64 – performance is cut in half with each step. Hopper has 4 Pflops of FP8 AI performance, which is less than half the performance of Blackwell.
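Taking the quoted figures at face value, the halving is easy to make concrete. A minimal sketch, assuming peak throughput scales exactly inversely with bit width, an idealisation that real chips (especially their FP64 units) only approximate:

```python
# Idealised scaling from the quoted article: each doubling of the
# floating-point width halves peak throughput. Baseline: Blackwell's
# reported 20 PFLOPS at FP4. Real silicon deviates, notably at FP64.
BASELINE_BITS, BASELINE_PFLOPS = 4, 20.0

for bits in (4, 8, 16, 32, 64):
    pflops = BASELINE_PFLOPS * BASELINE_BITS / bits
    print(f"FP{bits}: ~{pflops:g} PFLOPS")
# FP4: ~20, FP8: ~10, FP16: ~5, FP32: ~2.5, FP64: ~1.25
# Consistent with the quote: Hopper's 4 PFLOPS at FP8 is less than
# half of Blackwell's implied ~10 PFLOPS at FP8.
```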


Perhaps the reason Nvidia is going this route is to pre-empt articles by potential competitors using similar metrics to show how good their machines are, so that there is now a baseline, 'this is as fast as possible', for comparison with competitors.

Nvidia are so far ahead in practical machines that the competitors can only make money if Nvidia can't supply and a purchaser is forced away from Nvidia. There are loads of small competitors with different architectures, but only Nvidia seems to have scale, and it has now branched heavily into biped robots, which is perhaps the biggest market ever.

What bothers me most about Nvidia is whether their customers can make enough money to keep buying processors. Copilot sales have still not been released by Microsoft, so the flood of cash into Nvidia may wane, but perhaps I have this wrong and the various purveyors of large language models are profitable. I doubt it, but I could be wrong. Please post if you know of big buyers of Nvidia hardware who are making lots of coin.

Regards,

Sorcery
Lemon Quarter
Posts: 1242
Joined: November 4th, 2016, 6:38 pm
Has thanked: 148 times
Been thanked: 377 times

Re: AI endeavours

#655084

Postby Sorcery » March 21st, 2024, 5:23 pm

odysseus2000 wrote:
Sorcery wrote:
Apart from creating a new (but useless) variable type called FP4. I cannot imagine how one could usefully use an FP4 type: it's 4 bits long; lose 1 bit for the sign, lose another for a very small 1-bit exponent, and that leaves 2 bits of precision. It does have the advantage that it exaggerates the FLOPS figure by a factor of 2, which is why it was introduced ;-) Nvidia have got to be "desperate" in some way to pull a trick like that. As far as I understand them, they have no reason to be, with a full order book until 2025.

Disclosure : I hold Nvidia.


As I understand it, but please correct me if I'm wrong, Blackwell uses machine-selected precision, so that the processor only uses the memory needed for the specific calculation:

https://www.networkworld.com/article/20 ... cture.html

Blackwell has 20 petaflops of FP4 AI performance on a single GPU. FP4, with four bits of floating point precision per operation, is new to the Blackwell processor. Hopper had FP8. The shorter the floating-point string, the faster it can be executed. That’s why as floating-point strings go up – FP8, FP16, FP32, and FP64 – performance is cut in half with each step. Hopper has 4 Pflops of FP8 AI performance, which is less than half the performance of Blackwell.


Perhaps the reason Nvidia is going this route is to pre-empt articles by potential competitors using similar metrics to show how good their machines are, so that there is now a baseline, 'this is as fast as possible', for comparison with competitors.

Nvidia are so far ahead in practical machines that the competitors can only make money if Nvidia can't supply and a purchaser is forced away from Nvidia. There are loads of small competitors with different architectures, but only Nvidia seems to have scale, and it has now branched heavily into biped robots, which is perhaps the biggest market ever.

What bothers me most about Nvidia is whether their customers can make enough money to keep buying processors. Copilot sales have still not been released by Microsoft, so the flood of cash into Nvidia may wane, but perhaps I have this wrong and the various purveyors of large language models are profitable. I doubt it, but I could be wrong. Please post if you know of big buyers of Nvidia hardware who are making lots of coin.

Regards,


In any representation, calling 4 bits a float doesn't make sense. As an integer it would be OK. Calling 4 bits a float does double the recorded petaflops(?). Why stop at FP4? What about FP2 and FP1? It might be useful in some circumstances to have FP4, FP2 or FP1 variables, especially if they can also be used as integers, but it's a stretch to call them floats and use that to claim double the performance in petaflops.

There is another claim that's suspect in my eyes: that the B200 is two H100s stuck together, with a bit of software that makes them look like one card with more resources. That brings the speedup down to a quarter of what's claimed. It's no different to AMD saying use a pair of MI300s for similar performance.

Some reviewers think it's overegged as well: https://www.tomshardware.com/pc-compone ... opper-h100

odysseus2000
Lemon Half
Posts: 6449
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1565 times
Been thanked: 978 times

Re: AI endeavours

#655132

Postby odysseus2000 » March 21st, 2024, 9:38 pm

I don’t understand what fp4 actually means.

If the number is represented as sign, exponent, mantissa, then fp4 would be very limited: a 2-bit mantissa (00, 01, 10, 11), i.e. 0 to 3, and a 1-bit exponent of 0 or 1.

This would, as you say, not be much use, whereas 4 bits as an unsigned integer would give binary 0000 to 1111, a maximum of decimal 1+2+4+8 = 15.

However, if the sign is indicated at a block level, then fp4 could store up to 7, still not great.
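For what it's worth, the FP4 layout widely reported for Blackwell is E2M1 (1 sign bit, 2 exponent bits, 1 mantissa bit), where an all-zero exponent marks a subnormal so that zero becomes representable. A minimal sketch under that assumed layout; the bias and decode rule here are illustrative, not taken from this thread:

```python
# Assumed E2M1 layout for fp4: 1 sign bit, 2 exponent bits (bias 1),
# 1 mantissa bit. An all-zero exponent marks a subnormal (no implicit
# leading one), which is how the format gets a representable zero.
def decode_e2m1(bits: int) -> float:
    s = (bits >> 3) & 1
    e = (bits >> 2) & 0b11
    m = bits & 1
    if e == 0:
        magnitude = (m / 2) * 2 ** (1 - 1)       # subnormal: 0.m * 2^(1-bias)
    else:
        magnitude = (1 + m / 2) * 2 ** (e - 1)   # normal: 1.m * 2^(e-bias)
    return (-1) ** s * magnitude

print(sorted({decode_e2m1(b) for b in range(16)}))
# [-6.0, -4.0, -3.0, -2.0, -1.5, -1.0, -0.5, 0.0,
#   0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
# 15 distinct values: plausible for heavily quantised neural-network
# weights, far too few for general arithmetic.
```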

This guy argues that fp4 is indeed limited to 3, but says it is 1.58 bits:

https://x.com/danielhanchen/status/1769 ... DCpgdbFBxg

Which makes no sense to me, and he even suggests fp3 might be valid. My biggest problem is understanding what is actually done in large language models and similar AI applications.

The idea of using shorter-length variables was presented by Tesla Dojo as configurable-length variables, but they stopped at fp8:

https://cdn.motor1.com/pdf-files/535242 ... nology.pdf

I can see the advantages of lower precision in terms of saving memory when higher precision is not needed, but I can't decide whether fp4 is just marketing hype or a practical tool in, for example, large language models.

Maybe someone knows and can enlighten me.

Regards,

Sorcery
Lemon Quarter
Posts: 1242
Joined: November 4th, 2016, 6:38 pm
Has thanked: 148 times
Been thanked: 377 times

Re: AI endeavours

#655233

Postby Sorcery » March 22nd, 2024, 11:53 am

odysseus2000 wrote:I don’t understand what fp4 actually means.

If the number is represented as sign, exponent, mantissa, then fp4 would be very limited: a 2-bit mantissa (00, 01, 10, 11), i.e. 0 to 3, and a 1-bit exponent of 0 or 1.

This would, as you say, not be much use, whereas 4 bits as an unsigned integer would give binary 0000 to 1111, a maximum of decimal 1+2+4+8 = 15.

However, if the sign is indicated at a block level, then fp4 could store up to 7, still not great.

This guy argues that fp4 is indeed limited to 3, but says it is 1.58 bits:

https://x.com/danielhanchen/status/1769 ... DCpgdbFBxg

Which makes no sense to me, and he even suggests fp3 might be valid. My biggest problem is understanding what is actually done in large language models and similar AI applications.

The idea of using shorter-length variables was presented by Tesla Dojo as configurable-length variables, but they stopped at fp8:

https://cdn.motor1.com/pdf-files/535242 ... nology.pdf

I can see the advantages of lower precision in terms of saving memory when higher precision is not needed, but I can't decide whether fp4 is just marketing hype or a practical tool in, for example, large language models.

Maybe someone knows and can enlighten me.

Regards,



There is this trick, which gets one extra bit of precision:
With this information we can begin to understand the decoding of floats. Floats use a base-two exponential format, so we would expect the decoding to be mantissa * 2^exponent. However, in the encodings for 1.0 and 2.0 the mantissa is zero, so how can this work? It works because of a clever trick. Normalized numbers in base-two scientific notation are always of the form 1.xxxx*2^exp, so storing the leading one is not necessary. By omitting the leading one we get an extra bit of precision: the 23-bit field of a float actually manages to hold 24 bits of precision because there is an implied 'one' bit with a value of 0x800000.
from
https://randomascii.wordpress.com/2012/ ... nt-format/

I imagine it would work with FP4 and FP8 too. Sorry, only a partial answer; FP4 is still new to me.
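The trick is easiest to see in an ordinary IEEE 754 float32, where it is uncontroversial; whether a 4-bit format keeps an implicit one varies (subnormal encodings drop it). A minimal sketch:

```python
import struct

# Decode an IEEE 754 float32 by hand to show the implicit leading one:
# the stored 23-bit mantissa is extended with the implied 0x800000 bit.
def decode_float32(x: float) -> float:
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    s = bits >> 31                         # 1 sign bit
    e = (bits >> 23) & 0xFF                # 8 exponent bits, bias 127
    m = bits & 0x7FFFFF                    # 23 stored mantissa bits
    mantissa = (m | 0x800000) / 2 ** 23    # implied leading one added back
    return (-1) ** s * mantissa * 2 ** (e - 127)

for x in (1.0, 2.0, 3.14159):
    # For 1.0 and 2.0 the stored mantissa bits are all zero, yet the
    # decode reconstructs the value, thanks to the implied one.
    print(x, decode_float32(x))
```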

odysseus2000
Lemon Half
Posts: 6449
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1565 times
Been thanked: 978 times

Re: AI endeavours

#655250

Postby odysseus2000 » March 22nd, 2024, 12:46 pm

Sorcery wrote:
odysseus2000 wrote:I don’t understand what fp4 actually means.

If the number is represented as sign, exponent, mantissa, then fp4 would be very limited: a 2-bit mantissa (00, 01, 10, 11), i.e. 0 to 3, and a 1-bit exponent of 0 or 1.

This would, as you say, not be much use, whereas 4 bits as an unsigned integer would give binary 0000 to 1111, a maximum of decimal 1+2+4+8 = 15.

However, if the sign is indicated at a block level, then fp4 could store up to 7, still not great.

This guy argues that fp4 is indeed limited to 3, but says it is 1.58 bits:

https://x.com/danielhanchen/status/1769 ... DCpgdbFBxg

Which makes no sense to me, and he even suggests fp3 might be valid. My biggest problem is understanding what is actually done in large language models and similar AI applications.

The idea of using shorter-length variables was presented by Tesla Dojo as configurable-length variables, but they stopped at fp8:

https://cdn.motor1.com/pdf-files/535242 ... nology.pdf

I can see the advantages of lower precision in terms of saving memory when higher precision is not needed, but I can't decide whether fp4 is just marketing hype or a practical tool in, for example, large language models.

Maybe someone knows and can enlighten me.

Regards,



There is this trick, which gets one extra bit of precision:
With this information we can begin to understand the decoding of floats. Floats use a base-two exponential format, so we would expect the decoding to be mantissa * 2^exponent. However, in the encodings for 1.0 and 2.0 the mantissa is zero, so how can this work? It works because of a clever trick. Normalized numbers in base-two scientific notation are always of the form 1.xxxx*2^exp, so storing the leading one is not necessary. By omitting the leading one we get an extra bit of precision: the 23-bit field of a float actually manages to hold 24 bits of precision because there is an implied 'one' bit with a value of 0x800000.
from
https://randomascii.wordpress.com/2012/ ... nt-format/

I imagine it would work with FP4 and FP8 too. Sorry, only a partial answer; FP4 is still new to me.


Thank you, that makes a lot more sense.

During one of the Tesla Dojo presentations, one of the engineers said the reason they were introducing variable-length floating point was that many applications did not need much precision, so perhaps fp4 is a little more useful than it appears.


Regards,

odysseus2000
Lemon Half
Posts: 6449
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1565 times
Been thanked: 978 times

Re: AI endeavours

#655258

Postby odysseus2000 » March 22nd, 2024, 12:53 pm

This is an interesting post, saying that current AIs cannot correct what they have created, but newer versions can:

https://x.com/andrewyng/status/17708976 ... DCpgdbFBxg

No idea if this is true, but it would explain some of the stupid stuff that comes out of chatbots.

Regards,

odysseus2000
Lemon Half
Posts: 6449
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1565 times
Been thanked: 978 times

Re: AI endeavours

#655294

Postby odysseus2000 » March 22nd, 2024, 2:47 pm

Several, including Elon Musk, have described Apple as having missed the AI boat.

I am trying to work out if Apple have indeed missed out on AI, or displayed masterful inactivity.

The problem I have with AI is that it seems something of a commodity. No AI that I have discovered is so much better than all the others that it could be called a leader, and the amount of money that any AI vendor charges does not look significant compared to their spend with Nvidia. Am I missing something here?

Regards,

NotSure
Lemon Slice
Posts: 920
Joined: February 5th, 2021, 4:45 pm
Has thanked: 685 times
Been thanked: 316 times

Re: AI endeavours

#655298

Postby NotSure » March 22nd, 2024, 3:10 pm

odysseus2000 wrote:Several, including Elon Musk, have described Apple as having missed the AI boat.

I am trying to work out if Apple have indeed missed out on AI, or displayed masterful inactivity.

The problem I have with AI is that it seems something of a commodity. No AI that I have discovered is so much better than all the others that it could be called a leader, and the amount of money that any AI vendor charges does not look significant compared to their spend with Nvidia. Am I missing something here?

Regards,



https://fortune.com/2024/02/01/ai-boost-microsoft-earnings-customers-might-not-reap-benefits/

Microsoft reaps its AI rewards. Its customers? Not so much

.....“It took a decade for Azure to get to $10 billion. At Microsoft AI, it’s at $4.4 billion in one year, so things are happening very fast. I think that it’s year one—I can’t wait to see what year two, year three, and year four bring for Microsoft AI,” Brent Bracelin, Piper Sandler Equity Research Analyst on Cloud, told Yahoo Finance.

The much-anticipated earnings displayed AI’s potential as a major revenue driver for juggernauts like Microsoft. For the companies trying to integrate these AI technologies to serve customers and employees, however, the technology’s impact is far more fraught. Many of these companies are staring down a line of tough decisions about new security risks, what the technology means for their workforces, how to actually implement the technology, and how to reimagine their products and overall businesses in the name of AI. .....


Eventually, AI should boost productivity as well as generate fees and charges? Not yet though, maybe...

Sorcery
Lemon Quarter
Posts: 1242
Joined: November 4th, 2016, 6:38 pm
Has thanked: 148 times
Been thanked: 377 times

Re: AI endeavours

#655316

Postby Sorcery » March 22nd, 2024, 4:43 pm

odysseus2000 wrote:Several, including Elon Musk, have described Apple as having missed the AI boat.

I am trying to work out if Apple have indeed missed out on AI, or displayed masterful inactivity.

The problem I have with AI is that it seems something of a commodity. No AI that I have discovered is so much better than all the others that it could be called a leader, and the amount of money that any AI vendor charges does not look significant compared to their spend with Nvidia. Am I missing something here?

Regards,


I think Apple might be licensing Google's IP. https://finance.yahoo.com/news/tech-gia ... 26526.html

