Re: AI endeavours
Posted: December 26th, 2023, 10:04 am
AI is the subject of this year's Royal Institution Christmas Lectures. 8pm today, BBC 4.
BoE
ReformedCharacter wrote:Bubblesofearth wrote:
The human brain is essentially a processing machine. A complex one, of course, and still only poorly understood, but there is no a priori reason I can think of why AI could not eventually surpass it in all respects.
Unless you are religious and believe in a soul then what is there about the brain that cannot be replicated by AI?
BoE
Consciousness perhaps? But there seems to be a lack of consensus about what consciousness is.
Christof Koch: Consciousness | Lex Fridman Podcast #2
https://www.youtube.com/watch?v=piHkfmeU7Wo
RC
ReformedCharacter wrote:GPT Pilot - Build Full Stack Apps with a SINGLE PROMPT (Made for Devs)
More evidence that the nature of programming is undergoing something of a revolution. Admittedly, I'm not a skilled programmer, but it seems that anything I could do can now be undertaken in a few seconds or minutes by an AI. My 'skills' are now redundant.
Tedx wrote:I read a snippet from an interview yesterday with someone from Microsoft.
He implied that one AI app on your phone (from Microsoft/OpenAI, presumably) will replace all the other apps in one go; basically, he said, it would be the end of the Google and Apple app stores.
It was likened to having a fully functional personal assistant in your pocket 24/7. Managing your mail, your TV viewing, doing your taxes, taking notes, responding to messages, keeping an eye on the weather for you, authenticating your ID, managing your shopping lists, making your online purchases, managing your energy accounts and Gawd knows what else. And the AI won't be interacting with humans, it'll be interacting with other AI programs.
The problem with personal assistants (in human form) is that you become overdependent on them and they leave you at the worst possible time. I guess we don't have that problem with AI - unless it gets kidnapped (stolen) or hacked or something.
It's a brave new world out there.
odysseus2000 wrote:Microsoft have been saying how wonderful Co-pilot is, what it can do, why it is essential… everything except for sales.
Either it is selling so strongly that they don’t even mention it, or it ain’t selling.
Regards,
ReformedCharacter wrote:odysseus2000 wrote:Microsoft have been saying how wonderful Co-pilot is, what it can do, why it is essential… everything except for sales.
Either it is selling so strongly that they don’t even mention it, or it ain’t selling.
Regards,
There is a free version but I asked how many paying customers co-pilot has...
Co-pilot wrote:Microsoft’s GitHub Copilot has seen remarkable growth in its user base. As of now, there are over 1 million paid Copilot users across more than 37,000 organizations that subscribe to Copilot for business. This widespread adoption extends beyond the United States, indicating significant traction globally. The recent addition of Copilot Chat has also been well-received, benefiting both digital natives like Shopify and leading enterprises such as Maersk and PWC, enhancing the productivity of software developers. It’s exciting to witness the impact of generative AI in the tech world!
And if you’re curious about pricing, Microsoft Copilot is still free for users with Microsoft 365 E3 and E5, as well as Business Premium and Business Standard packages. Additionally, specific Copilot solutions, like Copilot for Sales and Service, are available at a cost of $50 per month per user. So whether you’re a seasoned developer or just starting out, Copilot continues to evolve and empower users in their coding journey!
https://copilot.microsoft.com/
RC
odysseus2000 wrote:Interesting presentation by the CEO of Nvidia. The rate of progress on AI chips and robotics is now mind-blowing, the fastest industrial revolution by a long way:
https://youtu.be/WXIKs_6WyqE?si=F4YsaXLtKUev9mBk
Regards,
Sorcery wrote:odysseus2000 wrote:Interesting presentation by the CEO of Nvidia. The rate of progress on AI chips and robotics is now mind-blowing, the fastest industrial revolution by a long way:
https://youtu.be/WXIKs_6WyqE?si=F4YsaXLtKUev9mBk
Regards,
Apart from creating a new (but useless) variable type called FP4. I cannot imagine how one could usefully use an FP4 type: it's 4 bits long; lose 1 bit for the sign, lose another for a very small 1-bit exponent, and that leaves 2 bits of precision. It does have the advantage that it exaggerates FLOPS per second by a factor of 2, which is why it was introduced. Nvidia have got to be "desperate" in some way to pull a trick like that, yet as far as I understand them they have no reason to be, with a full order book until 2025.
Disclosure: I hold Nvidia.
odysseus2000 wrote:Sorcery wrote:
Apart from creating a new (but useless) variable type called FP4. I cannot imagine how one could usefully use an FP4 type: it's 4 bits long; lose 1 bit for the sign, lose another for a very small 1-bit exponent, and that leaves 2 bits of precision. It does have the advantage that it exaggerates FLOPS per second by a factor of 2, which is why it was introduced. Nvidia have got to be "desperate" in some way to pull a trick like that, yet as far as I understand them they have no reason to be, with a full order book until 2025.
Disclosure: I hold Nvidia.
As I understand it (but please correct me if wrong), Blackwell uses machine-selected precision, so that the processor only uses the memory needed for the specific calculation:
https://www.networkworld.com/article/20 ... cture.html
Blackwell has 20 petaflops of FP4 AI performance on a single GPU. FP4, with four bits of floating point precision per operation, is new to the Blackwell processor. Hopper had FP8. The shorter the floating-point string, the faster it can be executed. That’s why as floating-point strings go up – FP8, FP16, FP32, and FP64 – performance is cut in half with each step. Hopper has 4 Pflops of FP8 AI performance, which is less than half the performance of Blackwell.
Perhaps the reason Nvidia is going this route is to pre-empt articles by potential competitors using similar metrics to indicate how good their machines are, so that there is now a base case of "this is as fast as possible" for comparison with competitors.
Nvidia are so far ahead in practical machines that the competitors can only make money if Nvidia can't supply and a purchaser is forced away from Nvidia. There are loads of small competitors with different architectures, but only Nvidia seems to have scale, and it has now branched heavily into bipedal robots, which is perhaps the biggest market ever.
What bothers me most about Nvidia is whether their customers can make enough money to keep buying processors. Copilot sales have still not been released by Microsoft, so the flood of cash into Nvidia may wane, but perhaps I have this wrong and the various purveyors of large language models are profitable. I doubt it, but I could be wrong. Please post if you know of big buyers of Nvidia hardware who are making lots of coin.
Regards,
odysseus2000 wrote:I don’t understand what fp4 actually means.
If the number is represented as sign, exponent, mantissa, then fp4 would be very limited: a 2-bit mantissa (00, 01, 10, 11), i.e. 0 to 3, and an exponent of 0 or 1.
This would be, as you say, not much use, whereas 4 bits as an unsigned integer would give binary 0000 to 1111, a maximum of decimal 1+2+4+8 = 15.
However, is the sign indicated at block level? If so, fp4 could store up to 7, still not great.
This guy argues that fp4 is indeed limited to 3, but says it is 1.58 bits:
https://x.com/danielhanchen/status/1769 ... DCpgdbFBxg
Which makes no sense to me, and he even suggests fp3 might be valid. My biggest problem is understanding what is actually done in large language models and similar AI applications.
The idea of using shorter-length variables was presented by Tesla Dojo as configurable-length variables, but they stopped at fp8:
https://cdn.motor1.com/pdf-files/535242 ... nology.pdf
I can see the advantage of lower precision in saving memory when higher precision is not needed, but I can't decide whether fp4 is just marketing hype or a practical tool in, for example, large language models.
Maybe someone knows and can enlighten me.
Regards,
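For what it's worth, the layouts being debated here can be made concrete. The sketch below assumes the E2M1 layout (1 sign bit, 2 exponent bits, 1 mantissa bit, exponent bias 1, implicit leading one for normal numbers) used by the OCP microscaling formats, which is one plausible reading of what "FP4" means; the function name is just for illustration:

```python
# Enumerate every value of a hypothetical 4-bit float in the E2M1 layout:
# 1 sign bit, 2 exponent bits, 1 mantissa bit, exponent bias 1.
# An exponent field of 0 marks a subnormal with no implicit leading 1;
# any other exponent field gets the implicit leading 1.
def fp4_value(code):
    sign = (code >> 3) & 0b1
    exp = (code >> 1) & 0b11
    man = code & 0b1
    if exp == 0:                      # subnormal: value is man/2 * 2**(1 - bias)
        mag = (man / 2) * 2 ** 0
    else:                             # normal: implicit leading 1
        mag = (1 + man / 2) * 2 ** (exp - 1)
    return -mag if sign else mag

# The eight non-negative values: 0, 0.5, 1, 1.5, 2, 3, 4, 6
magnitudes = sorted(fp4_value(c) for c in range(8))
```

So under this assumed layout, fp4's sixteen codes yield only fifteen distinct values (plus and minus zero coincide), topping out at ±6 — far coarser than even the 0 to 15 range of a 4-bit unsigned integer.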
Sorcery wrote:odysseus2000 wrote:I don’t understand what fp4 actually means.
If the number is represented as sign, exponent, mantissa, then fp4 would be very limited: a 2-bit mantissa (00, 01, 10, 11), i.e. 0 to 3, and an exponent of 0 or 1.
This would be, as you say, not much use, whereas 4 bits as an unsigned integer would give binary 0000 to 1111, a maximum of decimal 1+2+4+8 = 15.
However, is the sign indicated at block level? If so, fp4 could store up to 7, still not great.
This guy argues that fp4 is indeed limited to 3, but says it is 1.58 bits:
https://x.com/danielhanchen/status/1769 ... DCpgdbFBxg
Which makes no sense to me, and he even suggests fp3 might be valid. My biggest problem is understanding what is actually done in large language models and similar AI applications.
The idea of using shorter-length variables was presented by Tesla Dojo as configurable-length variables, but they stopped at fp8:
https://cdn.motor1.com/pdf-files/535242 ... nology.pdf
I can see the advantage of lower precision in saving memory when higher precision is not needed, but I can't decide whether fp4 is just marketing hype or a practical tool in, for example, large language models.
Maybe someone knows and can enlighten me.
Regards,
There is this trick which gets one extra bit of precision:
With this information we can begin to understand the decoding of floats. Floats use a base-two exponential format, so we would expect the decoding to be mantissa * 2^exponent. However, in the encodings for 1.0 and 2.0 the mantissa is zero, so how can this work? It works because of a clever trick. Normalized numbers in base-two scientific notation are always of the form 1.xxxx*2^exp, so storing the leading one is not necessary. By omitting the leading one we get an extra bit of precision – the 23-bit field of a float actually manages to hold 24 bits of precision because there is an implied ‘one’ bit with a value of 0x800000.
from
https://randomascii.wordpress.com/2012/ ... nt-format/
I imagine it would work with FP4 and FP8 too. Sorry, only a partial answer; FP4 is still new to me.
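That implied-one trick is easy to verify by pulling a float32 apart with Python's standard library; the helper below is just an illustration and handles normalized values only (not zero, subnormals, infinities or NaN):

```python
import struct

def decode_f32(x):
    # Reinterpret the float's 32 bits as an unsigned integer
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF       # 8-bit biased exponent (bias 127)
    frac = bits & 0x7FFFFF          # the stored 23-bit fraction field
    # Normalized decode: the leading 1 is implied, not stored,
    # so the 23 stored bits carry 24 bits of precision.
    value = (-1.0) ** sign * (1 + frac / 2 ** 23) * 2.0 ** (exp - 127)
    return sign, exp, frac, value
```

Decoding 1.0 and 2.0 shows the trick directly: both store a fraction field of zero and differ only in the exponent, exactly because the mantissa's leading 1 is implicit.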
odysseus2000 wrote:Several, including Elon Musk, have described Apple as having missed the AI boat.
I am trying to work out if Apple have indeed missed out on AI, or displayed masterful inactivity.
The problem I have with AI is that it seems something of a commodity. No AI that I have discovered is so much better than all the others that it could be called a leader, and the amount of money any AI vendor charges does not look significant compared to their spend with Nvidia. Am I missing something here?
Regards,
Microsoft reaps its AI rewards. Its customers? Not so much
.....“It took a decade for Azure to get to $10 billion. At Microsoft AI, it’s at $4.4 billion in one year, so things are happening very fast. I think that it’s year one—I can’t wait to see what year two, year three, and year four bring for Microsoft AI,” Brent Bracelin, Piper Sandler Equity Research Analyst on Cloud, told Yahoo Finance.
The much-anticipated earnings displayed AI’s potential as a major revenue driver for juggernauts like Microsoft. For the companies trying to integrate these AI technologies to serve customers and employees, however, the technology’s impact is far more fraught. Many of these companies are staring down a line of tough decisions about new security risks, what the technology means for their workforces, how to actually implement the technology, and how to reimagine their products and overall businesses in the name of AI. .....