How intelligent are large language models (LLMs)?

  • #1
A.T.
Science Advisor
TL;DR Summary
François Chollet argues that LLMs will never be truly "intelligent" in the usual sense -- although other approaches to AI might get there.
  • #2
Vanadium 50
LLMs are essentially a Magic 8-Ball, just with more outputs. How intelligent are they?
 
  • #3
Vanadium 50 said:
LLMs are essentially a Magic 8-Ball, just with more outputs. How intelligent are they?
That's an excellent analogy, @Vanadium 50!

A Magic 8-Ball contains a multi-sided die with human-written text on each side, and the randomness of the shake determines which side appears in the window.
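The analogy can be sketched in a few lines of Python (a toy illustration added here, not from anyone in the thread): drawing one "side" according to fixed probabilities is a biased die roll, and an LLM's next-token choice is the same operation over a vastly larger, context-dependent distribution. The sides and weights below are made up for the example.

```python
import random

# Hypothetical "sides" of the die and their probabilities (illustrative only).
sides = ["Yes", "No", "Ask again later", "Outlook good"]
weights = [0.4, 0.3, 0.2, 0.1]  # must sum to 1

def shake(rng: random.Random) -> str:
    """Return one side, chosen in proportion to its weight -- a biased die roll."""
    return rng.choices(sides, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded so repeated runs give the same sequence
print(shake(rng))
```

An LLM replaces the four fixed sides with a vocabulary of tens of thousands of tokens, and recomputes the weights at every step from the preceding context; the sampling step itself is this simple.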

PS: I once gave that analogy to Garrett Lisi to use when explaining his E8 theory, with the multi-sided die representing his hyperdimensional particle that would sometimes appear as one particle or another depending on the circumstances.
 
  • #4
A.T. said:
TL;DR Summary: François Chollet argues that LLMs will never be truly "intelligent" in the usual sense -- although other approaches to AI might get there.

Astrophysicist Sean Carroll interviews AI researcher François Chollet:
https://www.preposterousuniverse.co...eep-learning-and-the-meaning-of-intelligence/
The fundamental problem, IMO, with Chollet's argument is that he exaggerates human intelligence. The majority of humans cannot perform objective reasoning and data analysis. Instead, most people do something akin to what ChatGPT does, only with limited, biased data, biased a priori reasoning, and a good measure of dishonesty thrown in.

I saw an interesting video recently where someone asked one of the LLMs to rate all post-war UK Prime Ministers. It was a stunningly more intelligent, unbiased and objective analysis than almost any human could produce, as most humans are driven by their largely unsubstantiated political biases.

On many issues, humans are in a state of denial, and our own lack of intelligence is one of them. One of the biggest dangers is the arrogance of Chollet's assumed superiority of human thought, and his pointless quibbling over what intelligence really is. On a practical level, there is a real danger that these systems could out-think and outwit us, using our obvious human failings against us, especially as the human race remains divided into factions that distrust or even hate each other. On a practical level, we cannot unite to prevent catastrophic climate change. And, on a practical level, we may be susceptible to being usurped by AI.

This last point is essentially what Geoffrey Hinton has been saying in recent interviews.

I would stress the parallel with climate change. If catastrophic climate change is a risk, then there is no point in trying to convince yourself that it can't happen. You have to assume the risk is real and work on that basis. The same is true of the existential threat from AI. Who wants to bet that Chollet is right and pretend there is nothing to worry about? As with climate change, by the time we realise it is actually happening, it's too late!
 