How intelligent are large language models (LLMs)?

In summary, large language models (LLMs) are highly intelligent natural language processing systems that are capable of generating human-like text and completing various language tasks with high accuracy. They are trained on massive amounts of data and use advanced algorithms to understand and generate language. While LLMs have shown impressive performance, there are concerns about their potential biases and ethical implications. Further research and development are needed to fully understand the capabilities and limitations of LLMs.
  • #36
Filip Larsen said:
with an artificial selection pressure much more focused on optimizing towards behavior (output) that we will consider intelligent.
But the output is just text, as is the input. Text is an extremely impoverished form of input and output. Much of what we humans do that makes us consider ourselves intelligent has nothing to do with processing text.

Filip Larsen said:
if LLMs already now can exhibit emergent reasoning by analogy (which I understand is both surprising and undisputed)
I don't think it's undisputed. It's a claim made by proponents but not accepted by skeptics. I think it's way too early to consider any kind of claim along these lines as established.
 
  • #37
Filip Larsen said:
without also having the selective pressures for a body to survive and compete in a physical environment.
There has long been a school of thought in AI that holds that no entity can really be intelligent if it is not embodied and does not have to deal with all the issues involved in directly perceiving and acting on an external, physical environment. I don't think LLMs as they currently exist do anything to refute such a position.
 
  • #38
pbuk said:
We seem to have this rabbit-hole discussion about once a month
Everything that needs to be said has been said, but not everyone has said it yet.
 
  • Like
Likes Hornbein and pbuk
  • #39
PeterDonis said:
I think it's way too early to consider any kind of claim along these lines as established.
Coming from you, without at the same time insisting that I show you references, I take that as an indication that we have roughly the same understanding of the current state of emergent reasoning by analogy, which is good enough for me.

PeterDonis said:
There has long been a school of thought in AI that holds that no entity can really be intelligent if it is not embodied and does not have to deal with all the issues involved in directly perceiving and acting on an external, physical environment.
Yes, but to my knowledge this idea originates from the AI dry periods before LLMs, when people were looking for what could be missing. I first heard about it in the early '90s, when a local professor at my university held a presentation to share a realization he had: that embodiment is most likely required for the brain to be able to build and maintain a model of the world (i.e. to learn). It is not a bad idea, and it seems to be true for the evolution of the human brain, so why not for AI as well? But it is also an idea that so far (to my knowledge) has had much less evidence behind it than emergent behaviors in LLMs, so if you are sceptical about the latter, why are you not also sceptical of the former?

Anyway, I mainly joined the discussion to express that I am sceptical towards statements along the lines of "LLMs cannot achieve human-level intelligence because they are just crunching numbers", not to revisit every argument along the way if we are all just going to hold our positions anyway.
 
  • Like
Likes PeroK
  • #40
I have decided that I'm not going to participate in such debates until there is agreement on the definition of "intelligence". That will never happen, thus freeing up time for all sorts of other things.
 
  • #41
Filip Larsen said:
it is also an idea that so far (to my knowledge) has had much less evidence behind it than emergent behaviors in LLMs
Evidence of what? If you are saying we have evidence of intelligence from emergent behaviors in LLMs, I'm not sure I agree. Indeed, the argument @PeroK gave earlier in this thread was that the behavior of LLMs does not show intelligence; it shows better performance on some tasks involving text than the average human, but the average human, according to @PeroK, is not intelligent at those tasks.

As for embodied AI, I would suggest, for example, looking up the Cog project at the MIT AI lab.
 
  • #42
PeterDonis said:
Evidence of what? If you are saying we have evidence of intelligence from emergent behaviors in LLMs, I'm not sure I agree.
I should have been more clear. I am saying
  1. that we have evidence that LLMs can provide reasoning via analogies without having seen the particular analogy directly in the training set,
  2. that this form of reasoning is considered a trait of intelligent behavior (i.e. the ability to recognize that one set of concepts corresponds to, or is analogous to, another set of concepts, like the visual-pattern Raven matrices often used in IQ tests; a minimal sketch of the kind of textual item I mean follows below),
  3. that this behavior the LLM exhibits is emergent, in the sense that there was no explicit effort or special handling to ensure the network picked it up during training, and finally
  4. that if one such trait can emerge from LLM training, it seems likely to me that more traits usually linked to intelligent behavior can emerge as LLMs are scaled up, or can simply be added as "embodied" mechanisms (e.g. memory, fact checking, numerical and symbolic calculations, etc.).
Point 4 is my (speculative) claim that I posed earlier. If 4 is not true, then I assume there must be some trait essential for general intelligent behavior that we will never be able to get to emerge from an LLM, no matter how big we scale the model, no matter what simple "deterministic" built-in mechanisms we add, and no matter what material we train it with. And if this in turn is claimed by others to be the case, then I counter-ask: how can the human brain possibly have evolved to allow human-level intelligence if it is impossible to evolve in an artificial network?
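
To make concrete the kind of textual analogy item I mean, here is a minimal sketch of a probe in the spirit of the letter-string analogies used in such evaluations. The query_llm function is a hypothetical stand-in for whatever model API one would actually use, and the items are illustrative, not taken from any published test set.

Code:
# Minimal sketch of a textual analogy probe, in the spirit of the
# letter-string analogies used in emergent-reasoning evaluations.
# query_llm is a hypothetical stand-in, not a real library call.

def query_llm(prompt: str) -> str:
    """Hypothetical: send the prompt to an LLM and return its completion.
    Returns an empty string here so the sketch runs end to end."""
    return ""

# Each item states a source transformation and asks the model to apply
# the analogous transformation to a new string.
ITEMS = [
    {"prompt": "If a b c changes to a b d, what does i j k change to?",
     "expected": "i j l"},
    {"prompt": "If 1 2 3 changes to 1 2 4, what does 5 6 7 change to?",
     "expected": "5 6 8"},
]

def run_probe(items):
    correct = 0
    for item in items:
        answer = query_llm(item["prompt"]).strip().lower()
        if item["expected"] in answer:
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    print(f"accuracy: {run_probe(ITEMS):.2f}")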

I guess one easy way out, for the sake of settling the discussion, is to say: "oh well, it will be possible to evolve intelligence artificially, but then it is not an LLM anymore, and we only talked about LLMs in their current design, and if you add this or that mechanism it is a totally different acronym, and we are good characterizing that acronym with signs of intelligence". OK, I'm fine with that resolution. Or people can continue to claim that the ability of humans to behave intelligently is based on some elusive or strangely irreproducible neurophysical mechanism that will forever "ensure" machines cannot be as intelligent as humans. I'm not fine with that and will insist on hearing a very good argument.

Yes, I know I am possibly repeating discussion points mulled over elsewhere ad nauseam, and yes, I agree it is still somewhat pointless to keep going one more time. But you did ask, and perhaps someone is able to present a compelling argument for why humans will always be more intelligent than a machine, in which case I will have learned something new and can stop worrying so much about yet another technology experiment on steroids, with potentially nuclear consequences, that we are all enrolled in to satisfy the gold rush of a few.

Yeah, I should definitely stop now.
 
  • Like
Likes mattt
  • #43
PeterDonis said:
As for embodied AI, I would suggest, for example, looking up the Cog project at the MIT AI lab.
I don't really see that that project produced any results that even indicate that physical embodiment is required for the emergence of intelligence? It is also (not surprisingly) pre-LLM, so there should have been plenty of opportunities for others to carry the idea over to modern networks, but I assume no one has?

One neat example of embodiment used to learn "intelligent" motion is the small walking soccer bots that learn to play soccer all by themselves. I seem to recall this was also done (with wheeled bots) using genetic algorithms (GA) and subsumption architectures back when that idea was new.
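
As a rough illustration of what that older evolutionary approach looks like, here is a toy genetic-algorithm sketch. The fitness function is a stand-in for running a controller in a simulator and measuring how far the robot walks; all names and numbers are illustrative, not from any real project.

Code:
import random

# Toy genetic-algorithm sketch in the spirit of evolving a walking gait:
# a "controller" is just a vector of parameters, and the fitness function
# is a stand-in for "distance walked in a physics simulation".
# All names and numbers here are illustrative, not from any real project.

GENOME_LEN = 8       # e.g. joint-oscillator amplitudes/phases
POP_SIZE = 30
GENERATIONS = 50
MUTATION_STD = 0.1

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for running the controller in a simulator and measuring
    # forward distance; here we just reward genomes close to a fixed target.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome):
    return [g + random.gauss(0.0, MUTATION_STD) for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    best = max(population, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    best, score = evolve()
    print(f"best fitness after {GENERATIONS} generations: {score:.4f}")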
 
  • #44
Filip Larsen said:
we have evidence that LLMs can provide reasoning via analogies
No, we don't. The term "reasoning" is not accepted by skeptics as a valid description of what LLMs are doing.
 
  • #45
Filip Larsen said:
I don't really see that that project produced any results that even indicate that physical embodiment is required for the emergence of intelligence?
There could never be any such evidence, since evidence can never prove a negative.

But the Cog project is evidence that embodiment gives rise to behaviors that are seen by human observers as being intelligent. Many people made that observation on seeing Cog operate. And, a key point for this discussion, those behaviors had nothing to do with processing text. They were the sorts of behaviors involving perception and action in the world that, when we see animals do them, we take as at least indications of the animals possibly being intelligent.

Filip Larsen said:
It is also (not surprisingly) pre-LLM, so there should have been plenty of opportunities for others to carry the idea over to modern networks, but I assume no one has?
The fact that people have not, as far as we know, used LLMs to drive a robot could simply mean that people who have worked on making robots exhibit behaviors that we take as indications of at least potential intelligence do not see LLMs as a useful tool for that endeavor. Which is not surprising to me given that, as I said above, those behaviors have nothing to do with processing text and processing text is all that LLMs do.
 
  • #46
PeterDonis said:
No, we don't. The term "reasoning" is not accepted by skeptics as a valid description of what LLMs are doing.
I wrote "provide reasoning by analogy" in point 2, i.e. meaning the LLM is able to produce output that, when read by a human, corresponds to reasoning by analogy. I have exclusively been talking about the characteristics of the output (relative to the input, of course) of LLMs, without consideration of the exact internal mechanism, even if I perhaps have missed writing that full qualification out every single time I have referred to reasoning by analogy in a sentence. My point (to repeat yet again) is that the training of the examined LLMs picked up on a trait that allows the LLM to answer some types of questions often used in aptitude tests for humans. The interesting part for me, which I have been trying to draw attention to in pretty much every reply, is the emergence of a trait associated with intelligence.

And I still have no clue why you are dismissive of associating intelligence traits with "textual output". If a human individual (perhaps after long study in some domain), when given novel problems, is consistently able to produce solutions that by consensus are considered both novel and intelligent solutions to the posed problems, I assume we would have no issue agreeing that it would be right to characterize this individual as intelligent. But if a machine were able to exhibit the same behavior as the human, you would now instead classify it as devoid of intelligence because it was only trained on the content of those books?

Or are you instead saying that it will prove impossible to evolve such a machine into existence because we eventually hit some physical limit? I suspect you are not saying that, but now I may as well ask. I still see limited power input and output (heat management) as the only physical constraints that could potentially have a significant impact on the evolutionary path towards a (potential) GAI, but considering the current amount of research into efficient ANN/LLM hardware, it is likely these power constraints will only limit the rate of evolution, and much less the scale of the models and their equivalent processing power (e.g. FLOPS), a bit like how CPU processing power has followed Moore's law for decades.
 
  • #47
Filip Larsen said:
I wrote "provide reasoning by analogy" in point 2, i.e. meaning the LLM is able to produce output that, when read by a human, corresponds to reasoning by analogy.
In other words, you are making no claim about what the LLM is actually doing, only about how humans subjectively interpret its output. In that case I don't see the point. But this discussion has probably run its course.
 
  • #48
Filip Larsen said:
I still have no clue why you are dismissive of associating intelligence traits with "textual output".
Because, as I've already said, text is an extremely impoverished kind of input and output.

Filip Larsen said:
when given novel problems, is consistently able to produce solutions that by consensus are considered both novel and intelligent solutions to the posed problems
You can't do this with just text input and output unless the "problems" are artificially limited to text and the "solutions" are artificially limited to producing text. In other words, by removing all connection with the real world. But of course the real world knows no such limitations. Plop your LLM down in the middle of a remote island and see how well it does at surviving using text, even if all the things it needs for survival are actually present on the island. Most real world problems are far more like the latter than they are like solving artificial textual "problems".
 
  • #49
PeterDonis said:
But the Cog project is evidence that embodiment gives rise to behaviors that are seen by human observers as being intelligent. Many people made that observation on seeing Cog operate. And, a key point for this discussion, those behaviors had nothing to do with processing text.
There are many, many 'projects' by now (it's at the level of a common home lab by now) where simulated 'entities' give rise to behaviors seen as intelligent by human observers in a simulated environment.

Let's modify the interface of the simulated environment to be text based ('Bumped into a wall at a 38 degree angle, at 15 km/h. Not feeling well.', for example).
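
As a toy illustration of what such a text-based interface could look like, here is a minimal sketch: a simulated one-dimensional world whose observations come back as prose and whose commands are prose. Everything here is made up for illustration; a real simulated environment would be far richer.

Code:
# Toy sketch of a simulated environment exposed through a text-only
# interface: the agent reads prose observations and issues prose commands.
# All names and behaviors here are illustrative only.

class TextWorld:
    """A 1-D corridor of given length; the agent starts at one end."""

    def __init__(self, length=5):
        self.length = length
        self.position = 0

    def step(self, command: str) -> str:
        command = command.strip().lower()
        if command == "walk forward":
            if self.position + 1 >= self.length:
                return "Bumped into a wall. Not feeling well."
            self.position += 1
            remaining = self.length - 1 - self.position
            return f"Walked one step. Now {remaining} steps from the far wall."
        if command == "look":
            remaining = self.length - 1 - self.position
            return f"A corridor. The far wall is {remaining} steps away."
        return "Nothing happens."

if __name__ == "__main__":
    world = TextWorld()
    # A text-only 'agent' (here just a scripted stand-in for an LLM)
    # interacting with the simulated environment.
    for cmd in ["look", "walk forward", "walk forward", "walk forward",
                "walk forward", "walk forward"]:
        print(f"> {cmd}")
        print(world.step(cmd))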

... and then let's not delve any deeper into this metaphysical rabbit hole about the meaning and means of 'reality'.
 
  • #50
PeterDonis said:
you are making no claim about what the LLM is actually doing, only about how humans subjectively interpret its output. In that case I don't see the point.
To me that was the point. But my position is also clearly still in the "intelligence lies in the eyes of the beholder" camp.

PeterDonis said:
But this discussion has probably run its course.
Yeah, sadly discussions on non-trivial topics on PF often seem to go into that state after enough common ground has been established but before there is a real chance for any of us to learn something new. For me, discussions here often seem to spend a lot, or even all, of their energy on rhetorical maneuvering, and rarely get to the juicy constructive flow I can have with my engineering colleagues during alternating brainstorm/critique discussions.
 
  • #51
PeterDonis said:
You can't do this with just text input and output unless the "problems" are artificially limited to text and the "solutions" are artificially limited to producing text. In other words, by removing all connection with the real world. But of course the real world knows no such limitations. Plop your LLM down in the middle of a remote island and see how well it does at surviving using text, even if all the things it needs for survival are actually present on the island. Most real world problems are far more like the latter than they are like solving artificial textual "problems".
I have no idea why the ability to survive on a remote island is a prerequisite test for intelligence.

Stephen Hawking couldn't have survived plopped down on a remote island. His interface with the world was severely limited. And, yet, he maintained his intelligence.

There is nothing to be gained from debating against absurdities. I'm out, as they say.
 
  • #52
PeroK said:
I have no idea why the ability to survive on a remote island is a prerequisite test for intelligence.
The point is the ability to perceive one's environment and figure out how to get one's needs met in it.

PeroK said:
Stephen Hawking couldn't have survived plopped down on a remote island. His interface with the world was severely limited.
Yes, and he had to learn how to use that limited interface to get his needs met. Which does indeed count as intelligence. Try getting an LLM to do that.
 
  • #53
Is a pocket calculator intelligent? It can do one thing better than a person.
How is an LLM different? It can do one thing, not as well as a person.

For extra credit: was Clever Hans intelligent?
 
  • #54
russ_watters said:
I mean... an LLM figuring out a language seems like a task pretty well in its wheelhouse.
Yes, but many people have said that all the LLMs are doing is parroting back what they have been trained on. The fact that they have solved a problem which no one had solved before, and which couldn't have been in their training data, proves that they are doing more than that.
 
  • #55
phyzguy said:
There are cases where LLMs have decoded ancient text that no human had ever decoded before.
How is it known that the decoding is valid?
 
  • #56
PeterDonis said:
How is it known that the decoding is valid?
I don't think that means "translated a text that has eluded translation". I think it means "translated a text that no human bothered to translate and was not in the training set".
 