ChatGPT spin off from: What happens to the energy in destructive interference?

  • Thread starter Anachronist
  • Start date
  • #1
Anachronist
Gold Member
119
58
rocknrollkieran said:
I asked ChatGPT this but couldn't get a satisfactory answer.
Why? Why do people do this?

You never get a satisfactory answer when asking for factual information that normally requires some research or study to find. ChatGPT gives you hallucinations instead.

I asked it to give examples of books published by university presses that discuss fringe views, and it gave me either books that didn't fit my criterion or books that don't exist.

I asked it for a simple bit of information: which is the nearest door number to United Airlines baggage claim carousel #6 at the San Francisco airport? I can look it up on a map, but ChatGPT basically hemmed and hawed and gave me canned responses containing only general information, not what I specifically asked.

I asked it to produce some OpenSCAD code that makes a specific 3D shape, and it gave me non-functional code with syntax mixed together from multiple languages.

And on and on. Every time I ask ChatGPT something factual, I ask it something that I can check myself, and the answer is almost always factually incorrect.

What ChatGPT excels at is making stuff up. Writing poetry, writing a paragraph, inventing a compelling title for a particular essay on some subject, creating a paragraph that includes particular meanings, and so on. But that sort of inventiveness is undesirable when looking for scientific facts.
 
  • Like
  • Sad
Likes Astronuc, DrClaude, russ_watters and 4 others
  • #2
Anachronist said:
You never get a satisfactory answer when asking for factual information that normally requires some research or study to find.
I don't believe that is a true statement.
Anachronist said:
ChatGPT gives you hallucinations instead.
The usual criticism of ChatGPT is that it is a "glorified search engine". Wouldn't hallucinations be a sign of creative intelligence?
Anachronist said:
I asked it for a simple bit of information: which is the nearest door number to United Airlines baggage claim carousel #6 at the San Francisco airport?
How is anybody or anything supposed to figure that out? It's a ridiculous question.
Anachronist said:
I asked it to produce some OpenSCAD code that makes a specific 3D shape, and it gave me non-functional code with syntax mixed together from multiple languages.
I'll believe that.
Anachronist said:
Every time I ask ChatGPT something factual, I ask it something that I can check myself, and the answer is almost always factually incorrect.
I don't believe that.
Anachronist said:
What ChatGPT excels at is making stuff up. Writing poetry, writing a paragraph, inventing a compelling title for a particular essay on some subject, creating a paragraph that includes particular meanings, and so on. But that sort of inventiveness is undesirable when looking for scientific facts.
I don't believe that's a valid assessment of its capabilities.
 
  • Skeptical
  • Like
Likes russ_watters and phinds
  • #3
Anachronist said:
You never get a satisfactory answer when asking for factual information that normally requires some research or study to find. ChatGPT gives you hallucinations instead.
Here's something that essentially disproves your false claim:

How many planets are in the solar system?

There are eight planets in the solar system. In order from the Sun, they are:
  1. Mercury
  2. Venus
  3. Earth
  4. Mars
  5. Jupiter
  6. Saturn
  7. Uranus
  8. Neptune
Pluto was previously considered the ninth planet, but it was reclassified as a "dwarf planet" by the International Astronomical Union (IAU) in 2006.
 
  • Like
Likes AlexB23
  • #4
Here's another Q&A:

Where is carousel #6 for United Airlines at San Francisco airport?

At San Francisco International Airport (SFO), Carousel #6 for United Airlines baggage claim is typically located in Terminal 3, where United operates most of its domestic flights. Terminal 3 has multiple baggage carousels, and you can usually find clear signage directing you to the specific carousel for your flight.

If you're unsure, you can always check the monitors near the baggage claim area, which display the flight numbers and corresponding carousels, or ask a staff member for assistance.
 
  • #5
I agree with @Anachronist generally. ChatGPT is a Large Language Model. It has no fact model. When it happens to get facts right it is generally a matter of luck. That makes it very unreliable for any factual content.
 
  • Like
  • Skeptical
Likes DrClaude, russ_watters, phinds and 1 other person
  • #6
Dale said:
I agree with @Anachronist generally. ChatGPT is a Large Language Model. It has no fact model. When it happens to get facts right it is generally a matter of luck. That makes it very unreliable for any factual content.
In a general knowledge contest between you and ChatGPT you would not have a hope! It would be completely unbeatable even by the best human quiz expert.
 
  • Skeptical
  • Like
Likes russ_watters, phinds and Hornbein
  • #7
PeroK said:
In a general knowledge contest between you and ChatGPT you would not have a hope! It would be completely unbeatable even by the best human quiz expert.
I am not certain that is true. If the human had access to Google and time were not a factor then I doubt that ChatGPT would outperform a human.

But on PF we are not talking about general knowledge, we are talking about specific technical knowledge.
 
  • Like
Likes russ_watters, AlexB23 and Baluncore
  • #8
Dale said:
I am not certain that is true. If the human had access to Google and time were not a factor then I doubt that ChatGPT would outperform a human.
This is a fantasy. And, time is a factor.

Dale said:
But on PF we are not talking about general knowledge, we are talking about specific technical knowledge.
Not on this thread. This thread is about claims that ChatGPT can almost never give a factually correct answer:

Anachronist said:
You never get a satisfactory answer when asking for factual information that normally requires some research or study to find. ChatGPT gives you hallucinations instead.

Anachronist said:
Every time I ask ChatGPT something factual, I ask it something that I can check myself, and the answer is almost always factually incorrect.

Anachronist said:
What ChatGPT excels at is making stuff up. Writing poetry, writing a paragraph, inventing a compelling title for a particular essay on some subject, creating a paragraph that includes particular meanings, and so on. But that sort of inventiveness is undesirable when looking for scientific facts.
Dale said:
I agree with @Anachronist generally. ChatGPT is a Large Language Model. It has no fact model. When it happens to get facts right it is generally a matter of luck. That makes it very unreliable for any factual content.
There is no mention of science or "technical" knowledge there.
 
  • #9
PeroK said:
And, time is a factor.
I wouldn’t accept that as being a fair comparison.
 
  • Like
Likes russ_watters and Tom.G
  • #10
Dale said:
I wouldn’t accept that as being a fair comparison.
You could do the payroll calculations for a large corporation. A computer might do it in an hour and you might take 10 years. That makes a critical difference. That's why rooms full of clerks were replaced by computers in the 1970s.

Anyway, we are never going to agree. I get that you don't like ChatGPT, but imagining you can do everything it can do is as fanciful as imagining it can do everything you can do.
 
  • Skeptical
Likes russ_watters
  • #11
Here is a good general knowledge one. “difference between a sauce and a dressing”



AI answer: "The main difference between a sauce and a dressing is their purpose: sauces add flavor and texture to dishes, while dressings are used to protect wounds,"
 
  • Haha
  • Like
Likes Astronuc, nsaspook, russ_watters and 2 others
  • #12
PeroK said:
That makes a critical difference.
And one I am willing to concede without experimentation. What AI can do it can do faster than a human. That is conceded in advance.

Whether its factual statements are reliable is a different claim, and such a comparison with a human should not be tied to time.
 
  • #13
If AI really is so hopelessly inferior to human intelligence, then why is there any issue with students using AI to do homework? The work would lack any grasp of context by the author, be full of factual errors and be so obviously nonsensical as to immediately discredit the student. But, AI output is none of these things.

There's a problem because AI can produce superior work. It does understand context and can marshal facts in a convincing manner. To deny this is simply not a credible position. AI is a growing problem in academia because it is so good.

If what the mentors on PF believed were true, there would be no issue with AI yet. It would be making no impact on universities and academia.
 
  • Skeptical
Likes russ_watters, weirdoguy and Dale
  • #14
The general issue with current AI is that it is factually unreliable but that it writes language with a confident tone and convincing structure. The issue is that the language is good but the facts are not.

The specific issue with students and homework is that the purpose of education is for them to learn.
 
  • Like
Likes Hornbein and russ_watters
  • #15
As far as I know, ChatGPT has passed various exams from law and business schools. That's incompatible with the hypothesis that most or almost all of its answers are factually wrong.
 
  • Like
  • Skeptical
Likes russ_watters and phinds
  • #16
PeroK said:
As far as I know, ChatGPT has passed various exams from law and business schools. That's incompatible with the hypothesis that most or almost all of its answers are factually wrong.
It has also almost gotten lawyers who used it disbarred for generating false case law and precedent.

https://www.bbc.com/news/world-us-canada-65735769

Success on standardized exams is not a strong demonstration of factualness. Many standardized exams are widely published and their answers are available online. They are designed to challenge humans; they are not designed to test an AI's ability to give factual output.
 
  • Like
  • Haha
Likes Astronuc, DrClaude, Tom.G and 2 others
  • #17
Facts is facts!
 
  • #18
PeroK said:
Facts is facts!
Yup!

It depends on what you are after.

Let's say you want to dig into what gravity is. Which book is more likely to answer your questions?

Gravity & Grace: How to Awaken Your Subtle Body And the Healing ...

by Peter Sterios (yoga teacher and trainer)

GRAVITATION

by Charles Misner, Kip Thorne, and John Wheeler (Thorne is a physicist at the California Institute of Technology)

'Nuff said.
 
  • #19
Tom.G said:
'Nuff said.
Seems to me you just set up a silly strawman and I don't get your point at all.
 
  • Like
Likes PeroK
  • #20
As often this seems to come down to an argument over definitions and hyperbole. Advocates of LLMs being "AI" or "intelligent" tend to cast a wide net. Still:
PeroK said:
The usual criticism of ChatGPT is that it is a "glorified search engine". Wouldn't hallucinations be a sign of creative intelligence?
I prefer "glorified autofill", and no, I don't consider making things up and presenting them as true to be a sign of intelligence, creative or otherwise. Seriously, you do?
PeroK said:
This is a fantasy. And, time is a factor.
Google and Excel are faster than I am, and I would not consider either to be "intelligence". Would you?
PeroK said:
You could do the payroll calculations for a large corporation. A computer might do it in an hour and you might take 10 years. That makes a critical difference. That's why rooms full of clerks were replaced by computers in the 1970s.
You're claiming a 1970s computer is "AI"? That's a broader definition than I have ever heard, and IMO it cheapens the term to pointlessness. Every computer is now "AI". And probably an abacus (though I've never used one).
PeroK said:
If AI really is so hopelessly inferior to human intelligence...
OP being hyperbolic doesn't make it OK for you to be.
 
  • Like
  • Sad
Likes BillTre, weirdoguy and PeroK
  • #21
russ_watters said:
You're claiming a 1970s computer is "AI"? That's a broader definition than I have ever heard, and IMO cheapens it to pointlessness. Every computer is now "AI". And probably an abacus (though i've never used one).
OP being hyperbolic doesn't make it OK for you to be.
The question was not about intelligence but whether the time to give an answer was important. The IT systems of the 70's in general did nothing that wasn't already being done by humans. The critical thing was that the computer systems could do it faster. Or, perhaps more to the point, cheaper.

One of the problems that PF has is that we (even collectively) can take a long time to respond to a homework thread, for example. Someone asks a question at 8am, gets a first response at 8:45, etc. Even if one of us can ultimately give a better answer than ChatGPT, we have a long lag time. In that sense we cannot compete with ChatGPT. And it doesn't help to pretend that this lag time doesn't matter, or to claim that a human armed with Google can do what ChatGPT can do - and, eventually, an hour later, come back with the same answer.

Pretending that ChatGPT is "almost always factually incorrect" is not a realistic attitude to the emergence of LLMs. Pretending that we, as individuals, have as good a general knowledge as an LLM is likewise not a realistic attitude.

russ_watters said:
I prefer "glorified autofill" ...
It doesn't matter how it does it. Insulting something won't make it go away - or make the rest of the world believe what you choose to believe.

LLMs are an extraordinary advance on anything IT has previously done. Ridiculing them, claiming they can't get anything factually right, or claiming they can't do anything that a user armed with Google can't do is missing the point entirely.
 
  • Like
Likes phinds and Borg
  • #22
I never use ChatGPT for anything factual. It's really good at writing ad copy, a very stylized craft. Recently I was exposed to stadium rock generated by an AI. It was better than the real thing. I was very impressed.

ChatGPT is at best Model T Ford technology. Try to imagine what the F-35 version will be able to do. I can't.
 
  • Like
Likes Dale
  • #23
I am with @PeroK on this one. ChatGPT is a tool that can greatly aid a user - if they have the skill and understanding to use it properly. Developing those takes time and a skeptical eye on the responses. Complaining that you can't hammer nails with a drill or that the hole was too big because you used the wrong bit isn't a failure of the drill's capabilities.

One example from this weekend of how I had to adjust what I was asking. I was doing research on attractions in Paris and Brussels. I asked the 4.0 model to generate a CSV file with the top 20 attractions in Brussels with these headers - name, address, web page and distance of the attraction from the Grand Place. It generated this pretty flawlessly except for non-UTF-8 characters in the names and addresses. I then asked it to do the same for the top 20 attractions in Paris and list distances from the Musée d'Orsay. No matter how I asked, it kept using the distance to Brussels (probably the Grand Place) as the distance to d'Orsay. I eventually had to open a new conversation so that it wouldn't have any history of the Brussels information.

The point of this example is that the model will use as much information from a conversation as possible and that can sometimes cause problems. In the extreme cases where you never create new conversations or frequently keep long, meandering conversations that jump from topic to topic, it can get even more confused. This is just one of the many ways that I've had to adjust my thinking when using ChatGPT. The better that I do that, the better the responses that I get in return.
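Borg's carry-over problem can be sketched with how chat-style APIs are typically structured (a hedged illustration in the common OpenAI-style `messages` convention; the function and prompts here are hypothetical, not code from this thread). The model receives the entire conversation on every request, so the only way to truly drop earlier context is to start a fresh conversation:

```python
# Sketch: chat models are sent the WHOLE conversation each turn, so the
# earlier Brussels/Grand Place prompt keeps riding along in the payload
# until a brand-new conversation is started.

def ask(history, user_prompt):
    """Append a user turn; a real client would send `history` to the API."""
    history.append({"role": "user", "content": user_prompt})
    return history

# One long conversation: the Brussels context is still present when the
# Paris question is asked, which is how it can leak into the answer.
conversation = [{"role": "system", "content": "You are a travel assistant."}]
ask(conversation, "Top 20 attractions in Brussels, distances from the Grand Place.")
ask(conversation, "Now the same for Paris, distances from the Musée d'Orsay.")
brussels_still_present = any("Grand Place" in m["content"] for m in conversation)

# A new conversation drops that history entirely -- the eventual fix above.
fresh = [{"role": "system", "content": "You are a travel assistant."}]
ask(fresh, "Top 20 attractions in Paris, distances from the Musée d'Orsay.")
brussels_gone = not any("Grand Place" in m["content"] for m in fresh)
```

This is only a payload sketch, but it makes the mechanism concrete: "forgetting" is not something you can prompt for within a conversation whose history still contains the unwanted context.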
 
Last edited:
  • Like
  • Informative
Likes Tom.G, Dale and PeroK
  • #24
Borg said:
I then asked it to do the same for the top 20 attractions in Paris and list distances from the Orley Museum.
That would have been a challenge: there is no such place. Perhaps you are confusing the Musée d'Orsay with the Parisian suburb of Orly, where the international Orly Airport is (mainly) situated?

Borg said:
No matter how I asked, it kept using the distance to Brussels (probably the Grand Place)
More likely the Rue van Orley.

And here you have exposed the problem with the use of ChatGPT and similar LLMs - no answer they give can be relied upon without independent verification.
 
  • Like
Likes russ_watters and Dale
  • #25
pbuk said:
That would have been a challenge: there is no such place. Perhaps you are confusing the Musée d'Orsay with the Parisian suburb of Orly, where the international Orly Airport is (mainly) situated?
Yes, that's the one (corrected above). I did use the correct spelling over the weekend. I tried to type it from memory and got it wrong - therefore nobody should ever trust anything I write ever again. :wink:
 
Last edited:
  • #26
Dale said:
When it happens to get facts right it is generally a matter of luck.

What % of questions would the model need to get right to convince you that it's not just luck... 90%, 95%, 99%, more?

Because some of the newer models are getting close. Here is some data for the MMLU benchmark.
https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
 
  • Like
Likes phinds, Borg and PeroK
  • #27
PeroK said:
Pretending that ChatGPT is "almost always factually incorrect" is not a realistic attitude to the emergence of LLMs. Pretending that we, as individuals, have as good a general knowledge as an LLM is likewise not a realistic attitude.
I disagree on both of these points. Current LLMs are only language models. They have no fact model. They do not have general knowledge or factual knowledge at all. Those are simply not part of their design. To expect a machine to reliably do something that is not part of its design is wrong.

It is not that LLMs do not understand what they are saying. It is that they are not even designed to be able to understand.

A human has both a world model and a language model, and we build and refine both together. Our language model is therefore always anchored in facts and experience through our world model, so when a normal human puts language together, those words have meaning by virtue of the associated world model.

A LLM simply does not have that. The LLM produces language entirely by associating words with other words.
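The "associating words with other words" point can be made concrete with a toy next-word predictor (a deliberately minimal sketch of my own; real LLMs use learned parameters over token contexts, not raw bigram counts, but the absence of any fact model is the same):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always emit the most frequent successor. It produces fluent-looking
# text purely from word statistics, with no notion of truth.
corpus = "the sky is blue . the sea is blue . the sky is falling .".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(word, n=4):
    out = [word]
    for _ in range(n):
        word = successors[word].most_common(1)[0][0]  # likeliest next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the sky is blue ."
```

Note that when the output happens to be true ("the sky is blue"), nothing in the computation checked that; change the corpus frequencies and it would just as fluently emit "the sky is falling".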

ergospherical said:
What % of questions would the model need to get right to convince you that it's not just luck... 90%, 95%, 99%, more?
I misspoke saying it was luck. It is word correlation.

ergospherical said:
Because some of the newer models are getting close.
I don’t know how the newer models are designed. Do they contain fact models or world models now?

Frankly, if something like Watson, which is designed with an underlying fact model, were scoring even at average human levels then I would attribute understanding to the AI. But a LLM is simply not designed to understand nor is it designed to produce factually correct responses. You cannot expect a machine to do something it is not designed to do.
 
Last edited:
  • Like
Likes russ_watters
  • #28
Dale said:
You are wrong on both of these points. Current LLM’s are only language models. It has no fact model. It does not have general knowledge or factual knowledge at all. Those simply are not part of its design. To expect a machine can reliably do something that is not part of its design is wrong.
You obviously have some problem accepting what, with the evidence of your own eyes, you can see ChatGPT do. There's no point in arguing any further.
 
  • Skeptical
  • Like
Likes russ_watters and phinds
  • #29
Dale said:
You cannot expect a machine to do something it is not designed to do.
It's not a question of what I expect ChatGPT to be able to do. It's what I can see it doing. The evidence overrides your dogmatic assertions.
 
  • Like
Likes phinds
  • #30
@Dale I don't agree with your characterization. By design, LLMs don't explicitly store factual information, but they do implicitly store it in the model parameters (accumulated during pre-training). This is what gives them demonstrably high accuracy scores across benchmarks including STEM questions.
 
  • Like
Likes PeroK
  • #31
PeroK said:
You obviously have some problem accepting what, with the evidence of your own eyes, you can see ChatGPT do
With my eyes I have seen it frequently fail to get facts correct. I have seen it manufacture references, and I have seen it contradict itself factually. My observations of its limitations coincide with my understanding of its design.
 
  • Like
Likes russ_watters, pbuk, DrClaude and 1 other person
  • #32
My own (admittedly somewhat limited) experience aligns completely with @PeroK's view. I have had it make egregious mistakes, I have seen it make stuff up, and I have seen it give repeatedly different but all false answers when I tried to refine a question or pointed out that it was wrong.

BUT ... all of that was rare and I have seen it give informative, lucid, and most importantly, correct, answers to numerous factual questions.

I have also had it produce admittedly relatively simple blocks of VB.NET code that were not only correct, they were also very intelligently commented, which is more than I can say for many of the programmers who worked for me over the years.

@Dale I am puzzled by your vehement opposition to what I see as a very useful tool that is only getting better and better over time. I do NOT argue that it is in any way intelligent, only that it does very useful stuff.
 
  • Like
Likes Borg, BillTre and PeroK
  • #33
Dale said:
With my eyes I have seen it frequently fail to get facts correct. I have seen it manufacture references, and I have seen it contradict itself factually.
Sounds a bit like what my high-school History teacher used to say at parents' evening. :)
 
Last edited:
  • #34
phinds said:
@Dale I am puzzled by your vehement opposition to what I see as a very useful tool that is only getting better and better over time. I do NOT argue that it is in any way intelligent, only that it does very useful stuff
In general, I am a pretty firm believer of using the right tool for the job. When people use a tool for things that the tool was not designed to do then the job can be ruined and other unnecessary hazards and costs can result.

ChatGPT is a useful tool for language. Not for facts. It is simply not designed for that purpose.

In particular, as a mentor I see a lot of the junk physics that ChatGPT produces. It produces confident but verbose prose that is usually unclear and often factually wrong. This is more frequent in e.g. relativity where the facts are difficult and small changes in wording make a big difference in meaning.

In my day job in a highly-regulated and safety-critical industry it makes instructions that are more difficult to understand than the original manual and often changes the order of different steps or merges steps from different processes. I have yet to see AI generated documentation summaries that would not get my company in trouble with regulatory agencies.

The developers of ChatGPT have publicly described its design in quite some detail. The description as an enhanced autocomplete is accurate. They are very clear that they did not design it with any fact model. Contrast this with an AI like Watson, whose designers did include a fact model. Facts that ChatGPT gets right are not right by design.

I am open to future AI that is designed with facts in mind and gets facts right by design. But LLMs are simply not designed to do that, and the demonstrable results coincide with that lack of design. The tool is not being used for its designed purpose.
 
  • Like
Likes Astronuc, nsaspook, DrClaude and 1 other person
  • #35
PeroK said:
The question was not about intelligence but whether the time to give an answer was important. The IT systems of the 70's in general did nothing that wasn't already being done by humans. The critical thing was that the computer systems could do it faster. Or, perhaps more to the point, cheaper.
That isn't an answer to my question. I asked if you're claiming a 1970s computer is AI because it can do math faster (and more accurately) than a human. You also didn't answer the question you posed there, though you did seem to answer it elsewhere: speed is a feature of AI. I disagree. I don't think Turing would have objected to his test being conducted via pen pal.
PeroK said:
One of the problems that PF has is that we (even collectively) can take a long time to respond to a homework thread, for example. Someone asks a question at 8am, gets a first response at 8:45, etc. Even if one of us can ultimately give a better answer than ChatGPT, we have a long lag time. In that sense we cannot compete with ChatGPT. And it doesn't help to pretend that this lag time doesn't matter, or to claim that a human armed with Google can do what ChatGPT can do - and, eventually, an hour later, come back with the same answer.
I certainly agree that's a problem for PF and definitely explains why we've lost traffic, but as far as I can tell it doesn't have anything to do with whether ChatGPT is AI...except maybe for that speed thing you've referred to elsewhere. The general problem, though, has existed since PF started: users can google the answers instead of asking us.
PeroK said:
Pretending that ChatGPT is "almost always factually incorrect" is not a realistic attitude to the emergence of LLMs.
Hyperbole mirror ignored.
PeroK said:
Pretending that we, as individuals, have as good a general knowledge as an LLM is likewise not a realistic attitude.
We can't compete with Wikipedia, but is that a reasonable/realistic criterion for AI? Is speed an important criterion or not? Depth/breadth of knowledge? I would argue not. Moreover, is "knowledge" the same as "intelligence"? Again, I think advocates of AI tend to cast a wide net and are very careless with definitions/criteria.
PeroK said:
It doesn't matter how it does it. Insulting something won't make it go away - or make the rest of the world believe what you choose to believe.

LLMs are an extraordinary advance on anything IT has previously done. Ridiculing them, claiming they can't get anything factually right, or claiming they can't do anything that a user armed with Google can't do is missing the point entirely.
And ad hominem won't make it AI. Er...maybe it will, if the AI could do it?

And yes, it does matter how it does it. That's much of what the problem/question is. Take this example of the bar exam. Could @Dale pass the bar exam? Could you or I? Given as much time to work on it as we want? ChatGPT has "studied" Wikipedia. Doesn't it make sense that we should be able to pass the test too? Further, memory is definitely knowledge, but is it intelligence? ChatGPT has already searched and analyzed Wikipedia. If I can search Wikipedia for the answer, is that a demonstration of intelligence or just access to somebody else's archived knowledge?

I'll submit this:
Memory is not intelligence.
Knowledge is not intelligence.
Complex reasoning is intelligence.

ChatGPT is trained to write coherently and to summarize things that it has "read". That's nice (very nice -- seriously), but that's just an interface, not "intelligence". I judge ChatGPT not on the regurgitated knowledge it gets right, but on the complex reasoning it gets wrong.
 
  • Like
Likes pbuk, Astronuc and Dale
