In summary: we should pay more attention and be a little more concerned, because honestly I didn't believe it would reach this point yet. Not because of any "AI singularity" nonsense, but because it seems to still be learning and getting better.
  • #71
That's just a statement that you can pre-train your program on a large number of questions. I've already said it was much slower than real time. It doesn't make any difference to what the program does. It does, however, make a difference to the illusion of intelligence.

As discussed, ChatGPT doesn't even try to output what is correct. It tries to output what is written often. There is some hope that there is a correlation between that and correctness, but that's not always true, and it was not hard to come up with counterexamples.

ChatGPT is the love child of Clever Hans and the Mechanical Turk.
 
  • Like
  • Haha
  • Love
Likes physicsworks, PeterDonis and nsaspook
  • #72
> As discussed, ChatGPT doesn't even try to output what is correct.

Exactly. It also tries to err on the side of providing an answer, even when it has no idea what the right answer is. I used Stable Diffusion to generate pictures of composite animals that don't exist, then asked ChatGPT multiple times to identify them. The AI *never*, not even once, said "I can't identify that" or "I don't know what that is," nor did it suspect that it wasn't a real animal. Its guesses were at least related to the broad class of animal the composites resembled, but that was it.

There is no there, there.
 
  • Like
Likes dextercioby, russ_watters and AndreasC
  • #73
Oscar Benavides said:
It also tries to err on the side of providing an answer
It doesn't even "try"--it will always output text in response to a prompt.

Oscar Benavides said:
even when it has no idea what the right answer is
It never does, since it has no "idea" of any content at all. All it has any "idea" of is relative word frequencies.
 
  • Like
Likes Math100, Motore, Vanadium 50 and 3 others
  • #74
I'm not even sure how you could measure uncertainty in the output based on word frequency. "Some people say Aristotle was Belgian" will throw it off.
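For illustration only, here is a toy sketch (mine, not anything a real model does) of the problem: one crude way to quantify uncertainty from word frequencies would be the entropy of the next-word distribution, and a single stray claim in the training text is enough to flatten it:

```python
from collections import Counter
from math import log2

# Hypothetical counts of words observed after "Aristotle was" in a corpus
clean = ["Greek"] * 100
polluted = ["Greek"] * 95 + ["Belgian"] * 5  # a few "some people say..." sentences

def next_word_entropy(samples):
    """Shannon entropy (in bits) of the empirical next-word distribution."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(next_word_entropy(clean))     # 0.0 bits: the corpus is unanimous
print(next_word_entropy(polluted))  # ~0.29 bits: the stray claim shows up as "uncertainty"
```

The catch being that such a measure reflects disagreement in the training text, not distance from the truth.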
 
  • Like
Likes Oscar Benavides
  • #75
I tried using it a couple of times and for me it is really not useful. For complex code I found it's faster to go to Stack Overflow, because there I get some understanding of the code besides the code itself.
The only thing it is really good at is language-based requests (write me a song, interpret this or that text, draft an email, ...), which some people will find useful.
For research or factual questions it's too unreliable. It's just faster to use Wikipedia.
 
  • Like
Likes weirdoguy
  • #76
I know someone who has the paid version and says it's a lot more reliable. Previously, using the free version, a request for scientific references on a topic produced 40 authentic-looking but completely fabricated references. The paid version produced real references that all checked out.
 
  • #77
bob012345 said:
I know someone who has the paid version and says it's a lot more reliable.
Is there any reference online about this paid version and how it differs from the free version?
 
  • Like
Likes russ_watters
  • #79
bob012345 said:
Thanks! It looks like, at the very least, the paid version includes searching the Internet for actual answers to prompts, so it is not the same thing as the free version that my Insights article (and the Wolfram article it references) discuss.
 
  • Like
Likes Math100, russ_watters and bob012345
  • #80
OpenAI explains the differences between ChatGPT 3, 3.5, and 4 (and indicates the plans and timeline for 5) on its website.
 
  • #81
AndreasC said:
How do we know at what point it "knows" something? There are non-trivial philosophical questions here... These networks are getting so vast and their training so advanced that I can see someone eventually arguing they have somehow formed a decent representation of what things "are" inside them. [...]

I think that's the point exactly. At some point we'll be unable to tell the difference, and the person who calls you trying to convince you to change your phone company, electricity company, or whatever might be a machine. But if you can't tell the difference, then what is the difference?!

---------------------------------------------------------------

Filip Larsen said:
Stochastic parrot. Hah! Very apt.
russ_watters said:
Maybe the intent was always to profit from 3rd parties using it as an interface [...]

pbuk said:
Ya think?[...]

And then we enter the land of sarcasm. :)

---------------------------------------------------------------

This ChatGPT thingy really gets people riled up. I suspect especially the teaching part of the community here. ;P

... still reading....
 
  • #82
What it means "to know" is a question of philosophy.

However, an epistemologist would say that an envelope containing the phrase "It is after 2:30 and before 2:00" does not possess knowledge, even though it is correct about as often as ChatGPT.
 
  • Like
Likes Bystander
  • #83
I'm not convinced that human intelligence is so effective. This site in many ways is a gross misrepresentation of human thought and interactions. For all the right reasons! Go anywhere else on the Internet or out in the street, as it were, and there is little or no connection between what people think and believe and objective evidence.

Chat GPT, if anything, is more reliable in terms of its objective assessment of the world than the vast majority of human beings.

Chat GPT doesn't have gross political, religious or philosophical prejudices.

If you talked to an oil company executive, there was no climate change and the biggest threat to humanity was the environmental movement.

Most human beings deliberately lie if it is in their interests. With Chat GPT, at least you know it isn't deliberately lying to you.

I don't know where AI is going, or where we are heading, but I could make a case that Chat GPT is more rational, intelligent and truthful than 99% of the people on this planet.
 
  • Skeptical
  • Like
Likes mattt and Bystander
  • #84
PeroK said:
Chat GPT, if anything, is more reliable in terms of its objective assessment of the world
ChatGPT does not have any "objective assessment of the world". All it has is the relative word frequencies in its training data.

Wolfram Alpha, ironically, would be a much better thing to describe with the phrase you use here. It actually does contain a database (more precisely multiple databases with different entry and lookup criteria) with validated information about the world, which it uses to answer questions.

PeroK said:
Chat GPT doesn't have gross political, religious or philosophical prejudices.
Only for the same reason a rock doesn't.
 
  • Like
  • Skeptical
  • Haha
Likes dextercioby, Bystander, russ_watters and 3 others
  • #85
PeterDonis said:
ChatGPT does not have any "objective assessment of the world". All it has is the relative word frequencies in its training data.

Wolfram Alpha, ironically, would be a much better thing to describe with the phrase you use here. It actually does contain a database (more precisely multiple databases with different entry and lookup criteria) with validated information about the world, which it uses to answer questions.

Only for the same reason a rock doesn't.
In a practical sense, you could live according to what answers ChatGPT gives you. Wolfram Alpha is a mathematical engine. It's not able to communicate on practical everyday matters. Nor can a rock.

How any software works is not really the issue if you are an end user. The important thing is what it outputs.

You are too focused, IMO, on how it does things and not what it does.
 
  • Skeptical
Likes Motore
  • #86
PeroK said:
In a practical sense, you could live according to what answers ChatGPT gives you.
For your sake I sincerely hope you don't try this. Unless, of course, you only ask it questions whose answers you don't really care about anyway and aren't going to use to determine any actions. Particularly any actions that involve risk of harm to you or others.

PeroK said:
Wolfram Alpha is a mathematical engine. It's not able to communicate on practical everyday matters.
Sure it is. You can ask it questions in natural language about everyday matters and it gives you answers, if the answers are in its databases. Unlike ChatGPT, it "knows" when it doesn't know an answer and tells you so. ChatGPT doesn't even have the concept of "doesn't know", because it doesn't even have the concept of "know". All it has is the relative word frequencies in its training data, and all it does is produce a "continuation" of the text you give it as input, according to those relative word frequencies.
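To make "continuation according to relative word frequencies" concrete, here is a deliberately crude bigram toy in Python (real models use learned neural weights rather than a literal lookup table, but the sampling idea is the same):

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Record every word ever observed to follow each word;
# repeats in the lists are what encode the relative frequencies.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_text(prompt_word, length=5):
    """Extend the prompt by repeatedly sampling from observed next-word frequencies."""
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # nothing ever followed this word in the corpus
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(continue_text("the"))  # fluent-looking, yet nothing here models truth or knowledge
```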

Granted, Wolfram Alpha doesn't communicate its answers in natural language, but the answers are still understandable. Plus, it also includes in its answers the assumptions it made while parsing your natural language input (which ChatGPT doesn't even do at all--not just that it doesn't include any assumptions in its output, but it doesn't even parse its input). For example, if you ask Wolfram Alpha "what is the distance from New York to Los Angeles", it includes in its answer that it assumed that by "New York" you meant the city, not the state.
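For instance, using the third-party wolframalpha Python client (pip install wolframalpha; the app ID below is a hypothetical placeholder, you need your own developer key), the parsing assumption comes back alongside the answer:

```python
import wolframalpha  # third-party client for the Wolfram|Alpha API

client = wolframalpha.Client("YOUR-APP-ID")  # hypothetical placeholder key

res = client.query("what is the distance from New York to Los Angeles")

# The primary result pod holds the answer; the "Input interpretation" pod
# typically records the assumption that "New York" meant the city.
print(next(res.results).text)
for pod in res.pods:
    print(pod.title)
```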

PeroK said:
You are too focused, IMO, on how it does things and not what it does.
Huh? The Insights article under discussion, and the Wolfram article it references, are entirely about what ChatGPT does, and what it doesn't do. Wolfram also goes into some detail about the "how", but the "what" is the key part I focused on.
 
  • #87
PeroK said:
You are too focused, IMO, on how it does things and not what it does.
Could you make the same argument for astrology? Yesterday it told me to talk to a loved one and it worked!
 
Last edited:
  • Like
Likes PeterDonis, pbuk, dextercioby and 1 other person
  • #88
PeterDonis said:
For your sake I sincerely hope you don't try this. Unless, of course, you only ask it questions whose answers you don't really care about anyway and aren't going to use to determine any actions.
I don't personally intend to, no. But, there are worse ways to get answers.
 
  • #89
PeroK said:
there are worse ways to get answers.
So what? That doesn't make ChatGPT good enough to rely on.
 
  • Like
Likes Motore
  • #90
PeroK said:
I don't personally intend to, no.
Doesn't that contradict your previous claim here?

PeroK said:
In a practical sense, you could live according to what answers ChatGPT gives you.
If you're not willing to do this yourself, on what basis do you justify saying that someone else could do it?
 
  • #91
Vanadium 50 said:
Could you make the same argument for astrology? Yesterday it told me to talk to a loved one and it worked!
There's no comparison. Chat GPT, however imperfectly, is working on a global pool of human knowledge. There's a rationale that it's trying to produce an unprejudiced, balanced answer.

Perhaps it will fail to develop. But, ten years from now, who knows how many people will be using it or its competitors as their mentor?
 
  • #92
PeterDonis said:
Doesn't that contradict your previous claim here?

If you're not willing to do this yourself, on what basis do you justify saying that someone else could do it?
People can do what they want. It's an option, for sure. In fact, we've seen some evidence on here that significant numbers of people are using it to learn about physics.

If some people choose to live by astrological charts, then others can choose to live by ChatGPT. I choose to do neither. For the time being.
 
  • #93
PeroK said:
Chat GPT, however imperfectly, is working on a global pool of human knowledge.
No, it's working on a global pool of text. That's not the same as "knowledge". ChatGPT has no information about the connection of any of the text in its training data with the actual world. It doesn't even make use of the text in itself; it only makes use of the relative word frequencies in the text.

PeroK said:
ten years from now, who knows how many people will be using it or its competitors as their mentor?
Not in its current form. The next obvious step in the evolution of such models--connecting them to actual real-world data--is already being taken, at the very least with the paid version of ChatGPT (mentioned in earlier posts), which includes actual lookups in various data sources (web search, for one, and for another, ironically, Wolfram Alpha) when generating responses. In other words, it does the key things that the current free version, which is what this Insights article discussed, does not. Ten years from now, I expect that further steps along those lines will have been taken and will have made these tools reliable in a way that the current ChatGPT is not.
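Schematically, that next step is a single extra stage in the pipeline. A minimal sketch, with a toy lookup table standing in for real web search or Wolfram Alpha calls (every name here is hypothetical):

```python
# Toy "knowledge base" standing in for a real web-search or Wolfram Alpha lookup
KNOWLEDGE = {
    "capital of australia": "Canberra is the capital of Australia.",
}

def retrieve(query):
    """Hypothetical retrieval step: exact-match lookup in a tiny corpus."""
    return KNOWLEDGE.get(query.lower())

def answer(question):
    fact = retrieve(question)
    if fact is None:
        # Unlike a bare language model, a grounded system can refuse to guess.
        return "I don't know."
    # A real system would hand `fact` to the model as context for its response.
    return fact

print(answer("capital of Australia"))            # Canberra is the capital of Australia.
print(answer("ferry from Istanbul to Trieste"))  # I don't know.
```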
 
  • #94
PeroK said:
People can do what they want.
Sure, but now you're backing away from your previous claim. People are free to choose to do stupid things, of course; but previously you were saying that relying on ChatGPT for practical information was not stupid. Now you're back-pedaling and saying, well, yes, it is stupid, just like relying on astrology, but there will always be people who choose to do stupid things.
 
  • #95
So I tried it, but it was noncommittal!

Should we climb the Moine Ridge on Thursday this week?

To make an informed decision about climbing the Moine Ridge on Thursday, I recommend checking weather forecasts, consulting with experienced climbers or local mountaineering authorities, and assessing your own skills and experience. Additionally, consider factors such as trail conditions, safety equipment, and the overall fitness and preparedness of your climbing team.
Mountain environments can be unpredictable and potentially dangerous, so it's essential to prioritize safety and make well-informed decisions.
 
  • #96
PeterDonis said:
So what? That doesn't make ChatGPT good enough to rely on.
People already rely on a steady diet of lies and misinformation from human sources. ChatGPT is at least honest. I would trust ChatGPT more than I would the US Supreme Court, for example.
 
  • Skeptical
Likes physicsworks and russ_watters
  • #97
PeroK said:
Chat GPT, however imperfectly, is working on a global pool of human knowledge.
Not even that, it just predicts words. It doesn't care if the sentence it makes actually describes anything real. It cannot.
An example:
Q: How long does a ferry ride from Istanbul to Trieste take?
ChatGPT:
A direct ferry ride from Istanbul to Trieste is not available, as these two cities are located in different countries and are quite far apart. Istanbul is in Turkey, while Trieste is in northeastern Italy.

To travel between Istanbul and Trieste, you would need to consider alternative transportation options such as flights, trains, or buses...


Of course, there is a route from Istanbul to Trieste (at least that's what google tells me).

Sure more data, more parameters will make it better, but it's still not reliable.
 
  • Like
Likes dextercioby
  • #98
PeroK said:
ChatGPT is at least honest.
No, it's not. "Honest" requires intent. ChatGPT has no intent.

PeroK said:
I would trust ChatGPT more than I would the US Supreme Court, for example.
I don't see how you would even compare the two. The US Supreme Court issues rulings that say what the law is. You don't "trust" or "not trust" the US Supreme Court. You either abide by its rulings or you get thrown in jail.
 
  • Like
Likes Motore
  • #99
PeroK said:
There's no comparison.
Didn't I just make one? :smile:
PeroK said:
Chat GPT, however imperfectly, is working on a global pool of human knowledge.
Actually, it is working on a pool of human writing.

The idea is that writing is a good enough proxy for knowledge and that word frequency distributions* are a good enough proxy for understanding. This thread, as well as some past ones, highlights many cases where this does not work.

FWIW, I think ChatGPT could write horoscopes as well as the "professionals". But probably not write prescriptions.

* But not letter frequency distributions, which we had 40 years ago doing much the same thing. That would just be crazy talk.
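(For the curious: those old letter-frequency programs really were just a few lines. A sketch in the same spirit:)

```python
import random
from collections import defaultdict

text = "the quick brown fox jumps over the lazy dog " * 3

# Order-1 letter model: which characters follow each character, how often
table = defaultdict(list)
for a, b in zip(text, text[1:]):
    table[a].append(b)

def babble(seed="t", n=40):
    """Generate text from letter-to-letter frequencies alone."""
    out = seed
    for _ in range(n):
        out += random.choice(table[out[-1]])
    return out

print(babble())  # locally plausible gibberish; scale the same idea up and you get words
```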
 
  • Like
Likes berkeman, Motore, BWV and 1 other person
  • #100
Motore said:
Not even that, it just predicts words. It doesn't care if the sentence it makes actually describes anything real. It cannot.
An example:
Q: How long does a ferry ride from Istanbul to Trieste take?
ChatGPT:
A direct ferry ride from Istanbul to Trieste is not available, as these two cities are located in different countries and are quite far apart. Istanbul is in Turkey, while Trieste is in northeastern Italy.

To travel between Istanbul and Trieste, you would need to consider alternative transportation options such as flights, trains, or buses...


Of course, there is a route from Istanbul to Trieste (at least that's what google tells me).

Sure more data, more parameters will make it better, but it's still not reliable.
You may be right and it'll die a death. I'm not so sure. The reasons for the adoption of technology are often social and cultural, rather than technical.

In fact, there is evidence it's already taken off.
 
  • #101
Motore said:
ferry ride from Istanbul to Trieste take
Perhaps you should have asked about the ferry from Constantinople.

 
  • #102
PeroK said:
You may be right and it'll die a death. I'm not so sure.
No, of course it won't die. It is extremely useful for a lot of tasks.
I am just saying that, the way it's structured right now, it cannot be reliable.

Here is an interesting use for a similar language-model AI: https://www.biorxiv.org/content/10.1101/2022.09.29.509744v1
 
  • #103
PeroK said:
Chat GPT doesn't have gross political, religious or philosophical prejudices.
This isn't exactly true (though it depends on what you mean by "gross"). It has guardrails designed to constrain content, which reflect the biases of the programmers. For example, a few months ago someone asked it for religious jokes, and while it was OK with Christian jokes it declined to provide Islamic jokes. I think this bias has since been corrected.

It is also biased by its programmers' choice of source information. For example, the user base of Reddit has a lot more say in the generated output than the membership of AARP.
 
  • Like
Likes dextercioby, Vanadium 50, Motore and 1 other person
  • #104
russ_watters said:
This isn't exactly true (though it depends on what you mean by "gross").
In contrast with social media software, for example, whose model is to tailor information to your perceived prejudices.

With ChatGPT you are not in an echo chamber being fed a steady diet of misinformation.

For example, I stumbled on a Twitter feed about the COVID vaccine. Everyone on the thread believed that it was harmful. One woman was puzzled by those who willingly took the vaccine, and they all agreed it must be down to "low intelligence".

That is "gross"; and your examples of ChatGPT bias pale by comparison.
 
  • Like
Likes russ_watters
  • #105
PeroK said:
With ChatGPT you are not in an echo chamber being fed a steady diet of misinformation.
True, because "misinformation" requires intent just as much as "honesty" does, and ChatGPT has no intent. Or, to put it another way, ChatGPT is not reliably unreliable any more than it is reliably reliable. :wink:

PeroK said:
I stumbled on a Twitter feed about the COVID vaccine. Everyone on the thread believed that it was harmful. One woman was puzzled by those who willingly took the vaccine, and they all agreed it must be down to "low intelligence".

That is "gross"
While I understand the point being made in the context of this thread, let's please not take it any further, in order to ensure that we don't derail the thread into a political discussion.
 
  • Like
Likes russ_watters and phinds
