Can ChatGPT Handle Complex Physics Questions Accurately?

In summary, ChatGPT is an artificial intelligence system that attempts to answer difficult questions about quantum mechanics.
  • #106
Me: What is the Galois group of x^2+1?

ChatGPT:
The Galois group of a polynomial is a group of symmetries of the polynomial, which consists of all the permutations of the roots of the polynomial that can be obtained through field automorphisms.
In the case of the polynomial x^2+1, the roots of the polynomial are complex numbers i and -i, which are not real numbers. Therefore, the Galois group of the polynomial x^2+1 is a subgroup of the group of all automorphisms of the complex numbers, which is known as the Galois group of the field extension of the real numbers by the roots of the polynomial.
The Galois group of the polynomial x^2+1 is a finite group of order 2, and consists of the identity automorphism (which leaves the roots unchanged) and the automorphism which swaps the roots i and -i. This group is known as the Klein four-group, and is isomorphic to the group of symmetries of a square.
The Galois group of a polynomial is a fundamental concept in Galois theory, and is used to study the symmetries and properties of the roots of the polynomial, as well as the structure of the field extension generated by these roots.

I was reading it as it was typing it, and I thought "Damn, it is OK. I cannot complain about it." But then it wrote about Klein and the Square and I was happy again.
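For the record, the relief is justified: ChatGPT's order-2 claim is correct, but its naming is wrong twice over - a group of order 2 is cyclic, the Klein four-group has order 4, and the symmetry group of a square is the dihedral group of order 8, not the Klein four-group. A minimal Python sketch (illustrative only, not a proof) of the actual order-2 symmetry:

```python
# The roots of x^2 + 1 are i and -i. Complex conjugation fixes the rationals
# and swaps the two roots; applying it twice gives the identity. So the
# Galois group is {id, conjugation}: the cyclic group of order 2.
import numpy as np

roots = np.roots([1, 0, 1])                  # roots of x^2 + 1: [i, -i] up to ordering
swapped = np.conj(roots)                     # conjugation permutes the roots...
by_imag = lambda z: z.imag
assert np.allclose(sorted(swapped, key=by_imag), sorted(roots, key=by_imag))
assert np.allclose(np.conj(swapped), roots)  # ...and squares to the identity
print("Gal(Q(i)/Q) = {id, conj}, cyclic of order 2")
```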
 
  • Like
Likes mattt
  • #107
It always says that the Galois group of a polynomial is a group whose order equals the degree of the polynomial.
 
  • #108
"Explain Galois theory with a peanut butter analogy"

Sure! Galois theory is a branch of mathematics that deals with the solution of polynomial equations. Here's an analogy to help understand it:
Imagine you have a jar of peanut butter and you want to spread it evenly on two slices of bread. You could use a knife to do this, but it would be difficult to get the peanut butter evenly distributed.
Instead, you might choose to use a spoon. This would make it easier to spread the peanut butter evenly, but it would still be a bit of a challenge.
Now, suppose you had a special tool called a "peanut butter spreader." This tool would make it much easier to evenly distribute the peanut butter on the bread. In the same way, Galois theory provides a "tool" for solving polynomial equations in a systematic and efficient way.
I hope this analogy helps! Let me know if you have any questions.
 
  • Like
  • Haha
  • Skeptical
Likes Office_Shredder, InkTide, mattt and 9 others
  • #109
martinbn said:
It obviously gave the wrong answer. The correct answer is that the average physicist cannot understand Galois theory. :devil:
@Demystifier is not your average physicist!
 
  • Like
  • Love
Likes physika, Demystifier and gentzen
  • #110
“Physics professors are quite a sight

They study day and night

Their work is never done

But they're always having fun

They teach us about energy and mass

And how to calculate the speed of light's pass

They explain the laws of motion

And solve problems with precision

Their minds are sharp as a knife

They're experts in their life

But they don't just teach and lecture

They also love to conduct experiments, in nature

So let's give a round of applause

To all the physics professors

For all the knowledge they impart

And the joy they bring to the heart

In labs and classrooms they do dwell

Physics professors, smart as hell

Their minds are sharp, their theories sound

They'll leave you dizzy, spinning round

They study particles, both big and small

From atoms to galaxies, they know it all

They delve into the depths of space and time

Leaving no mystery behind

Their equations, formulas, and laws

Are complex, but they're the masters of them all

So if you want to learn about the ways of physics

Just find a professor, they'll be oh so prolific

They'll teach you about the speed of light

And all the forces that take flight

They'll show you how the universe works

And leave you with a sense of awe, it's quite a perk

So don't be afraid, give physics a try

With a professor by your side, you'll surely fly

Through the mysteries of the natural world

Your knowledge will be unfurled”

 
  • #111
PeroK said:
@Demystifier is not your average physicist!
But I'm average (within the population of theoretical physicists with a PhD) in my ability to understand abstract algebra. :smile:
I was always an analysis-rather-than-algebra guy.
 
  • Like
Likes vanhees71
  • #112
martinbn said:
I was reading it as it was typing it, and I thought "Damn, it is OK. I cannot complain about it." But then it wrote about Klein and the Square and I was happy again.
Me: Why are some people happy when others are wrong?

ChatGPT:
There could be a variety of reasons why some people might feel happy when others are wrong. Here are a few possibilities:

1. Schadenfreude: This is the feeling of pleasure or joy that comes from seeing someone else experience failure or disappointment. Some people might feel happy when others are wrong because it gives them a sense of superiority or validation.

2. Competition: If someone views a situation as a competition, they might feel happy when others are wrong because it gives them an advantage.

3. Personal biases: People may feel happy when others are wrong if it confirms their preexisting beliefs or biases.

4. Misunderstandings: Sometimes, people might feel happy when others are wrong simply because they misunderstand the situation or the intentions of the other person.

It's important to note that feeling happy when others are wrong is not necessarily a healthy or positive reaction. It's usually more productive and empathetic to try to understand others' perspectives and find common ground, rather than taking pleasure in their mistakes or failures.
 
  • Like
Likes physika, vanhees71 and PeroK
  • #113
martinbn said:
Out of curiosity, is this the answer?
No, mine was different, and more illuminating to me.
 
  • #114
Demystifier said:
No, mine was different, and more illuminating to me.
Interesting. So it does not always generate the same answer.
 
  • #115
Demystifier said:
Me: Why are some people happy when others are wrong?

ChatGPT:
There could be a variety of reasons why some people might feel happy when others are wrong. Here are a few possibilities:

1. Schadenfreude: This is the feeling of pleasure or joy that comes from seeing someone else experience failure or disappointment. Some people might feel happy when others are wrong because it gives them a sense of superiority or validation.

2. Competition: If someone views a situation as a competition, they might feel happy when others are wrong because it gives them an advantage.

3. Personal biases: People may feel happy when others are wrong if it confirms their preexisting beliefs or biases.

4. Misunderstandings: Sometimes, people might feel happy when others are wrong simply because they misunderstand the situation or the intentions of the other person.

It's important to note that feeling happy when others are wrong is not necessarily a healthy or positive reaction. It's usually more productive and empathetic to try to understand others' perspectives and find common ground, rather than taking pleasure in their mistakes or failures.
Oh come on. The previous posts were jokes. Wasn't it clear? I even put one of those faces.
 
  • #116
martinbn said:
Interesting. So it does not always generate the same answer.
True. You even have a button "Regenerate Response" that gives you a different answer to the same question.
 
  • Like
Likes vanhees71
  • #117
martinbn said:
Oh come on. The previous posts were jokes. Wasn't it clear? I even put one of those faces.
If it was clear that you were joking, why do you think that I was not joking too? I'm sure you had a good reason.
 
  • Like
Likes vanhees71
  • #118
Demystifier said:
If it was clear that you were joking, why do you think that I was not joking too? I'm sure you had a good reason.
Because you put an "I am sad" response on my post. I expected a "haha".
 
  • Like
Likes vanhees71
  • #119
martinbn said:
Because you put an "I am sad" response on my post. I expected a "haha".
You are good at math, but bad at rationalization.
 
  • Like
Likes physika
  • #120
Me: how would you explain "quantum spookiness" with an every day analogy?

ChatGPT:
Quantum mechanics is a fundamental theory in physics that describes the behavior of matter and energy at the atomic and subatomic scale. One of the most striking and counterintuitive aspects of quantum mechanics is the idea of "quantum superposition," in which a quantum system can exist in multiple states simultaneously. This concept is often referred to as "quantum spookiness."

To understand quantum superposition, it might be helpful to consider the following everyday analogy: imagine you are playing a game of "rock, paper, scissors" with a friend. At any given moment, your hand can be in one of three states: rock, paper, or scissors. But in the world of quantum mechanics, a quantum system can be in multiple states at the same time. It's as if your hand could be both rock and scissors simultaneously, and only when you make a choice and "measure" the state of your hand does it become one or the other.

This strange behavior is a consequence of the fundamental uncertainty principle in quantum mechanics, which states that it is impossible to know the precise state of a quantum system with complete accuracy. The idea of quantum superposition helps to explain many of the strange and counterintuitive phenomena that are observed at the quantum scale, such as the fact that particles can be entangled and can seemingly communicate instantaneously over vast distances.
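To make the analogy concrete, a superposition can be written down directly. A minimal Python sketch of a two-state superposition and the Born-rule measurement probabilities it produces:

```python
# The state (|0> + |1>)/sqrt(2) is not "secretly 0 or 1": both amplitudes are
# present, and a measurement samples one definite outcome with probability
# |amplitude|^2 (the Born rule).
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi = (ket0 + ket1) / np.sqrt(2)               # equal superposition

probs = np.abs(psi) ** 2                       # Born rule: P(k) = |<k|psi>|^2
print(probs)                                   # [0.5 0.5]

rng = np.random.default_rng(0)
print(rng.choice([0, 1], size=10, p=probs))    # each simulated measurement yields a definite 0 or 1
```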
 
  • Like
Likes gentzen, vanhees71 and Demystifier
  • #121
timmdeeg said:
Me: how would you explain "quantum spookiness" with an every day analogy?

ChatGPT:
Quantum mechanics is a fundamental theory in physics that describes the behavior of matter and energy at the atomic and subatomic scale. One of the most striking and counterintuitive aspects of quantum mechanics is the idea of "quantum superposition," in which a quantum system can exist in multiple states simultaneously. This concept is often referred to as "quantum spookiness."

To understand quantum superposition, it might be helpful to consider the following everyday analogy: imagine you are playing a game of "rock, paper, scissors" with a friend. At any given moment, your hand can be in one of three states: rock, paper, or scissors. But in the world of quantum mechanics, a quantum system can be in multiple states at the same time. It's as if your hand could be both rock and scissors simultaneously, and only when you make a choice and "measure" the state of your hand does it become one or the other.

This strange behavior is a consequence of the fundamental uncertainty principle in quantum mechanics, which states that it is impossible to know the precise state of a quantum system with complete accuracy. The idea of quantum superposition helps to explain many of the strange and counterintuitive phenomena that are observed at the quantum scale, such as the fact that particles can be entangled and can seemingly communicate instantaneously over vast distances.
Come to think of it, I am no longer surprised that it generates responses like this one when it comes to QM. After all, physicists do the same when they write popular articles/books about QM.
 
  • Like
Likes timmdeeg and vanhees71
  • #122
martinbn said:
Me: Solve ##x^5=1##
ChatGPT:
If ##x^5 = 1##, then ##x## is one of the fifth roots of ##1##. The fifth roots of ##1## are ##1, -1, i, -i##, and ##-i^2##. Note that ##i^2 = -1##. Therefore, the solutions to ##x^5 = 1## are:
##1, -1, i, -i##, and ##-i^2##
So, ##x## can be any of these values.
It works much better when you give "him" a little help, just as with a real person.

Me: Solve x^5=1.

ChatGPT:
The equation x^5 = 1 has five solutions: x = 1, x = -1, x = 1i, x = -1i, where i is the imaginary unit defined as the square root of -1.

To see this, we can rewrite the equation as (x-1)(x+1)(x-1i)(x+1i) = 0. This equation is satisfied when any of the factors is equal to zero. Since the factors x-1, x+1, x-1i, and x+1i are all equal to zero when x is equal to 1, -1, 1i, or -1i, respectively, the solutions to the equation x^5 = 1 are x = 1, x = -1, x = 1i, and x = -1i.

Me: That's wrong, try it in polar form of the complex number.

ChatGPT:
I apologize for the error in my previous response. You are correct that the solutions to the equation x^5 = 1 can also be expressed in polar form. In polar form, the solutions to the equation x^5 = 1 are x = cis(0), x = cis(2pi/5), x = cis(4pi/5), x = cis(6pi/5), and x = cis(8pi/5), where cis(theta) represents the complex number with magnitude 1 and argument theta.

To see this, we can rewrite the equation x^5 = 1 as x = cis(2kpi/5) for k = 0, 1, 2, 3, 4. This equation is satisfied when x has a magnitude of 1 and an argument of 2kpi/5 for any integer k. Therefore, the solutions to the equation x^5 = 1 are x = cis(0), x = cis(2pi/5), x = cis(4pi/5), x = cis(6pi/5), and x = cis(8pi/5).
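The corrected answer is easy to check numerically; a quick sketch verifying that the five values cis(2kπ/5) all satisfy x^5 = 1, and that ChatGPT's earlier candidates do not:

```python
# The fifth roots of unity are x_k = cis(2*pi*k/5) = exp(2j*pi*k/5), k = 0..4.
import numpy as np

k = np.arange(5)
roots = np.exp(2j * np.pi * k / 5)     # cis(2*pi*k/5)
assert np.allclose(roots ** 5, 1.0)    # each one satisfies x^5 = 1
print(np.round(roots ** 5, 12))        # all (1+0j)

# ChatGPT's first candidates -1, i, -i are 4th roots of unity, not 5th:
bad = np.array([-1, 1j, -1j])
print(np.round(bad ** 5, 12))          # [-1, i, -i]: none of these equals 1
```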
 
  • Like
  • Wow
Likes mattt and gentzen
  • #123
The fact that it isn't very good at math isn't surprising, given that its ability to do math is basically a side effect of its ability to generate meaningful text.

Perhaps a model designed and trained specifically for doing math is just around the corner and it will blow our minds?
 
  • Like
Likes Demystifier
  • #124
Jarvis323 said:
The fact that it isn't very good at math isn't surprising, given that its ability to do math is basically a side effect of its ability to generate meaningful text.

Perhaps a model designed and trained specifically for doing math is just around the corner and it will blow our minds?
Exactly!

And I would add that most humans are not very good in math for a similar reason.
 
Last edited:
  • #125
Ask it to which essential question the answer is 42. If it crashes, you can blame me! ;-)
 
  • #126
Maarten Havinga said:
Ask it to which essential question the answer is 42. If it crashes, you can blame me! ;-)
I think systems trained through machine learning generally don't crash. Once the training is over, the high-level explanation of how they work is not based on logical loops (repeat until ...), but on shortcut heuristics (if it quacks like a duck, it's probably a duck). Of course, at the low level it is still based on logical circuits, but that level does not distinguish "deep" questions (what's the meaning of everything) from "mundane" ones (solve the equation x=x+1).
 
  • #127
For some reason, I missed the opening of this thread. We must remember that on very specific topics, such as a particular area of math or science, it is not expected to be accurate; it was not designed for such specialized tasks. Considering that, it is amazing (at least to me) what it can do. I believe ChatGPT is a scaled-down, less powerful version of GPT-3.

Unfortunately, most people seem to think it should be universally accurate. It would be interesting to see how it would perform on physics questions if it were set up as a physics tutor, considering it would not understand any of it.
 
  • #128
Demystifier said:
It makes errors, just like real intelligence. Moreover, it can correct itself.
Not only that, you can talk it into correcting an error and then into making the error once again. As a source of information - especially on the more difficult subjects - it is completely useless. Plus, it is dangerous, as it sounds pretty confident.

I am afraid it will only strengthen all the problems we already have with anti-vaccine (and every other type of) crackpots - they will have another "source" of information that will "support" their stance.
 
  • Like
Likes ChemAir, aaroman, InkTide and 3 others
  • #129
Borek said:
Not only that, you can talk it into correcting an error and then into making the error once again. As a source of information - especially on the more difficult subjects - it is completely useless.
The best way I've seen its usual output described is "fluent bullshit". It's bad at coding and math because it is, at its core, a weighted shuffling of components in the training data.

From a computer science standpoint, I think it's important to recognize that ML models use data in the form of matrices, from input to processing to output. There is no way for operations on these matrices to infer relationships between elements that are not already contained in the matrices, without essentially baking those inferences in - which is exactly the sort of tedious problem ML models are meant to avoid. (That kind of inference is what your brain does with the matrix of visual input from a screen, and it is why AI art struggles with consistent anatomy and lighting: you have coherent mental models built from incomplete data, while the AI can only attempt to replicate the data, and is ultimately still judged on accuracy by humans who have those mental models.)
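To make the "it's all matrices" point concrete, here is a toy sketch (illustrative only; the weights are random stand-ins for learned ones) of a model's forward pass as nothing but matrix arithmetic - any relationship not encoded in those matrices is simply unavailable to the computation:

```python
# Toy forward pass: a two-layer network is just matrix products and a fixed
# elementwise nonlinearity. Everything the "model" can express about its
# input is contained in the entries of W1, b1, W2, b2.
import numpy as np

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # "learned" weights (random here)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
    return W2 @ h + b2                 # linear output layer

x = rng.normal(size=4)                 # input data as a vector
print(forward(x))                      # output is a function of x and the matrices, nothing more
```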

From an epistemological standpoint I think it's important not to discount the capacity of our mental models to resolve AI output into an inference of... well, AI inference. An inability to answer "why" questions (especially "why did you get this wrong" questions) coherently and correctly suggests we're still the only ones doing the inferring.

Probably the best example I've seen of that lack of inference in these models was the way one of them (Galactica or something, I think) created citations when asked to write papers - they were formatted perfectly, and cited papers that didn't exist with titles completely unrelated to the inline reference. There was no understanding of what a citation is, how to use one, or why to use one - just an output made to look like one.
 
  • Like
Likes aaroman
  • #131
InkTide said:
described is "fluent bullshit"
"William's syndrome?"
 
  • #132
InkTide said:
"fluent bullshit"

which reminds me of this tweet: ;)

[attached screenshot of the tweet]
 
  • Like
Likes InkTide and martinbn
  • #133
Borek said:
which reminds me of this tweet: ;)

[attached screenshot of the tweet]
I don't agree with gendering terms like that, but I definitely agree with the core sentiment.

I think we need to be extremely cautious of mistaking the imitation of problem solving for actual problem solving - if the student (AI) is just copying the answer from the textbook (training data), are they really understanding the problem or solution? Is that understanding even something that can be achieved by increasing the likelihood of accurate reproduction of the answer coupled to the input problem, or is there something else required? Is rote memorization really "learning"?

As for the confidence... it may be a simple consequence of binary responses to outputs (e.g. "good output" vs "bad output", or direct comparison/ranking of output quality) disfavoring outputs that acknowledge uncertainty or ambiguity - simpler handwriting-recognition neural nets will confidently read symbols from random noise for this reason.
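A toy illustration of that last point (random, untrained weights standing in for a simple classifier): a softmax has no "none of the above" option, so even pure noise comes out looking like a confident prediction:

```python
# An untrained 10-class "classifier" applied to random noise: softmax must
# put its probability mass somewhere, so the output looks confident.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 784))         # stand-in weights for a 28x28 digit classifier
x = rng.normal(size=784)               # random noise instead of a handwritten digit

logits = W @ x
p = np.exp(logits - logits.max())
p /= p.sum()                           # softmax over the 10 classes
print(p.argmax(), round(p.max(), 3))   # a definite "digit", often with very high confidence
```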
 
  • Like
Likes PeroK
  • #134
Demystifier said:
While I, as a language model, am able to provide information and explanations on a wide range of topics, I do not have access to the Internet and am not able to browse for or access current information about specific topics
Anticipating two ChatGPTs: since they are language models, they would be able to chat with each other more efficiently in their own language.
 
  • #135
Delta Prime said:
Anticipating two ChatGPTs: since they are language models, they would be able to chat with each other more efficiently in their own language.
I read that this happened at Google a few years ago. They had two AI agents that were communicating with one another in their own language. Google, not knowing what was being exchanged, shut them down.
 
  • Haha
Likes Maarten Havinga and anorlunda
  • #136
gleem said:
I read that this happened at Google a few years ago. They had two AI agents that were communicating with one another in their own language. Google, not knowing what was being exchanged, shut them down.
Siri versus Alexa.
 
  • #137
gleem said:
I read that this happened at Google a few years ago. They had two AI agents that were communicating with one another in their own language. Google, not knowing what was being exchanged, shut them down.
Just to play it safe, I wonder if one could have access to the internet and the other not.
 
  • #138
Demystifier said:
Exactly!

And I would add that most humans are not very good in math for a similar reason.
Intriguing! Perhaps we may gain insight into those who are exceptional in mathematics, such as savants.
 
  • #139
gleem said:
I read that this happened at Google a few years ago. They had two AI agents that were communicating with one another in their own language. Google, not knowing what was being exchanged, shut them down.
Possibly an apocryphal story inspired by events in Colossus: The Forbin Project (1970). In the movie, US and Soviet defense computers conspire to control the world using their own indecipherable mathematical language.
 
Last edited:
  • #140
renormalize said:
Possibly an apocryphal story inspired by events in Colossus: The Forbin Project (1970). In the movie, US and Soviet defense computers conspire to control the world using their own indecipherable mathematical language.
Given the track record of comic books being ahead of science by 50 years, I would say sci-fi movies are right on time. Relatively speaking. (Pun intended.)
 
