Stephen Wolfram explains how ChatGPT works

In summary, ChatGPT is a program that tries to do the thinking for you in physics and other subjects. It is currently not very good at handling figures and pictures, but it is getting better. One useful classroom exercise is to have students find its mistakes and inaccuracies.
  • #36
By the way, I really welcome ChatGPT in the classroom. If you think about it, the fact that ChatGPT often gives quite good answers to general introductory-level physics and astronomy questions (not mathematical logic, mind you, it's not good at that at all, so here I'm talking about lower-level classes that do not include much mathematical reasoning), yet does not have any deeper understanding of those answers, means that it simulates the kind of student who can get an A by parroting what they have heard without understanding it at all. The way ChatGPT fools us into thinking it understands its own explanations is exactly the problem we should be trying to avoid in our students. The worst part is, sometimes our students don't even realize they are only fooling us, because we have trained them to fool themselves as well. We tell them an answer, then ask them the question, and give them an A when they successfully repeat the answer we gave them before. They walk away thinking they learned something, and we think they learned something. But don't dig into their understanding, don't ask them a question that calls on them to think beyond what we told them, if you don't want to dispel this illusion!

Hence, the way to defeat using ChatGPT as a cheat engine is the same as the way to dispel the illusion of understanding where there is no understanding: ask the follow-on question, and dig into what has been parroted. That's actually one of the things that often happens in this forum: we start with some seemingly simple question and get a seemingly straightforward answer, but after a few more posts it quickly becomes clear that there was more to the question than the asker originally intended. If we teach students to dig, we are teaching them science. If we teach them to do what ChatGPT does, we cannot complain that ChatGPT can be used to cheat!
 
  • Like
Likes slider142 and gmax137
  • #37
I don't know if this is the same topic, but today's New York Times has an article revealing that many travel books offered for sale on Amazon today are "fakes": cheap, worthless documents written by AI rather than by actual humans who have traveled to the relevant countries, just paste-ups of freely available material from the internet. I subsequently found several such books advertised there by apparently fake authors such as "William Steeve" (a rip-off of Rick Steves) and "Mike Darcey". They have apparently taken down the books by "Mike Steves", which were prominently featured in the article. It is not clear to me whether Amazon itself is perpetrating this fraud or only abetting it, but some of the Kindle books at least seem to be published by Amazon. Even the author photographs and biographies are apparently fakes. The biography of "Mike Steves" closely mirrored that of Rick Steves, but all the information was apparently false for Mike, including his writing history and his home town, neither of which checked out.

https://www.nytimes.com/2023/08/05/...te=1&user_id=73fab36102e49b1b925d02f237c74b7e
 
  • Like
Likes David Lewis, PhDeezNutz and BillTre
  • #38
It certainly does seem like AI makes these kinds of ripoffs much easier, along with all kinds of cons. Since a lot of scams these days originate in other countries, one of the most common tipoffs is strange mistakes in the English, which might be a lot easier to avoid by using AI. Why is it that every new invention comes with all this collateral damage?
 
  • Like
Likes BillTre
  • #41
mathwonk said:
Properly used and implemented, AI seems to help with physics instruction, potentially replacing a section man to answer student questions. One advantage of AI in this experiment seems to be its superior accommodation to the level/speed of the student. But they had to "customize" the program appropriately.
https://news.harvard.edu/gazette/st...i-tutor-to-physics-course-engagement-doubled/
Yup, as someone who works on the back end at one of the bigger companies in the LLM space: we have been hiring "tutors" so we can mass-train on math, coding, physics, chemistry, etc. Essentially, they come up with problems and see if they can stump our model, and if a problem stumps the model, they correct it, and we collect all of these as training data. The other part is that we use web scrapers to take problems from the internet too, and then have the tutors check whether the LLM solved each problem correctly, or correct it if not (sorry, physicsforums!).

On one hand, it's very interesting to see how fast the models are getting at "reasoning" and solving problems. I think it'll be better as a whole if we can succeed in creating LLMs that can give people from areas with less access to education something that can really help them! There are a lot of "new" implementations, such as agents, which means the "AIs" will have the ability to use Python (or APIs) for calculations, since LLMs aren't reliable for actual calculations. If a student asks something like ChatGPT for help on a math problem, it's known that the model can't reliably do things as simple as multiplication, due to the probabilistic nature of how our current neural-network architectures do computations. The way we've gotten around this is that when the LLM is prompted to do a hard calculation, it calls the agent to do the calculation, which is known to be more reliable. The LLM then uses this result and continues on with the problem solving.
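To make that hand-off concrete, here is a minimal sketch of the pattern. Everything in it is hypothetical: the `fake_llm` function and the `TOOL_CALL:` convention are invented for illustration, and real agent frameworks differ in the details, but the control flow is the same idea.

```python
import re

# Hypothetical sketch only. The point is the control flow: the model
# emits the arithmetic expression instead of "calculating" it token by
# token, ordinary Python evaluates it deterministically, and the exact
# result is handed back for the model to continue with.

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call. A real model would be trained or
    prompted to emit a tool call when it hits a hard calculation."""
    return "I need to multiply these, so TOOL_CALL: 123456789 * 987654321"

def run_with_calculator(prompt: str) -> str:
    reply = fake_llm(prompt)
    match = re.search(r"TOOL_CALL:\s*([0-9.\s+\-*/()]+)", reply)
    if match is None:
        return reply  # no calculation requested; pass the reply through
    expression = match.group(1).strip()
    # The regex restricts the expression to plain arithmetic, so this
    # eval is tolerable in a sketch; a real agent would use a real parser.
    result = eval(expression, {"__builtins__": {}})
    return f"{reply}\nTOOL_RESULT: {expression} = {result}"

print(run_with_calculator("What is 123456789 * 987654321?"))
```

The design point is simply that the arithmetic is done by deterministic code rather than by the model's next-token sampling, so the numeric result is exact even when the model's own "mental arithmetic" would not be.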

On the other hand, I already see the writing on the wall: these companies will want to enter the tutoring space and offer these LLMs on a monthly subscription for math, physics, etc., and if they become good enough, TAs and tutoring will be a thing of the past. This becomes the scary part for me. With the rise of fascism (again...), these tools can also be used in secular communities and for more thought control.

I'm hoping that these tools are used for the betterment of humanity, but I'm old enough to also know that we as a species have a pattern of going two steps forward for equality, then one step back. It's still forward progress, but it would be nicer if we just skipped the step of going back.
 
  • Like
Likes Ken G, slider142 and PeroK
  • #43
My first year of college was quite exciting. I thought I had learned differential calculus over the summer and was anxious to move on to integral calculus. However, I had to get permission from the differential calculus professor at my new college.

He quizzed me on a variety of differential calculus problems, and I got enough of them right that he knew I had mastered the material. Then he asked about limits. He wanted me to explain them in my own terms, which I did, but then he said, "Well, that's not quite right. Go study some more, and when you're ready, come back."

I went back a second and a third time, getting more "not quite right" responses. On my fourth and final attempt, I recited the limit definition exactly as it was stated in the book, and he said, "You know, I think you've got it."

He was one of my favorite profs. This approach could work for those students using ChatGPT to do their work.

---

A more primitive example was when some freshmen, doing an electrical lab in which they measured resistance and current, were asked to determine the voltage of a battery.

We watched them use their newly acquired electronic calculators (circa 1973) to compute an answer of 1500 V for the battery, and we were shocked by their answer. When we asked how they arrived at it, they said that's what the calculator said.

We had to chuckle, because we knew it to be a 1.5 V D cell wrapped in some tape to obscure the labeling.
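For context, the intended calculation is just Ohm's law, and one plausible route to 1500 V (my guess only; the post doesn't say what the students actually keyed in, and the numbers below are illustrative) is a factor-of-1000 units slip:

$$V = IR: \quad I = 0.3\ \text{A},\ R = 5\ \Omega \;\Rightarrow\; V = 1.5\ \text{V},$$
$$\text{but keying in } R = 5\ \text{k}\Omega \text{ as } 5000\ \Omega \;\Rightarrow\; V = 1500\ \text{V}.$$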
 
  • Haha
Likes romsofia
  • #44
jedishrfu said:
He wanted me to explain them in my own terms, which I did, but then he said, "Well, that's not quite right. Go study some more, and when you're ready, come back."

I went back a second and a third time, getting more "not quite right" responses. On my fourth and final attempt, I recited the limit definition exactly as it was stated in the book, and he said, "You know, I think you've got it."

He was one of my favorite profs. This approach could work for those students using ChatGPT to do their work.
It's odd that he wanted you to explain it in "your own terms" but only accepted the literal definition. It's tricky in math, because there really isn't "your own terms" in math; the definitions are extremely precise, and proofs are, at some level, purely syntactic. Computers are used to prove very difficult theorems via brute-force methods, but for me that kind of spoils the whole point of math. Do we prove theorems because we want to use them to prove other theorems that we don't understand any better than the ones the computer proved? Or do we prove theorems because we think that if you understand a theorem's proof, you understand the theorem better?
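For reference, the "extremely precise" definition at issue in the story above is presumably the standard epsilon-delta definition of the limit from the textbook:

$$\lim_{x \to a} f(x) = L \quad\Longleftrightarrow\quad \forall \varepsilon > 0\ \exists \delta > 0:\ 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$$

Every quantifier here does real work, which is why a paraphrase "in your own terms" so easily comes out "not quite right".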
jedishrfu said:
---

A more primitive example was when some freshmen, doing an electrical lab in which they measured resistance and current, were asked to determine the voltage of a battery.

We watched them use their newly acquired electronic calculators (circa 1973) to compute an answer of 1500 V for the battery, and we were shocked by their answer. When we asked how they arrived at it, they said that's what the calculator said.

We had to chuckle, because we knew it to be a 1.5 V D cell wrapped in some tape to obscure the labeling.
Yes, that's the kind of mistake these AIs make: they can't tell when their answer is absurd or contradictory. They only know it's the answer they got.
 