Stephen Wolfram explains how ChatGPT works

In summary, ChatGPT is a program that tries to do the thinking for you, in physics and other subjects. It is currently not very good at handling figures and pictures, but it is getting better. One productive classroom use is to have students hunt for its mistakes and inaccuracies.
  • #1
  • #2
apostolosdt said:
It's funny that everyone kinda assumed that "AI" would be good at solving logical/analytical tasks (yeh ok, it handles chess well), but the current best "AI"s handle language and pictures.

Even winning art competitions.

EDIT: changed link to avoid NY Times paywall
EDIT2: or maybe *I* just assumed....
 
Last edited:
  • #3
I believe that we cannot keep hiding our heads in the sand. AI applications like ChatGPT are here to stay forever. What we apparently need to do in these forums is start a serious dialogue on how to use AI apps to the benefit of physics research.

In the classroom environment, I think the battle is over: ChatGPT is the winner.
 
  • Like
Likes Ishika_96_sparkles and PhDeezNutz
  • #4
apostolosdt said:
I believe that we cannot keep hiding our heads in the sand. AI applications like ChatGPT are here to stay forever. What we apparently need to do in these forums is start a serious dialogue on how to use AI apps to the benefit of physics research.

In the classroom environment, I think the battle is over: ChatGPT is the winner.

Someone might've won a Pyrrhic battle, but the war is still being fought.

The operative word being "watermarking", I think...
 
  • #5
apostolosdt said:
I believe that we cannot keep hiding our heads in the sand. AI applications like ChatGPT are here to stay forever. What we apparently need to do in these forums is start a serious dialogue on how to use AI apps to the benefit of physics research.

In the classroom environment, I think the battle is over: ChatGPT is the winner.

I don't know how much of an actual problem it will be in the long run. The most important skill most people have now is how well they can use a computer, and the more computers can do, the more people will be able to do with them. People have been "cheating" in school forever; that doesn't mean they don't have working professional knowledge of a subject or can't contribute in the workplace. In most careers, the majority of what you will be asked to do is learned on the job.
 
  • Like
Likes apostolosdt
  • #6
JLowe said:
I don't know how much of an actual problem it will be in the long run. The most important skill most people have now is how well they can use a computer, and the more computers can do, the more people will be able to do with them. People have been "cheating" in school forever; that doesn't mean they don't have working professional knowledge of a subject or can't contribute in the workplace. In most careers, the majority of what you will be asked to do is learned on the job.
Good point, JLowe; I recall the time when the first scientific calculators came out, and the fuss about the deterioration of users' mental math abilities! Or when Hewlett-Packard introduced its programmable calculator series, like the HP-67, and the "noise" about potential cheating during exams. Truth is, I practically bought all the HP models over time, but I still kept on doing calculations mentally!

After all, AI has been in our daily life for many years now.
 
  • Like
Likes Klystron, dextercioby, JLowe and 1 other person
  • #7
The difference between a "calculator" and ChatGPT is that the calculator does not do the "thinking" for you, while ChatGPT often tries (and in many cases, fails) to do just that. What I mean by doing the thinking is that it tries to make correlations and THEN tries to establish the causation, something that a calculator doesn't do.

What is concerning is that it does this using language that makes it sound very confident that it is correct. I would point out this amusing interaction with it on the question of why the movie "Shazam" bombed at the box office. It states several things with such confidence, as if they were valid facts, and only retreats when challenged on their validity.

Of course, if one wants to see what ChatGPT does with physics/astronomy, one can watch something like this:

[embedded video]
As of now, it still cannot handle figures and pictures, so questions regarding those are still safe from being fed to it (Chegg is still an issue). But I intend to use ChatGPT as a tool for the students, to hunt for its mistakes or inaccuracies. I've been trying to compile a list of typical General Physics questions where it gives rather puzzling or dubious responses. I have to keep revising the list because ChatGPT 4 seems to be getting better at them.

Zz.
 
  • Like
  • Informative
Likes slider142, DennisN, dextercioby and 2 others
  • #8
From a recent session with ChatGPT.

"ME: Hi, how do you do your research when you're asked a question involving a math-based physical theory?

Hello! As an AI language model, I rely on a large database of information and algorithms to answer math-based physics questions.
When asked a math-based physics question, I use my programming to break down the question into its components and search my database to identify the relevant physical principles, equations, and concepts required to answer the question. I then use this information to generate a response that is as accurate and informative as possible.
In some cases, I may need to consult external sources to ensure the accuracy of my response. However, I always strive to provide the most accurate and helpful information possible within the scope of my training and knowledge cutoff.

ME: Do you consult peer-reviewed articles in scientific journals?

Yes, I can consult peer-reviewed articles in scientific journals to gather information for answering math-based physics questions.
My programming allows me to access a vast amount of information from a variety of sources, including peer-reviewed articles in scientific journals, textbooks, and other reputable sources. I can use natural language processing algorithms to analyze and understand the content of these sources and extract the relevant information needed to answer a particular question.
However, it's important to note that my responses are based solely on the information available in my database, which may not always include the latest research or findings. Additionally, my responses are generated based on mathematical models and physics theories, and may not necessarily reflect real-world conditions or experimental results.

ME: Still impressive, for I used to think you were only consulting internet sources. Now, consulting published scientific articles requires a certain level of topic knowledge PRIOR to the research. How do you cope with that?

As an AI language model, my knowledge is derived from the large database of information and algorithms that I was trained on. This includes a vast amount of knowledge across many different domains, including physics and mathematics.
However, while I have a broad understanding of these topics, my knowledge is limited by my training data and the knowledge cutoff, which is the point in time up to which my database of information was compiled. As a result, there may be certain areas of physics or mathematics where my knowledge is more limited or where I may not have the most up-to-date information.
When confronted with a question that requires a high level of topic-specific knowledge, I use a variety of strategies to try and generate an accurate and helpful response. For example, I may rely on my natural language processing algorithms to analyze the question and identify key concepts, which I can then use to search my database for relevant information. In some cases, I may need to consult external sources, such as scientific journals or textbooks, to ensure the accuracy of my response.
Overall, while my knowledge is broad, I may not have the same level of expertise as a human expert in a particular field. However, I am constantly learning and updating my database to improve my performance and accuracy over time."

A bit of a surprise. I really was under the impression that ChatGPT doesn't look into professional texts.
 
  • #9
TL;DR: Somewhat off-topic, really. Just a scary thought.
My point about Google Scholar exactly. I mean, how deep does it go?! How good are the arguments it will end up using?

Which kinda scares me even more in this light: if you're carrying on a conversation with some next-gen ChatGPT and you can't tell the difference between it and a human - a scenario probably not so far in the future, I might add - what IS the difference then? Apart from the obvious fact that the lights are on but there's no one home. We've already seen that these chatbots can go full nazi virtually overnight. Imagine a virtual army of nazi trolls, indistinguishable from humans, working round the clock to convince people that Dachau wasn't such a bad idea after all.

I rather suspect that this is the kind of "AI" we'll see.
 
Last edited:
  • #10
ZapperZ said:
What is concerning is that it does this using language that makes it sound very confident that it is correct.

Humans do this all the time. Confident ignorance can get you paid quite well.
 
  • Like
Likes slider142, dextercioby, mathwonk and 2 others
  • #11
otherwise put: "deceiving people is always more lucrative than enlightening them"
 
  • Like
Likes dextercioby
  • #12
apostolosdt said:
CHATGPT said:
In some cases, I may need to consult external sources, such as scientific journals or textbooks, to ensure the accuracy of my response.
Why should we believe this? We've seen that CHAT is quite capable of just making things up.
 
  • Like
Likes Nugatory
  • #13
ZapperZ said:
What is concerning is that it does this using language that makes it sound very confident that it is correct.
I've actually witnessed award-winning 'experts' make stuff up on the fly. Sadly, the audience and their peers apparently have no idea that a statement is false. And I've seen some journal articles contain false information (fabricated results), or sometimes incorrect information that even peer review missed.

ChatGPT said:
In some cases, I may need to consult external sources, such as scientific journals or textbooks, to ensure the accuracy of my response.
The ChatGPT response presupposes/assumes that scientific journal articles or textbooks are correct. They may be correct, but not always, and sometimes there are nuances in the conclusions. I find some articles incomplete in their supporting information, and once in a while I find an article that is just plain wrong.
 
  • #14
Astronuc said:
The ChatGPT response presupposes/assumes that scientific journal articles or textbooks are correct.
True.

But I'm not convinced that Chat really looks at them. Seems like 9 times in 10 when I go to find a paper, it is behind a paywall, so I can't read it. Maybe Chat has a university account that gives it access? Or it has indexed Arxiv?
 
  • Like
Likes dextercioby
  • #15
gmax137 said:
But I'm not convinced that Chat really looks at them. Seems like 9 times in 10 when I go to find a paper, it is behind a paywall, so I can't read it. Maybe Chat has a university account that gives it access? Or it has indexed Arxiv?
Many academic and research organizations have subscriptions to scientific journals or they have subscription packages with the major publishers (Elsevier, Springer, Taylor & Francis, DeGruyter, . . . .). I suspect someone in academia or research may have access to a set of relevant publications.

Can ChatGPT be set up to browse one's computer, or a network or shared directory?

The typical person outside of an academic or research institution may not have access to subscription-based articles.
 
  • #16
Naive question: I wonder if ChatGPT can answer a simple freshman calculus question like this?

“If f is the function defined on the interval [0,1], by setting f(x) = 1 - 1/2^n on the interval
[1-1/2^n, 1-1/2^(n+1)), for every integer n≥0, and f(1)= 1, is f Riemann integrable on [0,1]?
If yes, why, and what is the value of that definite integral? If not, why not?”

or this question from my logic test:

ID. i) Mathwonk has discovered the most wonderful beach.
ii) Some things are really fine, but nothing is as fine as the sand at the beach.
iii) If a person discovers something really fine, he should bury his head in it.
what conclusion can be drawn (using all hypotheses):
 
Last edited:
  • Like
Likes Astronuc and dextercioby
  • #17
So, I prompted the LLM a certain way before asking your questions (hence the "Think", "Act", "Observe" you'll see), and this one has access to Wolfram; it did two calculations in the process. I'll just post the responses so you can be the judge of them, and maybe it'll inspire you to take a closer look at these new tools!
mathwonk said:
“If f is the function defined on the interval [0,1], by setting f(x) = 1 - 1/2^n on the interval
[1-1/2^n, 1-1/2^(n+1)), for every integer n≥0, and f(1)= 1, is f Riemann integrable on [0,1]?
If yes, why, and what is the value of that definite integral? If not, why not?”

**Think**:

The question is asking whether a given function is Riemann integrable on the interval [0,1] and, if so, what is the value of the definite integral.

A function is Riemann integrable if it is bounded and the set of its discontinuities has measure zero. In other words, a function is Riemann integrable if the "area" under the curve can be approximated arbitrarily closely by Riemann sums.

The function f(x) is defined piecewise on the interval [0,1] with different values on the subintervals [1-1/2^n, 1-1/2^(n+1)) for every integer n≥0, and f(1)= 1.

**Act**:

Let's first check if the function is bounded and if the set of its discontinuities has measure zero.

The function is clearly bounded as it takes values between 0 and 1 for all x in [0,1].

The function is discontinuous at the points 1-1/2^n for every integer n≥0. However, these points form a geometric sequence that converges to 1, and the sum of the lengths of the intervals [1-1/2^n, 1-1/2^(n+1)) is 1/2 + 1/4 + 1/8 + ... = 1, which is finite. Therefore, the set of discontinuities has measure zero.

So, the function is Riemann integrable.

To find the value of the definite integral, we can sum up the areas of the rectangles under the curve. The width of each rectangle is 1/2^n and the height is 1 - 1/2^n, so the area is (1 - 1/2^n) * 1/2^n.

**Observe**:

Let's calculate the sum of these areas to find the value of the definite integral.

The sum of the areas under the curve, which is the value of the definite integral of the function f(x) from 0 to 1, is ##\frac{2}{3}##.

So, the function f(x) is Riemann integrable on the interval [0,1] and the value of the definite integral is ##\frac{2}{3} ##.

mathwonk said:
i) Mathwonk has discovered the most wonderful beach.
ii) Some things are really fine, but nothing is as fine as the sand at the beach.
iii) If a person discovers something really fine, he should bury his head in it.
what conclusion can be drawn (using all hypotheses):
(I switched your name to Bob).

**Think**:

The problem presents three statements:

i) Bob has discovered the most wonderful beach.
ii) Some things are really fine, but nothing is as fine as the sand at the beach.
iii) If a person discovers something really fine, he should bury his head in it.

We are asked to draw a conclusion using all these hypotheses.

**Act**:

Let's analyze the statements:

From statement (i), we know that Bob has discovered the beach.

Statement (ii) tells us that nothing is as fine as the sand at the beach. This implies that the sand at the beach is really fine.

Statement (iii) suggests that if a person discovers something really fine, he should bury his head in it.

**Observe**:

Combining all these statements, we can conclude that since Bob has discovered the beach, and the sand at the beach is really fine, Bob should bury his head in the sand at the beach.
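
As an aside, the deduction itself is simple enough to mechanize. Here is a toy forward-chaining sketch in Python; the predicate names and the bridging premise (that discovering the beach counts as discovering the really fine sand that is part of it) are my own encoding of the puzzle, not anything the model produced:

```python
# Toy forward-chaining check of the beach syllogism. The encoding
# below (predicate names, and the "part_of" bridging premise) is one
# loose reading of the English, purely for illustration.

discovered = {("Bob", "beach")}                  # premise (i)
really_fine = {"sand at the beach"}              # from premise (ii)
part_of = {("sand at the beach", "beach")}       # bridging premise

# Premise (iii): whoever discovers something really fine should bury
# his head in it; discovering a place counts as discovering the really
# fine things that are part of it.
should_bury = {
    (person, thing)
    for (person, place) in discovered
    for (thing, whole) in part_of
    if whole == place and thing in really_fine
}

print(should_bury)  # {('Bob', 'sand at the beach')}
```

Of course, the hard part is exactly what this hides: deciding how loose English premises should be encoded in the first place.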
 
  • Haha
Likes berkeman
  • #18
I like the second answer better than the first. I.e., the argument that the given function has discontinuities of measure zero seems unconvincing to me. For one thing, it uses an irrelevant observation, or expresses itself badly: it should have said the discontinuities occur only at those endpoints, and, having said that, it should have observed that the sequence of endpoints is countable, hence of measure zero. The argument involving the sum of the series being 1 could be used to argue that the complementary set, the endpoints, has measure zero, but only because the sum of the lengths of those intervals equals the full length of the base interval, not merely because it is finite.

Moreover, there is a simpler reason for integrability, namely that the function is monotone; enough said. I am also less than impressed that it seems to have handed the question to Wolfram Mathematica, which I know can do integration.
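
Incidentally, the quoted value is easy to audit by hand. The interval ##[1-1/2^n,\ 1-1/2^{n+1})## has width ##1/2^{n+1}##, not the ##1/2^n## used above, so summing the exact rectangle areas gives

##\sum_{n=0}^{\infty}\left(1-\frac{1}{2^n}\right)\frac{1}{2^{n+1}} = \sum_{n=0}^{\infty}\frac{1}{2^{n+1}} - \sum_{n=0}^{\infty}\frac{1}{2^{2n+1}} = 1 - \frac{1}{2}\cdot\frac{1}{1-\tfrac{1}{4}} = \frac{1}{3},##

which suggests the quoted ##\tfrac{2}{3}## comes from using the wrong width.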

But I like the fact that it carried out some logical deduction on the second question. Cool! Much better than I expected. Thank you!

Edit: I was about to ask how I know the answerer was not actually a person, remembering the first automaton of note, a chess-playing machine that hid a small chess-playing human inside a box. Then I realized that a human mathematician should have answered slightly better. QED. I.e., this performance exceeded my expectations, but did not mimic that of a competent human ... yet. (Maybe soon I will be claiming that an AI performance is too perfect to be human!)
 
Last edited:
  • #19
gmax137 said:
True.

But I'm not convinced that Chat really looks at them. Seems like 9 times in 10 when I go to find a paper, it is behind a paywall, so I can't read it. Maybe Chat has a university account that gives it access? Or it has indexed Arxiv?
For papers behind a paywall, search the paper's title on any search engine; most often I find them pre-released on Researchgate.com, where they may be listed for download, or you can DM the author to request a link or a copy. But like Google, it has access to almost anything public.
 
  • #20
Fair enough, @TonyStewart. Still, did Chat open links to download and "absorb" papers off ResearchGate? I doubt Chat emailed the authors. I don't know if the details of the database used by Chat are provided anywhere.

What gets me going on this is the examples we have seen (in other posts) where Chat fabricated references. Once I saw it doing that, I tend not to believe anything it "says".
 
  • #21
From what I recall of Sam Altman's conversation with Lex Fridman, OpenAI chose which sources to collect for best accuracy and which to block. GPT-3.5's responses were generated from a mixture of licensed data, data created by human trainers, and publicly available data, but GPT-4's data collection was several orders of magnitude greater. That implied to me special memberships to some professional journals, though unlikely all of them. I don't expect that collecting more data by itself makes it better; rather, better data-reduction techniques for the models make it an iterative process with a good deal of human feedback. Just as some papers have dead links, I expect GPT to improve, but they will have to find profit from this xx Billion$ investment, so enjoy the free rides now.

edit:
ChatGPT says: "I have not been directly trained on specific professional journals, nor do I have access to proprietary databases or subscription-based content. I should note that while I strive to provide accurate and up-to-date information, my responses may not always reflect the most current research or advancements in a particular field. Therefore, it's always a good idea to consult primary sources, authoritative publications, or domain experts for professional journal access and the most recent information."

The literacy level seems to be targeted at Grade 10 readability (or, optionally, a 10-year-old) in Wikiwand and Bing search, not at any advanced level from what I see, but it makes decent summaries. A few regenerations are worth the effort.
The generative examples need a few more generations of algorithms; Sam Altman didn't know whether even GPT-7 would be it! This depends on the acceptance or error criteria, just like the public's perception of autonomous driving.
 
Last edited:
  • Informative
Likes gmax137
  • #22
Astronuc said:
Many academic and research organizations have subscriptions to scientific journals or they have subscription packages with the major publishers (Elsevier, Springer, Taylor & Francis, DeGruyter, . . . .). I suspect someone in academia or research may have access to a set of relevant publications.

Can ChatGPT be set up to browse one's computer, or a network or shared directory?

The typical person outside of an academic or research institution may not have access to subscription-based articles.

gmax137 said:
Fair enough, @TonyStewart. Still, did Chat open links to download and "absorb" papers off ResearchGate? I doubt Chat emailed the authors. I don't know if the details of the database used by Chat are provided anywhere.

What gets me going on this is the examples we have seen (in other posts) where Chat fabricated references. Once I saw it doing that, I tend not to believe anything it "says".
I would find it very strange if OpenAI did not include all the research papers and articles they could get access to (legitimately or otherwise) in the training dataset. Given that the "books2" dataset is suspected to be a certain book-piracy site I don't know if I'm allowed to mention, and the "books3" dataset is all of Bibliotik (a private book-piracy tracker), I doubt they'd avoid using all of Sci-Hub as well. Even if they did, it would be very strange if they couldn't afford access to all the major journals for their work.
 
  • #23
Muu9 said:
I would find it very strange if OpenAI did not include all the research papers and articles they could get access to (legitimately or otherwise) in the training dataset. Given that the "books2" dataset is suspected to be a certain book-piracy site I don't know if I'm allowed to mention, and the "books3" dataset is all of Bibliotik (a private book-piracy tracker), I doubt they'd avoid using all of Sci-Hub as well. Even if they did, it would be very strange if they couldn't afford access to all the major journals for their work.
Maybe they did, maybe they didn't; my point really is that I would not take Chat's "word" on this as the truth.
 
  • #24
I wonder if ChatGPT can/does consult vixra.org?
 
  • #25
My experience(s) with ChatGPT revolved around trying to get it to sensibly explain why hydrofluoric acid is considered a "weak" acid. Eventually it conceded that maybe it wasn't - well, sort of, anyway!

No, that result does not make sense. The calculated value for the energy release per molecule of HF is significantly higher than the value for HCl, which contradicts the fact that HF is known to ionize only partially in water, while HCl fully ionizes. The difference in energy release between the two should be reflected in their respective ionization constants, which are significantly different. Therefore, the calculation must be reviewed to determine where the error occurred.

Another experience with AI (not ChatGPT) is the infuriating "light enhancement" feature of the Windows 10/11 Photos app. Previously there was an option to disable it, but now the AI is apparently so smart (or thinks itself so smart) that it has concluded stupid humans should not be able to disable "light enhancement", and so there is no longer a disable option. Unfortunately, the "stupid humans" are trying to take astro pics at night, whereby in addition to city light pollution, we now have to contend with the "light enhancing" genius of AI.
 
  • #26
neilparker62 said:
but now the AI is apparently so smart (or thinks itself so smart) that it has concluded stupid humans should not be able to disable "light enhancement"
What's your evidence that the decision to remove the ability to disable light enhancements was made by an AI and not a human?
 
  • #27
ZapperZ said:
The difference between a "calculator" and ChatGPT is that the calculator does not do the "thinking" for you, while ChatGPT often tries (and in many cases, fail) to do just that. What I mean in doing the thinking is that it tries to make correlations and THEN tries to establish the causation, something that a calculator doesn't do.
No, that is not what ChatGPT is doing. Wolfram's article makes that clear. ChatGPT does not make use of the meanings of words at all. All it is doing is generating text word by word based on relative word frequencies in its training data. It is using correlations between words, but that is not the same as correlations in the underlying information that the words represent (much less causation). ChatGPT literally has no idea that the words it strings together represent anything.
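
To make "word by word based on relative word frequencies" concrete, here is a toy bigram sampler. It is nothing like ChatGPT's actual transformer architecture, and the tiny corpus below is invented purely for illustration, but it shows text being generated one word at a time from observed frequencies, with no representation of meaning anywhere:

```python
import random
from collections import Counter, defaultdict

# Tiny invented corpus, purely for illustration.
training_text = "the cat sat on the mat and the dog sat on the rug"
words = training_text.split()

# Count how often each word follows each other word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Generate text one word at a time, sampling each next word in
    proportion to how often it followed the current word in training."""
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break  # no observed continuation for this word
        choices, weights = zip(*counts.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog"
```

Run it a few times and you get fluent-looking fragments, yet nothing in the program has any idea what a cat or a mat is; scaling the same one-word-at-a-time idea up to a vastly more powerful statistical model is, per Wolfram's article, essentially what ChatGPT does.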
 
  • Like
Likes Lord Jestocost, BillTre and gmax137
  • #28
Muu9 said:
What's your evidence that the decision to remove the ability to disable light enhancements was made by an AI and not a human?
Don't have any hard evidence other than the following (in italics), but I think I'd rather blame AI than a human programmer for removing what was previously - and still should be - an obvious user choice in terms of enabling/disabling light enhancement in the Windows Photos app. Whether the actual choice was made by human or machine, we are still left in a situation where - on top of city light pollution - we now have to deal with the compulsory light-enhancing genius of Windows Photos.

Parts of Windows already use AI, where it’s involved in everything from system management to search, speech recognition, grammar correction, and even noise suppression and camera image processing. Some of that AI processing is typically farmed out to the cloud. Some can be done on a PC’s graphics chip or its main CPU. With onboard AI-specific hardware, though, the processing could be done right on the PC.
 
  • #29
neilparker62 said:
Don't have any hard evidence other than the following (in italics), but I think I'd rather blame AI than a human programmer for removing what was previously - and still should be - an obvious user choice in terms of enabling/disabling light enhancement in the Windows Photos app. Whether the actual choice was made by human or machine, we are still left in a situation where - on top of city light pollution - we now have to deal with the compulsory light-enhancing genius of Windows Photos.

Parts of Windows already use AI, where it’s involved in everything from system management to search, speech recognition, grammar correction, and even noise suppression and camera image processing. Some of that AI processing is typically farmed out to the cloud. Some can be done on a PC’s graphics chip or its main CPU. With onboard AI-specific hardware, though, the processing could be done right on the PC.
That quote has nothing to do with actual UI decisions. AI is used as a tool; it doesn't make high-level unilateral decisions.
 
  • #31
Someone should ask these AI models if they fear a Butlerian Jihad.
 
  • Like
Likes PhDeezNutz
  • #32
For me, the most interesting thing ChatGPT does is encourage us to introspect more deeply into how we think, and what we mean by "understanding." On the grounds that ChatGPT only looks for frequency of association, it is easy to say that it doesn't "understand" anything (and it is ironic that it uses that language itself; personally, I don't think any AI should ever use the pronoun "I" in any situation or for any reason, and if you avoid that, it becomes easier to avoid claims like "I understand"). But what is less easy to say is: what are we doing when we "understand the meaning" of our own words? What are we doing that is different from frequency of association?

I am not a linguist, but if you look at a child learning language, it seems pretty clear that they are establishing frequency of association. But it is not just association with other words, it is association with experience. That is the "ghost in the machine" of understanding: the ability to have experience and detect similarities in classes of experience, and then it is those associations that can be correlated with words. Humans who existed prior to language would certainly have been able to notice similarities in classes of experience; it just did not occur to them to attach labels to those associations. ChatGPT is the opposite: it's pure labels, no experience at all. So it borrows the meaning we establish with our experiences, and it is our experiences that give the words meaning, which then "primes the pump" of ChatGPT.

Let us then dig deeper. Our brain is not one thing: it has elements that are capable of registering experience, elements that are capable of contemplating experience, and elements that house our language capabilities and can associate and recognize labels around those experiences. So before we judge ChatGPT too harshly on the basis that it can only manipulate labels without any understanding, we should probably note that whatever elements of our brains are capable of manipulating language, including the capabilities of our greatest language masters and poets, are probably also incapable of "understanding" those labels either. They must draw from other parts of our brains to establish what the meanings are, what the experiences are; but I suspect, without knowing, that our own language mastery also primarily involves frequencies of associations.

That may be why, if you ask ChatGPT to write a poem, its technical proficiency at manipulating poetic language is actually pretty good (try it, and ask it to use the style of any poet you like), but the poems will be rather vanilla, lacking the sublime and specific contours of individual experience, borrowing as they must from the training data of a multitude of experiences to create any kind of meaning. Hence I think the most reasonable way to think of ChatGPT is not as a con man that is fooling us, but rather as an incomplete piece of a thinking system, with an enormously disconnected "middleman" between the experiences from which its words borrow meaning and the actual manipulation of that language. Perhaps like a piece of a thinking brain, more so than a complete brain capable of higher thought on its own.

If so, then the real value of AI going forward will not be in replacing human brains, but in augmenting their capabilities. In the process, we may learn a great deal more about what our brains actually do, and even more than that, what they are also fooling us into believing they are doing! In light of all this, let us recollect the prescient words of B.F. Skinner: "The real question is not whether machines think but whether men do. The mystery which surrounds a thinking machine already surrounds a thinking man." (No doubt he meant to include women as well, unless he was indeed a wry dog.)
 
  • Like
Likes slider142
  • #33
Ken G said:
if you look at a child learning language, it seems pretty clear that they are establishing frequency of association. But it is not just association with other words, it is association with experience.
Exactly, and this is what ChatGPT (or at least the version discussed in the article here) lacks.
 
  • #34
Ken G said:
I am not a linguist, but if you look at a child learning language, it seems pretty clear that they are establishing frequency of association. But it is not just association with other words, it is association with experience.
Hmm, has this been established by linguists? I have observed association with both experience and words. Think how a child is taught to say "you're welcome" in response to "thank you." It is just rote. Same with learning times tables: ask "what is five times seven?" and little Johnny says "thirty-five." He's not calculating the answer, at least not after a couple of months in class.
 
  • #35
I agree; that's why I said it is not just association with other words. With ChatGPT, it's just association of words with words.
 