In summary: we should pay more attention and be a little more concerned, because honestly I didn't believe it would reach this point yet. Not because of any "AI singularity" nonsense, but because it seems like it is still learning and getting better.
  • #176
This is how ChatGPT responds to a similar prompt:

Certainly! I'd be happy to explain the steps I take from receiving a prompt to generating an output. The process involves several stages, each contributing to the final response:

  1. Tokenization: When you provide a prompt, the first step is to break down the text into smaller units called tokens. Tokens can be as short as one character or as long as one word. For instance, the sentence "Can you help me?" is tokenized into ["Can", "you", "help", "me", "?"]. This helps the model process the text efficiently.
  2. Input Encoding: After tokenization, the tokens are encoded into numerical representations that the model can understand. Each token is mapped to a unique integer called a token ID using a pre-defined vocabulary. This vocabulary is built during the model training phase.
  3. Initial Processing: The encoded tokens are passed through several initial layers in the model, including embeddings. Embeddings convert token IDs into continuous vector representations, capturing semantic meaning and relationships between words.
  4. Transformer Layers: The core of the GPT architecture is the Transformer, which consists of multiple layers. Each layer has two main components: the multi-head self-attention mechanism and the feedforward neural network. These layers process the tokens in parallel, allowing the model to capture contextual relationships between words.
  5. Attention Mechanism: The self-attention mechanism helps the model weigh the importance of each word/token in relation to the others. This allows the model to consider the context of each word within the entire input sequence, not just its immediate surroundings.
  6. Feedforward Neural Network: After attention, the output passes through a feedforward neural network within each transformer layer. This network applies non-linear transformations to the data, further capturing complex patterns and relationships.
  7. Layer Normalization and Residual Connections: Each sub-layer in the transformer layer is followed by layer normalization and a residual connection. These components help stabilize and facilitate the flow of gradients during training, preventing issues like vanishing gradients.
  8. Decoder for Autoregressive Generation: In autoregressive language models like GPT, generating text is done sequentially. The model predicts the next token based on the tokens generated so far. During generation, the previous tokens are used as context to predict the next token. This process continues until the desired length or the stopping criterion is reached.
  9. Sampling Strategy: When generating text, there are different strategies for selecting the next token. This can include strategies like greedy sampling (choosing the most likely token), random sampling (choosing tokens with a probability proportional to their likelihood), or temperature-controlled sampling (adjusting the randomness of token selection).
  10. Repetition and Length Control: Models like mine tend to be creative but might also generate repetitive or overly long responses. Techniques like nucleus sampling (limiting token selection to the most likely candidates) or simply setting a maximum length can be employed to control output quality.
  11. Output Decoding: Once the model generates token IDs, they need to be decoded back into human-readable text using the vocabulary that was used during encoding.
  12. Post-processing: The generated text might contain extra spaces, punctuation, or other artifacts that need to be cleaned up for a coherent and grammatically correct output.
  13. Presentation: The final processed output is then presented to you as the response to your prompt.
Throughout these steps, the model aims to generate coherent, contextually relevant text based on the information provided in the prompt and the patterns it learned during training. Keep in mind that while GPT models are advanced, they don't have true understanding or consciousness; they generate responses based on statistical patterns in the data they've seen.
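To make the generation and sampling steps (items 8-10 above) concrete, here is a minimal Python sketch of an autoregressive loop with greedy and temperature-based sampling. The tiny vocabulary and the fake_logits stand-in for a trained network are invented for illustration; this is a caricature of the idea, not ChatGPT's actual code.

```python
import numpy as np

# Toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["<eos>", "the", "cat", "sat", "on", "mat", "."]

def fake_logits(context_ids):
    """Stand-in for the transformer: returns one score per vocab entry.
    The scores are a deterministic function of the context length, so the
    example runs without a trained model."""
    rng = np.random.default_rng(len(context_ids))
    return rng.normal(size=len(VOCAB))

def sample_next(logits, temperature=1.0, greedy=False):
    """Greedy sampling picks the argmax; temperature rescales the logits
    before converting them to probabilities with a softmax."""
    if greedy:
        return int(np.argmax(logits))
    scaled = logits / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(np.random.default_rng().choice(len(VOCAB), p=probs))

def generate(prompt_ids, max_new_tokens=10, **kwargs):
    """Autoregressive generation: feed the growing sequence back in,
    one predicted token at a time, until <eos> or the length limit."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        next_id = sample_next(fake_logits(ids), **kwargs)
        if VOCAB[next_id] == "<eos>":
            break
        ids.append(next_id)
    return " ".join(VOCAB[i] for i in ids)

print(generate([1, 2], greedy=True))      # deterministic continuation
print(generate([1, 2], temperature=1.5))  # more random continuation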
 
  • #177
Had a work problem and was confused about whether I should choose to use mean difference vs standard deviation (plus coefficient of variation), and GPT4 was able to explain it well and give examples. That worked better than reading Wikipedia.
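For anyone facing the same choice, here is a small worked example (my own, not from GPT-4's answer) comparing the standard deviation, the mean absolute deviation from the mean (one common reading of "mean difference"), and the coefficient of variation for two toy datasets.

```python
import statistics as st

def summarize(name, data):
    mean = st.mean(data)
    sd = st.stdev(data)                      # sample standard deviation
    mean_abs_diff = sum(abs(x - mean) for x in data) / len(data)
    cv = sd / mean                           # coefficient of variation (unitless)
    print(f"{name}: mean={mean:.2f}  sd={sd:.2f}  "
          f"mean|x-mean|={mean_abs_diff:.2f}  CV={cv:.2%}")

# Same spread in absolute terms, very different relative spread:
summarize("small values", [9, 10, 11, 10, 10])
summarize("large values", [999, 1000, 1001, 1000, 1000])
```

The two datasets have identical standard deviations, but the coefficient of variation makes clear that the second varies far less relative to its mean, which is exactly the kind of distinction the question was about.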
 
  • Like
Likes russ_watters
  • #178
Greg Bernhardt said:
GPT4 was able to explain it well and give examples
Well, GPT4 is behind a subscription plan, which means most of us won't use it, and the Insight article is based on GPT3 (and 3.5) anyway.
Having said that, as GPT4 reportedly has orders of magnitude more parameters (rumors say 100 trillion vs 3.5's 175 billion, though OpenAI has not published a figure), a bigger short-term memory (context window) and some other improvements, it's understandable that it's more accurate.
I still wouldn't say that it's reliable though.
 
  • #179
Another important issue, beyond the size of the database and training complexity, is the size of the allowed prompt, which will inevitably expand dramatically in future upgrades. A prompt is not only the way we establish what we want the LLM to do; it is also an opportunity for us to essentially engage in a certain amount of training ourselves. Imagine large and commercially available "prompt prefaces" that you can input along with your own description of what you want the LLM to do, enabling your own prompt to tailor the LLM to be better at something you need it to do. It could also predispose the LLM to respond in a way that is more consistent with your own personal world views and interpretations of the news. It seems inevitable to me that eventually AIs will be brought into the "echo chamber" mode of operation that people seem to require these days.
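As a purely hypothetical illustration of such a "prompt preface", here is a short Python sketch assuming a generic chat-style API that accepts role-tagged messages; the preface text, the function names, and the send_to_llm stub are all invented stand-ins, not any vendor's actual interface.

```python
# A reusable "prompt preface" that tailors every request, prepended to the
# user's own prompt. The preface text below is purely illustrative.
PREFACE = (
    "You are assisting a physics educator. Prefer SI units, state your "
    "level of confidence in each claim, and flag anything speculative."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Combine the fixed preface with the user's prompt in the
    role-tagged message format most chat APIs expect."""
    return [
        {"role": "system", "content": PREFACE},
        {"role": "user", "content": user_prompt},
    ]

def send_to_llm(messages: list[dict]) -> str:
    """Stub standing in for an actual API call; replace with your
    provider's client library."""
    return f"[model response to {len(messages)} messages]"

print(send_to_llm(build_messages("Explain the coefficient of variation.")))
```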
 
  • #180
https://arxiv.org/pdf/2307.01850.pdf
Self-Consuming Generative Models Go MAD
Seismic advances in generative AI algorithms for imagery, text, and other data types has led to the temptation to use synthetic data to train next-generation models. Repeating this process creates an autophagous ("self-consuming") loop whose properties are poorly understood. We conduct a thorough analytical and empirical analysis using state-of-the-art generative image models of three families of autophagous loops that differ in how fixed or fresh real training data is available through the generations of training and in whether the samples from previous generation models have been biased to trade off data quality versus diversity. Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease. We term this condition Model Autophagy Disorder (MAD), making analogy to mad cow disease.
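The autophagous loop is easy to caricature numerically. The toy simulation below (mine, not from the paper) repeatedly fits a Gaussian to synthetic samples drawn from the previous generation's fit, with a crude "quality" bias and no fresh real data; the fitted spread collapses within a few generations, loosely mirroring the diversity loss the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: a model fit to "real" data (a standard normal).
mu, sigma = 0.0, 1.0

for generation in range(1, 11):
    # Sample synthetic data from the current model.
    synthetic = rng.normal(mu, sigma, size=500)
    # Crude "quality" bias: keep only the most typical-looking samples
    # (the half closest to the mean), mimicking the paper's point that
    # favouring quality over diversity accelerates the collapse.
    kept = synthetic[np.argsort(np.abs(synthetic - synthetic.mean()))[:250]]
    # Train the next generation only on this synthetic, filtered data.
    mu, sigma = kept.mean(), kept.std()
    print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```

Each generation the standard deviation shrinks by a roughly constant factor, so the "model" ends up producing nearly identical samples: quality-biased self-consumption destroys diversity.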
 
  • Informative
Likes Motore
  • #181
PeterDonis said:
To the extent that it implies matching up the question with some kind of semantic model of the world, yes. ChatGPT does not do that; it has no such semantic model of the world. All it has is relative word frequencies from its training data.
I think the description of what ChatGPT is doing is grossly oversimplified in this article. The abilities of ChatGPT come from its neural network. The truth is that no one really fully understands how ChatGPT works. Its behavior comes from the details of the network parameters, of which there are 175 billion in 3.5. To understand what it is doing you would need to study the structure of the network and how it reacts to various groups of related input vectors. Even doing that, you could really only hope to understand large-scale functions, similar to how we study the brain.

The better (more practical) way to study ChatGPT is to treat it as a black box and see how it responds to various inputs. It can and does interpret. It has the ability to track context. How it does this is anyone's guess, but that does not change the fact that it performs these tasks.

I am not sure if Wolfram does not know this or is simply trying to sell his own product, which doesn't work as well.
 
  • Like
  • Skeptical
Likes Motore and PeroK
  • #182
Ken G said:
This is how ChatGPT responds to a similar prompt:

It is ironic that this is a much better description of how ChatGPT works.
 
  • #183
Ken G said:
The process involves several stages
Where are these stages described?
 
  • #184
neobaud said:
I think the description of what ChatGPT is doing is grossly oversimplified in this article.
Wolfram's description is taken from the documentation for the ChatGPT version he reviewed.
 
  • Like
Likes Motore
  • #185
PeterDonis said:
Where are these stages described?
Wolfram himself alludes to some of those stages in his article; he just does not go into detail. Probably some of the detail is proprietary anyway. Needless to say, there is an extensive process that must occur to go from a bulk of input data to a trained LLM. One stage involves creating the language model itself, which is a huge part of the process that Wolfram does go into some detail about. It is one thing to say that the training creates a body of connections among tokens that can be scanned for frequencies to predict useful connections to the prompt, but it is another to describe the details of how that training process actually occurs. Wolfram mentions that one has to model how the connections work, because the training set will never be dense enough if one just uses the input dataset for that. There are also a lot of human choices that go into deciding what constitutes a success that should be encouraged.

I think @neobaud must be right that there is a lot that happens "under the hood" that even the human trainers don't understand, which might be a lot like what happens in our brains that we also don't understand. It seems to us there is a "ghost in the machine" in how our brains work, even though a microchemical understanding of our neural system might ultimately involve relatively simple steps (even simple enough to be something like "predicting the next word"). Profoundly complex behavior can emerge from simple systems that are connected in massively complex ways; this has always been a core principle of nonlinear dynamics, and I still don't think we've penetrated very far into it (which is how we keep surprising ourselves by what happens).
 
  • #186
Ken G said:
Wolfram himself alludes to some of those stages in his article; he just does not go into detail.
Yes, but you did. Where are you getting those details from?

Ken G said:
Probably some of the detail is proprietary anyway.
In which case you would not know them. But you must have gotten what you posted from somewhere. Where?
 
  • #187
The description that ChatGPT gave, things Wolfram alluded to, etc. I think the best summary is the one ChatGPT itself gave, with stages like "embeddings" that deal with the all-important sparseness problem. Wolfram described that in some detail; it seemed to me he was saying that, to take a chess program analogy, one might try to program a computer to search every possible move out to some critical distance in the future where it looks like a decisive advantage has been gained. But that is of limited value in chess, and completely useless in language, because the searchable range is "sparse" in the space of possibilities. Embeddings somehow "fill in" the sparseness and create a sense of "closeness" akin to a vector space. That requires language modeling, and it will only be as good as the model is; I don't think you can train the LLM to do that for itself without "supervision," a key concept in machine learning.
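To illustrate the "closeness akin to a vector space" idea, here is a tiny sketch using hand-made embedding vectors and cosine similarity; the words, the four dimensions, and the numbers are invented, whereas real models learn vectors with hundreds or thousands of dimensions.

```python
import numpy as np

# Hand-made 4-dimensional "embeddings"; real models learn these vectors
# during training and use far more dimensions.
embeddings = {
    "cat":    np.array([0.9, 0.1, 0.0, 0.3]),
    "kitten": np.array([0.8, 0.2, 0.1, 0.4]),
    "car":    np.array([0.0, 0.9, 0.8, 0.1]),
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 means same direction (similar meaning)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["cat"], embeddings["kitten"]))  # high: related words
print(cosine(embeddings["cat"], embeddings["car"]))     # much lower: unrelated
```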

Then there are the important "transformers", which involve two additional stages, as described. This must be where the frequency of connections between words comes in, but it must already encompass some kind of difference between what is in the prompt and what is in the training database. Important there, it seems, are the "nonlinear transformations" that play a role in finding the connections between tokens that matter. Again, I don't think one can let the LLM invent its own nonlinear transformations from scratch; the human designers must have a role in deciding on their structure (the weights are learned, but the form of the architecture is chosen), which likely involves some trial and error, I'm guessing. Trial and error must also have uncovered the problem of "vanishing gradients", which I believe can cause the training process to get stuck, since an iterative process is used to home in on the predicted probabilities, and that iteration must follow gradients in some kind of cost function to arrive at its result.
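For what a single feedforward sub-layer with a residual connection and layer normalization looks like in code, here is a rough, generic sketch (random toy weights, a ReLU nonlinearity, and not OpenAI's actual architecture or parameters):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each token vector to zero mean and unit variance."""
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + eps)

def feedforward_sublayer(x, w1, w2):
    """Position-wise feedforward block: expand, apply a nonlinearity,
    project back, then add the residual and normalize. The residual path
    is one standard way of keeping gradients from vanishing in deep nets."""
    hidden = np.maximum(0.0, x @ w1)   # ReLU nonlinearity
    out = hidden @ w2
    return layer_norm(x + out)         # residual connection + layer norm

rng = np.random.default_rng(0)
d_model, d_ff, n_tokens = 8, 32, 3
x = rng.normal(size=(n_tokens, d_model))      # three toy token vectors
w1 = rng.normal(size=(d_model, d_ff)) * 0.1   # toy random weights
w2 = rng.normal(size=(d_ff, d_model)) * 0.1
print(feedforward_sublayer(x, w1, w2).shape)  # (3, 8): same shape in and out
```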

That seems to be the guts of it, but even then some additional scaffolding is inserted, more or less manually it sounds like, to detect special elements of the prompt such as how long the answer is supposed to be (things that ChatGPT tries to respect but does not do so exactly). All these elements have the fingerprints of the designers on them in many places, so although I'm sure the designers were constantly surprised at how significantly the final result was affected by seemingly minor adjustments in the training protocol, it seems clear that such adjustments were constantly needed.
 
  • #188
I'm trying to understand the answer to Peter's question from your reply. Where did you get this? It sounds like you asked ChatGPT itself. How is that any more reliable than asking any other question of ChatGPT?
 
  • Like
Likes russ_watters and Motore
  • #189
A professor tells me of a colleague who asked the following question: "I asked three AIs the answer to the question X. Below are their responses. Which, if any, are correct and why?"

Apparently, the students are livid and want the professor's head on a pike. Granted, the summer session is "special", but the hostility is still impressive.

FWIW, I don't see this as an unfair question at all.
 
  • #190
Vanadium 50 said:
I'm trying to understand the answer to Peter's question from your reply. Where did you get this? It sounds like you asked ChatGPT itself. How is that any more reliable than asking any other question of ChatGPT?
Part is from asking ChatGPT that question, part is from other questions to it, part is from Wolfram's article, and part is just from other knowledge about neural nets and machine learning (though I claim no special expertise there). But I think it's pretty clear that ChatGPT has been specially trained in a lot of ways, and one of them is special training to respond to prompts about ChatGPT. This is my whole point here: there seems to be a claim being made that, because ChatGPT's training process results in a trained dataset of word frequencies it can use to "predict the next token" as it is creating its response, it somehow did not receive quite highly supervised and specialized training along the path to creating that database of word frequencies. I would say the database so created is just as much a result of that specialized and supervised training as it is a result of the original data on which it was trained (culled from a proprietary list of sources, but still far too sparse to produce a well-trained database of word frequencies without substantial language modeling and supervised training, hence the term LLM).

So that's why it's more reliable to ask ChatGPT what ChatGPT is than to ask it some random question: it is a type of question that it is well trained to respond to. Just like it is well trained to respond to the prompt "poem" by delivering a poem, and well trained to respond to "you have made a mistake in your last answer" with an obsequious apology.
 
  • Skeptical
Likes russ_watters and Motore
  • #191
Vanadium 50 said:
A professor tells me of a colleague who asked the following question: "I asked three AIs the answer to the question X. Below are their responses. Which, if any, are correct and why?"

Apparently, the students are livid and want the professor's head on a pike. Granted, the summer session is "special", but the hostility is still impressive.

FWIW, I don't see this as an unfair question at all.
Yeah, I'm mystified by that. Can you say more about what they were livid about? For me, that professor is using LLMs in exactly the way they should be used: as a springboard to inquiry, not an "answer man." The very fact that their answers differ is also an excellent lesson in understanding what they are good for and what their limitations are. The students are all mixed up somehow, but I wonder how?
 
  • #192
I'll offer an example of how well trained ChatGPT is to answer questions about its own training (much better than it is trained to answer questions that require, for example, multiplying a chain of large numbers). Here is a prompt and the ChatGPT response, relating to the "embedding" step it uses:
Can you tell me more about the "embedding" that is done as part of your training?

Certainly! In the context of natural language processing and machine learning, "embedding" refers to the process of representing words, phrases, or entire documents as numerical vectors (arrays of numbers) in a high-dimensional space. These vector representations capture semantic relationships between words and are crucial for various NLP tasks such as text classification, sentiment analysis, machine translation, and more.
The idea behind word embeddings is to map words from a vocabulary to continuous vector spaces in such a way that words with similar meanings are located close to each other in this space. The key concept is that words that appear in similar contexts in a large corpus of text will have similar vector representations.
The training of embeddings typically involves models like Word2Vec, GloVe, or embeddings as a byproduct of neural language models like GPT (Generative Pre-trained Transformer). Here's a simplified overview of how embedding training works:
  1. Corpus Collection: A large corpus of text is collected, which can be anything from books and articles to social media posts and websites.
  2. Tokenization: The text is tokenized, meaning it is divided into words or subword units (such as subword pieces for handling out-of-vocabulary words).
  3. Context Window: For each word in the corpus, a context window is defined. This window encompasses the nearby words or tokens, which are used to predict the target word. The idea is that words within the same context window are likely to have similar meanings.
  4. Learning Word Vectors: The model learns to predict the probability of a word occurring given its context (or vice versa) using the context window. It adjusts the vector representations of words during training to minimize prediction errors.
  5. Vector Space: Once trained, the model has created vector representations for each word in the vocabulary. These vectors are typically dense and continuous, with dimensions representing various aspects of word semantics.
  6. Semantic Relationships: The resulting word embeddings capture semantic relationships. Words with similar meanings will have vectors that are closer together in this high-dimensional space. For example, in a well-trained embedding space, "king" might be close to "queen," "man" close to "woman," and so on.
These word embeddings can then be used as input features for various NLP tasks or can be fine-tuned for specific tasks using transfer learning.
In the case of GPT-3.5, the model doesn't explicitly generate word embeddings, but it learns contextual embeddings as part of its training process. These embeddings capture rich contextual information about words within sentences and can be used for a wide range of NLP tasks without the need for additional pre-processing or training.
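A toy version of steps 1-6 above (my own sketch, not ChatGPT's): count which words co-occur within a small context window in a three-sentence corpus and treat each word's co-occurrence counts as a crude "embedding". Real training (Word2Vec, GloVe, or a transformer) learns dense vectors instead of raw counts, but the "words in similar contexts get similar vectors" effect already shows up.

```python
from collections import defaultdict
import math

corpus = [
    "the cat chased the mouse",
    "the kitten chased the mouse",
    "the car needs new tires",
]
window = 2  # context window size on each side

# Steps 2-3: tokenize and count co-occurrences within the window.
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                cooc[word][tokens[j]] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    """Crude 'embedding': the word's co-occurrence counts over the vocabulary."""
    return [cooc[word][w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# "cat" and "kitten" appear in near-identical contexts, so their vectors align.
print(cosine(vector("cat"), vector("kitten")))  # high
print(cosine(vector("cat"), vector("car")))     # lower
```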
 
  • Like
Likes neobaud
  • #193
What's more, here is how ChatGPT summarized how human involvement fine-tunes its training in ways that go well beyond simple word frequency patterns:

"In summary, while the core of AI model behavior relies on patterns learned from data, the goals, values, and behavior guidelines are established by human designers, developers, and organizations. The training process includes human reviewers who follow these guidelines to rate and evaluate model responses, helping to shape the model's behavior and align it with the intended goals, which include providing useful and informative responses and maintaining a respectful and ethical tone. The combination of data-driven learning and human guidance is central to achieving the desired behavior in AI models."

Then I asked Bard if it agreed with the above, since the above is a general statement about AI language models, and it said:

"I agree with the statement. AI models are trained on data, but the data is not the only factor that determines their behavior. The goals, values, and behavior guidelines established by human designers, developers, and organizations also play a role.

The training process includes human reviewers who follow these guidelines to rate and evaluate model responses. This helps to shape the model's behavior and align it with the intended goals. For example, if the goal is for the model to provide useful and informative responses, then the reviewers will rate responses that are not useful or informative negatively. This will help the model to learn to avoid giving these types of responses in the future."
 
  • #194
So, is there something we didn't already know?
 
  • #195
That depends on what you already know, doesn't it? I mean, the point is, different people are saying different things. How could I possibly know what "we already know" when that is not even a well-defined concept? What is not at all well known is why ChatGPT is good at some things and not others, and just how good it is, how good it will be in ten years, and what forms the improvements will take. All this depends on exactly what it is doing right now, which none of us even know. But we can try to delve into the details of what it is doing, which is what we are doing, based on the nice introduction by Wolfram, which quite frankly leaves an awful lot out. The devil is in the details.

Another relevant point is that none of us know what our own brains are doing when we answer questions put to us. Sometimes it feels like we are also spitting out answers one word at a time, with some general awareness of where we are going. It is not at all obvious to me that once we understand how our brains do it, we will not find there are some pretty simple steps involved, coupled to a vastly complex neural net. That sounds a lot like an LLM. So no, I remain unconvinced that LLMs "don't understand what they are saying" in a more fundamental way than this could be said about ourselves, except for the fact that we have experiences attached to our words in many cases. We also come with a self awareness feature, which some people suspect is really a kind of rationalization mechanism that essentially experiences mental processes after they have already finished happening.
 
Last edited:
  • #196
Ken G said:
What is not at all well known is why ChatGPT is good at some things and not others, and just how good it is, how good it will be in ten years, and what forms the improvements will take. All this depends on exactly what it is doing right now, which none of us even know.
But we know how it functions conceptually, which is word prediction based on the training data. Of course the programmers needed to add some feedback and limitations so it is usable, but it still doesn't understand what it is outputting, so it's still not reliable.

Can it be 100% reliable with more data? I don't think so.
The only way it can be reliable is with a completely different model in my opinion.
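For the record, here is what "word prediction based on the training data" looks like in its most naive form: a bigram-frequency predictor (a deliberate caricature, vastly simpler than any LLM). It also shows why raw frequency counting alone is brittle, since it has literally nothing to say about unseen contexts.

```python
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
)

# Count how often each word follows each other word (bigram frequencies).
counts = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training,
    or None if the word never appeared as a left context."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))    # 'cat'  - the most common continuation
print(predict_next("sat"))    # 'on'
print(predict_next("fish"))   # '.'
print(predict_next("piano"))  # None - unseen word, nothing to say
```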
 
  • Like
Likes Vanadium 50
  • #197
Motore said:
Can it be 100% reliable with more data?
Just out of curiosity, what can be 100% reliable?
 
  • Like
Likes russ_watters
  • #198
PeterDonis said:
Wolfram's description is taken from the documentation for the ChatGPT version he reviewed.
OK, but back to my point: you have made assertions about what ChatGPT is doing and not doing. I am saying that we don't know what it is doing. Actually, Wolfram states this in his description of neural nets:

"And maybe that’s because it truly is computationally irreducible, and there’s no general way to find what it does except by explicitly tracing each step. Or maybe it’s just that we haven’t “figured out the science”, and identified the “natural laws” that allow us to summarize what’s going on."
 
  • Like
Likes russ_watters
  • #199
neobaud said:
you have made assertions about what ChatGPT is doing and not doing. I am saying that we don't know what it is doing
We don't know the exact internals of its neural net, that's true. But we can still make general statements about what it is doing and not doing. For example, we know that it is not fact checking its output against actual data sources.
 
  • Like
Likes russ_watters and Vanadium 50
  • #200
Greg Bernhardt said:
Just out of curiosity, what can be 100% reliable?
Well, nothing of course. But that's not the point.

Would you let ChatGPT diagnose illness and prescribe medication? I mean, human doctors aren't 100% either. What could possibly go wrong?

"Hmmm...the search tree shows that no patients who were prescribed cyanide complained ever again. Therefore that must be the most effective treatment."
 
  • Like
Likes PeterDonis and Motore
  • #201
Motore said:
But we know how it functions conceptually, which is word prediction based on the training data.
Yes, so it's all about the body of training data that it builds up. That is what is analogous to anything we could call "knowledge" on which to base its responses.
Motore said:
Of course the programmers needed to add some feedback and limitations so it is usable, but it still doesn't understand what it is outputting, so it's still not reliable.
That logic does not necessarily follow. Many people do understand what they output, but their understanding is incorrect, so their output is not reliable. There are many causes of unreliability; it's not clear that ChatGPT's cause of unreliability is its lack of understanding of what it is saying. The problem for me is that it is still quite unclear what humans mean when they say they understand a set of words. We can agree that ChatGPT's approach lacks what we perceive as understanding, but we cannot agree on what our own perception means, so contrasts are vague. Sometimes we agree when our meanings are actually rather different, and sometimes we disagree when our meanings are actually rather similar!
Motore said:
Can it be 100% reliable with more data? I don't think so.
The only way it can be reliable is with a completely different model in my opinion.
It might come down to figuring out the right way to use it, including an understanding (!) of what it is good at and not so good at, and how to interact with it to mitigate its limitations. I agree that it might never work like the Star Trek computer ("computer, calculate the probability that I will survive if I beam down and attack the Klingons") or like the Hitchhiker's Guide to the Galaxy's attempt to find the ultimate answer to life, the universe, and everything.
 
  • #202
Vanadium 50 said:
Would you let ChatGPT diagnose illness and prescribe medication? I mean, human doctors aren't 100% either. What could possibly go wrong?
I sincerely believe in the not-so-distant future, we'll have pharmacies and medical institutions where X% of low-grade illnesses will be handled by bots.
 
  • #203
Greg Bernhardt said:
I sincerely believe in the not-so-distant future, we'll have pharmacies and medical institutions where X% of low-grade illnesses will be handled by bots.
Before trusting any such bot, I would want to know that it was not based on an internal model like that of ChatGPT, which, as I've said, does not fact check its output.

But bots which do fact check their output are of course possible.
 
  • Like
Likes Motore, russ_watters and Greg Bernhardt
  • #204
Vanadium 50 said:
Would you let ChatGPT diagnose illness and prescribe medication? I mean, human doctors aren't 100% either. What could possibly go wrong?

"Hmmm...the search tree shows that no patients who were prescribed cyanide complained ever again. Therefore that must be the most effective treatment."
This was essentially what people thought Watson would be very good at, but it turned out never to be useful for that. The problem with Watson was that the data it was analyzing could not be properly standardized to make it useful. Like you say, humans are better at "filling in the gaps," using our ability to make logical connections when there is sparseness in the evidence. ChatGPT navigates immense sparseness in its language model; it can write a poem that contains short sequences of words, say five or six in a row, that never appeared anywhere in its training data yet make sense together. But there's a difference between words that make sense together and the right treatment for some ailment, and each patient is different enough from all the rest that Watson never had access to enough standardizable information to do better than a human doctor. So the problem might not have been that Watson didn't understand what it was outputting the way a human does, but rather that it could not understand the various unstandardizable aspects of the input data the way humans do.

But now think of a context where there is not so much unstandardizable information to understand, like a game with simple rules. No doubt this is why machine learning is so vastly successful at beating humans at games with simple rules: the input data is completely standardizable; there is literally nothing there except the rules and the possible positions. Does Stockfish "understand" how to play chess while it is destroying the greatest human chess players? An interesting question in the context of this thread.
 
  • #205
There appears to be skepticism that LLMs receive any kind of "special training" to respond to certain types of questions. Indeed, there seems to be an assumption that the entire process is completely automated, so that they receive no specialized training at all and just spit out word frequencies in a kind of automatic way. We know that is not the case, however, because strict ethical guidelines are imposed to avoid things like hate speech or controversial positions that would be present (frequently) in their training database. Given this, I think it is natural to assume that the fine-tuning (and human-supervised) stages of their training have focused intentionally on certain types (indeed many types) of prompts that the team anticipates. If so, that kind of human intent is of course a central component of LLM training.

An example of what I mean here is the clear fact that ChatGPT is carefully trained never to say "I don't know the answer" (though it can essentially say "that depends", or "I only have information up to 2021"). Instead, it tries to give what information it can, even if it is essentially "blowing smoke." So I asked it why it doesn't say "I don't know", even though it is trained on a dataset where that response must be quite common, and it said:
"You are correct that "I don't know" is a common response to questions that AI models like me might not have enough information to answer accurately. While it is indeed a common response in human communication, I have not been explicitly trained to respond with "I don't know." Instead, my training data and fine-tuning process emphasize providing informative and contextually relevant responses whenever possible."
So clearly we see here an example where ChatGPT is not simply trained to respond based on word frequencies: certain word frequencies are intentionally skewed by the "fine-tuning process" to avoid answers like "I don't know" in favor of trying to provide something useful. If you ask it something that no one knows (like the maximum population of the ancient city of Troy), it will say that it is a matter of debate and tell you a bunch of other things about it. There are many examples where we can see the intentions of the trainers in how ChatGPT responds, so I don't think it is a stretch at all that it has received specialized attention to its training in various specific areas (such as questions about how it operates). ChatGPT agrees: "Yes, it's highly likely that during the fine-tuning process of AI models like mine, the human reviewers and developers placed special emphasis on prompts that involve explaining how the model functions. This is because providing clear and informative responses about how the AI works is important for user understanding and trust." But of course, we don't really know when it is correct; that problem never goes away.
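A toy numerical illustration of "intentionally skewed word frequencies" (my own sketch, not how OpenAI actually implements fine-tuning): apply a penalty to the score of a disfavoured continuation before the softmax and watch its probability drop. In real fine-tuning the model's weights themselves are adjusted rather than a fixed bias being applied, but the effect on the output distribution is similar in spirit.

```python
import numpy as np

candidates = ["Here is what is known...", "I don't know.", "It is debated..."]
logits = np.array([1.0, 1.2, 0.8])   # toy raw scores from the base model

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

print("before bias:", dict(zip(candidates, softmax(logits).round(3))))

# Caricature of human feedback: penalize the disfavoured response's score.
biased = logits.copy()
biased[1] -= 3.0   # push down "I don't know."

print("after bias: ", dict(zip(candidates, softmax(biased).round(3))))
```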
 
  • #206
PeterDonis said:
Yeah, it would seem we're still a long way from Strong AI. I think it was Bill Watterson who made Calvin say: "Scientific discovery goes *BONK*", so who knows....?
 
  • #207
I find this thread an interesting read, and I believe that, although to some people this is obvious, important statements are made here that would calm down the general public regarding the alarmism we are seeing in media.
I'm not sure how AI is used in academia right now, since I'm not within that sphere, but I'm really curious as to why the students were livid in the example given by @Vanadium 50.

Another obvious statement (and excuse me for thinking out loud) is that ChatGPT's total lack of human qualities such as mental representations and awareness of written meaning does not imply that models like this one won't have a huge impact on our society. I believe many people are really impressed with what these models can do and how they can make our lives easier (or more difficult, depending on opinion), but that is a trait of how our society works, what work assignments we have and what we do from 9 to 5, not necessarily an indication of the complexity of the chatbot. It's important to keep these things apart, since assigning awareness to something just because it can imitate us is an example of personal projection and nothing else.

With that said, thanks for the Insight article, Peter, and stay calm, stay human. :smile:
 
  • Like
Likes PeroK
  • #208
Here's an interview with Geoffrey Hinton, where he expresses the view that ChatGPT already "understands" what it's doing.

 
  • #209
Kontilera said:
calm down the general public regarding the alarmism we are seeing in media.
From the media.
Or from certain elements of the AI community, with vested interest.
 
  • Like
Likes Kontilera
  • #210
256bits said:
From the media.
Or from certain elements of the AI community, with vested interest.
Yes, sorry, English is not my mother tongue, but I appreciate corrections when they are done constructively.
 
  • Like
Likes 256bits
