How to simulate feelings and instinct in computers

  • Thread starter Kontilera
In summary, simulating feelings and instincts in computers involves the use of artificial intelligence and machine learning techniques to create algorithms that mimic human emotions and decision-making processes. This can be achieved through the integration of various data sources and programming languages, as well as the implementation of complex neural networks and deep learning models. By incorporating these techniques, computers can learn to recognize patterns and make decisions based on emotional cues, similar to how humans process information and make choices. This technology has the potential to greatly enhance the capabilities of artificial intelligence and improve its ability to interact and communicate with humans.
  • #1
Kontilera
I'm hearing more and more adjectives related to human experiences when describing computer programs, and I wonder why.
Have we really found ways to simulate gut instinct, intuition, legal pathos etc. in computer code?
An example is AlphaDev, where I found this description on the IndiaAI webpage (an organization founded by the government):

" AlphaDev uses a computational hybrid of deliberate thought and gut instinct to make strategic decisions during board games."
https://indiaai.gov.in/article/deepmind-ai-sorts-data-faster-than-humans

I'm wondering:
1. Can they really simulate gut instinct in the software?
2. If not, isn't there a huge risk and responsibility in using words like these? Words that we associate with human values, human experience, and human ways of life?

Since my personal answer to the second question is yes, I'm also wondering:
Who takes responsibility for these kinds of descriptions?

To give this post context I should make it clear that I work with people who have problems with paranoia, anxiety and stress.
People who actually have to pay a price for this kind of sensationalism.
 
  • #2
You are falling prey to popular technology descriptions of new tech. The current wave of AI technology is based on statistical learning, i.e., using large amounts of data to train a model to respond to textual, acoustic or visual input and generate a response. Popular writers prefer the term machine learning because it conveys a sense of a machine that is somehow alive, as opposed to statistical learning, which sounds so mathematical, like estimating the probability of winning some game.

People using AI systems feel they mimic human behavior and think the model is somehow intelligent, but it is simply using a sophisticated form of statistics to get the results it presents. It's not unlike the predictive text that editors use to help speed your typing. The AI software looks at the prior words you wrote and tries to predict the next word you will want to type, based on the statistics it collected from the textual data used to train it.
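As a minimal sketch of that idea, here is a toy bigram model in Python (a deliberately crude illustration of statistical next-word prediction, not how a modern language model is actually built):

Code:
from collections import Counter, defaultdict

# Toy "statistical learning": count which word follows which in a corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("cat"))  # 'sat' -- pure frequency, no understanding
print(predict_next("the"))  # 'cat' ('cat' and 'mat' tie; first seen wins)

The point of the toy is only that the "prediction" is bare frequency counting over the training text; nothing resembling intent or experience is involved.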

What is amazing is that the AI can mimic human responses so well that we believe it is intelligent when it isn't.

ChatGPT, for example, can answer questions very convincingly, but upon further analysis the illusion is broken. It may have provided a totally wrong answer, or provided the correct answer but with bad units of measure or with a totally wrong conceptual description of the problem. In other words, the intelligence vanishes when you look deeper into the responses provided and you realize they are artifacts of the statistical learning.
 
  • Like
Likes PeterDonis, Vanadium 50, russ_watters and 3 others
  • #3
Kontilera said:
I'm hearing more and more adjectives related to human experiences when describing computer programs, and I wonder why.
Have we really found ways to simulate gut instinct, intuition, legal pathos etc. in computer code?
A great deal of the AI that is popular today involves trained neural networks. A neural network is trained by feeding it data that it can analyze, "learn" from, and mimic. All the factors of gut instinct, intuition, and legal pathos (I assume, because I don't know what that is) can be included in the training data and associated results. So the AI system would have those factors built into it.
 
  • #4
jedishrfu said:
You are falling prey to popular technology descriptions of new tech.
Yes, I'm willing to agree. Although I know better, I'm sometimes frustrated with the way things are described in the media and with how apocalyptic many scientists sound based just on beliefs and loose thoughts.
jedishrfu said:
The current wave of AI technology is based on statistical learning, i.e., using large amounts of data to train a model to respond to textual, acoustic or visual input and generate a response. Popular writers prefer the term machine learning because it conveys a sense of a machine that is somehow alive, as opposed to statistical learning, which sounds so mathematical, like estimating the probability of winning some game.

People using AI systems feel they mimic human behavior and think the model is somehow intelligent, but it is simply using a sophisticated form of statistics to get the results it presents. It's not unlike the predictive text that editors use to help speed your typing. The AI software looks at the prior words you wrote and tries to predict the next word you will want to type, based on the statistics it collected from the textual data used to train it.

What is amazing is that the AI can mimic human responses so well that we believe it is intelligent when it isn't.

ChatGPT, for example, can answer questions very convincingly, but upon further analysis the illusion is broken. It may have provided a totally wrong answer, or provided the correct answer but with bad units of measure or with a totally wrong conceptual description of the problem. In other words, the intelligence vanishes when you look deeper into the responses provided and you realize they are artifacts of the statistical learning.
Great explanation. Thanks.
 
  • #5
FactChecker said:
A great deal of the AI that is popular today involves trained neural networks. A neural network is trained by feeding it data that it can analyze, "learn" from, and mimic. All the factors of gut instinct, intuition, and legal pathos (I assume, because I don't know what that is) can be included in the training data and associated results. So the AI system would have those factors built into it.
I don't know about you, but I believe that by saying that something acts based on "gut instinct" you insinuate that there is an intuitive feeling involved somehow, not merely that both logical and intuitive decisions were used in the training of the neural network. Unless there's reason to believe that such a feeling has arisen within the system, my opinion is that this is disinformation.
 
  • #6
Kontilera said:
I don't know about you, but I believe that by saying that something acts based on "gut instinct" you insinuate that there is an intuitive feeling involved somehow, not merely that both logical and intuitive decisions were used in the training of the neural network. Unless there's reason to believe that such a feeling has arisen within the system, my opinion is that this is disinformation.
You asked how to simulate feelings and instinct. If the data used to train a neural network includes the effects of human feelings and instinct, then the results will simulate human feelings and instinct. I'm not sure that there is a way to distinguish the simulated results from human results except by their repeatability, and some randomness could be added to simulate human inconsistent behavior.
 
  • #7
FactChecker said:
If the data used to train a neural network includes the effects of human feelings and instinct, then the results will simulate human feelings and instinct.
I strongly disagree, and I believe this is one of the misconceptions I want to oppose.
If the neural network is trained with the _effects_ of human feelings and instincts, its behaviour will _imitate_ that of a sentient being. However, the software will not "simulate human feelings and instincts" in the sense of having an emotional experience.
 
  • #8
What is intuition? If I woke one morning and felt something bad would happen during the day, and ended up having a car accident, what does it say about me? Do I have some superpowers? Is it a coincidence? Or was the thought of something bad happening to me stressing me so much that I lost my focus while driving?

Assuming we consider this a coincidence, can a machine go berserk, spit out gibberish, and just happen to be right? That is certainly possible. In machine learning, they study the stability of the results. Could an unstable algorithm be considered to be using intuition?
 
  • #9
Kontilera said:
I strongly disagree, and I believe this is one of the misconceptions I want to oppose.
If the neural network is trained with the _effects_ of human feelings and instincts, its behaviour will _imitate_ that of a sentient being.
I think that to make your point rigorously, you will have to be specific and formal about how human "feelings and instinct" work. IMO, the more specific and formal you make it, the more likely it is that a machine can mimic it.
 
  • Like
Likes PeterDonis
  • #10
FactChecker said:
I think that to make your point rigorously, you will have to be specific and formal about how human "feelings and instinct" work. IMO, the more specific and formal you make it, the more likely it is that a machine can mimic it.

Both human beings and chatbots can be regarded as functions that turn input into output. How these functions work internally differs.

A human being is a biological and sentient being in the sense that we have a spectrum of emotions that we need to relate to. Although we are capable of performing a variety of actions, many of them conflict with the experience of being human.

Since this is an intrinsic property of being human it is hard to put into something measurable, although I highly doubt that any serious person disagrees with this distinction, since even simple machines can be made to mimic human behaviours.

Few people would claim that the pocket calculator finds joy in reaching the answer of a multiplication involving two three-digit numbers. Or that the toilet paper holder gets bored by holding the paper for such a long time.

Also, the distinction is one of the reasons why humans and software complement each other, since the mechanics of not weighing your actions against your inner nature saves time, at the cost of being truly creative.

I read your post as the claim that if a neural network is trained with the effects (i.e. the output) of feelings, then it will simulate (i.e. experience) feelings. English isn't my mother tongue, though, so if this is a misunderstanding it is probably my fault.

But nonetheless, this is my standpoint. I think this is common ground for many people but also on the border between science and philosophy.

I might not be the best person to argue for this in English, but if you believe that even the simplest devices have an emotional experience, I think we are too far apart to get anything worked out. If you don't believe that, then the claim that our inventions suddenly have this experience is yours to prove, since it's quite an extraordinary claim.
 
  • #11
Kontilera said:
Few people would claim that the pocket calculator finds joy in reaching the answer of a multiplication involving two three-digit numbers. Or that the toilet paper holder gets bored by holding the paper for such a long time.
But these are not very specific or formal definitions. What is joy or boredom? How do you know a human being is experiencing it?

For example, say you define that a human is feeling pain when someone shouts at him. How do we know? One could say that it is because the human hears the sound, his brain analyzes the signal, and he then reacts by walking away to protect himself against extreme stress he couldn't support. If we witness this, then we know the human felt pain.

If your definition of pain is not more specific, then an engine with a speed limiter can feel pain. The engine can identify the speed it is currently at, it can sense that its upper limit has been reached, and it can react by reducing its power input such that the speed never goes over its maximum allowable value, protecting itself from stresses it couldn't support. You don't even need a computer to do this: This type of setup has been used for over 200 years with a centrifugal governor.
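As a minimal sketch of that point (hypothetical numbers, and software rather than a mechanical governor), the entire "pain response" fits in a few lines:

Code:
# A trivial speed limiter: it "senses" a value and "reacts" to protect
# the machine. Under a purely behavioural definition of pain, this
# already qualifies. MAX_RPM is a made-up red line.
MAX_RPM = 6000

def limiter_step(current_rpm, throttle):
    """Reduce the power input whenever the upper limit is reached."""
    if current_rpm >= MAX_RPM:
        return min(throttle, 0.2)  # back off to avoid damaging stress
    return throttle

print(limiter_step(6500, 0.9))  # 0.2 -- the protective "reaction"
print(limiter_step(3000, 0.9))  # 0.9 -- limit not sensed, no reaction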

I'm not saying an engine can feel pain or make a decision. I'm just saying if your definition of pain is no more specific, then you can say a machine makes a decision or has feelings.
 
  • Like
Likes PeterDonis
  • #12
Kontilera said:
But nonetheless, this is my standpoint. I think this is common ground for many people but also on the border between science and philosophy.
I am talking only about the science. Philosophy is not my concern.
 
  • #13
Kontilera said:
A human being is a biological and sentient being in the sense that we have a spectrum of emotions that we need to relate to. Although we are capable of performing a variety of actions, many of them conflict with the experience of being human.

Since this is an intrinsic property of being human it is hard to put into something measurable, although I highly doubt that any serious person disagrees with this distinction, since even simple machines can be made to mimic human behaviours.
I do disagree. :cool:
I am sure you've heard about the Turing test. Now, let's say we one day manage to build a machine that passes this test; i.e. the way it interacts with you (answers questions etc.) is indistinguishable from that of a human. According to the Turing test, the machine is then a generalized AI.
Now, if you do not think passing the Turing test means that a machine is "truly" intelligent, you need to come up with an alternative "improved" way to define "proper" intelligence. Many people have tried, but it has turned out to be extremely hard.

You also have the problem that you need to be able to explain why you are sure that every human you meet is intelligent. Maybe there are lots of people out there whose brains are wired very differently from yours (and who perhaps would not pass the "improved" test), but since they pass the Turing test there is no way for you to tell.

Arguing that there is some fundamental difference between biological and electronic systems is, in my view, a bit silly. Unless you invoke something like a "soul", there is no reason why a sufficiently powerful computer could not simulate every cell in your brain and virtually exactly reproduce the behaviour of your brain.

As was already stated above, this is more philosophy than science.
 
  • Like
Likes PeroK
  • #14
FactChecker said:
I am talking only about the science. Philosophy is not my concern.
I want to be as honest as possible, and with that said, I think I'm not super interested in either.
I created this thread, partly out of emotion, after listening to the radio where they were once again talking about how the AGI apocalypse might go down, making various bold claims that I felt were completely off.

It is interesting how such topics can be discussed without anybody taking much responsibility for what they are claiming, the article above being one example. As I read the article, they claim that the software has a _perception_ of gut instinct, not that it was trained on data obtained from people using gut instinct.
In this sense, since the author of the article is using the word, I feel it is up to them to supply the definitions etc.

That's also what I was reacting to in your post.
You claim that the software perceives feelings by being trained on data from people having feelings. What do you mean by this? In what way does this make the software sentient?
Can you supply us with a definition that fits with our perception of what a feeling is, with our everyday intuitive understanding of sentient and non-sentient objects?
Surely there is software that performs difficult tasks without being trained on such data. Why doesn't it experience emotions?
Do you think a neuroscientist would agree with your statements?
Many of our feelings are related to neurotransmitters in the brain; how can you be sure that electronic hardware can have the feelings we are talking about?

Again, I am wondering what the basis for using this terminology is. Is there a gratitude command in C++ I don't know about?
I can't supply definitions; I'm asking for them.
 
  • #15
f95toli said:
I do disagree. :cool:
I am sure you've heard about the Turing test. Now, let's say we one day manage to build a machine that passes this test; i.e. the way it interacts with you (answers questions etc.) is indistinguishable from that of a human. According to the Turing test, the machine is then a generalized AI.
Now, if you do not think passing the Turing test means that a machine is "truly" intelligent, you need to come up with an alternative "improved" way to define "proper" intelligence. Many people have tried, but it has turned out to be extremely hard.

You also have the problem that you need to be able to explain why you are sure that every human you meet is intelligent. Maybe there are lots of people out there whose brains are wired very differently from yours (and who perhaps would not pass the "improved" test), but since they pass the Turing test there is no way for you to tell.

Arguing that there is some fundamental difference between biological and electronic systems is, in my view, a bit silly. Unless you invoke something like a "soul", there is no reason why a sufficiently powerful computer could not simulate every cell in your brain and virtually exactly reproduce the behaviour of your brain.

As was already stated above, this is more philosophy than science.
I don't have to prove the difference between biological and electrical systems. I don't claim they couldn't one day be equivalent. What I'm questioning is the usage of psychological terms to describe present-day software.
 
  • #16
"simulated" emotions are not necessarily the real thing and probably would not work like the real thing. But unless there is some test to distinguish the simulation from the real thing, who is to say that they are different? If a neural network is trained so that it behaves the same way as a human in a variety of situations, how can you test the difference? If you can't prove a difference, are you sure that there really is a difference?
I feel like this is getting into areas where I have nothing more to contribute and will leave it to others.
 
  • #17
FactChecker said:
"simulated" emotions are not necessarily the real thing and probably would not work like the real thing. But unless there is some test to distinguish the simulation from the real thing, who is to say that they are different? If a neural network is trained so that it behaves the same way as a human in a variety of situations, how can you test the difference? If you can't prove a difference, are you sure that there really is a difference?
Present-day software can only mimic a tiny fraction of the abilities that humans have. The imitation is poor upon deeper inspection, and simple questions may sometimes give nonsense answers that leave anybody with some kind of conceptual understanding totally perplexed.
Your claim is like saying that just because a pocket calculator from the '60s and an iPhone 13 can both perform simple mathematical calculations, there's no reason to believe that they are written in different languages.

I'm not really sure what you are getting at.

You claim that the software has simulated emotions. What is your support for such a claim?
 
  • #18
I'm sure that, when push comes to shove, you would like some kind of evidence for your claim.
If a relative of yours were in a coma and the doctor said:
- Well, ***** is expected to wake up within the coming days. However, he/she will suffer from occasional headaches after this trauma. If you want, we could take out the brain and replace it with a disc installed with a C++-human interface. Output-wise, I can guarantee that you will notice no difference.

Would you still trust your statement?
 
  • #19
Kontilera said:
I'm sure that, when push comes to shove, you would like some kind of evidence for your claim.
If a relative of yours were in a coma and the doctor said:
- Well, ***** is expected to wake up within the coming days. However, he/she will suffer from occasional headaches after this trauma. If you want, we could take out the brain and replace it with a disc installed with a C++-human interface. Output-wise, I can guarantee that you will notice no difference.

Would you still trust your statement?
And would you, in such a situation, argue that it is of utmost importance for:
1. the doctor to prove the equivalence between the systems, or
2. you to prove that there is a difference between the systems?
 
  • #20
Kontilera said:
Present-day software can only mimic a tiny fraction of the abilities that humans have.
The breadth of situations that AI can handle is an entirely different dimension from whether emotions can be simulated. I think you need to be disciplined and focus on one aspect at a time.
 
  • #21
FactChecker said:
The breadth of situations that AI can handle is an entirely different dimension from whether emotions can be simulated. I think you need to be disciplined and focus on one aspect at a time.
Is it though? We have no evidence that present AI, such as ChatGPT, is sentient, but still people (including professionals within the field) insist on using terminology from our own point of reference, insinuating that they are.

When asked what the basis for such claims is, I've been met with the argument that since they can do what we do, there's no reason to say we are different.

This is nonsense on so many levels.
Not only are we two completely different kinds of systems, me being biological and ChatGPT being electronic, but they can't do what we do; let's acknowledge that. There is no evidence that ChatGPT has any conceptual understanding at all. Quite the contrary: the people on PF who seem well read on the subject insist that it hasn't.

The topic of the thread is how to simulate feelings in bots, since this is constantly claimed to be done. The first question is then, of course, on what basis these claims are made. If such claims shouldn't be regarded as complete rubbish and unnecessary alarmism, some kind of support is needed, more than "look what our algorithm can achieve".
 
  • #22
f95toli said:
Arguing that there is some fundamental difference between biological and electronic systems is, in my view, a bit silly. Unless you invoke something like a "soul", there is no reason why a sufficiently powerful computer could not simulate every cell in your brain and virtually exactly reproduce the behaviour of your brain.
Yes, but I think you need to add biology to that. Data processing is one thing, but biological, hormonal and sensory input may well be a factor in characteristics like emotions and instinct. The human brain is an extraordinarily complex data processor; but, likewise, the inputs are sophisticated and complex.

The other factor is social structures: the knowledge of having been conceived by your parents; and, belonging to one or more social groups. We might very well need something like that to engender consciousness.

That said, no one really knows.
 
  • #23
Kontilera said:
Is it though? We have no evidence that present AI, such as ChatGPT, is sentient, but still people (including professionals within the field) insist on using terminology from our own point of reference, insinuating that they are.

When asked what the basis for such claims is, I've been met with the argument that since they can do what we do, there's no reason to say we are different.
ChatGPT is certainly very different from a human being. It's not religious, and it doesn't deliberately lie (although, it could easily be programmed to do so); it doesn't hate (although, again, it could easily be programmed to hate); and, it doesn't answer questions out of self-interest. It has no individuality.

But, in terms of core data processing, I'm not convinced that its intelligence is completely different from human intelligence.

Kontilera said:
There is no evidence that ChatGPT has any conceptual understanding at all. Quite the contrary: the people on PF who seem well read on the subject insist that it hasn't.
Define "conceptual understanding". We are probably still at the point where I could demonstrate more of what we might mean by "conceptual understanding" about mathematics then ChatGPT. But, we might need an undergraduate maths student to tell the difference. If you picked someone off the street, then I suspect they would struggle to tell who was the amateur mathematician and who was ChatGPT. Of course, some of the subtleties of style might give it away, but in terms of the core answers, I doubt many people could tell us apart.

If we took another example of mountaineering, then explicitly ChatGPT cannot claim any personal experience - so that's a giveaway. But, if you fixed that and allowed it to pretend that it is quoting from personal experience, then again it's not so clear whether the average person could tell us apart. Again, a fellow mountaineer might be able to pick up things in my responses that convince them that I'm the real mountaineer and ChatGPT is pretending.
Kontilera said:
The topic of the thread is how to simulate feelings in bots, since this is constantly claimed to be done. The first question is then, of course, on what basis these claims are made.
I suspect it could be done, but the results might be worrying. All it would need would be an instruction to answer in the persona of, say, a religious person. It could definitely do this, as it can produce text in the style of whatever you ask. I doubt I could tell the difference between a devout Catholic and ChatGPT pretending to be a devout Catholic.
Kontilera said:
If such claims shouldn't be regarded as complete rubbish and unnecessary alarmism, some kind of support is needed, more than "look what our algorithm can achieve".
Ultimately, the biggest threat to humanity is humans ourselves. I don't see AI of its own accord being a threat. But, it could be a powerful weapon in the wrong hands.
 
  • Like
Likes Kontilera
  • #24
PS here's an interview with Geoffrey Hinton (the "Godfather of AI"). He believes that ChatGPT already "understands" what it's doing.

 
  • Informative
Likes jack action
  • #25
How to simulate "gut instinct" in a computer playing a board game:

Rate each move on the likelihood of it leading to a win.
When two or more moves give a very similar chance of success, generate a set of random numbers and assign one, in the order generated, to each of those moves.
Generate a second random number.
Play the move whose assigned random number is nearest the second one.
(Or just choose the second-best move once every five attempts.)
State the percentage success of each move, but display the text: "Although there is not much difference between the success of each of these moves, my gut instinct says use this move."

A naive player will be impressed, especially if the move does lead to the computer winning.

Will it fool you into believing it was gut instinct?

It took me two or three seconds to work out how to give the impression it used gut instinct.
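For what it's worth, here is that recipe as a Python sketch (the win percentages and the tie_margin threshold are made up for illustration):

Code:
import random

def gut_instinct_move(move_scores, tie_margin=0.02):
    """Pick among near-tied moves at random, then dress it up as intuition.

    move_scores: dict mapping move -> estimated probability of winning.
    tie_margin: how close to the best a move must be to count as a tie.
    """
    best = max(move_scores.values())
    candidates = [m for m, p in move_scores.items() if best - p <= tie_margin]
    choice = random.choice(candidates)  # this is the whole "instinct"
    print("Although there is not much difference between the success of "
          f"each of these moves, my gut instinct says use {choice}.")
    return choice

gut_instinct_move({"A": 0.70, "B": 0.69, "C": 0.62})  # picks A or B at random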
 
  • #26
A second way to give the impression of using "gut instinct":

Calculate the top three or four best moves with their likelihood of winning.
Then calculate the likelihood of someone finding the moves that would be needed to beat the machine.
E.g.:
Move A: 70% chance of winning, 30% chance that the way to beat the computer would be found.
Move B: 68% chance of winning, 20% chance that the way to beat the computer would be found.
Move C: 67% chance of winning, 10% chance that the way to beat the computer would be found.

Again, display the percentage chances of winning, but choose the move with the greatest difference between the chance of winning and the chance of someone finding the way to beat the computer (without mentioning that!).
Again say "gut instinct says choose move C, even though it is only the third-best move".

The chance of the player finding a way to beat the computer could be related to: the number of moves required to beat it being very high; the refutation requiring that several apparently bad moves are made first (thinking very far ahead); the likelihood that at some point many apparently very good moves are required to beat the computer; or the chance that only one sequence of moves would lead to the computer losing.
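A sketch of this second recipe (again with the made-up numbers from above): score each candidate by the gap between its chance of winning and the chance that the opponent finds the refutation, and quietly pick the largest gap:

Code:
def deceptive_instinct_move(moves):
    """moves: dict mapping move -> (win_chance, chance_opponent_refutes).

    Choose the move with the largest margin between winning and being
    refuted, then attribute the choice to "gut instinct".
    """
    choice = max(moves, key=lambda m: moves[m][0] - moves[m][1])
    for m, (win, refute) in moves.items():
        print(f"Move {m}: {win:.0%} chance of winning")
    print(f"Gut instinct says choose move {choice}, "
          "even though it may not be the best.")
    return choice

# The numbers from the example above; C wins on margin (67% - 10%):
deceptive_instinct_move({"A": (0.70, 0.30),
                         "B": (0.68, 0.20),
                         "C": (0.67, 0.10)})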
 
  • #27
I believe this thread has served its purpose, albeit without offering any in-depth discussion.
It seems to me that post #2, by jedishrfu, is a good summary.

There doesn't seem to be any consensus that any software today exhibits human-like qualities such as conceptual understanding, feelings, gut instinct, consciousness etc.

Furthermore, there don't even seem to be any generally accepted scientific definitions of the concepts that are used every day to describe the software. We are therefore seeing a development where science is mixed with pseudoscience, or with the downright misleading.

If the only argument for using these big words is that "our software (which is trained to mimic) must be sentient because it behaves like a human", then the industry (in my view) suffers from a tendency to overestimate its own achievements.

While I am not opposed to the theoretical possibility of programming such software, I am a strong proponent of requiring stronger evidence before these qualities are attributed to the software.

Unequivocal evidence of such qualities would not only be of immense scientific magnitude, but would, in my view, also require a jurisdiction adapted to these new entities, just as we humans try to protect ourselves and animals from wrongdoing with the help of the law. Perhaps this one-sidedness is a sign of hypocrisy within the industry, because many like to trumpet words that bring to mind human sensory experiences, while few advocate laws regarding how these programs should be handled and developed from a moral and ethical perspective - a discussion that would possibly slow down the industry and its development.

Having said that, the choice is between either declaring the claims baseless, probably lies, or declaring the industry's developers ruthless, with no trace of moral scruples.
 
  • #28
A proposal for testing conceptual understanding would be to train models with data that excludes a specific concept, let's say the steam engine. If there is no trace of the words "steam engine" in the training data (and no clues as to what such a thing might be), but the model can still supply an educated guess about what it might be, then I would be more willing to consider the quality of conceptual understanding.
A person reading about pressure, the phases of water and engines should be able to offer some kind of guess as to what such an invention could look like.
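One way to set up such a holdout, sketched below with a hypothetical corpus and term list, is to drop every training document that mentions the concept before training, and then probe the trained model with questions about it:

Code:
# Concept holdout: remove any training document that mentions the
# held-out concept. The corpus and term list here are made up.
HELD_OUT_TERMS = {"steam engine", "steam locomotive"}

def exclude_concept(documents, terms=HELD_OUT_TERMS):
    """Keep only documents with no trace of the held-out concept."""
    return [doc for doc in documents
            if not any(t in doc.lower() for t in terms)]

corpus = [
    "Boiling water produces steam, which exerts pressure.",
    "A steam engine converts that pressure into mechanical work.",  # dropped
    "An engine converts some form of energy into motion.",
]
print(exclude_concept(corpus))

Note that the filter deliberately keeps the documents about steam, pressure and engines separately; the test is whether the model can combine them on its own.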
 
  • #29
Kontilera said:
A proposal for testing conceptual understanding would be to train models with data that excludes a specific concept, let's say the steam engine. If there is no trace of the words "steam engine" in the training data (and no clues as to what such a thing might be), but the model can still supply an educated guess about what it might be, then I would be more willing to consider the quality of conceptual understanding.
A person reading about pressure, the phases of water and engines should be able to offer some kind of guess as to what such an invention could look like.
I think that AI has gone well beyond that. ChatGPT can now take the requirements for a computer program controlling simple devices and generate a fairly good program along with instructions for wiring the device. The code does benefit from some tweaking by a human.
See this.
 
  • #30
PeroK said:
PS here's an interview with Geoffrey Hinton (the "Godfather of AI"). He believes that ChatGPT already "understands" what it's doing.


I was thinking a little bit more about his statement and I think I disagree with it. The neural networks (that run now) "reason" but I don't think they "understand".

I can see a difference between "deducing or coming to a conclusion" and "grasping a concept, being aware of a meaning or the intent of".
 
  • Like
Likes Kontilera
  • #31
FactChecker said:
I think that AI has gone well beyond that. ChatGPT can now take the requirements for a computer program controlling simple devices and generate a fairly good program along with instructions for wiring the device. The code does benefit from some tweaking by a human.
See this.
You are missing the point.
Generating code for programs is a prime example of something that can be done by training the software on relative word frequencies, sentence construction and so on. As long as there is a huge amount of programming code in the training data, chances are you'll get something useful, maybe in need of minor corrections. My opening post was about an example of AI for optimizing computer code. To test conceptual understanding, I argue that the software has to reach outside of the training data, to exclude mimicry. For example, train the software on data from which prime-searching algorithms (and similar programs) have been excluded. If the software has a conceptual understanding of the code it writes, it should be able to make decent attempts at such programs without being trained on them. Without the use of stochastic processes.
Just as we expect from our high school students.
 
  • #32
Kontilera said:
You are missing the point.
Generating code for programs is a prime example of something that can be done by training the software on relative word frequencies, sentence construction and so on. As long as there is a huge amount of programming code in the training data, chances are you'll get something useful, maybe in need of minor corrections. My opening post was about an example of AI for optimizing computer code. To test conceptual understanding, I argue that the software has to reach outside of the training data, to exclude mimicry. For example, train the software on data from which prime-searching algorithms (and similar programs) have been excluded. If the software has a conceptual understanding of the code it writes, it should be able to make decent attempts at such programs without being trained on them. Without the use of stochastic processes.
Just as we expect from our high school students.
You seem to think that combining the concepts of "steam" and "engine" would take a large leap of imagination beyond their individual properties. Maybe so, but I am not so sure.
 
  • #33
FactChecker said:
You seem to think that combining the concepts of "steam" and "engine" would take a large leap of imagination beyond their individual properties. Maybe so, but I am not so sure.
I argue that in order to combine the concepts of steam and engine, and thus sketch the idea of a steam engine, you'll need a conceptual understanding of the two words. That is, if you've never heard of the concept of a steam engine before.

In order to prevent the illusion of understanding by mimicking, we subtract the data describing the concept (or anything too similar) from the training set.
 

FAQ: How to simulate feelings and instinct in computers

How can feelings and instincts be simulated in computers?

Feelings and instincts can be simulated in computers through the use of complex algorithms that mimic human emotions and behaviors. This can involve creating decision-making processes based on predefined rules and learning from past experiences.
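As a toy illustration of the "predefined rules" part of that answer (the rules and thresholds below are entirely made up, not any production system), a simulated "emotion" can be nothing more than a state variable updated by rules:

Code:
# A rule-based "emotion" as a plain state variable. The point of the toy
# is that the simulation is just bookkeeping, not experience.
class SimulatedMood:
    def __init__(self):
        self.frustration = 0.0  # 0 = calm, 1 = maximally "frustrated"

    def observe(self, event):
        if event == "goal_blocked":
            self.frustration = min(1.0, self.frustration + 0.3)
        elif event == "goal_reached":
            self.frustration = max(0.0, self.frustration - 0.5)

    def respond(self):
        return "Terse reply." if self.frustration > 0.6 else "Friendly reply."

mood = SimulatedMood()
for event in ["goal_blocked", "goal_blocked", "goal_blocked"]:
    mood.observe(event)
print(mood.respond())  # "Terse reply." -- rule-following, not feeling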

What are some common techniques used to simulate feelings and instincts in computers?

Common techniques include artificial neural networks, machine learning algorithms, and fuzzy logic systems. These methods allow computers to analyze data, recognize patterns, and make decisions based on the information available to them.

Can computers truly experience feelings and instincts like humans do?

While computers can simulate feelings and instincts to a certain extent, they do not experience emotions in the same way humans do. Computers lack consciousness and self-awareness, which are essential components of true emotional experiences.

How can the accuracy of simulated feelings and instincts in computers be measured?

The accuracy of simulated feelings and instincts in computers can be measured through various metrics, such as the success rate of decision-making processes, the ability to adapt to new situations, and the consistency of behaviors over time. Evaluating these factors can help determine the effectiveness of the simulation.

What are the potential applications of simulating feelings and instincts in computers?

Simulating feelings and instincts in computers can have various applications, including improving artificial intelligence systems, enhancing human-computer interactions, and developing more advanced robotics. These simulations can also be used in fields such as healthcare, finance, and entertainment to create more intelligent and responsive systems.
