Fearing AI: Possibility of Sentient Self-Autonomous Robots

In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #386
Demystifier said:
artificial intelligence is not developed through the autonomous survival of the fittest.
The specific ideas are accepted or rejected through the AI's self-play and simulations. But I think you are talking about the AI as a whole, and in that sense I agree. The comparison with the intelligent nerd is apt, and I think rather important. A person's ability to succeed in life has more to do with social skills than the kind of intelligence that is valued in schools. AI will force us to re-evaluate what we value in ourselves.
 
  • Like
Likes Demystifier
  • #388
I see no reason to expect AI to share any similarities with humans other than raw intelligence. It won't spontaneously develop complex emotions, desires, fears, or a strong survival instinct. If it's only programmed to be intelligent, it'll be intelligent and nothing else. Although, if it's given important responsibilities and the capability to act autonomously, it will have to be carefully programmed so that it doesn't decide that the best way to carry out its programming involves undesirable consequences, like wiping out half the population.

But many humans have decided wiping out half the population is the right way to achieve their goals, so maybe having AI make those decisions isn't all that much of a downgrade.
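To make the "carefully programmed" point concrete, here is a toy sketch (entirely hypothetical, not modeled on any real system): an agent that blindly maximizes a single objective score will happily select a catastrophic action unless unacceptable outcomes are explicitly filtered out.

Python:
# Toy illustration (hypothetical, not any real AI system): an agent that
# maximizes a single objective will pick a catastrophic action unless
# undesirable outcomes are explicitly ruled out.

actions = {
    # action: (objective score, acceptable?)
    "improve crop yields": (0.6, True),
    "ration food fairly": (0.5, True),
    "wipe out half the population": (0.9, False),  # "best" by raw score alone
}

def naive_agent():
    # Pure maximization: ignores acceptability entirely.
    return max(actions, key=lambda a: actions[a][0])

def constrained_agent():
    # Same objective, but forbidden outcomes are filtered out first.
    allowed = {a: score for a, (score, ok) in actions.items() if ok}
    return max(allowed, key=allowed.get)

print(naive_agent())        # wipe out half the population
print(constrained_agent())  # improve crop yields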
 
  • #389
sbrothy said:
"Uncorruptible AI" kinda reminds me of the phrase "Unsinkable ship". As in Titanic.
The owners of the White Star Line, which operated RMS Titanic, advertised and popularized an outright lie. Even if the internal vertical bulkheads had properly sealed each section, the failure of the wrought-iron exterior rivets exposed entire compartments to seawater influx.

However, point taken. Let me clarify that I intended incorruptible (sorry for the original typo) in the sense of 'self-correcting' and not (easily) perverted or bribed. I was considering pilotless airplanes and spaceships in this context, where central code should not be modified on the fly yet must adapt to changing conditions.
 
  • #390
Klystron said:
The owners of the White Star Line, which operated RMS Titanic, advertised and popularized an outright lie. Even if the internal vertical bulkheads had properly sealed each section, the failure of the wrought-iron exterior rivets exposed entire compartments to seawater influx.

However, point taken. Let me clarify that I intended incorruptible (sorry for the original typo) in the sense of 'self-correcting' and not (easily) perverted or bribed. I was considering pilotless airplanes and spaceships in this context, where central code should not be modified on the fly yet must adapt to changing conditions.
I'm afraid you may have taken my comment more seriously than intended. But let's be serious for a second, then:

What I worry about, especially in the near future, is that the children of tomorrow will be unable to recognize agency when taking a phone call or socializing online.

In the near future, someone interacting with you, where the lights seem to be on but in reality nobody's home, may look like strong AI but in reality be a "zombie" designed, or intentionally "parameterized" [sorry], to push your buttons specifically.

I'm too old to understand it. I'm sure tomorrow's children will meander just fine through what looks to me like a sociological minefield. That I'm glad it isn't me doing the "meandering" is probably only natural.

The next 50 years will be exciting for sure...
 
  • Like
Likes russ_watters and Klystron
  • #391
Algr said:
The AIs will have had (or tapped into) personal conversations with just about every voter, and will be able to gauge political motivation from conversations that seem to have absolutely nothing to do with politics. The AI will also be quite skilled at planting ideas in a voter's head, and making them think that they were the ones who arrived at some special insight.
I would argue that any AI will first need to show the slightest sign of consciousness before it can do any of the work you describe.

Or it will simply be a tool in some conscious user's hands, which it already is. Now, I have had the experience of AI fans getting very upset at me for daring to say this, and I never really say it to piss someone off, but rather just to state an obvious fact.
Currently we still don't have a clear understanding of the "neural correlates of consciousness": how the known brain regions come together to form a subjective mind that can attach reason and meaning to the continuous, huge stream of raw information entering our brain through the senses. I would think we might first want to fully probe our own workings before we can devise a plan that leads us to an artificial one.
That being said, I do leave open the option that we might arrive at AGI by random happenstance, because one can hit the target even by shooting in the dark.

Make no mistake: intelligence is easy, consciousness is not.
We understand intelligence rather well, we have definitions for it, and we can measure it in IQ points and put it in a scope/range, etc. We have very little scientific or theoretical understanding of what consciousness is or how to define it properly.

I mean, we have cracked protein folding with AI, which is a very complicated problem of intelligence, but subjective awareness with meaning seems to be something on a whole different level.

I do feel cracking consciousness will be similar to making nuclear fusion practical: we have tried the latter for some 70 years now without much success, and, mind you, with fusion we at least know the theory 100%, which is some 90% more than we know about consciousness...
It is often said that fusion is simply an engineering problem, and rightly so, because we do have the theory worked out for it.
But for conscious subjective awareness we don't even have a decent theory. How can we then jump to conclusions about which AI will take over which country, or do what?

It seems to me we are getting far ahead of ourselves.
Over the years of reading on this topic, I have noticed that ideas of what consciousness is have also changed with the times. In the second half of the 20th century, many researchers simply assumed that consciousness is an emergent property of what can be labeled a complex biological computation taking place within the brain. Even now many still think that way, and if I may say so, I feel this will eventually be proven not to be the case.
My simple reason for thinking so is that we have now had plenty of complex computer architectures and complex software running on them, and nowhere has that shown even the slightest sign of subjective awareness. It seems to me you need more than complex arithmetic, algorithms, and neural networks to produce a self-aware subjective mind. Or maybe I'm wrong; only time will tell.
But that's a whole different topic.
 
Last edited:
  • #392
artis said:
I would argue that any AI will first need to show the slightest sign of consciousness before it can do any of the work you describe.

Or it will simply be a tool in some conscious user's hands, which it already is. Now, I have had the experience of AI fans getting very upset at me for daring to say this, and I never really say it to piss someone off, but rather just to state an obvious fact.
Currently we still don't have a clear understanding of the "neural correlates of consciousness": how the known brain regions come together to form a subjective mind that can attach reason and meaning to the continuous, huge stream of raw information entering our brain through the senses. I would think we might first want to fully probe our own workings before we can devise a plan that leads us to an artificial one.
That being said, I do leave open the option that we might arrive at AGI by random happenstance, because one can hit the target even by shooting in the dark.

Make no mistake: intelligence is easy, consciousness is not.
We understand intelligence rather well, we have definitions for it, and we can measure it in IQ points and put it in a scope/range, etc. We have very little scientific or theoretical understanding of what consciousness is or how to define it properly.

I mean, we have cracked protein folding with AI, which is a very complicated problem of intelligence, but subjective awareness with meaning seems to be something on a whole different level.

I do feel cracking consciousness will be similar to making nuclear fusion practical: we have tried the latter for some 70 years now without much success, and, mind you, with fusion we at least know the theory 100%, which is some 90% more than we know about consciousness...

But that's exactly my point. If you can't tell the difference - if it tells you it's conscious - then how do you test it?

EDIT: But you're right. With fusion and such, at least we know the theory.
 
  • #393
sbrothy said:
I'm afraid you may have taken my comment more seriously than intended. But let's be serious for a second, then:

What I worry about, especially in the near future, is that the children of tomorrow will be unable to recognize agency when taking a phone call or socializing online.

In the near future, someone interacting with you, where the lights seem to be on but in reality nobody's home, may look like strong AI but in reality be a "zombie" designed, or intentionally "parameterized" [sorry], to push your buttons specifically.

I'm too old to understand it. I'm sure tomorrow's children will meander just fine through what looks to me like a sociological minefield. That I'm glad it isn't me doing the "meandering" is probably only natural.

The next 50 years will be exciting for sure...
What actually freaks me out about my little exchange with ChatGPT is that it's already saying "we", not "you". :)
 
  • #394
sbrothy said:
But that's exactly my point. If you can't tell the difference - if it tells you it's conscious - then how do you test it?
Well, that's John Searle's original point with the Chinese room thought experiment:
how do you tell that an entity which can perfectly manage a foreign-language dictionary isn't a native speaker, with all the cultural background, and a conscious one at that?

I think there are examples of how one can tell true conscious subjective awareness apart from a cleverly programmed AI.

Here's one. I think the difference between a true subjective consciousness and a clever AI parroting of it is this: only a truly subjectively conscious entity can make a deliberate mistake.
Because in order to make a deliberate mistake, one has to not only be intelligent but also have personal motivation and reasons, as well as subjective meaning behind them.

Can an autonomous vehicle cross a red light on purpose? Not a chance; it can only cross a red light by mistake. A human, on the other hand, does it because, for example, he wants to get home faster to see his kids, because he values his time and family and doesn't value other drivers on the same level. Each of these things is a whole universe of information, with meaning attached to every bit of it, which together is not only a lot of information but also a very specially and specifically structured form of it. It's essentially a cryptographic labyrinth that seems so simple to us that we can answer it in one sentence when the police officer pulls us over: "Yes, I was speeding / crossing the red light because of this and that." But to an AI, any of this information is nonsensical gibberish.
And the reason is that AI is simply intelligent, and intelligence alone doesn't understand meaning; to it, meaning is just another couple of bytes of information attached to the original information.

But how do you convert those bytes of meaning so that they actually begin to mean something?
That's not as straightforward as compiling a programming language down to machine code...

So I'd say there are plenty of ways to spot a fake consciousness; one of them is to observe the lack of meaning and reason behind specific errors. By the way, we often don't think about it, but just for the fun of it: no known computer as of yet can make a deliberate mistake; all computational mistakes are 100% accidental.

On the other hand, if someone drives under the influence, for example, and crashes, was that actually an accident? I would say not at all; it was a consciously made error.
You can only make errors like that if you understand all the countless complex issues involved. I have never heard of a drunk driver who said they did not know it was wrong or bad; literally every time, the reason was that they either valued their own fun above the safety of others or simply got tired of life and started affording themselves certain "liberties" that they otherwise wouldn't.

Think about this from a computational point of view: how would an AI do it? It would need to translate certain information into a specific form in which it could measure its meaning and its relation to all other information. You can't do it simply with preprogrammed values, because then either every AI would drive drunk (if it could) or no AI would ever drive drunk or cross red lights.
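To illustrate that last point with deliberately crude toy code (my own sketch, not any real vehicle stack): in a rule-based controller the traffic rule is hard-coded, so there is no variable through which a motive like "I want to get home faster" could even enter. The only way such a controller ever crosses a red light is through a perception error.

Python:
import random

# Toy sketch: a hard-coded rule-follower can violate the rule only by
# accident (sensor error), never on purpose -- there is no "motive" input.

def perceive_light(true_color, error_rate=0.01):
    # Imperfect perception: occasionally misclassifies the light.
    if random.random() < error_rate:
        return "green" if true_color == "red" else "red"
    return true_color

def controller(perceived_color):
    # The rule is fixed; nothing here encodes "I'm in a hurry to see my kids".
    return "stop" if perceived_color == "red" else "go"

violations = sum(controller(perceive_light("red")) == "go"
                 for _ in range(10_000))
print(f"Red lights crossed in 10,000 trials: {violations} (all accidental)")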

It's easy to preprogram certain parameters into it, or to make it learn to behave in a certain way by reinforcement learning over certain models, but it is not understood how one would make it such that it does things one way and then one day decides to do them differently, not by mistake but by deliberate action...

It is only when people think about this huge obstacle that they realize how hard subjective self-awareness is.
 
  • #395
To reiterate what I just said: intelligence is easy exactly because it has objectively discernible parameters like weights, numbers, laws, rules, etc.

Subjective consciousness is hard exactly because it has no objectively discernible rules or parameters; it simply takes all incoming information, ascribes a subjective meaning to it, and then goes on to use that subjective meaning to make objective outputs.

Take the case of the red-light crossing: the crossing itself is an objective action, performed by a real individual whose brain made real, measurable outputs to achieve it. What is not measurable or definable is the subjective meaning that made those outputs. Yet that meaning is real: it used brain resources and energy to exist, and real, actual neurons were involved in its functioning. But in and of itself it is not easily parametrized; you cannot simply write code that will execute it, because then it would only execute randomly, and we do know our wrong choices are never really random; they are almost always premeditated.

You can't really randomly rob a house or a liquor store, or be mean to a child, or misbehave in traffic, etc. In terms of consciousness we have done the easy part so far: we have tested and seen the nerve input signals to the brain (visual, auditory, touch, etc.), we know where they go, we know the outputs and what they do, and we can measure brain waves. The hard part is figuring out how those rather mundane signals and frequencies come together to create new information where there was none (create meaning and attach it to certain inputs) and then act upon that meaning to predict and perform certain tasks.
And we do know that brains don't run on software, which makes it even more interesting: somehow it's all hardwired in us by neural connections, which do nonetheless change during life as we get new information and act upon it.
 
Last edited:
  • #396
Stumbled over this one just now. It looks relatively recent and, considering the subject, like an easy read:

Could a Large Language Model be Conscious?

EDIT: Oh "Transcript of a talk given November 28, 2022". Anyway....
 
  • #397
sbrothy said:
In the near future, someone interacting with you, where the lights seem to be on but in reality nobody's home, may look like strong AI but in reality be a "zombie" designed, or intentionally "parameterized" [sorry], to push your buttons specifically.
Surely AI influencers are in our future. Something like this has already been semi-automated in "troll farms": a single individual manages a number of puppet accounts that befriend the marks. An AI influencer will be all of cheaper, more controllable, and more effective.

A software entertainer named Hatsune Miku is a star in Japan and has a worldwide following in the multi-millions. A candidate for the Diet tried to get her endorsement. She even performs sold-out concerts in stadiums, appearing as a "hologram." She isn't paid a salary or an appearance fee. There is no possibility of a scandal or a Taylor Swift-style contract dispute. What's not to like?

AI nudging "friends" will replace the heavy-handed censorship of today, which outrages and alienates the censored. Rock them to sleep gently instead. Get them to love it.
 
Last edited:
  • #398
https://crfm.stanford.edu/2023/03/13/alpaca.html

Stanford says they have a child of the OpenAI system that can be trained for $600. Its performance is comparable to a system that took five million dollars to train. The basic method is to use a big AI to train a little AI. Copies are available for academic research but not for commercial purposes.
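For anyone wondering what "use a big AI to train a little AI" looks like mechanically, here is a minimal distillation sketch with toy numpy models standing in for the language models (my own illustration, not Stanford's actual pipeline): the student is trained only on the teacher's outputs, never on the expensive ground truth.

Python:
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a wide random-feature model fitted to the real task
# (standing in for the expensive system).
X = rng.normal(size=(1000, 10))
y = np.sin(X).sum(axis=1)                    # ground truth, costly to label
H_big = np.tanh(X @ rng.normal(size=(10, 512)))
w_teacher = np.linalg.lstsq(H_big, y, rcond=None)[0]
teacher_out = H_big @ w_teacher              # cheap to query once built

# "Student": a much smaller model trained only on the teacher's outputs.
H_small = np.tanh(X @ rng.normal(size=(10, 32)))
w_student = np.linalg.lstsq(H_small, teacher_out, rcond=None)[0]

print("teacher error vs truth:", np.abs(teacher_out - y).mean())
print("student error vs truth:", np.abs(H_small @ w_student - y).mean())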
 
  • #399
ChatGPT seems conscious to me. What care I the methods it may use?
 
  • Skeptical
  • Wow
  • Like
Likes nuuskur, russ_watters, artis and 2 others
  • #400
artis said:
our wrong choices are never really random; they are almost always premeditated.
So I thought about it and then chose the wrong thing instead of flipping a coin, you say.

Sometimes I literally flip a coin. Other times I don't try to figure it out and just do the first thing that comes into my head, to get it over with and avoid dithering.

There was a bridge player who was asked how he could make difficult decisions so rapidly. He said, I know I can't figure it out so I just do what I feel like doing. (This avoids the other side using a pause as a useful clue.)
 
  • #401
Hornbein said:
ChatGPT seems conscious to me. What care I the methods it may use?
It's still just a somewhat smart search engine. Calling it conscious seems to me to be... generous.
 
  • Like
Likes Structure seeker
  • #402
Hornbein said:
So I thought about it and then chose the wrong thing instead of flipping a coin, you say.

Sometimes I literally flip a coin. Other times I don't try to figure it out and just do the first thing that comes into my head, to get it over with and avoid dithering.

There was a bridge player who was asked how he could make difficult decisions so rapidly. He said, I know I can't figure it out so I just do what I feel like doing. (This avoids the other side using a pause as a useful clue.)
Sure, one can come up with examples of random actions, but I would argue that the vast majority of the choices we make in our lives are more or less premeditated, including the evil ones.

Hornbein said:
ChatGPT seems conscious to me. What care I the methods it may use?
I'm not sure whether you mean this sincerely, but it's wrong either way.

Of course a language model made by humans, adjusted by humans, that uses 100% of the information we humans have ever put online (you yourself included) will sound and feel human, because it exactly copies us.

This, I believe, is the far bigger danger than AI taking over the world: us giving too much credit to a glorified search engine.
I think John Searle was right in the sense that it is far easier to copy consciousness than it is to generate one.
And the copy does seem legit, because it uses the very patterns and information a conscious being uses. The analogy that comes to mind is someone standing in a tunnel, hearing their own echo, and suddenly thinking someone else might be at the other end.
The speed of sound does make it seem like someone else is answering you, as the echo comes with a delay, but in all actuality you are just reflecting on yourself.
 
  • Like
Likes PeterDonis and russ_watters
  • #403
Hornbein said:
ChatGPT seems conscious to me.
Lots of people don't seem conscious to me. So many people seem to be living their lives on autopilot, never seeming to think past conformity and the expectations of others. I have to remind myself to put this down to my own perception: I don't know what is going on in the parts of their lives that they see as important.

On the other hand, there is a trend in some groups to call people they don't like NPCs. That is a term that comes from video games and roleplaying. If you know what the term means, the implications are very disturbing.

If the AI isn't conscious, but we act like it is, the consequences aren't as dire as the reverse mistake.
 
  • Like
Likes russ_watters and Hornbein
  • #404
Hornbein said:
ChatGPT seems conscious to me.
"Seems", yes. I think that's its point.
 
  • Haha
Likes Structure seeker
  • #405
Hornbein said:
ChatGPT seems conscious to me.
Weizenbaum's ELIZA program fooled people in the 1960s into thinking they were talking to an actual human therapist, and Colby's later PARRY program fooled psychiatrists into thinking they were talking to an actual paranoid human. It seemed that way to them. That doesn't mean either program actually understood anything.

ChatGPT is just a souped-up version of the same trick: instead of only being able to simulate one kind of conversational partner, it can simulate any human making authoritative, confident-sounding statements that have no reliable relationship to reality. But the fact that humans who do that are conscious does not mean ChatGPT is conscious.
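For a sense of how little machinery was behind that illusion, here is a toy in the spirit of ELIZA's DOCTOR script (a heavy simplification of mine, not Weizenbaum's actual code): keyword rules plus pronoun reflection, and no understanding anywhere.

Python:
import re

# Pronoun reflection, so "my ideas" comes back as "your ideas".
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules, tried in order; the last one always matches.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r".*", "Please go on."),
]

def reflect(text):
    return " ".join(REFLECT.get(w, w) for w in text.split())

def respond(sentence):
    for pattern, template in RULES:
        m = re.match(pattern, sentence.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?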
 
  • Like
Likes Lord Jestocost, bhobba, Klystron and 2 others
  • #406
PeterDonis said:
Weizenbaum's ELIZA program fooled people in the 1960s into thinking they were talking to an actual human therapist, and Colby's later PARRY program fooled psychiatrists into thinking they were talking to an actual paranoid human. It seemed that way to them. That doesn't mean either program actually understood anything.

ChatGPT is just a souped-up version of the same trick: instead of only being able to simulate one kind of conversational partner, it can simulate any human making authoritative, confident-sounding statements that have no reliable relationship to reality. But the fact that humans who do that are conscious does not mean ChatGPT is conscious.
See "William's Syndrome."
 
  • #407
If a language model can make so many people question whether or not it's conscious, think about what will happen when we master the ability to couple such a model with a realistic-looking artificial human body, with enough movement capability that in simple movements like walking it's almost indistinguishable from an actual human.
 
  • Like
Likes russ_watters
  • #408
artis said:
If a language model can make so many people question whether or not it's conscious, think about what will happen when we master the ability to couple such a model with a realistic-looking artificial human body, with enough movement capability that in simple movements like walking it's almost indistinguishable from an actual human.
Somebody should make a movie about that or something... :woot:
 
  • Haha
Likes bhobba and russ_watters
  • #409
Algr said:
If the AI isn't conscious, but we act like it is, the consequences aren't as dire as the reverse mistake.
What would be the reverse?
The AI is conscious, and acts like it is (or isn't); and we act like it is (or isn't).
The AI isn't conscious, and acts like it is (or isn't); and we act like it is (or isn't).

Of the 8 possibilities, I am unsure which one is the most dire.

The Red Button Stop failsafe, I think, works 100% only in the case of all three "isn't"s.
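The eight cases can be spelled out mechanically; a throwaway enumeration, purely to make the combinatorics in the post above visible:

Python:
from itertools import product

# is conscious / acts conscious / we treat it as conscious
for is_c, acts_c, treated_c in product([False, True], repeat=3):
    note = "  <- the 'three isn't' case" if not any((is_c, acts_c, treated_c)) else ""
    print(f"is: {is_c!s:5}  acts: {acts_c!s:5}  treated: {treated_c!s:5}{note}")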
 
  • #410
Short term, not in the least. Long term (10 years +), no idea; this is moving so fast I have no idea what will eventuate. What I do know is that predictions can be wildly off the mark.

Take driverless cars, which will eventually have a massive impact on society. Ten years ago there were predictions we would be driving them by now. But that is not what happened: while the basic engineering problem is solved, getting them to drive at least as well as a human being has proved a long, hard slog. Progress is being made, but slowly, and I don't see them taking off for at least 10 years. But when the dam breaks, so to speak, watch out: society will dramatically change. Just imagine: no parking (hence no income from parking meters for local government or associated car parks), no real need for car ownership since you hire as needed (Uber will boom), and I can think of many more. And that is just one area; there will be many others. So while I am sanguine short term, long term, watch out.

All I would suggest is that, as far as college is concerned, a general technology-based degree such as Data Science will continue to have a bright future.

A university near me, Bond, does not offer a straight Actuarial degree - you do it with another major (or minor):
https://bond.edu.au/program/bachelor-of-actuarial-science/subjects

I asked about the degree for the son of my physiotherapist, and they STRONGLY recommend the second major be Data Analytics. It is close to the Data Analytics degree, so doing both is easy: you only take 4 extra subjects for the Actuarial degree (Actuaries must do financial mathematics, contingencies, actuarial and financial models, plus stochastic processes, but otherwise the degrees are the same). At present the job market for Actuaries is strong, but they foresee that over time fewer actual Actuaries will be required, and many will move over to Data Analytics. Plus, of course, passing all the actuarial exams is known to be HARD; only the best survive.

https://www.linkedin.com/pulse/actuary-endangered-profession-age-artificial-mahesh-kashyap/

His son decided on a double degree in Systems Engineering and Commerce.

Thanks
Bill
 
Last edited:
  • #411
bhobba said:
Ten years ago there were predictions we would be driving them by now. But that is not what happened.
I would say the same problem exists for conscious AI: most of the people researching it still think that human consciousness is just a complex computation, and therefore they throw more of the same at it...

But more of the same isn't working, and there isn't even a clear scientific theory arguing that the human brain actually works like a complex biochemical computer; at this point it's basically just an assumption.
We do have a good enough view of how the various signals pass into the brain and of what the various brain regions approximately do, but that is clearly not nearly enough to understand why the real-time objective information passing down our nerves can create a subjective capability to reason, observe, experience, and choose whether to even ignore the signals that come down.

I believe this is the biggest problem for self-driving cars: they don't have a "self", because our current software-driven AI hardware has no self. Therefore it cannot reason, nor can it understand meaning. Driving down the road you see an endless stream of objects that, without subjective meaning, are nothing but shapes and forms; as such, they have to be computed and compared against a known database to determine what they are.
When you have to point the computer at every object it should recognize as human and avoid, it becomes time- and resource-consuming. It seems to me that subjectivity, by whatever means it works in our brain, is an absolute must for any system that wants to be not just intelligent but also conscious. Even more, the beauty of subjectivity is that it decreases the need for complex processing resources, because you can then recognize a familiar object from a sneak-peek view of it instead of going through complex geometric algebra to calculate whether the points/pixels in your view constitute a human, a sign, a deer, or whatever.

I base my assumptions on the fact that I get tired far faster solving algebra than driving down the road, seeing humans, and avoiding them. That is almost effortless to me, and for most humans it takes almost no brain power, as one glimpse is enough for a near-100% determination of what it is that you see.
 
  • #412
artis said:
I base my assumptions on the fact that I get tired far faster solving algebra than driving down the road, seeing humans, and avoiding them. That is almost effortless to me, and for most humans it takes almost no brain power, as one glimpse is enough for a near-100% determination of what it is that you see.
None of this is evidence that driving takes much less "brain power" than solving algebra problems. It is only evidence that driving takes much less conscious "brain power" than solving algebra problems. But we have abundant evidence from neuroscience that a huge amount of unconscious brain power underlies everyday activities like driving. You just aren't aware of all the brain power being used because it's unconscious; it takes place below the level of your awareness.

You have that huge pool of unconscious brain power available for things like driving because those activities are similar enough to ones that humans evolved to do (driving is basically a way of getting around from one place to another, something humans have always done) that your brain has evolved a huge amount of functionality that works for it. The problem with "AI" software is that it has only been under development for roughly half a century, whereas humans have evolved as a species for hundreds of thousands of years, and many of our unconscious brain functions (such as identifying "objects" in your visual field--you aren't aware of how your brain does it, it just does it) evolved even before our species did. So "AI" software is, at the very least, hundreds of thousands to millions of years behind the human brain in evolutionary terms.

With regard to solving algebra problems, however, this is something human brains never evolved to do in the first place, so when you try to use your brain to do it, you have to consciously repurpose brain hardware and software that was designed by evolution for very different things. Whereas a computer can just be programmed from a clean sheet of paper for that specific problem. That's why computers can easily beat us at things like that while at the same time being many orders of magnitude worse than us for things our brains evolved to do, like picking objects out of a visual field.

By the way, none of this is evidence that human brains don't do computations either. It's just evidence that human brains are much, much better than computers at specific kinds of computations--the ones human brains evolved to do in real time in order to help the human survive.
 
  • Like
Likes Klystron, russ_watters and bhobba
  • #413
Hornbein said:
ChatGPT seems conscious to me. What care I the methods it may use?
Should the ability to reason be considered a necessary condition for being conscious? If so, then the engine is not conscious. It will always produce an answer, even if the answer is complete nonsense. It's not able to analyse the statements it produces at the level of first-order logic, for instance (and it's not supposed to! That's not a feature of statistical learning).
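To make that concrete, here is a toy of the kind of check a statistical generator does not perform (propositional rather than first-order, which is harder still; entirely my own illustration): brute-force testing whether a set of claims can all be true at once.

Python:
from itertools import product

def consistent(claims, variables):
    # claims: predicates over an assignment dict; consistent if some
    # truth assignment satisfies all of them simultaneously.
    return any(
        all(c(dict(zip(variables, vals))) for c in claims)
        for vals in product([True, False], repeat=len(variables))
    )

# "The cat is on the mat", "if on the mat, then asleep", "not asleep":
claims = [
    lambda v: v["on_mat"],
    lambda v: (not v["on_mat"]) or v["asleep"],
    lambda v: not v["asleep"],
]
print(consistent(claims, ["on_mat", "asleep"]))  # False: jointly contradictory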
 
  • #414
PeterDonis said:
None of this is evidence that driving takes much less "brain power" than solving algebra problems. It is only evidence that driving takes much less conscious "brain power" than solving algebra problems. But we have abundant evidence from neuroscience that a huge amount of unconscious brain power underlies everyday activities like driving. You just aren't aware of all the brain power being used because it's unconscious; it takes place below the level of your awareness.
Well, but there isn't really anything close to "all the brain power" in reserve, because we have known for some time that the human brain idles at close to the energy consumption of its peak performance.
In other words, your brain consumes almost the same energy when you're sleeping as when you're driving, drinking your morning coffee, or solving those "evolution did not evolve us to do math" math problems.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5732842/
Sleep interrupts the connection with the external world, but not the high cerebral metabolic demand. First, brain energy expenditure in non rapid eye movement (NREM) sleep only decreases to ~85% of the waking value, which is higher than the minimal amount of energy required to sustain consciousness [3**]. Second, rapid eye movement (REM) sleep is as expensive as wakefulness and the suspension of thermoregulation during REM sleep is paradoxically associated with increases in brain metabolic heat production and temperature [4].
https://press.princeton.edu/ideas/is-the-human-brain-a-biological-computer

But if we just look at the brain’s power consumption, we must conclude that the human brain is very “green.” The adult human brain runs continuously, whether awake or sleeping, on only about 12 watts of power.

So, roughly 12 watts on average, irrespective of the task you're doing. I can agree with your assessment that math is more complicated for us because we don't have good "architecture" for it in our brains, but interestingly enough, in terms of energy expenditure, driving is just as consuming as doing math.

Anyway, I think we agree that there is something about the unconscious/conscious subjective ability of a human brain to be aware and awake and yet consume almost no extra resources for it, because that is clearly what the data show.
I guess you could say that simple driving is close enough to simply being awake and aware that it doesn't noticeably increase the brain's load, so the brain doesn't tire quickly doing it.
I have noticed myself that I only ever get tired when I have to do specific tasks that require a lot of focusing.
PeterDonis said:
By the way, none of this is evidence that human brains don't do computations either. It's just evidence that human brains are much, much better than computers at specific kinds of computations--the ones human brains evolved to do in real time in order to help the human survive.

Well, this is a tricky argument. While I agree that so far we don't have clear evidence of whether brains do or don't compute their tasks, I would argue that most of the tasks they do, they seem not to compute, and I base my reasoning on the fact that the total energy consumed by walking and watching the surroundings is roughly the same as when you're asleep.
It only really becomes demanding when you sit down to a specific complex task, but awareness itself is effortless, as I'm sure you would agree.
My original point was that this is one of the key differences between our brains and not just current AI but all computers and the software that runs on them: computers require processing for any information that enters them, and that can be clearly monitored by the increase in energy use, whereas our brains take in most of the information sent via the senses while sitting at the same energy level as when we are asleep and very little input is gathered from the senses.

You could argue that this is simply because the brain repurposes the same resources for different tasks as we go along, but can that really explain why the total power consumption and metabolism don't really change?
Because that would imply that either:
1) Almost all tasks in existence place the same energy demand on the brain, which is totally unlike our computers, where energy consumption is directly proportional to the amount of information processed in a given time period; or
2) The brain always works within a very thin margin between min and max energy and is ready for anything you "throw at it", so the energy usage doesn't dramatically differ from task to task.

There seems to be some data for the second argument, although I find it hard to believe, because solving complex problems while awake means putting a large information input on top of an already large one: while you are solving math problems you are still awake and aware, and all the background processes are running. So if the brain were always close to its max capacity, judging by the linearity of its energy consumption, it would seem we should observe a noticeable decrease in our awareness capability when doing complex tasks.
 
Last edited:
  • #415
 
  • Like
Likes bhobba
  • #416
Hornbein said:

We already know that autonomous driving is easier in a very "prepared" environment, like a city with white stripes, clearly visible signs, possibly an uploaded map in the car's memory, etc.; all these guides serve as "rail tracks" for the self-driving car.
Now put it in most small towns around the world, with bad asphalt, no white stripes, and a map that doesn't match actual road conditions, and you might just hit a pedestrian.
Or, as observed more often, you might get the car to drive weirdly as it tries to compute from scratch what to do with the very limited input information it receives.

That being said, I myself fully believe we will solve autonomous driving to the point where it is safer on average than real human driving, but that will happen before we understand consciousness.
In fact, I don't think the robot has to be conscious to be good at certain tasks, even driving; it's just that it most likely won't be as energy-efficient at them as we are.
And in some rare cases it might perform worse than an actual human. Other than that, one can use a driverless Tesla now, and it does the job already.
 
  • Like
Likes bhobba and russ_watters
  • #417
artis said:
we have known for some time that the human brain idles
I don't think we "know" this. There has been research suggesting that we only use a fraction of our available "brain power" most of the time, but there has also been research suggesting otherwise.

artis said:
your brain consumes almost the same energy when you're sleeping as when you're driving, drinking your morning coffee, or solving those "evolution did not evolve us to do math" math problems.
Yes, but you are assuming that sleeping uses much less "brain power" than driving or solving math problems. We don't know that is true either. As I understand it, most experts in the field believe that our brains actually do a lot of processing during sleep--for example, making sure short-term memories formed during the last awake period are stored in long term memory. Similar remarks would apply during waking periods when you're doing something like drinking coffee that doesn't use up a lot of conscious "brain power" the way solving math problems does. But there is still a lot of unconscious processing going on.

artis said:
so far we don't have clear evidence of whether brains do or don't compute their tasks
Before even trying to assess that question, you need to first define what "compute" means. Or more to the point, what it doesn't mean. We already know that individual neurons act like analog computers--they take input signals and process them in a fairly complicated way to produce output signals. Is that not "computation"? If not, why not?
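As one concrete example, here is a leaky integrate-and-fire unit, a standard textbook idealization that is far simpler than a real neuron (my illustration, not a claim about any specific brain circuit): it continuously integrates an analog input and converts its strength into a spike rate.

Python:
# Leaky integrate-and-fire neuron: membrane voltage leaks toward rest
# while integrating input current; crossing threshold emits a spike.
dt, tau, v_thresh, v_reset = 0.1, 10.0, 1.0, 0.0  # ms, ms, arbitrary units

def spike_times(input_current, steps=1000):
    v, spikes = 0.0, []
    for t in range(steps):
        v += (dt / tau) * (-v + input_current)  # analog integration
        if v >= v_thresh:                       # all-or-nothing output
            spikes.append(t * dt)
            v = v_reset
    return spikes

# Stronger input -> higher firing rate: an analog-to-rate computation.
for current in (1.2, 1.5, 2.0):
    print(f"I = {current}: {len(spike_times(current))} spikes in 100 ms")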

artis said:
I would argue that most of the tasks they do, they seem not to compute, and I base my reasoning on the fact that the total energy consumed by walking and watching the surroundings is roughly the same as when you're asleep.
This is not a good argument. See above.

artis said:
awareness itself is effortless
Unconscious awareness is, I agree. I don't agree that conscious awareness is always effortless.

artis said:
can that really explain why the total power consumption and metabolism don't really change?
The standard explanation for this is that the brain's neurons are always firing at basically the same rate. The brain does not work like the CPU in a digital computer, which can reduce its power usage if it is not doing heavy computations. The brain is always "running" at maximum load in terms of neuron firings. The only thing that changes is what the neuron firings are doing, functionally, at a higher level. Conscious attention and conscious "focus" can affect some of that, but much if not most of it is "unconscious" brain activity that goes on much the same no matter what you are consciously doing (or not doing, if you are sleeping, for example).
 
  • #418
Where is a neuroscientist when you need one? Anyway, the internet started out as a way of sharing information, which by all accounts seemed like a great idea. But sharing changed to stealing and cybercrime. AI is much more powerful, and, conscious or not, it will give us many challenging issues to deal with. AI will do what many new innovations have done, that is, create or lead to unintended and unanticipated consequences. In the vernacular, we will be "blind-sided". We will come to realize the true meaning of intelligence.
 
  • Like
Likes bhobba
  • #419
This is an example of another thing we should be concerned about: playing around with AI.

https://www.msn.com/en-us/news/tech...n&cvid=e7b02c4a0ffd426e9f9b97e62d0b20dc&ei=94

OK, it wasn't capable of doing what was asked, but trying to see what it might be able to do without actually knowing is worrisome. On top of that, this little experiment is now on the internet and can/will be incorporated into future AI bot data.

Considering the prowess that AI has in playing games, it would seem we should be careful not to create a situation that an AI might interpret as a game.
 
  • #420
PeterDonis said:
Yes, but you are assuming that sleeping uses much less "brain power" than driving or solving math problems. We don't know that is true either. As I understand it, most experts in the field believe that our brains actually do a lot of processing during sleep--for example, making sure short-term memories formed during the last awake period are stored in long term memory. Similar remarks would apply during waking periods when you're doing something like drinking coffee that doesn't use up a lot of conscious "brain power" the way solving math problems does. But there is still a lot of unconscious processing going on.
I think you misunderstood me here; my point about the almost equal energy use across the 24 hours of brain activity was exactly that we seem to use almost all available power all the time, even during sleep.

PeterDonis said:
Before even trying to assess that question, you need to first define what "compute" means. Or more to the point, what it doesn't mean. We already know that individual neurons act like analog computers--they take input signals and process them in a fairly complicated way to produce output signals. Is that not "computation"? If not, why not?
I agree; from the literature I've read, they seem to fit closest to a form of analog computer with a massively parallel structure.
What I, and it seems many others, am less sure of is whether all complex tasks, including simple awareness itself, are also based on that same type of analog computation. What I'm trying to say is: can a complex analog computation bring about subjective aware experience, aka consciousness, as an emergent property (which seems to be the current prediction)? Because the way I see consciousness, it is first and foremost subjective awareness rather than the ability to solve math riddles.
This is exactly the problem: not how to make analog or digital circuits process intelligent tasks (even if they do them differently than our brain, we still get the mechanism), but how subjectivity arises out of that in a way that seems to live a life of its own.
That, I would argue, is the so-called "hard problem": to understand why a computation, any computation, real-time or otherwise, brings about subjective awareness. Awareness itself is fine; a CCTV system with real-time face recognition is also in a way "aware" when the "MATCH" signal blinks, but it is not so aware as to decide whether it "feels" like arresting someone today or letting it slip. A predator in the jungle is also aware when it sees its prey, and yet I believe it doesn't have subjectivity, because it cannot deny its instinct to survive and kills the prey.

But then you get humans, humans like the scientists at the Pavlovsk Experimental Station in Russia who, during the siege of Leningrad by the Nazi forces, defended the station against locals to keep them from eating the seed collection; they even died of starvation themselves to do it.
https://en.wikipedia.org/wiki/Pavlovsk_Experimental_Station

This ability to subjectively reason against every signal coming into your brain, to the point where you reason yourself to death, is what has always made me wonder.

It's as if a computer knew when and for what reason to shut itself down without ever getting a command to do so.

I say we solve this ability and then we get AGI for sure.
 
