Can Everything be Reduced to Pure Physics?

In summary: I think this claim is realistic. It rests on the assumption that we have a complete understanding of physical reality and that all things can be explained in terms of physical processes, an assumption that seems reasonable given our current understanding. Does our ability to mathematically describe physical things in spacetime give us sufficient grounds to hold this claim? Or is there more to physical reality than a mere ability to mathematically describe things? I don't really know. It is possible that there is more to physical reality than just a description in terms of physical processes.

In which other ways can the Physical world be explained?

  • By Physics alone?

    Votes: 144 48.0%
  • By Religion alone?

    Votes: 8 2.7%
  • By any other discipline?

    Votes: 12 4.0%
  • By Multi-disciplinary efforts?

    Votes: 136 45.3%

  • Total voters
    300
  • #351
Fliption said:
BTW, when we say a zombie "believes", we're talking purely functional behaviour.


Meaning what, exactly? Does the Zombie have any inner life? He sees blue things, remembers blue things, can compare what he sees with what he remembers, can imagine a blue thing he has never seen, and so on. Right? But he just doesn't experience, or miss, blueness.
 
  • #352
Why Should The So-called 'Hard Problem' Hold us back Intellectually?

We are heading in the wrong direction intellectually with this qualia issue. Let's call a truce and start thinking differently. For why should the indescribability of qualia to one another in the public realm hold us back? As I have pointed out many times above, the only fundamental benchmark in the measure and understanding of qualia is whether it fails us in the most important aspect of human existence: COLLECTIVE RESPONSIBILITY. If we fail to collectively look at things, recognise what they are and act upon them in the 'same' or equivalent way, then this would be the most useful way to know that qualia, as part of the human conscious existence, is playing dirty tricks on us...and we may very legitimately declare its presence in our being fundamentally useless.

Yes, it would be fundamentally useless, as it would have nothing significant to contribute to collective existence.

LANGUAGE AND QUALIA. Which language are we talking about? Verbal? Written? What about BODY LANGUAGE? Tell me which scientific discipline has made any attempt to conceive of it, let alone study it? Tell me! Yes, oral or spoken language cannot explain qualia from one person to the next, because qualia are self-explanatory. If you see a red car, just point at it and say 'that is a red car'! If a bystander points at it and says the same thing, 'that is a red car', we ought to accept that they are seeing, recognising and understanding the same thing. Qualia in this sense serve only a 'discriminatory value' to the overall human existence. If qualia fundamentally fail to discriminate between the different states of the physical world, then they are in deep trouble, as they will fail to be a reliable part of a conscious human self.

The job of the eye is to see and discriminate between differing visual states, not to explain them. It must be able to discriminate not only between the visual states that are already known to the perceiver, but also between new visual states that become available as the perceiver goes about his or her daily life. All that spoken or written language does is label things that come through the eyes and all the other sensory organs - the nose, the ears, the tongue, the skin, the memory cells, etc. - and use them in inquisitive, acquisitive and precautionary visual activities, or should I vaguely say conscious activities.

CHANGE AND QUALIA. Qualia, like all other visual states, obey causal and relational laws and, most importantly, they stick to the rules of logic. They rely on, and deteriorate with, visual states or organs. Well, I don't want to go down that route of things not being there when we are not looking at them, not feeling them or simply not experiencing them. I believe that they remain very much there, except that faulty bodily organs just fail to display them. Qualia change according to the corresponding changes in the physical states of the body organs that perceive them. The reliance of qualia on the proper functioning of the visual organs that display them makes qualia an engineering problem, and quite naturally prone to change. The question now is: which type of change? FUNCTIONAL CHANGE or STRUCTURAL CHANGE? So far, we tend to habitually waste a great deal of time concentrating entirely on the functional change of things around us and naively hope that they stay changed, or that they improve the physical states of things for good. But we do know that that simply isn't the case. Structural change is entirely ignored for the usual naive reason of not wishing to interfere with nature. But I am arguing that the configurational relation between qualia and the physical body organs that display them can be improved by structural re-engineering of the entire human reality, or should I say the human physical state. I predict that several aspects of qualia may very well be re-engineered out of place or out of existence, unless they can prove their continual usefulness in the end-state of man.

If Mary came out of the black and white room and was confronted with new visual information...so what? What is the big deal? The inquisitive mode of consciousness treats all knowledge of enquiry as cumulative. In the human realm, all knowledge is classified into (1) useful and (2) non-useful. Mary coming out to see colours other than black and white is just an addition to her stock of knowledge. But the most important question that should concern Mary is this: HOW USEFUL IS THIS NEW KNOWLEDGE TO HER IN THE PUBLIC REALM, WHERE SHE MUST PHYSICALLY SUCCEED IN SURVIVING? That is the fundamental issue at stake here. Whatever happens to this knowledge inside her is irrelevant. The only significant use of this knowledge is that she must be able to discriminate between different colours in the real world that she lives in. As she goes about her daily life, she will continue to come across new visual information that is either obvious and self-explanatory, or is explainable by means of our natural language, or by a combination of both.

SCIENCE AND QUALIA. The attitude and approach of science in dealing with qualia must change. Science must treat qualia as an engineering issue, capable of being altered when the visual organs are physically interfered with at the structural engineering or re-engineering level. The only significant scientific research that is of any use to humans is the structural engineering of bodily parts, and seeing precisely how they affect visual states of all kinds, rather than the analysable components of qualia. This sort of experiment would make a huge difference. Also, it is the duty of science to investigate whether an increase or decrease in the number of visual organs in the human body has any effect on the quality of visual data or visual perception. For all we know, the current physical configuration of man, with the current number of visual organs, may very well be inadequate for climbing to a higher or superior state of being. Science must rescue the human race from total destruction...for the naive claim that we must leave everything to nature is profoundly dangerous, if not wholly suicidal! For me, this amounts to what I habitually call 'DANGEROUS CONTENTMENT'.

QUESTION: Must science explain qualia first before making a genuine attempt to improve the physical state of man? Science of man or science of needs: which one should science pursue?
 
Last edited:
  • #353
selfAdjoint said:
Meaning what, exactly? Does the Zombie have any inner life? He sees blue things, remembers blue things, can compare what he sees with what he remembers, can imagine a blue thing he has never seen, and so on. Right? But he just doesn't experience, or miss, blueness.

No inner life. It means that all the brain functions associated with believing are working but nothing else. Here's the way it was put in the thread I linked.

"Belief here is used strictly in a functional sense, i.e. one's disposition to make certain verbal utterances, and does not refer to any experiential aspect of belief (e.g. the subjective feelings associated with believing something)."
 
  • #354
selfAdjoint said:
As it was previously stated here, the Zombie could not only state that it had p-consciousness, but believe it to be so. We are presumed able to read the mind of a Zombie for purposes of discussion, I guess. If that is so, seeing that our only evidence for p-consciousness at all is introspection, we are all in the position of such a Zombie, and thus p-consciousness becomes an epiphenomenon, which makes no detectable difference in our inner lives, to say nothing of our behavior.
I think an interesting point is implied here... that is, that consciousness is simply a trait of the ability to analyze "inwards", just as we analyze "outwards"...
I've read a paper about consciousness recently, in which a couple of scientists claimed the awareness of action to be simply a matter of degree of intelligence and of our ability to analyze ourselves and our environment.
Self-awareness comes at the point when we intellectually discover that we are thinking and feeling.
 
  • #355
Fliption said:
No inner life. It means that all the brain functions associated with believing are working but nothing else. Here's the way it was put in the thread I linked.

"Belief here is used strictly in a functional sense, i.e. one's disposition to make certain verbal utterances, and does not refer to any experiential aspect of belief (e.g. the subjective feelings associated with believing something)."

Well, in that case it seems that you are using petitio principii to define p-consciousness; that is, you are assuming on the one hand that p-consciousness is the presence of qualia, and on the other hand your definition of zombies as without p-consciousness takes away ALL inner life except belief. So p-c includes the features I mentioned, which have good neurochemical substrates: sensation, memory, imagination, mental comparison.

A materialist zombie would have to have those (sensation, memory of sensation, imagination of sensation, comparison of differently generated sensations) because the brain features that produce them are being actively studied, and if the hard problem means anything, it has to confine itself to the complement of those features.
 
  • #356
selfAdjoint said:
Well, in that case it seems that you are using petitio principii to define p-consciousness; that is, you are assuming on the one hand that p-consciousness is the presence of qualia, and on the other hand your definition of zombies as without p-consciousness takes away ALL inner life except belief. So p-c includes the features I mentioned, which have good neurochemical substrates: sensation, memory, imagination, mental comparison.

I'm not understanding the definitional problem you're pointing out. Perhaps too much is being made of the word "belief"? The point is simply that there is no reason to believe that a zombie with identical A-consciousness to you would behave any differently from you. So if you believe you have P-consciousness, a zombie with identical A-consciousness must also behave as if it has the same belief. To suggest it really "believes" is a stumbling block because it implies an inner life, of which, by definition, there is none. That's why I posted the clarification above that when we say belief, we are talking only about the functional aspects of it. It is probably best that the word not be used at all.
 
  • #357
Egmont said:
You are smart and I think you can understand what I'm trying to say.

If I can summarize our two different positions:
-you say that consciousness is an explanatory concept people have invented to explain the behavior of humans, until we find out in more detail how they really work and can describe their behavior in "simpler" terms, at which point the concept of consciousness becomes irrelevant (in the same way phlogiston is).

-I claim that consciousness is something that exists in my world (and probably in yours too) which has nothing to do with the explanation of the behavior of humans, but which, in itself, needs an explanation, and that the non-behavioral property of consciousness makes that explanation very hard in scientific terms.

After that, we got into the issue of whether we are using the word "consciousness" in the same way.

Would you agree with me then, that whatever _that_ property is, it's something that can't be communicated? And would you also agree that Chalmers and his followers think they are successful at communicating what the "hard problem" is about?

I don't know Chalmers. I will not agree with you that that property is something that cannot be communicated. I hope that it can - even easily - be communicated between two entities who both have consciousness, and who hence, after some thinking, should have the same "problem" and recognize that that is what is being talked about.
However, it cannot be communicated in formal terms (you agreed on that). In order to communicate it, you can only "reach out a helping hand" and hope that it clicks on the other side.

You seem to be close to grasping that the world is full of hard problems, but none of them are about things we can talk about. I hope you give the issue some more thought; you will be glad you did.

I don't know what you are talking about. I only see one hard problem in _that_ category. And as I pointed out, you CAN talk about it. Maybe you can enlighten me.

Now keep this in mind: there is something we can't talk about, but I can't tell you what that something is; you have to figure it out by yourself. Anything I can describe to you in words is not something we can't talk about. I have no way to talk about this thing, but there is a way to see it, and it's possible to guide people so they can also see it for themselves.

I can talk about it; I can try to tell you what it is, but I cannot come up with a formal definition. I can also - and that's the whole problem here - not come up with an OBJECTIVE operational description. But I can come up with a subjective description, assuming that it also fits closely with YOUR subjective experiences. So it is not that I cannot talk about it at all. I only need "a little help from my friends".

I didn't accuse you of anything. All I said was, if I started talking about bewustness with you without fully understanding what you mean by it, we would end up disagreeing at some point. Which is exactly the situation concerning most philosophical issues.

But you DO know what I mean, because at a certain point you said you didn't see the point in defining a new word if it was "consciousness" that I was talking about. A Freudian slip ? :-)

You are absolutely correct, but is "consciousness" one such thing or not? That is, is "consciousness" a concept that is related to "things in the world", or is it not?

We're getting close. I am conscious. So it IS a thing in the world. I *HAVE* subjective experiences. I *DO* feel pain. It is something that exists in MY world. But then - I think we agreed on this - it must be in YOUR world too.

I am really impressed with that statement. Seriously. So you see, we need a lot of concepts which lack formal definitions in order for our communication to be meaningful, but at the same time concepts without a formal definition cannot be subject to scientific study.

Wrong. All the objects of study of all the natural sciences are like that. It is only in mathematics (and in linguistics and law) that concepts have a formal definition, but that is because they don't describe things in the world; they describe formal systems (maths and languages). Take the concept of "electron". I can simply say that it is the particle we accept as the particle of the QED Dirac spinor. I can formally define what we mean by a Dirac spinor in QED, but that doesn't define the "physical" electron. To do that, I'd have to give you lots of descriptions, some of them experimental: they are the particles that come out of a hot cathode in vacuum, and they happen to be the same particles we find in the outer regions of atoms, etc.; they have charge -1, they have a mass (at low energies!) of about 511 keV, etc. But I cannot DEFINE what an electron is, because there will always be instances where my definition will flunk. I do exactly the same with consciousness, except for one thing: I cannot describe OBJECTIVE measurements with instruments dealing with it.
And it is thanks to this lack of formal definition that scientific progress is possible. If we had FORMALLY DEFINED an electron as Thomson could have done, then it would not have been compatible with its quantum mechanical or relativistic description! We still mean the same "thing in the world" as Thomson did by "electron", but its theoretical description has seriously altered.


Doesn't that make you think? Doesn't it sound like science is only true to the extent that it restricts itself to formal logic?

I tried to explain to you exactly the opposite!

I hope we get a chance, one day, to talk about why I think physics doesn't have as much to do with "things in the world" as we usually think. The truths of physics, from my perspective, seem to come from formal logic, not from the nature of reality. But that's a discussion way ahead.

But in that case I think you misunderstood what physics is about! It consists in setting up RELATIONSHIPS between formally defined concepts in theories (the Dirac spinor) and "things out there" (electrons).

So can we take the fact that someone understands our descriptions of consciousness as proof that they are conscious?

Yes, but we're back to the same difficulty. It is not because it APPEARS, from its behavioral point of view, as if someone understands the concept, that he also DOES understand it. A very smart computer program might be generating all that I'm typing here, and as such have no clue as to what it is talking about.

You are correct about that, but as stated it is a problem like any other. Like any scientific problem, it will take time to be worked on, a final, absolute answer will never be found, but there's nothing preventing us from learning a lot more than we currently know.

Ah, something we can agree upon. Only, the way things present themselves, we haven't even started. As I wrote somewhere, interconnecting consciousnesses could be a first step. If it can be done.

That really depends on what you mean by behaviourism. Using a computer to send messages to an internet forum on metaphysics sounds like "behaviour" to me. Granted, mention of behaviour is absent from your description, but the description itself is manifested behaviour of a conscious entity (yourself).

No, absolutely not. Our message exchanges are (to me) absolutely no indication that either of us has consciousness. The only thing that indicates to me that you have consciousness is that you are a human being.

This is what many people don't see. Consciousness is related to behaviour, but in a very abstract way. The more abstract a concept, the harder it is to think about it, and the easier it is to get confused and see problems where they don't exist.

As I pointed out, I don't think that consciousness has much to do with behavior. I even envision the possibility that consciousness IN NO WAY influences our behavior, which is probably dictated by the running of a biochemical computer program. Even our thinking is not influenced by our consciousness. Our consciousness just subjectively observes what our (non-conscious) body is doing and thinking.
I acknowledge that this is an extreme viewpoint, but I consider it an interesting thought that consciousness CANNOT influence the behavior of a human being. It's just there, passively observing what's being done, said and thought. And undergoing feelings.

cheers,
Patrick.
 
  • #358
Fliption said:
I have a particular feature of my existence that I observe. I can then observe that I'm not sure anyone else has this same feature. I can inductively decide they probably do. But the nature of this feature forces me to decide this inductively and it is this nature that results in the inability to reductively understand it. I don't see where definitions change any of this that I've written.

EXACTLY. I think I'm on the same wavelength as Fliption (but he's putting his arguments in a much more professional way :-)

I would like to point out the following reasoning:
<<
a) with concept A we mean such and such.

b) clearly, concept A has property B.

c) now from property B, we can derive a difficult problem

so there's something wrong with the way you define concept A >>

as a wrong way of reasoning.

It is almost as if, in mathematics, you write down a function,

f(x) = ∫₀ˣ sin(t)/t dt

and then you say: yeah, well, there's something wrong with your definition of f(x), because I don't know how to work out the integral!

The fact that a difficult problem follows from some concepts does not mean that the statement of the problem (or the concepts) is wrong.
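[Editorial aside, not part of the original exchange: the integrand sin(t)/t happens to have no elementary antiderivative, yet the function it defines is perfectly well defined and easy to evaluate numerically, which illustrates the point that a hard-to-work-out definition is not thereby a wrong one. A minimal sketch using only the standard library:]

```python
import math

def sinc(t: float) -> float:
    # sin(t)/t, with the removable singularity at t = 0 filled in (the limit is 1)
    return 1.0 if t == 0.0 else math.sin(t) / t

def si(x: float, n: int = 100_000) -> float:
    # Trapezoidal approximation of Si(x) = integral from 0 to x of sin(t)/t dt.
    # No closed form exists, but the value is computable to high accuracy.
    h = x / n
    total = 0.5 * (sinc(0.0) + sinc(x))
    for k in range(1, n):
        total += sinc(k * h)
    return total * h

print(round(si(math.pi), 4))  # Si(pi) ≈ 1.8519
```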

cheers,
Patrick.
 
  • #359
selfAdjoint said:
because the brain features that produce them are being actively studied, and if the hard problem means anything, it has to confine itself to the complement of those features.

I think the ultimate conscious experience is the fact that pain hurts. Pain has a physiological manifestation (neurotransmitters, etc.) and behavioural consequences (trying to avoid it, and screaming if we can't avoid it); but the fact that it HURTS cannot actually be studied (except by ASKING "did it hurt?" and assuming the answer is honest ;-))

For instance, I am pretty convinced that trying to factorise big numbers on my PC causes my PC pain (it gets hot, it takes a long time to answer, everything seems to run slowly, etc.). My PC even regularly reboots in order to avoid it (or I might have a virus). But I don't think my PC FEELS the pain. Although my program prints out that it does, if the number is really big...
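[Editorial aside, not from the original post: the "pain" here is just computational cost. A naive trial-division factoriser, sketched below with an illustrative input, makes the point that the work grows with the square root of the input in the worst case, so big numbers genuinely make the machine grind.]

```python
def slow_factor(n: int) -> list[int]:
    # Naive trial division: worst-case work grows with sqrt(n), which is why
    # factorising really big numbers makes a PC hot and slow (its "pain").
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(slow_factor(1365))  # → [3, 5, 7, 13]
```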

cheers,
Patrick.
 
  • #360
If you stick a pin in a baby, it will respond with behavior, but it can't tell you that it hurts. Nevertheless, because the baby is human, we INFER that it hurts, and say "Nasty man! Stop hurting that baby!". When your PC indicates harm with behavior by getting warm, you don't infer pain, because it is a machine. Maybe you should? After all, it wouldn't be much of a programming job to adapt some natural language program to produce "Ow! That hurts!" from your PC's speakers when it overheats.
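[To make the point concrete, an editorial sketch (names and threshold are invented for illustration) of just how small that programming job would be: purely functional pain-talk, with nothing that feels.]

```python
PAIN_THRESHOLD_C = 85.0  # illustrative overheating threshold

def pain_response(core_temp_c: float) -> str:
    # A mere disposition to utter a complaint: functional "pain", no feeling.
    if core_temp_c > PAIN_THRESHOLD_C:
        return "Ow! That hurts!"
    return ""

for temp in (42.0, 71.5, 93.2):
    utterance = pain_response(temp)
    if utterance:
        print(f"{temp} C: {utterance}")
```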
 
  • #361
selfAdjoint said:
If you stick a pin in a baby, it will respond with behavior, but it can't tell you that it hurts. Neverthelass, because the baby is human, we INFER that it hurts, and say "Nasty man! Stop hurting that baby". When your PC indicates harm with behavior by getting warm, you don't infer pain because it is a machine. Maybe you should? After all it wouldn't be much of a programming job to adapt some natural language program to produce "Ow! That hurts!" from your PC's speakers when it overheats.

Right. That's exactly what the hard problem is all about :-)
In fact, I don't know whether a newborn baby is conscious and feels pain. To be on the safe side, I assume it does (because legally I think I'm in trouble if I act as if it doesn't :cool:). But it might very well not be, and might only slowly turn on its consciousness at, say, 1 or 2 years old. How can we know?

cheers,
patrick.
 
  • #362
Fliption said:
I'm not understanding the definitional problem you're pointing out. Perhaps too much is being made of the word "belief"? The point is simply that there is no reason to believe that a zombie with identical A-consciousness to you would behave any differently from you. So if you believe you have P-consciousness, a zombie with identical A-consciousness must also behave as if it has the same belief. To suggest it really "believes" is a stumbling block because it implies an inner life, of which, by definition, there is none. That's why I posted the clarification above that when we say belief, we are talking only about the functional aspects of it. It is probably best that the word not be used at all.

It is not the issue of belief but the definition of a zombie. Could you say clearly whether a zombie, in your definition, does or does not possess the properties of sensation, memory of particular sensations, imagination of sensations, and the ability to compare remembered, sensed and imagined sensations? I claim that AIs can be programmed to do these things (perhaps poorly, but it's the categories I'm talking about, not the efficiency). If your zombie has some of these but not others, would you indicate which?

Thank you.
 
  • #363
selfAdjoint said:
It is not the issue of belief but the definition of a zombie. Could you say clearly whether a zombie, in your definition, does or does not possess the properties of sensation, memory of particular sensations, imagination of sensations, and the ability to compare remembered, sensed and imagined sensations? I claim that AIs can be programmed to do these things (perhaps poorly, but it's the categories I'm talking about, not the efficiency). If your zombie has some of these but not others, would you indicate which?

Thank you.

I would say it can do the functional aspects of all those things. But it has no experience of doing them.
 
  • #364
vanesch said:
Right. That's exactly what the hard problem is all about :-)
In fact, I don't know whether a newborn baby is conscious and feels pain. To be on the safe side, I assume it does (because legally I think I'm in trouble if I act as if it doesn't :cool:). But it might very well not be, and might only slowly turn on its consciousness at, say, 1 or 2 years old. How can we know?

cheers,
patrick.
Did Homo erectus experience pain? Do chimpanzees experience pain? Do cats? Mice? Lizards? Trees? Mosquitoes?

If you say that they do (never mind why you say they do), does that mean they also possess p-consciousness?

On the operating table, you do not experience pain (unless the anaesthetist fails to do her job!), and you are not conscious of your lack of consciousness. Do you possess p-consciousness? What if you're in a coma?
 
  • #365
Fliption said:
I would say it can do the functional aspects of all those things. But it has no experience of doing them.

Ummm, OK. That leaves me with a problem. For in doing those things, it IS experiencing them in what I would call a reasonably not overspecialized use of the verb "to experience". Perhaps we could agree that it is not AWARE of experiencing them?

But awareness doesn't seem to require unphysical assumptions:

It is not impossible to bring the fact of experience into an AI system as data, and to allow it to be "sensed, imagined, remembered, compared". I recall, a couple of years ago, a plan for self-repairing satellites and rovers that would do just this: monitor their own behavior, compare it with norms, and apply problem-solving algorithms to search the behavior stream to find and repair the cause of any deviance. Yes, this is more or less what our autonomic nervous systems do, but I would argue that, pace your philosophy, there is no sharp line between this kind of thing and perceiving one's feelings.
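[The monitor-compare-repair loop described above can be sketched in a few lines; this is an editorial toy illustration under invented names and thresholds, not the actual satellite design.]

```python
NORM = 50.0       # illustrative expected sensor reading
TOLERANCE = 10.0  # allowed deviation before "self-repair" triggers

def monitor(readings: list[float]) -> list[tuple[int, float, str]]:
    # The system treats its own behavior stream as data: each reading is
    # compared with the norm, and any deviation triggers a repair action.
    log = []
    for i, value in enumerate(readings):
        action = "repair" if abs(value - NORM) > TOLERANCE else "ok"
        log.append((i, value, action))
    return log

print(monitor([48.0, 52.5, 73.0, 49.1]))
```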

Feelings have recently been a hot study area in the fMRI brain-scan field. Quite simple physical processes in the hippocampus have resulted in complex reported feelings.
 
  • #366
selfAdjoint said:
Ummm, OK. That leaves me with a problem. For in doing those things, it IS experiencing them in what I would call a reasonably not overspecialized use of the verb "to experience". Perhaps we could agree that it is not AWARE of experiencing them?

But awareness doesn't seem to require unphysical assumptions:

It is not impossible to bring the fact of experience into an AI system as data, and to allow it to be "sensed, imagined, remembered, compared". I recall, a couple of years ago, a plan for self-repairing satellites and rovers that would do just this: monitor their own behavior, compare it with norms, and apply problem-solving algorithms to search the behavior stream to find and repair the cause of any deviance. Yes, this is more or less what our autonomic nervous systems do, but I would argue that, pace your philosophy, there is no sharp line between this kind of thing and perceiving one's feelings.

Feelings have recently been a hot study area in the fMRI brain-scan field. Quite simple physical processes in the hippocampus have resulted in complex reported feelings.

I think there are some semantic issues with using the words this way. Of course, you can use them however you like, but I don't think using them in this context makes any philosophical issues go away. From what I've seen in discussions in this forum, I think people might reverse your use of the words "awareness" and "experience". For example, I've seen people say that a video camera is aware of the data it receives, but I've never seen the word "experience" used in the same way. Regardless of which word we use, there is a feature that seems to have no functional explanation such as "the hippocampus does x". That is the feature that we're calling P-consciousness.
 
  • #367
selfAdjoint said:
But awareness doesn't seem to require unphysical assumptions:

It is not impossible to bring the fact of experience into an AI system as data, and to allow it to be "sensed, imagined, remembered, compared". I recall, a couple of years ago, a plan for self-repairing satellites and rovers that would do just this: monitor their own behavior, compare it with norms, and apply problem-solving algorithms to search the behavior stream to find and repair the cause of any deviance. Yes, this is more or less what our autonomic nervous systems do, but I would argue that, pace your philosophy, there is no sharp line between this kind of thing and perceiving one's feelings.
In another 'artificial machine' area, such awareness is already alive and flourishing: in modern communications networks, the 'self-healing network' has been extensively researched, standards have been written, and commercial companies sell such systems to large telecom companies, who hire teams of SI experts to tweak them so as to reduce even further the number of human techs needed to monitor and maintain them. Do such systems actually work? Yes, and you bet your life on them every day that you make a 000 (911 in the US) call!
 
  • #368
Nereid said:
Did Homo erectus experience pain? Do chimpanzees experience pain? Do cats? Mice? Lizards? Trees? Mosquitoes?

If you say that they do (never mind why you say they do), does that mean they also possess p-consciousness?

I don't know what is meant by a-consciousness and p-consciousness. I'd only say that *IF* they experience pain, then they are conscious.
And the difficult problem is indeed to find out whether chimps, cats, mice, lizards, trees and mosquitoes feel pain. I'm not talking about behavior that would "indicate to us that they feel pain".

As I said, I don't know these definitions of a- and p-consciousness, but I can guess: it seems from what is said above that "a-consciousness" is just the intelligence of a computer program to dictate behavior "as if" the entity were conscious, and "p-consciousness" is what I simply call consciousness, namely the awareness of it, the subjective experiences. I think p-consciousness (for me, for short, consciousness) doesn't influence behavior, and that a-consciousness is not consciousness but the physical description of the input-response mechanism, be it a computer program, a brain or whatever.

cheers,
Patrick.
 
  • #369
vanesch said:
I think the ultimate conscious experience is the fact that pain hurts. Pain is the physiological manifestation (neurotransmitters etc...) and the behavioural consequences (trying to avoid it, and screaming if we can't avoid it); but the fact that it HURTS cannot actually be studied (except by ASKING "did it hurt?" and assuming the answer is honest ;-)

For instance, I am pretty convinced that trying to factorise big numbers on my PC causes pain to my PC (it gets hot, it takes a long time to answer, everything seems to run slowly etc...). My PC even regularly reboots in order to avoid it (or I might have a virus). But I don't think my PC FEELS the pain, although my program prints out that it does if the number is really big...

cheers,
Patrick.

Correct... the computer probably does not feel any pain. Have you, or any of the learned members of this gathering, thought of any additional ability or abilities, at the engineering level, that could be given to this computer to enable it to feel pain? There is equally another consideration... perhaps pain may not be a requirement of an efficient or perfect state of being. Robots are now being made not only physically flexible but are also being empowered with more abilities that closely resemble those of humans.

The debate tends to move from Mary to the Zombie to computers without anyone being ready to commit him or herself as to what additional abilities are needed to make these different systems structurally and functionally more efficient. And in terms of the human system, there are so many displayed abilities and functions that cannot stand the test of efficiency, let alone be grounded as fundamentally necessary.
 
Last edited:
  • #370
Philocrat said:
Correct...the computer probably does not feel any pain, have you or any of the learned members of this gathering thought of any additional ability or abilities at the engineering level to be given to this computer to enable it to feel pain?


We can think of the following: I take a big metal box (say, 2m by 2m by 2m), in which I put my PC with the original program, but with the display, speakers and keyboard outside of the box. I also bribe one of the doctors of the nearby hospital so that when a hopeless case comes into the emergency room, with a broken spine, paralyzed and without a voice, he quickly does the necessary repairs to the victim and then hands her over to me. I put her in the box, put a few electrodes on her body and connect them to my computer. Now when I ask my computer to factorize a large number, it not only prints out "Aw, that hurt" on my screen, but also connects (through a simple controller card) the mains (220V) to the electrodes on the victim's body, which is conscious. She can't move and can't scream; I don't see her, because she's inside the big box. But I'd say that now, when my "box computer" prints out "Aw, that hurt", it feels pain...

cheers,
Patrick.
 
  • #371
Many contemporary philosophers have already suspected the concept of 'awareness of being aware', or 'self-awareness', to be the essential component of consciousness in general. Those of you who understand computers down to the programming and engineering levels should know that many new generations of computers are already 'environmentally aware'. In fact, on this aspect, many of these computers would outsmart or outfox humans, as far as the notion of safety or avoidance of environmental dangers is concerned. There are now so many sophisticated devices that, if you fit them onto modern computers, would cause these computers to become 'super aware' of their external environments.

The BIG question now is:

What technical difficulties do we have to overcome, both at the detailed hardware engineering level and at the detailed schematic programming level, in order to empower computers with self-awareness?

The issue is no longer about arguing whether computers can think or be conscious. The computer is nearly human! The question should therefore concentrate on what is left to be done to make computers fully human, given that being human is thought to be the benchmark or measure of being alive. For all we know, being human may after all not be the only route to designing superbeings. For it seems as if we are currently thinking that we must first design human-like machines before setting about the important yet well-overdue project of structurally and functionally improving the physical state of those human-like beings. I don't know why we think in this way, but so it seems. Bad habits die hard!
 
Last edited:
  • #372
Philocrat said:
The BIG question now is:

What technical difficulties do we have to overcome, both at the detailed hardware engineering level and at the detailed schematic programming level, in order to empower computers with self-awareness?

The problem still stands: how would you know you've succeeded ?

There's no behavioral way to know. Look at my "computer in a box". The output on the screen (the only behavioral access I have) is identical: it prints out "aw that hurt!". But if the victim is connected to the mains, there is an awareness of pain in my box, and if the victim is not connected, it is a simple C-program line that printed out the message. The computer works in identical ways.
If you now replace that human victim (of whom we can assume that she consciously experiences pain) by a machine, how can we know? The behavior is identical.
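The "identical behavior" point can be made concrete with a toy pair of programs. Both classes below are invented purely for illustration: they produce exactly the same observable output for every input, and nothing in that output reveals the extra internal goings-on of the second one.

```python
# Two "boxes" with identical observable behavior. From the outside
# (the report() string) nothing distinguishes them; whatever extra
# happens inside BoxB is invisible to any behavioral test.
# Both classes are invented for illustration.

class BoxA:
    """Prints the complaint via a simple rule and nothing more."""
    def report(self, n):
        return "aw that hurt!" if n > 10**6 else "ok"

class BoxB:
    """Same outward rule, but something extra happens internally."""
    def __init__(self):
        self.hidden_events = []           # internal record, never output

    def report(self, n):
        if n > 10**6:
            self.hidden_events.append(n)  # the 'hidden' inner occurrence
            return "aw that hurt!"
        return "ok"

a, b = BoxA(), BoxB()
# Every behavioral probe gives the same answer for both boxes:
print(all(a.report(n) == b.report(n) for n in (5, 10**7, 42)))
```

The sketch does not settle anything about consciousness, of course; it only shows why a purely behavioral test cannot separate the two cases.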

cheers,
Patrick.
 
Last edited:
  • #373
vanesch said:
As I pointed out, I don't think that consciousness has much to do with behavior. I even envision the possibility that consciousness IN NO WAY influences our behavior which is probably dictated by the running of a biochemical computer program. Even our thinking is not influenced by our consciousness. Our consciousness just subjectively observes what our (non-conscious) body is doing and thinking.
I acknowledge that this is an extreme viewpoint, but I consider it an interesting thought that consciousness CANNOT influence the behavior of a human being. It's just there passively observing what's being done, said and thought. And undergoes feelings.

Why would a biochemical computer program have self-destruction written into itself?

How would you account for the fact that I pushed my wife out of the way of a car and almost got killed myself? Why would I want to do that? What influenced my choice then?
 
  • #374
Rader said:
How would you account for the fact that, I pushed my wife out of the way of getting hit by a car and almost getting killed myself? Why would I want to do that? What influenced my choice then?

Heroic behavior can be naturally selected for, in that related groups of individuals, of whom some have "heroic behaviour" (running the risk of sacrificing themselves for the well-being of the group), have a survivalistic advantage over a "bunch of cowards". The heroic subject diminishes of course his own chances of getting his genetic material to the next generation, but his relatives will have a higher chance in doing so.
Also, if a heroic subject *survives* his heroic deed, there is often a lot of compensation, and even a survivalistic advantage (success with members of the opposite sex).

What makes you think that this behavior is unthinkable without consciousness ?

But the very behavioural observation of "altruistic self-destruction" cannot be the proof of consciousness.
Dogs do this too. Some security systems do that too. Even a fuse does it, inside electronic equipment. Are fuses conscious ?

cheers,
Patrick.
 
Last edited:
  • #375
vanesch said:
Heroic behavior can be naturally selected for, in that related groups of individuals, of whom some have "heroic behaviour" (running the risk of sacrificing themselves for the well-being of the group), have a survivalistic advantage over a "bunch of cowards". The heroic subject diminishes of course his own chances of getting his genetic material to the next generation, but his relatives will have a higher chance in doing so.
Also, if a heroic subject *survives* his heroic deed, there is often a lot of compensation, and even a survivalistic advantage (success with members of the opposite sex).

I would agree with you that all those factors could be computed in my brain subconsciously, but they were apparently overridden. My subconscious actions had nothing to do with calculations of survival of the human race. What went through my head was anxiety, fear, hate, relief, love, in that order.

It seems we never get past KP to KP4 with this issue of who is conscious. What if we could guess what is in each other's head? Bobby Fischer seemed to; what of his competitors? Why did Deep Blue beat Kasparov? Could anything be conscious, meat or machine?

What makes you think that this behavior is unthinkable without consciousness?

I am aware of being aware; that is one primary reason. The second reason I would give is that I have never seen anyone walking around doing these things who had no consciousness and was dead. I realize I have no proof that anything is either conscious or alive. This could have consequences, as you have stated in your previous post. Maybe you're right: consciousness is observing, but something is aware of being observed. I know that from my own experience. The world is weird enough now without giving consciousness the property of being able to discriminate, whereby only I, or maybe only humans, are conscious.

But the very behavioural observation for "altruistic selfdestruction" cannot be the proof of consciousness.

Nor can you or I know that.

Dogs do this too. Some security systems do that too. Even a fuse does it, inside electronic equipment. Are fuses conscious?

You know, by your posts you seem very interested in, and educated enough to answer, your last question and this one. Is not the basic difference between measurement of coherent states and non-coherent states the observer? HUMANS, DOGS and FUSES show the same results only if we can determine whether they observe. Does it not come down to the fact that all electromagnetic waves observe each other?
 
  • #376
Rader said:
My subconscious actions had nothing to do with calculations of survival of the human race. What went through my head was anxiety fear hate relief love, in that order.

You misunderstood my point. If there is natural selection for a certain behavior, then that behavior is not necessarily instilled with a conscious thought of "I have to optimize my natural selection" :-) You asked how it could be that you showed altruistic heroic behavior if it weren't for a conscious decision (against all odds) to act that way. I pointed out that your "biochemistry computer" could have been programmed to behave that way by natural selection, and that such behavior is no proof of consciousness.
There are now 2 possibilities left: one is that (as I propose) your "biochemistry computer" runs its unconscious program as any other computer, and your consciousness is just passively watching and having feelings associated with it, without the possibility of intervening. The other possibility is that your consciousness is "in charge" of your brain, and influences behavior.

cheers,
Patrick.
 
  • #377
vanesch said:
You misunderstood my point. If there is natural selection for a certain behavior, then that behavior is not necessarily instilled with a conscious thought of "I have to optimize my natural selection" :-) You asked how it could be that you showed altruistic heroic behavior if it weren't for a conscious decision (against all odds) to act that way. I pointed out that your "biochemistry computer" could have been programmed to behave that way by natural selection, and that such behavior is no proof of consciousness.

I think I understand you correctly, but do you understand me? I could have let the car run over her whether I was programmed for this trait or not. I chose not to. If I were getting a divorce, maybe I would have had second thoughts about it and let the car run over her. Now do you understand my point? That takes a conscious thought.

There are now 2 possibilities left: one is that (as I propose) your "biochemistry computer" runs its unconscious program as any other computer, and your consciousness is just passively watching and having feelings associated with it, without the possibility of intervening. The other possibility is that your consciousness is "in charge" of your brain, and influences behavior.

01 - The world would be totally deterministic and there would be no choice. You're claiming, then, that a "biochemistry computer", which I take to mean the "brain parts", would cause consciousness, while consciousness, once produced, looks on. This would be a classical explanation; and what if the "biochemistry computer" was quantum in nature?

02 - If your consciousness is "in charge" of your brain and influences behavior, then all behavior would be totally deterministic only if there were a classical explanation of the brain. If the brain were quantum in nature, then it would seem more understandable why we make choices.
 
  • #378
Rader said:
I think I understand you correctly, but do you understand me? I could have let the car run over her whether I was programmed for this trait or not. I chose not to. If I were getting a divorce, maybe I would have had second thoughts about it and let the car run over her. Now do you understand my point? That takes a conscious thought.

What is a conscious thought? All of the brain and biochemical activity required for you to "think" about this decision can, in principle, be completely accounted for. None of these activities have anything to do with consciousness. What Vanesch is saying is that there is no way for you to know whether your consciousness is actually participating in the process or whether it is just experiencing the physical activities that participate in the process. The "conscious thought" you're referencing can be completely explained using physical processes of the brain, none of which are associated with consciousness. This is why there is a 'hard problem'.
 
  • #379
vanesch said:
We can think of the following: I take a big metal box (say, 2m by 2m by 2m), in which I put my PC with the original program, but with the display, speakers and keyboard outside of the box. I also bribe one of the doctors of the nearby hospital so that when a hopeless case comes into the emergency room, with a broken spine, paralyzed and without a voice, he quickly does the necessary repairs to the victim and then hands her over to me. I put her in the box, put a few electrodes on her body and connect them to my computer. Now when I ask my computer to factorize a large number, it not only prints out "Aw, that hurt" on my screen, but also connects (through a simple controller card) the mains (220V) to the electrodes on the victim's body, which is conscious. She can't move and can't scream; I don't see her, because she's inside the big box. But I'd say that now, when my "box computer" prints out "Aw, that hurt", it feels pain...

cheers,
Patrick.

The scenario that you are describing here may very well reflect the current state of our progress at the design, engineering and programming levels. True, this may very well be so, but it still doesn't alter the fact that we need to clearly state and classify the notions of (1) intelligence, (2) thinking and (3) consciousness. For example, given that we knew what (1), (2) or (3) clearly means, we would need to take stock of all the things that humans can do that computers cannot do and vice versa, and of the things that both can equally do, under (1), (2) or (3). All that I have seen so far is people arguing away in a point-scoring manner without much attention to these questions. This problem is captured much more clearly in my next posting below.

The state that you are describing is admittedly problematic, but I am saying that we need to move away from this level of sentiment and take hard stock of what is going on at the detailed engineering and programming levels. As to the puzzle of why we want to replicate human-like intelligence, thinking or consciousness in machines first, before thinking about any other form of progress in the subject, well, that's another matter. I leave that to your imagination.
 
  • #380
vanesch said:
The problem still stands: how would you know you've succeeded ?

There's no behavioral way to know. Look at my "computer in a box". The output on the screen (the only behavioral access I have) is identical: it prints out "aw that hurt!". But if the victim is connected to the mains, there is an awareness of pain in my box, and if the victim is not connected, it is a simple C-program line that printed out the message. The computer works in identical ways.
If you now replace that human victim (of whom we can assume that she consciously experiences pain) by a machine, how can we know? The behavior is identical.

cheers,
Patrick.

How could we not know? Yes, I agree with you that behaviourism has some drawbacks, but it never completely undermines successful existence. As humans, we are naturally lazy and reluctant about taking control of things on our causal and relational pathways. The claim that we cannot intervene in our own nature and make an effort to re-engineer and improve our state of being is not only wrong but fundamentally dangerous. We do know, and have always known, when we succeed in the public realm, even behaviourally. If we could not do this, we would probably not be here today. Perhaps the measure is only minimal, or a matter of degree, but at least we are still here. By the same token, when we do succeed in replicating human-like intelligence in other, non-human systems, I personally see nothing that would stop us from knowing it. In fact, this is all the more reason why we must have the courage to take control and use the right and clear approach in dealing with this issue.
 
Last edited:
  • #381
The Turing Universal Machine and Consciousness

The dispute is not, and has never been, about whether a machine can think or act intelligently, because the original Turing Machine had all the necessary ingredients to do so. Rather, it is wholly about whether thinking or acting intelligently is a conscious act. The notion of awareness (introspective or extrospective) ought already to have been captured by the notion of thinking or intelligence, given that we knew what this meant in the first place. I am saying that it is more than well overdue for all the interdisciplinary researchers to commence the process of schematically, yet quite naturally, coming to a concrete agreement on this subject. The agreement that I am referring to here could be captured in the following schema:

SCHEMA I

(1) A conscious act is an intelligent act
(2) All intelligent acts are conscious acts
(3) Anything that can produce an intelligent act is conscious
(4) A computer can produce intelligent acts
-------------------------------------------------------------------------------
Therefore, a computer is conscious

Immediately after this argument, the next most important question to ask is this:

What then constitutes an intelligent act?

In an honest and genuine response to this question, the researchers on this subject should then move on to create a ‘reference table’ of all the things that count as intelligent acts.

This argument may equivalently be stated as:

(1) A conscious act is an act of thinking
(2) All acts of thinking are conscious acts
(3) Anything that can think is conscious
(4) A computer can think
-------------------------------------------------------------------------------------
Therefore, a computer is conscious

You are then required to state clearly:

What constitutes thinking?

The researcher must then create a reference table of all the things classed under thinking.

SCHEMA II

On the other hand, if it turns out that there are some thinking or intelligent acts that are conscious and some that are not, the schema should take the form:


(1) Some acts of thinking are conscious acts
(2) Thinking is conscious if you are aware not only of what you are thinking about but also of the fact that you are thinking
(3) Anything that can do this is conscious
(4) A computer has some thinking acts that are conscious
-------------------------------------------------------------------------------------
Therefore, a computer is conscious

The researchers who opt for this alternative schema must classify thinking or intelligent acts into (1) those that are conscious and (2) those that are not. Perhaps there may be a third or more schemas to prove otherwise, but I am going to leave it at this point for now.


NOTE: The implication of the Universal Turing Machine is such that it does not presuppose consciousness; therefore, whichever schema a researcher opts for, he or she still has to decide on the relevance or non-relevance of consciousness. Even if he or she successfully avoids the issue of consciousness at the level of engineering or re-engineering to improve the intelligent system in question, he or she may not avoid it at the level of structural and functional comparison of that system with the human system. Researchers must, in the end, either accept it as relevant or reject it as not.
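The 'reference table' proposal above can be given a toy rendering: list candidate acts, mark each as intelligent and/or conscious, and let the verdict about a system follow mechanically from Schema II. Everything in this sketch (the table entries, their classifications, the decision rule) is a hypothetical assumption made up for illustration; it does not settle which acts actually belong in which column.

```python
# Toy rendering of the "reference table" idea: classify acts, then apply
# Schema II mechanically (a system counts as conscious if at least one
# of its acts is classified as a conscious act).
# The table entries are hypothetical, invented only to illustrate.

REFERENCE_TABLE = {
    "factorise a large number":     {"intelligent": True,  "conscious": False},
    "report on one's own thinking": {"intelligent": True,  "conscious": True},
    "blow like a fuse":             {"intelligent": False, "conscious": False},
}

def schema_ii_verdict(acts):
    """Schema II, step (4): does the system have some thinking acts
    that are classified as conscious in the reference table?"""
    return any(REFERENCE_TABLE[act]["conscious"] for act in acts)

# A machine that only factorises gets no verdict of consciousness under
# this table; one that also reports on its own thinking does.
print(schema_ii_verdict(["factorise a large number"]))
print(schema_ii_verdict(["factorise a large number",
                         "report on one's own thinking"]))
```

The sketch also makes the open problem visible: the function is trivial, and all the philosophical weight sits in how the table gets filled in, which is exactly the classification task the schema demands of the researcher.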
 
Last edited:
  • #382
Rader said:
I think I understand you correctly but do you understand me. I could have let the car run over her if I was programed or not for this trait. I choose not to. If I was getting a divorce maybe I would have had a second thought about it and let the car run over her. Now do you understand my point. That takes a conscious thought.
How do you know your consciousness was MAKING the decision and your body was acting on it, rather than your body deciding to act that way while your consciousness felt all right with that decision (without any means of intervening) and "thought" it took it?

01 - The world would be totally deterministic and there would be no choice. You're claiming, then, that a "biochemistry computer", which I take to mean the "brain parts", would cause consciousness, while consciousness, once produced, looks on. This would be a classical explanation; and what if the "biochemistry computer" was quantum in nature?

02 - If your consciousness is "in charge" of your brain and influences behavior, then all behavior would be totally deterministic only if there were a classical explanation of the brain. If the brain were quantum in nature, then it would seem more understandable why we make choices.

This is indeed, more or less, the point. Although I do not need the idea of determinism: you can have randomly generated phenomena without conscious influence. I also tend to think - but I'm very careful here - that quantum theory might have something to say about the issue. But I think we are still very far from finding out; it is the "open door" in current physics to consciousness.

Our mutual understanding of our viewpoints is converging, I think.

cheers,
patrick.
 
  • #383
Philocrat said:
(2) All intelligent acts are conscious acts

I do not agree. I do not see the link between intelligence (the ability to solve difficult problems) and consciousness.


The researchers who opt for this alternative schema must classify thinking acts or intelligent acts into (1) those that are conscious and (2) those that are not conscious.

Hehe, yes, they have to solve the hard problem :-)
Because it is neither the problem category nor the problem-solving strategy that will indicate this. So what remains of the intelligent act on which we base the separation? What will be the criterion? Also, assuming we're talking about a Turing machine, do you mean it is the _software_ that is conscious? Independent of the machine on which it runs? When it is written on a CD?
I have a hard time believing that a Turing machine, no matter how complex, can be conscious. But I agree that I cannot prove or disprove this.

But we should avoid the confusion between intelligence and consciousness here. Now it might very well be that certain levels of intelligence are only attainable if the entity is conscious. But personally, I do not see a link, especially if consciousness is just sitting there passively watching. You could just as well look at power consumption and say that if you reach the density of power consumption of a human brain, the machine is conscious, and then jump into research on power resistors. I think that "intelligence" (the ability to solve difficult problems) is a property just like power consumption, when related to consciousness.

cheers,
Patrick.
 
  • #384
Fliption said:
What is a conscious thought?

Cognitive awareness. http://www.hedweb.com/bgcharlton/awconlang.html

All of the brain and biochemical activity required for you to "think" about this decision can, in principle, be completely accounted for. None of these activities have anything to do with consciousness. What Vanesch is saying is that there is no way for you to know whether your consciousness is actually participating in the process or whether it is just experiencing the physical activities that participate in the process. The "conscious thought" you're referencing can be completely explained using physical processes of the brain, none of which are associated with consciousness. This is why there is a 'hard problem'.

Fliption, actually there seems to be evidence of both. When something is born into existence, it appears to be conscious, until such time as it can say "I am conscious". If this is somehow explainable some day, it will eliminate the "hard problem". This would explain what is conscious and what physical states determine how much something is conscious. Consciousness would have to be a fundamental property of nature.
 
  • #385
vanesch said:
How do you know your consciousness was MAKING the decision and your body was acting on it, rather than your body deciding to act that way while your consciousness felt all right with that decision (without any means of intervening) and "thought" it took it?

Good question. The only way for me to answer is that my consciousness is aware of being aware and has evolved to an understanding of an order of the way the world ought to be. Sometimes my consciousness acts right but my body says no. Sometimes my body acts right when my consciousness knows better. So it appears that consciousness is watching while we make the decision how to act. :wink:

This is indeed, more or less, the point. Although I do not need the idea of determinism: you can have randomly generated phenomena without conscious influence. I also tend to think - but I'm very careful here - that quantum theory might have something to say about the issue. But I think we are still very far from finding out; it is the "open door" in current physics to consciousness.
Our mutual understanding of our viewpoints is converging, I think.

That happens sometimes, to our disappointment, when we later find that nobody holds quite the same view.
 
