Epothilone B study connected to 'Hard Problem of Consciousness' Model

  • #1
.Scott
Science Advisor
Homework Helper
TL;DR Summary
A new study published in eNeuro reports the results of administering Epothilone B to rats that are being anesthetized with isoflurane gas. Those results are described in light of the microtubule involvement in consciousness as theorized by Stuart Hameroff and famously advocated by Roger Penrose as supporting his Orch OR model.

This asserted connection with the Hard Problem of Consciousness is now also featured in a Neuroscience News article.

I have been an advocate of QM-based hard consciousness for about 4 decades, and since I have posted in these Forums early and often on the subject, you might think that I would herald any advance with enthusiasm.

But my reaction to these articles is quite mixed - because the argument they present connecting the experimental results to the hard problem of consciousness is very flawed. Perhaps the biggest difficulty is that the subject matter, born in Philosophy and in this case from a Biology experiment, crosses deep into Physics and Information Systems Engineering.


Background for what is Argued in the article:
Microtubules: Think "cell bones". The cytoskeleton is what gives shape to a cell. Microtubules are load-bearing members of the cytoskeleton.
Roger Penrose argues (I would say correctly) that human consciousness must be QM based. While holding these views, he met anesthesiologist Stuart Hameroff and, starting in the early 1990's, proposed "Orch OR" as a mechanism for QM data processing in the "warm and wet environment" of a living cell. The basic precept behind this theory was that information is processed as the superposition of resonances among microtubules within a cell. In this theory, MAPs (microtubule-associated proteins) provide the informational interface between the QM-encoded information and the classical information processing. Stuart Hameroff's obvious contribution was that he knew his way around cell biology and the associated molecular paths.
Here is the "Significance Statement" quoted verbatim from the article. I have no problem with this statement:
Our study establishes that action on intracellular microtubules (MTs) is the mechanism, or one of the mechanisms, by which the inhalational anesthetic gas isoflurane induces unconsciousness in rats. This finding has potential clinical implications for understanding how taxane chemotherapy interferes with anesthesia in humans and more broadly for avoiding anesthesia failures during surgery. Our results are also theoretically important because they provide support for MT-based theories of anesthetic action and consciousness.
Notice that the last sentence of that statement refers to "consciousness", not "hard consciousness".

What is Argued in the article:
Here is another excerpt from the article, this time from the Introduction where the purpose for conducting the experiments is described:
... MTs (composed of tubulin subunits) ... remain a candidate for a unitary site of anesthetic action. MTs are the major components of the cytoskeleton in all cells, and they also play an essential role in cell reproduction—and aberrant cell reproduction in cancer—but in neurons, they have additional specialized roles in intracellular transport and neural plasticity (Kapitein and Hoogenraad, 2015). MTs have also been proposed to process information, encode memory, and mediate consciousness (S. R. Hameroff et al., 1982; S. Hameroff and Penrose, 1996; S. Hameroff, 2022). While classical models predict no direct role of MTs in neuronal membrane and synaptic signaling, Singh et al. (2021a) showed that MT activities do regulate axonal firing, for example, overriding membrane potentials. The orchestrated objective reduction (Orch OR) theory proposes that anesthesia directly blocks quantum effects in MTs necessary for consciousness (S. Hameroff and Penrose, 2014). Consistent with this hypothesis, volatile anesthetics do bind to cytoskeletal MTs (Pan et al., 2008) and dampen their quantum optical effects (Kalra et al., 2023), potentially contributing to causing unconsciousness.

So what's Wrong with their Argument?
As soon as they mentioned "Orch OR", they crossed the line from consciousness to hard consciousness. The anesthesiologist's main purpose is to make certain that what happens in the Operating Room, stays in the Operating Room. He (or she) wants you to have no pain-filled recollections about your surgical ordeal. It is also great if you are sedated and physically calm, but what really counts is memory.
Still, the actions of taxanes and the other anesthetics do include "all of the above" (amnesia, sedation, and physical calm). And all of those bear on the "easy" problems of consciousness.
In effect, their experimental results argue that microtubules support consciousness through their actions in the classical (non-QM) information domain - leaving their additional participation in hard consciousness open.

What Exactly is "Hard Consciousness" and what Experimental Result would show it Silenced?
To spell it out, it's the "Hard Problem of Consciousness".
It's freshman stuff for philosophers, but I introduce it this way for the experimentally-minded:
Ask yourself (and as many others as you prefer):
1) When you are conscious, are you always conscious of some sort of information - such as the appearance of a tree, a memory, or thoughts about a problem? Is it possible to be conscious of nothing - not even the thought of nothing?
The key point here is that there is a close association between consciousness and information. If you are rendered thoughtless, it isn't clear that you can be conscious of anything - or conscious at all.
2) When you are conscious, how much information (measured in bits) are you conscious of in any one moment?
This is where the connection to Physics comes in. If the answer is ever more than 1, then you need to explain how that information is collected into a single conscious thought. If you think that the information can be scattered throughout your skull, what makes your skull special? Why not your brain matter and someone else's? If you think that it is because the information in your skull is "connected", then what about being "connected" allows it to be selected for your current moment of consciousness?
The answer is that Physics only has one mechanism for associating many bits of information into a single state - quantum entanglement.
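To make that "single state" point concrete, here is a minimal Python sketch of my own (not from the article): for two qubits with joint amplitudes [a00, a01, a10, a11], the state factors into two independent one-bit states exactly when a00*a11 - a01*a10 = 0. A Bell state fails that test - its two bits exist only as one joint physical state, which is the kind of association being claimed here.

```python
def is_product_state(amps, tol=1e-12):
    """A two-qubit state [a00, a01, a10, a11] factors into two
    independent single-qubit states iff a00*a11 - a01*a10 == 0
    (up to floating-point rounding)."""
    a00, a01, a10, a11 = amps
    return abs(a00 * a11 - a01 * a10) < tol

s = 2 ** -0.5
# Two independent qubits, each in (|0> + |1>)/sqrt(2): factors cleanly.
independent = [s * s, s * s, s * s, s * s]
# Bell state (|00> + |11>)/sqrt(2): the two bits form one joint state.
bell = [s, 0.0, 0.0, s]
```

The `is_product_state` helper and the example states are purely illustrative; the physical claim in the post is about entanglement in general, not about any particular pair of qubits.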
3) Does this "internal" consciousness affect your behavior? There are two ways of answering this, the simple way and the one that leads to more discovery. The simple answer is that people report being aware and conscious (and such reports are "behavior"). Of course, they could be lying - but are you lying when you claim to be conscious?
But there is also the Darwinian argument. Why be conscious if it isn't going to help you spread your likeness to the next generations? It would be a waste of brain matter and energy. But there are potential benefits to QM data processing. There are some problems quickly solved in the QM domain that are really time-consuming in the classical domain. Since most of us cannot factor large composite numbers in our heads, we can eliminate Shor's Algorithm. But Shor's Algorithm has at its roots the QFT, the Quantum Fourier Transform. There are some mental functions, like eyesight and hearing, that might benefit from the QFT. But it is hard to imagine such circuitry evolving generation to generation through Darwinian pressures. Could any of the intermediate results have survival value? And then, my choice, Grover's Search Algorithm. If you can score all your options in the quantum domain, Grover will find you one with a really high score - perhaps the highest score. What a great little tool for decision-making!
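Since Grover's Search Algorithm keeps coming up, here is a small classical simulation of its amplitude amplification (my own sketch, nothing from the paper). With N = 64 options and one "high-scoring" option marked, about (pi/4)*sqrt(64) ≈ 6 iterations concentrate nearly all the amplitude on the marked option, versus ~32 classical probes on average:

```python
import math

def grover_search(n_qubits, marked, iterations=None):
    """Classically simulate Grover amplitude amplification over
    N = 2**n_qubits items with one marked item."""
    N = 2 ** n_qubits
    if iterations is None:
        # Optimal iteration count is about (pi/4) * sqrt(N).
        iterations = round(math.pi / 4 * math.sqrt(N))
    # Start in the uniform superposition over all N items.
    amps = [1 / math.sqrt(N)] * N
    for _ in range(iterations):
        # Oracle: flip the phase of the marked item.
        amps[marked] = -amps[marked]
        # Diffusion: reflect every amplitude about the mean.
        mean = sum(amps) / N
        amps = [2 * mean - a for a in amps]
    # Probability of measuring the marked item.
    return amps[marked] ** 2

# 6 qubits -> 64 options; after ~6 iterations the marked one dominates.
prob = grover_search(6, marked=42)
```

The simulation takes exponential classical resources, of course; the point of Grover's algorithm is that a quantum device performs the same amplification in O(sqrt(N)) oracle queries.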

And so what would the Experimental Result be to show Hard Consciousness Silenced?
It could be as simple as the person no longer being able to honestly report a conscious awareness.
While the QM is impaired, there could be impairments in visual perception or hearing.
Loss of QM functions would not directly impair memory. So, what I think is more likely is that it could be an impairment in creative decision-making - which upon QM restoration, the person recognizes because now they can apply those creative thoughts to what they remember. In any case, the impairment would reflect the loss of some beneficial analytic process not readily provided with classic information processing.
 
  • #2
This is different from my understanding of the consciousness problem and the hard consciousness problem.

The non-hard consciousness problem concerns how to explain how the brain works and mediates the information transfers in the brain that can become conscious, while the hard consciousness problem involves how whatever is going on in the brain leads to the specific way that conscious phenomena feel. Such as why does red look (feel) red and not green.

This is like a Descartes separation of body and spirit or soul. Non-hard is in the realm of the physical and measurable by external devices (in theory). The hard problem is in the realm of the spirit, thus difficult to access independently of the singular observer (observing their inner states) using any kind of experimental device. It's the idea behind "I know I'm conscious, but I have no evidence that you are, other than your reports of your internal states, which I cannot verify independently."

The non-hard consciousness problem is only concerned with how information gets around in the nervous system and how it is processed (a strict neurophysiological approach is the usual approach to this).

WRT the non-hard consciousness problem, it is not clear to me what the MT stuff gives you that cannot be done neurophysiologically. There are many subtle and small-scale physiological effects in neurons that a lot of people are unaware of that may be getting overlooked. This is a long-term problem I have had with this theory. I don't see a lot of unique predictions.
If the microtubules were to be disassembled inside the cells (which can be done with drugs), would that make consciousness go away (if the neurons were still functioning)?

How could you tell if consciousness were affected (you wouldn't do it with people)?
How many neurons of parts of neurons interacting would it take to make something conscious?

How would MT theory predict a split brain operation would affect consciousness? Would the consciousness of a single brain be split into two?
Could you make a device with several quantum interacting parts that would be conscious?

My take on consciousness is that the nervous system (in humans, since that is the only place we get reports of consciousness from) creates an internal information construct that has a "world" (3D space plus time) within which a representation of yourself and other things (people or not) can move around and interact according to different sets of rules. Illusions can be revealing of this.
These interactions looping back around to themselves in the brain provide pathways for feedback and self-awareness.
All this is generally thought to be possible by standard neurophysiological processes.

Combining different sensory streams into a unified signal is a standard thing found in textbooks, so it does not seem like a big issue to me.

Also, why microtubules and not neurofilaments (another structural component of neurons)?

Much of this argument escapes me.
 
  • #3
Many of your questions bear on looking at an information processing system from the outside (or, in the case of the human brain, from incomplete information), and attempting to deduce the logic that makes it run. As a Software Engineer, this is not an alien exercise. I need to do it whenever I try to modify someone else's code.

In the case of consciousness, the element called "qualia" is clearly information related, but is not something that can be "coded up" in software or designed into classical electronic logic. It's not that the qualia is too complicated to work with, it's that it involves a fundamental process that is not available in classical computers.
In contrast, human conscious behavior can be well-implemented in a classical computer system - time, energy, and complexity issues aside. In short, it is easy to see how the human brain could address the "easy problems of consciousness".

BillTre said:
How could you tell if consciousness were affected (you wouldn't do it with people)?
If you mean "easy consciousness", then the article answers your question. Presuming you mean "hard consciousness":
If you don't catch the problem with being conscious of anything that must be described by multiple bits, then you won't see the purpose in invoking QM for states involving multiple bits. If you do, then you will run into the Darwin questions - which drives you to wonder what is the real survival purpose of this "qualia". Then you can start to look for potential disabilities that would result from its loss.

BillTre said:
How many neurons of parts of neurons interacting would it take to make something conscious?
So your question connects to two points where Penrose and I are in perfect sync. As hard as it is to imagine quantum superpositioning and qubit processing occurring within the warm and wet cell environment, it would be even more difficult to imagine it occurring between cells. So qualia, the "hard" part of hard consciousness, would have to live at the cell level - not in any particular cell - but only one cell gets to be "you" at a time.
The other presumption is that it is the superpositioning or the collapse of that superpositioning that is associated with (or the generator of) the qualia. So our brain, in taking advantage of this Physical information processing method, creates a structured form of qualia out of Physics qualia - so there's a difference between consciousness and human consciousness.
I don't see this as a huge jump. The difference between our universe and a detailed story of our universe is that our universe includes consciousness.

BillTre said:
How would MT theory predict a split brain operation would affect consciousness? Would the consciousness of a single brain be split into two?
Could you make a device with several quantum interacting parts that would be conscious?
The MT Orch OR theory belongs to Penrose and Hameroff - who speculated little about the overall purpose of the qualia. So your question, as phrased, is unanswered. However, setting the exact QM processing bio-technology aside, I would presume that there are many seats of consciousness within any human brain, but only one gets the control stick (and the story memory) at a time. So when you reflect on what you have just done, you are quizzing a memory that only recorded one seat of consciousness at a time.

And to the second question: Yes, but it would not be human consciousness.

BillTre said:
These interactions looping back around to themselves in the brain provide pathways for feedback and self-awareness.
All this is generally thought to be possible by standard neurophysiological processes.
All of that can be handled by classical information processing - so long as "self-awareness" refers only to information processing and not a type of "qualia".

BillTre said:
Combining different sensory streams into a unified signal is a standard thing found in textbooks, so it does not seem like a big issue to me.
"Combining streams" could describe time-based multiplexing. That technology is all about NOT mixing the signals.
But perhaps you mean something like this. I program a robot to examine a room and identify the geometry and materials used in the room with a combination of auditory and video input. In the process, it might determine that the wall in front of it is a flat blue surface with a light texture (3mm) and a soft sound-absorbing backing. So, let's just focus on the "blue" part. The robot is not conscious (in the qualia sense) of the blue because it stores the wall color in a 24-bit RGB value - and those 24 bits are isolated from each other. There is no one place or state within the robot's information system where the "blue" exists. But we could take those bits and use them to encode a single physical state - a quantum superposition. This alone would not give it a human appreciation for "blue", but it creates a Physical object that could support a qualia experience of the blue. Without it, those 24 bits are no more Physically associated than any other randomly selected 24 bits in the universe.
The quantum superposition provides a necessary (but perhaps not sufficient) condition for Physically supporting multi-bit qualia.
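To illustrate the 24-bit point with a toy example of my own (the colour values are made up): packing the robot's R, G, B bytes into one machine word puts the bits next to each other in memory, but they remain 24 independent physical bits carrying one of 2^24 classical labels. A 24-qubit register, by contrast, is a single joint physical state described by 2^24 amplitudes - adjacency in memory is not the same as being one physical state.

```python
# Hypothetical robot colour reading (made-up "wall blue" values).
r, g, b = 0x1E, 0x4A, 0xC8

# Classical storage: 24 adjacent but physically independent bits.
packed = (r << 16) | (g << 8) | b

# Number of distinct classical labels those 24 bits can carry.
classical_labels = 2 ** 24

# A 24-qubit register would instead be ONE joint state described by
# 2**24 complex amplitudes - a single physical object, not 24 of them.
```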
 
  • #4
.Scott said:
It's not that the qualia is too complicated to work with, it's that it involves a fundamental process that is not available in classical computers.
What is that fundamental process?

.Scott said:
If you mean "easy consciousness", then the article answers your question. Presuming you mean "hard consciousness":
Generally speaking, either one. I did not get out of the article how consciousness was determined other than by personal reports.

.Scott said:
If you don't catch the problem with being conscious of anything that must be described by multiple bits, then you won't see the purpose in invoking QM for states involving multiple bits.
I don't have any problem with being aware of multiple bits. The visual system, at many points on the way from the eye to the visual cortex, combines many different visual streams to produce higher-order information about the visual field. Single activated photoreceptors feed other cells in the retina to detect lines, for example.
This has been textbook since at least the 1980s and probably earlier.

Also, I don't think bits should often be discussed with respect to brain functioning. I am not aware of any successful significant applications of bits to brain functioning. How do you count up the bits? By action potentials?

.Scott said:
what is the real survival purpose of this "qualia".
Most likely it is to better attract the conscious attention to stimulus (or internal image) so that the organism can better respond behaviorally.

.Scott said:
qualia, the "hard" part of hard consciousness, would have to live at the cell level - not any particular cell - but only one cell gets to be "you" at a time.
Then you could only be conscious of the inputs to that one cell?
There are a lot of neurons in the human brain. Would they take turns? What would decide who got to be the boss?

When I'm aware of the red flower smelling nice, is that from the combining of two different sensory streams? It sounds like that doesn't fit in the theory.

.Scott said:
So when you reflect on what you have just done, you are quizzing a memory that only recorded one seat of consciousness at a time.
I thought there were cases of memories of events people were not conscious of. I'll try to look that up.

.Scott said:
The other presumption is that it is the superpositioning or the collapse of that superpositioning that is associated with (or the generator of) the qualia.
Why could not a neural network also have superpositioning and its collapse onto a specific meaning?
Visual illusions seem like this to me and also seem entirely understandable from a standard neurophysiological point of view.

.Scott said:
"Combining streams" could describe time-based multiplexing. That technology is all about NOT mixing the signals.
I don't know about forcing the nervous system into some time-based multiplexing (whatever that is). If it's not about mixing signals (combining and analysing them), then it does not reflect biology.

.Scott said:
I program a robot to examine a room and identify the geometry and materials use in the room with a combination of auditory and video input. In the process, it might determine that the wall in front of it is a flat blue surface with a light texture (3mm) and consisting of a soft sound-absorbing backing. So, let's just focus on the "blue" part. The robot is not conscious (in the qualia sense) of the blue because it stores the wall color in a 24-bit RGB value - and those 24 bits are isolated from each other. There is no one place or state within the robots information system where the "blue" exists. But, we could take those bits and use them to encode a single physical state - a quantum superposition. This alone would not give it a human appreciation for "blue", but it creates a Physical object that could support a qualia experience of the blue. Without it, those 24 bits are no more Physically associated than any other randomly selected 24 bits in the universe.
This example is confusing to me.
Are you saying a robot could not find a blue ball that is next to a red chair in a room?


Here is another issue I have:
Neurons can be very large cells with many branches (dendrites and axons). There are MTs all over the place. Are they all taking part in this quantum processing? Some can be more than a meter apart.
If not which parts of the neuron are the conscious parts?
 
  • #5
BillTre said:
What is that fundamental process?
(Response to "It's not that the qualia is too complicated to work with, it's that it involves a fundamental process that is not available in classical computers.")

I am referring to a process that creates conscious experience. This qualia is not just a feeling, it's the full experience - the sense of reality. It's the whole conscious experience of everything in our life story as it happens.

For example, a robot can process video information and respond to it. It can even log the fact that it is processing information. But it has no conscious experience of what it is looking at. People have that conscious experience of what they are looking at.

As a Software or Electrical Engineer, if you want your robot to have that same experience as a person, you need to design it so that it can have a conscious experience of what it is watching. I am calling it "fundamental" because it cannot be built up from the same type of gates and devices that allow it to process the video information. No assemblage of Boolean gates will do it. If a program of mine evoked a "conscious experience" in itself, it would at best be a bug. In actuality, it just couldn't happen.

The most common explanations for where this conscious experience comes from are divine intervention and neural complexity. Attributing it to complexity is just giving up on the problem. But if you recognize conscious experience (ie, qualia) as a fundamental function of selected information - information that is assembled into a single Physical state - then QM entanglement is an obvious fit, and the only fit.
BillTre said:
generally speaking either one. I did not get out of the article how consciousness was determined other than by personal reports.
In the article, the measure was whether or not the patient (ie, the rat) fell into an anesthetized state.

BillTre said:
I don't have any problem with being aware of multiple bits. The visual system, many times on the way from the eye to the visual cortex combines many different visual streams to produce higher order information about the visual field. Single activated photoreceptors feed to other cells in the retina to detect lines for example.
This is textbook since at least the 1980 and probably earlier.
The edge detection actually happens early in the visual cortex - as explained in articles from the 1980s describing experiments with cats. But video signal processing is not synonymous with being consciously aware of the image.
Let me ask you, when you look at a tree, how much of the tree do you experience in a single moment? I am not talking about making good use of the information - just your conscious experience of it. The conscious experience that a robot will not have. So you have a notion that it's a tree - and its size - and perhaps whether it is healthy. But it's all one experience. It isn't a list. It isn't that one part of your brain experiences the color and another experiences the size. There's a single part of your brain that has the whole gist of the tree at once - not just 1 bit of it. So how do you get all the bits into a single part? And they have to be in the same place in that single part - or else the question can be put to the parts of that part.
How do you have a conscious experience of a tree? This is the key question. If you can't see the fundamental problem, the rest of my arguments are baseless.


BillTre said:
Also, I don't think bits should often be discussed with respect to brain functioning. I am not aware of any successful significant applications of bits to brain functioning. How do you count up the bits? By action potentials?
Although neurons do not work with bits, the information can still be measured that way. In the simplest case, it's log base 2 of the number of possible states that are meaningful to the process being measured. So it needn't be an integer.
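As a quick sketch of that measure (standard information theory, nothing specific to neurons):

```python
import math

def info_bits(n_states):
    """Bits of information in a choice among n equally likely,
    meaningful states: log2(n). Need not be an integer."""
    return math.log2(n_states)

three_bits = info_bits(8)    # 8 states carry exactly 3.0 bits
ten_states = info_bits(10)   # 10 states carry about 3.32 bits
```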

BillTre said:
Most likely it is to better attract the conscious attention to stimulus (or internal image) so that the organism can better respond behaviorally.
OK. So we agree that qualia is useful and affects behavior.

BillTre said:
Then you could only be conscious of the inputs to that one cell?
Yes. I should add that I have no "favorite" biological mechanism. But exchanging qubits across a cell membrane seems very unlikely to me.

BillTre said:
There are a lot of neurons in the human brain. Would they take turns? What would decide who got to be the boss?
Of course, what we know from strokes, traumatic brain injury, and more recently from fMRI, is that the brain has many specialized areas. And even substantial injury does not necessarily result in loss of consciousness.
I would guess that there are hundreds of "special knowledge experts" throughout the brain, each including a "seat of consciousness" cell where a Grover-type algorithm is checking for potential value or "opportunity". Any one can bid for conscious control.

BillTre said:
When I'm aware of the red flower smelling nice, is that from the combining of two different sensory streams. It sounds like that doesn't fit in the theory.
Combining sensory streams is not an issue for hard consciousness. You can be conscious of a thought, a memory, or a sensed object or event. You are seldom conscious of individual rod or cone light receptors, which segment of your ear is actually hearing a tone, which taste bud is doing the tasting, or which nerve ending is itching. Consciousness deals with fully processed information, generally from a combination of different senses and many earlier experiences - ready for decision-making. Even the green experience in looking at a tree is a description of the tree produced by classical circuitry estimating the average color of the foliage portion of the tree given lighting angles, shadows, and intensities. So even a simple color is a highly assembled piece of information.
I would wonder whether "red flower smelling nice" isn't a combination of concepts that are not experienced at exactly the same time - perhaps it's "red flower", and then something available "smelling nice".

BillTre said:
I thought there were cases of memories of events people were not conscious of. I'll try to look that up.
Actually, I may be able to help you with that one. I had my gall bladder removed 6 weeks ago. My recovery from the anesthesia followed this sequence:
1) A sound was pasted into my memory - exactly like an audio recording. I know I have a short-term "audio memory". For example, I can recite the sounds I hear from an unfamiliar accent, even though I do not know what I am saying. And I have been told on many occasions that I had successfully reproduced the sentence. I have usually referred to this as my "sing song" memory because it is audio only - no interpretation. This is in contrast to "story memory", which is what I recall when trying to report what I have done.
2) I looked back on that memory and realized that I was not conscious when I heard it. But it was a yell followed by a coughing fit and another voice reacting. Since I had a cough and would have reacted to a loud yell from myself with a coughing fit, I knew I was the yeller. I also realized that I could not be confident about how long ago that audio recording was made. And I knew (incorrectly) that I was in surgery, so I thought that the anesthesia was not applied correctly.
3) A moment later, I gained contact with the rest of my body and realized that I was a human lying on a flat surface. At that point my perception of time was accurate - I had screamed about 15 seconds earlier.
4) Soon after that, I opened my eyes. Later, one nurse reported to another that I would not remember - likely referring to the scream.

BillTre said:
Why could not a neural network also have superpositioning and its collapse onto a specific meaning.
Visual illusions seem like this to me and also seem entirely understandable from a standard neurophysiological point of view.
There are two types of neural networks. The term "neural net" is used in AI to refer to a particular neuron model. And, of course, it can also refer to a small ganglion.
Superpositioning is a feature of QM. To say that an information processing device is in a superposition means that the information in that device is in a superposition - and that the device is not classical (ie, cannot be replicated on a classical computer). I would say that many "neural networks" (ie, small ganglia) are in a superposition of states - but likely owing to only a single critical member neuron operating that superpositioning. As I said earlier, if it's more than one cell - that's okay.

I am leaving that second sentence alone because it is ambiguous. I see several possible points you might be making.

BillTre said:
I don't know about forcing the nervous system into some time-based multiplexing (whatever that is). If it's not about mixing signals (combining and analysing them), then it does not reflect biology.
At the time, I was uncertain what you meant by "combining streams". So I answered both types of "combining". Since then, it has become clear that you mean combining information from multiple senses into a single situational description.
BillTre said:
This example is confusing to me.
Are you saying a robot could not find a blue ball that is next to a red chair in a room?
The robot would find the blue ball next to the red chair. But that is different from being conscious of the experience. Logging the story is also different from being conscious of the experience.
BillTre said:
Here is another issue I have:
Neurons can be very large cells with many branches (dendrites and axons). There are MTs all over the place. Are they all taking part in this quantum processing? Some can be more than a meter apart.
If not which parts of the neuron are the conscious parts?
You're talking about the Penrose/Hameroff Orch OR model. They were only interested in demonstrating that quantum data processing was, in principle, possible in a wet/warm environment. Your "a meter apart" refers to long neurons. (Their mechanism cannot cross cell membranes.) They did go into the classical/QM interface - with MTs affecting dendrite states. There are many MTs in a neuron - and certainly, on the face of it, opportunity for a lot of data processing with wide (many bits) outputs. I am sure they would balk at 1 meter - but they did not address their design to that level. It was intended more as a technology assessment than a detailed design.
 
