How can brain activity precede conscious intent?

In summary, Benjamin Libet and Bertram Feinstein found a half-second delay between cortical stimulation and the reported sensation. There are also pre-conscious signals, associated with a person's chosen motor task, that precede the conscious intent to act.
  • #71
antfm said:
I'll try again to get some help from someone reading this thread to understand Libet's experiment a bit better. My doubt concerns the delay it is actually measuring, as I said in an earlier post.
I guess it is a delay between the neuronal firing that marks the beginning of a supposedly intentional action and the thought of having the intention to start that action.
OK, it seems we start the action half a second before the thought of having the intention to.
But, in my view, perhaps a basic or raw feeling of having the intention is prior to the thought of having that intention. I mean, I can start an action when "I feel like doing something", as language says, which could be before "I think I feel like doing something".
An example: the athlete could start running when he hears the shot, not when he thinks "I've heard the shot" (it would be too late); the athlete starts running half a second before the thought, but not half a second before hearing the shot (otherwise he would be disqualified).
Something like that. I'd appreciate some help. Thanks.
I think we can divide self-consciousness into pre-reflective self-consciousness and reflective or introspective self-consciousness. The pre-reflective kind is our mode (probably) a majority of the time, as we are immersed in our activity in the world. Asking the subject to monitor awareness brings things into reflective mode, which it is reasonable to assume introduces some (additional?) delay. Commentators who divide modes simply into conscious and unconscious miss this important nuance.

Now it still is a meaningful result that the self which is felt to exist in our reflective mode can't be responsible for initiating action. To the extent this really is the folk concept of free will, then it seems to be refuted by the evidence.
 
  • #72
Thanks, Steve. We meet again. Yes, I totally agree. That is the part I thought was missing from the usual interpretation of the experiment.

I am not especially interested in saving free will from refutation, but as much as the results of the experiment seem to prove that the folk concept of free will doesn't work, they could also point to the failure of the folk concept of self. As you say, we often dismiss that pre-reflective self-consciousness (and it would be a part of the self).

In the example of the athlete, she knows beforehand that she has to start running when hearing the shot. Starting to run when she hears it is part of her self-behaviour, though it is perhaps not reflective self-behaviour.

It is the same, I think, as when we drive on autopilot, and in many other daily actions that happen without that reflective aspect. Even so, we claim that our selves are always in charge.

Anyway, I find your explanation very insightful. Thanks.
 
  • #73
Tournesol said:
That isn't even correct as a definition of FW. The feature of FW that creates problems with regard to determinism is the ability-to-have-done-otherwise.

That simply isn't true. An electron in any given state had the option and could-have-done-otherwise, according to quantum mechanics. This hardly means that the electron has free will.
 
  • #74
loseyourname said:
That simply isn't true. An electron in any given state had the option and could-have-done-otherwise, according to quantum mechanics. This hardly means that the electron has free will.

That the electron could-have-done-otherwise means there is an incompatibility between QM and strict causal determinism.

That people could-have-done-otherwise if they have FW means there is an incompatibility between FW and strict causal determinism.

If you are saying that could-have-done-otherwise is not sufficient for FW, that would be correct, but I am not maintaining that it is.
 
  • #75
Tournesol said:
If you are saying that could-have-done-otherwise is not sufficient for FW, that would be correct, but I am not maintaining that it is.

Yes, that is what I'm saying. Perhaps we should enumerate exactly what we think the sufficient conditions are for a freely willed action in a volitional agent.
 
  • #76
loseyourname said:
Yes, that is what I'm saying. Perhaps we should enumerate exactly what we think the sufficient conditions are for a freely willed action in a volitional agent.

1) lack of external compulsion (a gun pointed at one's head)

2) lack of internal compulsion (addiction) or other interference (insanity)

3) possession of the appropriate faculty of volition in the first place.

(1) and (2) are familiar from legal arguments, which take (3) for granted.

What (3) actually consists of is the philosophical point. Compatibilists and incompatibilists disagree about whether could-have-done-otherwise is a necessary ingredient. Hardly anyone thinks it is sufficient.
 
  • #77
Tournesol said:
The question is whether a complex system like the brain can utilise randomness to obtain "elbow-room" (the ability to have done otherwise) without sacrificing rationality. Given the limits on de-facto rationality, I think the answer is yes.
This brings to my mind a simple thought experiment. Suppose I have written a very complex computer program (one might think of a virtual war-game implementation, or maybe even just a chess-playing program) where an extended computation of possible consequences is carried out at every step and reckoned against some value reference. Now, when the values at a step are equal and the computational power of the machine would be exceeded (or the run might take too long if all possible paths were followed; note that this might even occur during the value-reckoning phase), suppose we use a random number generator governed by a phenomenon within the Heisenberg uncertainty limit. Several things happen here. First, I can certainly have the computer print out the final result that yielded the scenario with the highest value (which we could call the computer's hoped-for final result). Second, I could also have the computer print out the sequence it went through to reach that final scenario and where its doubts lay (the places where it relied on the random number generator, i.e., where it was "guessing"). Finally, the result certainly would not be completely predictable, as it depends directly on a number of absolutely random events.
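
In code, the tie-breaking scheme might look something like the following minimal sketch (Python, with invented names and a toy value table; purely illustrative, not anyone's actual program):

```python
import random

def evaluate(action):
    # Stand-in value reference; a real program would search ahead here.
    return {"advance": 2, "flank": 2, "retreat": 1}[action]

def choose(actions, rng=random):
    # Score every candidate, keep the best, and fall back on a random
    # draw only when the values tie (the places the machine is "guessing").
    scored = [(evaluate(a), a) for a in actions]
    best = max(value for value, _ in scored)
    candidates = [a for value, a in scored if value == best]
    guessed = len(candidates) > 1
    choice = rng.choice(candidates) if guessed else candidates[0]
    return choice, scored, guessed   # the machine can list its reasons

choice, reasons, guessed = choose(["advance", "flank", "retreat"])
print(choice, reasons, "guessed" if guessed else "forced by the values")
```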

Now the machine will make decisions for reasons it can list. Would one say it has "free will"? I think it would at least act as if it had free will.

Have fun -- Dick
 
  • #78
Tournesol said:
Why can't FW be both what it is traditionally assumed to be and a "successful behaviour"?
I think I misunderstood you when you posted this originally. I thought you were asserting that it was exactly "what it is traditionally assumed to be". My position is, "of course, it could be." However, exactly "what it is traditionally assumed to be" needs to be considerably cleaned up before the meaning of the statement is clear.

Have fun -- Dick
 
  • #79
Tournesol said:
That is back to front. If you have reason to believe FW is impossible (such as reason to believe in determinism and to reject compatibilism), then you have reason to conclude FW can only be an illusion. But you are certainly not entitled to start off on that basis.
Why not?

Have fun -- Dick
 
  • #80
Necessary and Sufficient Conditions for Free Will

loseyourname said:
Perhaps we should enumerate exactly what we think the sufficient conditions are for a freely willed action in a volitional agent.
In my humble and speculative opinion, the sufficient conditions for FW are:
1. A two-way communication link between brain (or robot) and the conscious agent.
2. A working connection between perception-related components of the brain (or robot) and the output side of that link.
3. A working connection between the motor function components of the brain (or robot) and the input side of that link.

The necessary conditions are (again IMHASO):
1. The conscious agent must know that multiple options for action are available.
2. The conscious agent must know at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
3. The conscious agent must be able to choose and execute one of the options in the folklore sense of FW.
 
  • #81
Math Is Hard said:
... I find it baffling - it just doesn't seem possible - and I wondered what your thoughts were on this.
For what it's worth, here are my thoughts.

The delay is accounted for by the time it takes for information about the perception of the signal to travel over the link from brain to conscious agent, for the conscious agent to exercise a FW action, and for the command executing this action to travel back across the link to the brain. Part of the motor action is the expression of the report that conscious awareness of the stimulus and action has occurred.

What I would suggest people consider when trying to interpret this experiment is the possibility that consciousness is not seated in the brain but instead somewhere that requires a measurable amount of time for a signal to travel between the two. For an extreme analogy, think of the brain as the computer on a Mars rover and the conscious agent as the scientist at JPL driving the rover. The signal delay in this case is substantial.

If we perform Libet's experiment on the rover, we will stimulate the on-board computer and measure the reaction time. If the response can be made strictly from the rover without requiring communication with JPL, then this would be equivalent to a reflex action and consciousness would not be involved.

If the stimulation needs conscious attention before an action can be taken, then a round-trip communication with JPL must take place causing a long delay.

To duplicate Libet's "baffling" case, suppose the scientist at JPL wants to initiate some rover action, verify that the action occurred, and then report from the rover to an observer on Mars that the scientist knows that the action took place. The command would be sent to the rover initiating the action. The rover would then transmit back to JPL information about the results of the action. The scientist would then become aware of the action and send the signal back to the rover reporting that the action occurred. The delays involved would be obvious.
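
To put toy numbers on this, here is a minimal sketch (Python; the delay values are arbitrary stand-ins, not measurements) of why a locally handled reflex reports quickly while anything routed through the conscious agent pays a full round trip:

```python
LINK_DELAY = 0.25   # one-way brain<->agent latency (arbitrary stand-in, s)
LOCAL_TIME = 0.01   # on-board processing time on the "rover" (s)

def reflex():
    # Handled entirely on board: no communication with the agent needed.
    return LOCAL_TIME

def conscious_act():
    # Perception travels out, the agent decides, the command travels back.
    return LINK_DELAY + LOCAL_TIME + LINK_DELAY

print(f"reflex response:    {reflex():.2f} s")
print(f"conscious response: {conscious_act():.2f} s")   # one full round trip
```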
 
  • #82
Doctordick said:
Why not?

It's begging the question.
 
  • #83
Tournesol said:
It's begging the question.
It's begging what question? If you are going to go around discounting possibilities, it seems to me that your position is quite closed-minded. I certainly do not claim infallible knowledge on any point.

You seem so rational when you talk to others (at least, in the great majority of cases, I find your responses to be quite rational), but your responses to my comments almost always surprise me. The only explanation I can comprehend at the moment is that you just don't understand what I am saying, and I don't know where the fault lies.

Totally in the blind -- Dick
 
  • #84
Paul Martin said:
For what it's worth, here are my thoughts.
Hi Paul,
Thank you for your thoughts. I'm sorry I have been taking a long time to think through this. I am slow. :redface: I thought about this some this morning just as I was waking and then I got up and drew some diagrams and tried to understand your analogy better.

What I still can't get is that this "conscious agent" that you mentioned seems to be an un/pre/sub conscious (still searching for the right word) agent since it is acting before any processing that occurs in the physical brain. Can we still call it a conscious agent if its commands occur before conscious awareness of giving the instructions?

On another topic: Here is a possibility that I am considering. I send an instruction to the Mars rover, and the algorithm says, "over the next 3 minutes, at random intervals you will turn in a random direction". So consciously I have made the decision that the robot will perform random actions during the time span I have specified. This only happens because I decided it. This is why I don't buy any of these arguments against free will. No matter what the robot randomly chooses to do, it was I who placed the order to act randomly (but in the desired fashion) in the first place.
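
To make the two parts explicit, here is a minimal sketch (Python, with invented names; nothing like a real rover interface) in which issuing the routine is the willed act and only the turns inside it are random:

```python
import random
import time

def act_randomly(duration_s=180, rng=random):
    # Hypothetical rover routine: for roughly duration_s seconds, turn
    # in a random direction at random intervals. Each turn is random,
    # but running this routine at all was a deliberate, willed decision.
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        time.sleep(rng.uniform(1, 30))                # random interval
        print(f"turn to heading {rng.uniform(0, 360):.0f} degrees")

act_randomly(duration_s=10)   # short demo run
```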

I'd be happy to hear your thoughts on this. I apologize if I misunderstood any of your comments in my naivete. :redface:
 
  • #85
Paul Martin said:
In my humble and speculative opinion, the sufficient conditions for FW are:
1. A two-way communication link between brain (or robot) and the conscious agent.
2. A working connection between perception-related components of the brain (or robot) and the output side of that link.
3. A working connection between the motor function components of the brain (or robot) and the input side of that link.

The necessary conditions are (again IMHASO):
1. The conscious agent must know that multiple options for action are available.
2. The conscious agent must know at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
3. The conscious agent must be able to choose and execute one of the options in the folklore sense of FW.

What's a "conscious agent" if you define it as necessarily separate from the brain and/or robot? If you take that phrase out of your formulation, the Mars rover meets your standards (unless you're using an experiential, rather than functionalist definition of the verb 'to know').
 
  • #86
Tournesol said:
The question is whether a complex system like the brain can utilise randomness to obtain "elbow-room" (the ability to have done otherwise) without sacrificing rationality. Given the limits on de-facto rationality, I think the answer is yes.
IMHO, "free will" is not in any way dependent on the presence of randomness.

Perhaps you would care to explain how you can take an agent bereft of free will, and then suddenly endow free will simply by introducing some randomness into its thought processes?

The idea is a non-starter.

See this thread for a much deeper discussion of the concepts involved:

https://www.physicsforums.com/showthread.php?t=71281

MF
:smile:
 
  • #87
Sorry to sound like a stuck record, but I've noticed that the debate in this thread revolves around the concepts of "free will" and "consciousness" - but have the participants agreed on definitions of these concepts? (I quickly scanned the thread, so I apologise if these definitions have been agreed already.)

In so many debates I see people taking sides and arguing endlessly against each other, when in fact they are just wasting so much time because they are not defining things the same way.

Can anyone summarise the definitions of "free will" and "consciousness" that are pertinent to this debate?

Cheers!

MF
:smile:
 
  • #88
Doctordick said:
It's begging what question?

Let me reconstruct...

Tournesol said:
That is back to front. If you have reason to believe FW is impossible (such as reason to believe in determinism and to reject compatibilism), then you have reason to conclude FW can only be an illusion. But you are certainly not entitled to start off on that basis.

<<i.e., not entitled to start off on the basis that FW can only be an illusion>>

DD said:
Why not?

Tournesol said:
It's begging the question

<<i.e., starting off on the basis that FW can only be an illusion is begging the question>>


If you are going to go around discounting possibilities,

Assuming FW must be illusory is discounting possibilities.
 
  • #89
moving finger said:
IMHO, "free will" is not in any way dependent on the presence of randomness.

Perhaps you would care to explain how you can take an agent bereft of free will, and then suddenly endow free will simply by introducing some randomness into its thought processes?
How can an agent have FW without the ability to have done otherwise ?
 
  • #90
Tournesol said:
How can an agent have FW without the ability to have done otherwise ?
Randomness ensures that an outcome is indeterministic. What does this have to do with "free will"?

How does the introduction of an indeterministic outcome suddenly endow "free will" to an agent that was previously bereft of "free will"?

Can you give an example?

MF
:smile:
 
  • #91
moving finger said:
Randomness ensures that an outcome is indeterministic. What does this have to do with "free will"?

It's to do with the ability to have done otherwise.

Again.
 
  • #92
Tournesol said:
It's to do with the ability to have done otherwise.
But INDETERMINISM DOES NOT ENDOW FREE WILL.

I note that you choose to attempt answers only to the questions that you can answer.

I also asked:
moving finger said:
How does the introduction of an indeterministic outcome suddenly endow "free will" to an agent that was previously bereft of "free will"?
and:

moving finger said:
Can you give an example?
Both of which you ignored.

With respect, Tournesol, it seems obvious to me from your reluctance to provide explanations that you do not understand the problem.

This is why I asked you to give an example of how your “randomness” is supposed to endow an otherwise deterministic agent with “free will”. You have not given such an example (I suspect because you cannot give one).

MF
:smile:
 
  • #93
Tournesol said:
It's to do with the ability to have done otherwise.
The Libertarian hypothesises that indeterminism is supposed to somehow mysteriously "allow the agent to have done otherwise" - in other words that the action of indeterminism at some stage in the agent's decision-making process somehow (but mysteriously) endows "free will" upon that agent.

Conversely, I suggest that the association of "free will" with indeterminism is erroneous, and the MOST that can ever be accomplished by the introduction of indeterminism anywhere into the agent's decision-making process is... indeterminism!

Let us try to examine how the Libertarian hypothesis could possibly work.

Let us assume that at a particular point in time an agent is able to follow one of many different possible courses of action, and hence needs to make a very generic decision about "which course of action to follow" from the alternative possibilities available. The Libertarian would say that the agent is able to make a "free will" decision if and only if we can somehow correctly introduce indeterminism into the agent's decision-making process.

Now, if we introduce the indeterminism into the process BEFORE the agent makes a decision (antecedent indeterminism), then this could possibly be translated as "throwing up a different alternative course of action" for the agent to consider in its decision-making process. But there are in fact no "alternative courses of action" that indeterminism can "throw up" which would not also be accessible to the agent via a purely deterministic process. In other words, a purely deterministic agent would have just as many possible alternative courses of action available to it as would the agent operating with antecedent indeterminism.

The introduction of indeterminism BEFORE the moment of the agent's decision therefore does not necessarily lead to a different range of possible alternative courses of action; it simply "introduces indeterminism" into the proceedings prior to the decision-making process, and cannot in fact make any difference to the agent's "free will" to choose between the different alternative courses of action.

Now the Libertarian may say therefore that the indeterminism needs to be introduced subsequent to (rather than prior to) the agent's decision-making process. But I hope it is transparently obvious (without me having to explain the details) that any indeterminism in the process subsequent to the agent's decision simply makes the outcome indeterministic, and cannot possibly have any bearing on any free will of the agent during decision making!

Conclusion: There appears to be no way that introducing indeterminism into the agent's decision-making process can actually endow the agent with free will. Therefore, if an agent does not already possess free will in the absence of indeterminism (as the Libertarian suggests), then no free will is possible. The Libertarian concept of free will is thus inconsistent.

MF
:smile:
 
  • #94
loseyourname said:
What's a "conscious agent" if you define it as necessarily separate from the brain and/or robot?
I'm not sure how to parse your question. If you are asking what I mean by "conscious agent", I mean any agent capable of experiencing consciousness as I experience it.

If you are asking whether I require that the conscious agent necessarily be separate from the brain and/or robot, the answer is "no".

loseyourname said:
If you take that phrase out of your formulation, the Mars rover meets your standards (unless you're using an experiential, rather than functionalist definition of the verb 'to know').
I'm not sure I know exactly what you mean by the terms 'experiential' and 'functionalist', but when I said "The conscious agent must know..." I meant that it must have the same sort of experience I have when I realize that I know something. I do not consider that a thermometer "knows" the temperature in that same way, nor does the computer "know" my account number in that same way.

If you take the phrase "conscious agent" out of my formulation, you will have obliterated my standards altogether. In my view, a conscious agent is absolutely necessary for the concept of free will to have the meaning I would ascribe to it. No matter how sophisticated an algorithm you might incorporate into a machine, so that it can pass the Turing test, convince Dennett that it is as conscious as he, act and respond like an intelligent human, or better, I would still maintain that it would not have free will unless it actually experienced consciousness the way I do. It would have to meet my three necessary conditions and know what it was doing in order to have free will.

In my view, the Mars rover has free will as long as the JPL scientist is attending to its operation; the free will is just not seated in the rover vehicle. And, in my view, humans have free will as long as they are awake; I just don't think the free will (or the consciousness in general) is seated in the brain.
 
  • #95
Math Is Hard said:
Thank you for your thoughts. I'm sorry I have been taking a long time to think through this. I am slow.
You're welcome. No need to be sorry; I assure you that you are no slower than I am.

Math Is Hard said:
What I still can't get is that this "conscious agent" that you mentioned seems to be an un/pre/sub conscious (still searching for the right word) agent since it is acting before any processing that occurs in the physical brain.
Before I start, I should point out that my views are very different from those of most other people. So be careful if you try to reconcile what I say with other things you read.

I would suggest that you call off your search for the right word. I think we are heading for trouble whenever we think that words are magic and that if we only pick the right one, everything will become clear.

I think the notion of free will is that you can do something you want to do, if and when you decide to do it.

Now that statement is loaded with words we need to pick apart too so we don't get into trouble. First, I used the term 'you' to identify the actor in this scenario. We are making an assumption we should acknowledge if we consider that the actor, "you", is the same in all three actions. In my view, that is a bad assumption. I think that "you" are composed of two separable entities: Your consciousness, and your physical body/brain. If you don't acknowledge that separation, then my analogy won't make sense.

There are three different kinds of "action" going on here: "wanting", "deciding", and "doing". Since your concern has to do with timing, let's consider the sequence of events. I think you would agree that wanting, deciding, and doing should occur in that order, even though some actions like impulse buying might interchange some of them.

But, as I listed in my necessary conditions for free will, in order to really be a free will action, at least the "deciding" and "doing" must be accompanied by conscious knowing. (The "wanting" may be below the conscious radar in some "un/pre/sub consciousness".) So the question is, where does the "knowing" fit into the sequence of "deciding" and "doing"? It may fit in several places. You may know you want to do something long before you do it. Or you may not consciously know you want to, even though you decide to do it. If you actually make a conscious decision to do the thing, then by the very nature of consciousness you know you are making the decision all the while, during the transition from indecision to decision.

There might be some delay between having made the decision and actually doing the thing. You might have decided to let the action be triggered by some stimulus or you might just go ahead and do it as soon as you decided. At any rate, you know that you are doing it as soon as you do it. And, finally, you probably get some immediate feedback so that you know that you have done it soon afterwards.

All of this "knowing" is going on in consciousness. We shouldn't be hasty in assuming how this knowing correlates with brain functions, or "processing that occurs in the physical brain" as you put it.

The whole point of my Mars rover analogy was to clearly separate the functions of consciousness and knowing (resident in the JPL scientist) from the "processing that occurs in the physical brain" (resident in the rover and its on-board computer) and to exaggerate the delays in communication between them. So it is clear, as you say, that the conscious agent is acting before any processing that occurs in the physical brain. But the conscious agent is involved in "knowing" at several points along the process, and there will be a delay in the reporting of any of these incidences of "knowing".

Math Is Hard said:
Can we still call it a conscious agent if its commands occur before conscious awareness of giving the instructions?
Keep in mind that in my view conscious awareness occurs only in the conscious agent. The reporting of conscious awareness is a different thing. That would involve the conscious agent deciding to issue a report of the conscious experience and then doing it, along the same lines as we just discussed for doing anything else. Thus there would be a delay between the commands being issued and the reporting of the conscious awareness of the commands being issued. So the commands don't really occur before conscious awareness of giving them.

Math Is Hard said:
On another topic: Here is a possibility that I am considering. I send an instruction to the Mars rover, and the algorithm says, "over the next 3 minutes, at random intervals you will turn in a random direction". So consciously I have made the decision that the robot will perform random actions during the time span I have specified. This only happens because I decided it. This is why I don't buy any of these arguments against free will. No matter what the robot randomly chooses to do, it was I who placed the order to act randomly (but in the desired fashion) in the first place.
I think there are two things going on here that are pretty easy to separate: a willful action and a random action. This is the same as me deciding to flip a coin. The decision to flip and the action of flipping are the result of free will on my part. But the result, of a tail or a head, is strictly random and not the result of my will. I don't think this presents any argument for or against free will.
 
  • #96
Paul Martin said:
The necessary conditions are (again IMHASO):
1. The conscious agent must know that multiple options for action are available.
2. The conscious agent must know at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
3. The conscious agent must be able to choose and execute one of the options in the folklore sense of FW.

I hope you don’t mind if I suggest that we add a fourth necessary condition (which I know is implicit in your conditions, but here I am making it explicit):

4. The agent must be conscious.

I would also respectfully suggest (IMHO) that “know” in the above is too strict, and in fact any agent which simply “believes that it knows” has the necessary conditions for free will (reasoning: I suggest we can never have infallible foreknowledge, thus in a strict sense it is never possible to infallibly “know” about future options; the best we can do is to believe, or to believe that we know), and the 4 necessary conditions then become:

1. The agent must believe that multiple options for action are available.
2. The agent must know (or believe that it knows) at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
3. The agent must be able to choose and execute one of the options in the folklore sense of FW.
4. The agent must be conscious.

Would you agree?

Would you also agree that all of the above necessary conditions are compatible with determinism?

If not, why not?

MF
:smile:
 
  • #97
Wow, a potentially very interesting thread became yet another playground for word games on determinism/indeterminism. You would do well to make that discussion more fruitful by focusing on its practical implications.

For instance, what good is restricting the freedom of an individual to be in a particular building for at least 7 hours a day, 5 days a week, three-quarters of the year, for most of the first two decades of her life? In the same manner, what is good about restricting the freedom of her mind to study the same things, at the same pace, and from the same person?

When you start talking about freedom philosophically, please apply it to something practical. It helps to elucidate what you're talking about.


Now that that's done with: the first time I read about this half-second delay was in Fred Alan Wolf's The Dreaming Universe. The author attributes this delay to a mystical phenomenon whereby our actions are actually guided by the future; therefore, time and space are not what we think they are, and blah blah blah blah. He has a Ph.D. in quantum physics, but apparently that doesn't guard him from being slightly off the wall. I think his interpretation is contrived and practically useless, but someone mentioned earlier that he/she would appreciate all links on the subject. Perhaps someone else here could make better use of that book.
 
  • #98
Telos said:
For instance, what good is restricting the freedom of an individual to be in a particular building for at least 7 hours a day, 5 days a week, three-quarters of the year, for most of the first two decades of her life? In the same manner, what is good about restricting the freedom of her mind to study the same things, at the same pace, and from the same person?

It prepares her to sit in a cubicle for 8 or more hours a day, at least five days a week, for thirty or forty years, in order to earn her living. Unless she spends years cooped up in a house with immature children, which is almost worse!

Did you think we were called into this world to enjoy it?
 
  • #99
moving finger said:
I hope you don’t mind if I suggest that we add a fourth necessary condition (which I know is implicit in your conditions, but here I am making it explicit) :

4. The agent must be conscious.
I don't mind at all. Not only should this condition be included but I think it should be listed as number one. Furthermore, I think we should always use the adjective 'conscious' when mentioning the agent just so we don't lose sight of the important and necessary fact of consciousness.

moving finger said:
I would also respectfully suggest (IMHO) that “know” in the above is too strict, and in fact any agent which simply “believes that it knows” has the necessary conditions for free will
Here I respectfully disagree. I tried to be careful in writing my conditions, and after reviewing them in the light of your suggestion, I stand by what I wrote. In my judgment, the 'ability to know' is the most fundamental of all of the aspects of consciousness. I suspect that most, if not all, the rest can be derived from the ability to know.

moving finger said:
I suggest we can never have infallible foreknowledge, thus in a strict sense it is never possible to infallibly “know” about future options
I agree with the fallibility of foreknowledge. I agree that in a strict sense it is not possible to infallibly know much if anything about future options. But I insist that the conscious agent must know *that* there are options available in order for there to be free will. If the conscious agent only suspected that there were options, or believed that there were options, then an action might be induced on that basis. But I would disqualify such an action as a free will action and lump it in with coin tosses.

moving finger said:
2. The agent must know (or believe that it knows) at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
I would not agree to weaken this condition by including the parenthetical phrase for the same reason as above. I think I weakened it enough by including the "at least something about" and "at least some of" qualifiers.

moving finger said:
Would you agree?
No.

moving finger said:
Would you also agree that all of the above necessary conditions are compatible with determinism?
No. Not also, and not at all.

moving finger said:
If not, why not?
I am on thin ice here because I am never comfortable with any word ending in "ism". I just don't understand well enough what those words mean, and there is usually a society of specialists who claim ownership of those kinds of words, which together is enough to make me hesitant. But since you asked me, I'll try to answer your question.

First, let me define what I would mean if I were to use the term 'determinism'. To me, determinism means that the evolution of the states of a system over which determinism holds can follow only a single course. That is, there can be only one outcome in a deterministic system. In principle, this can be tested by restoring the initial conditions of the system and letting it evolve again. As many times as this is done, the outcome will always be the same.
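
That playback test is easy to state as code. A minimal sketch (Python, with a toy update rule standing in for any deterministic law):

```python
def evolve(state, steps):
    # A deterministic rule: the next state depends only on the current one.
    for _ in range(steps):
        state = (3 * state + 1) % 101
    return state

# The playback test: restore the initial conditions and let it evolve again.
# In a deterministic system, every replay must produce the same outcome.
runs = [evolve(state=7, steps=50) for _ in range(5)]
assert len(set(runs)) == 1
print(runs)
```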

If my necessary conditions for free will obtain, and you ran this "playback" thought experiment several times, the conscious agent could choose different options for the same conditions in different runs, thus producing different outcomes.
 
  • #100
I remember reading a paper about a week ago (God, I wish I could remember where I found it) that discussed this same issue and how to create a machine that could emulate the apparent freedom of human behavior. You simply create a program that can develop hypotheses, based on memory, about the outcomes of different courses of action. Based on its initial programming along with whatever it has learned through experience, it chooses the course of action that is most desirable. If multiple outcomes are equally desirable, or multiple actions will bring about the same outcome, then a random number generator is used to select one arbitrarily.
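
In outline, such a program could be as simple as the following sketch (Python; the function names and the toy "memory" are my invention, not the paper's):

```python
import random

def desirability(outcome, memory):
    # Learned preference: the average remembered payoff of an outcome.
    payoffs = memory.get(outcome, [0.0])
    return sum(payoffs) / len(payoffs)

def pick_action(options, memory, rng=random):
    # options maps each action to the outcome the machine predicts for it.
    scores = {action: desirability(outcome, memory)
              for action, outcome in options.items()}
    best = max(scores.values())
    ties = [action for action, score in scores.items() if score == best]
    return rng.choice(ties)   # arbitrary pick among equally good options

memory = {"reach food": [1.0, 0.8], "stay put": [0.1]}
options = {"go left": "reach food", "go right": "reach food",
           "freeze": "stay put"}
print(pick_action(options, memory))   # "go left" or "go right", at random
```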

This machine would display all of the behavior you guys want from a free agent. It weighs options, choosing the best based on its preferences, and it could, in principle, choose differently each time if the possible courses of action make little difference to it. Its behavior would not be any more predictable than human behavior. The only thing it is lacking is consciousness. Do we really want to say that being conscious of your behavior is all that is required for free will? Does that mean a conscious rock would have free will?
 
  • #101
loseyourname said:
This machine would display all of the behavior you guys want from a free agent.
Behavior, yes. But behavior is a very unimportant aspect of this topic.
loseyourname said:
The only thing it is lacking is consciousness.
But... that's the only thing that *is* important in a discussion of consciousness. I also happen to think it is the most important thing that exists in the universe, but you don't have to buy into that just yet.
loseyourname said:
Do we really want to say that being conscious of your behavior is all that is required for free will?
Not me. I specified earlier in this thread exactly what I think is required for free will, the most important of which necessary conditions is consciousness.
loseyourname said:
Does that mean a conscious rock would have free will?
It would if and only if it met the other necessary conditions. The one about being able to execute a willful action would be the one where the rock would probably fail.
 
  • #102
selfAdjoint said:
Did you think we were called into this world to enjoy it?
You didn't ask me, but I'll give you my answer to your question anyway. Yes, I think we were called into this world to enjoy it. I think there are three other reasons as well: to create new things to enjoy, to help others enjoy, and to figure out how it all works.

I think each of us has some in-born compulsion to do some weighted combination of these things, the weightings varying quite a bit from individual to individual.

You had to ask.
 
  • #103
selfAdjoint said:
As the links make clear, Libet's own defense of free will is that the individual can "veto" the brain's action after it has begun and before the actual physical action begins. This seems to me as much sheer desperate invocation of magic as every other explanation of strong free will.
I would be interested to know what you mean by this. I agree with your general idea, and I was thinking that the 'veto power' is itself nothing more than an action of the brain, and therefore subject to the delay. Can't we say that the veto action also needs a readiness potential? And that the physical expression of that particular readiness potential (for the veto) is the suppression of some former readiness potential (perhaps remaining motionless instead of throwing a punch)? This would support the illusion, but wouldn't Libet have thought of this?
 
  • #104
kcballer21 said:
I would be interested to know what you mean by this. I agree with your general idea, and I was thinking that the 'veto power' is itself nothing more than an action of the brain, and therefore subject to the delay. Can't we say that the veto action also needs a readiness potential? And that the physical expression of that particular readiness potential (for the veto) is the suppression of some former readiness potential (perhaps remaining motionless instead of throwing a punch)? This would support the illusion, but wouldn't Libet have thought of this?

Well, I have no problem with at least conjecturing that kind of thing, subject to experimental investigation. But the point of Libet's expressed veto was that it be non-deterministic, that it have no explainable chain of causes. And as others have pointed out, that is really an incoherent desire.
 
  • #105
Paul Martin said:
Behavior, yes. But behavior is a very unimportant aspect of this topic.

What the heck? We're discussing whether or not actions are free. Are actions not a form of behavior? Don't you agree that being free to control your behavior against deterministic outputs should be manifested somehow in your behavior? Could a being with no behavior be free? Free to do what? It couldn't do anything.

But... that's the only thing that *is* important in a discussion of consciousness.

But... this is a discussion of free will, at least at this point. It isn't a discussion of consciousness. In order to make it a discussion of consciousness, we'll have to first conclude that no non-conscious being could ever have free will. Presumably this is because consciousness in this conception is a causal agent that is non-deterministic yet not completely random. So what does that mean? We're just back at step one. Saying something is free because it is conscious doesn't solve anything. Is consciousness an uncaused cause? Some kind of agent that makes decisions out of the blue according to no set of rules?

I also happen to think it is the most important thing that exists in the universe, but you don't have to buy into that just yet.

What is meant by 'important'? It's certainly important to me. Without it, I wouldn't have much else going for me.

Not me. I specified earlier in this thread exactly what I think is required for free will, the most important of which necessary conditions is consciousness. It would if and only if it met the other necessary conditions. The one about being able to execute a willful action would be the one where the rock would probably fail.

So what about our super Mars Rover, complete with learning software and a random number generator. Let's say that it is also designed in such a way that it is conscious. Its actions are still dictated by the same set of dynamic rules and random output and its behavior is exactly the same. Is it then free?
 