How can brain activity precede conscious intent?

In summary, Benjamin Libet and Bertram Feinstein found a half-second delay between cortical stimulation and the reported sensation. There are also pre-conscious signals, associated with a person's chosen motor task, that precede the conscious intent to act.
  • #106
Paul Martin said:
Here I respectfully disagree. I tried to be careful in writing my conditions, and after reviewing them in the light of your suggestion, I stand by what I wrote. In my judgment, the 'ability to know' is the most fundamental of all of the aspects of consciousness. I suspect that most, if not all, the rest can be derived from the ability to know.
Then I believe there is a fundamental problem with your concept of free will.
By "the agent knows" (as opposed to "the agent believes that it knows") I assume that you mean "the agent knows infallibly"? ie that the agent's knowledge is guaranteed to be 100% absolutely correct with no possibility of it being wrong?
I believe that such infallible epistemic "knowledge" is in principle not possible for an agent. IMHO therefore this "necessary condition" could never be met.

Paul Martin said:
I agree with the fallibility of foreknowledge. I agree that in a strict sense it is not possible to infallibly know much if anything about future options. But I insist that the conscious agent must know *that* there are options available in order for there to be free will.
This to me seems a contradiction.
Part of the "foreknowledge" of a future option is actually to "know whether it will be available or not". If infallible foreknowledge of future options is not possible (as you agree), then it seems to me that it follows trivially that the agent cannot know infallibly whether any particular future option will be available or not, ie it cannot know infallibly *that* there are options available. It can "believe that it knows" (I agree), but it cannot “know infallibly”.

Paul Martin said:
If the conscious agent only …. believed that there were options, then an action might be induced on that basis. But I would disqualify such an action as a free will action …...
Such an action may indeed not qualify as free will under your definition of free will, but your definition is not the only possible definition, and as I said above I do not see how your necessary condition (2) can ever be met if you insist on infallible knowledge.

Paul Martin said:
I would not agree to weaken this condition by including the parenthetical phrase for the same reason as above. I think I weakened it enough by including the "at least something about" and "at least some of" qualifiers.
My same reply as above.

Paul Martin said:
I am on thin ice here because I am never comfortable with any word ending in "ism". I just don't understand well enough what those words mean, and there is usually a society of specialists who claim ownership of those kinds of words, which together is enough to make me hesitant.
OK, please rest assured I am not trying to pull any “tricks” here. Let me provide my definition of determinism :

Definition of Determinism : The universe, or any self-contained part thereof, is said to be evolving deterministically if it has only one possible state at time t1 which is consistent with its state at some previous time t0 and with all the laws of nature.
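A compact symbolic restatement of that definition (the notation is mine, introduced only for clarity):

```latex
% S(t) = the complete state of the system at time t; the map F encodes
% "all the laws of nature". Determinism = the later state is a
% single-valued function of the earlier one:
S(t_1) = F\bigl(S(t_0)\bigr) \quad \text{for all } t_1 > t_0
% i.e. exactly one state at t_1 is consistent with S(t_0) and the laws.
```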

Paul Martin said:
But since you asked me, I'll try to answer your question.

First, let me define what I would mean if I were to use the term 'determinism'. To me, determinism means that the evolution of states over which determinism holds can follow only a single course. That is, there can be only one outcome in a deterministic system. In principle, this can be tested by restoring the initial conditions of the system and letting it evolve again. As many times as this is done, the outcome will always be the same.
OK, I believe my definition agrees completely with this.

Paul Martin said:
If my necessary conditions for free will obtain, and you ran this "playback" thought experiment several times, the conscious agent could choose different options for the same conditions in different runs, thus producing different outcomes.
Interesting.
Why do you say the agent “could choose different options for the same conditions in different runs”?

And is what you say here derived logically from your stated definition of free will and necessary conditions for free will (in which case can you show how it follows), or is it simply an intuitive feeling that you have?

Some things to ponder on :
If the world is operating deterministically then the agent is also covered by this, hence it follows that the agent could NOT in fact "choose different options for the same conditions in different runs".

Thus if you are suggesting that the agent can "choose different options for the same conditions in different runs" this would seem to imply that the world (at least the part that is concerned with the agent's choice) is not operating deterministically.

But if the agent's choice is not deterministic, then what is it? Indeterministic?

Would you care to explain how the introduction of indeterminism into the agent's method of choice endows that agent with "free will"?

Thanks!

MF
:smile:
 
  • #107
selfAdjoint said:
Did you think we were called into this world to enjoy it?
I must be deaf, I never heard any call! :biggrin:

MF
:smile:
 
  • #108
loseyourname said:
I remember reading a paper about a week ago ... how to create a machine that could emulate the apparent freedom of human behavior. You simply create a program that can develop hypotheses based on memory about the outcomes of different courses of action. Based on initial programming along with whatever it has learned through experience, it chooses the course of action that is most desirable. If multiple outcomes are equally desirable or multiple actions will bring about the same outcome, then a random number generator is used to select one arbitrarily.

This machine would display all of the behavior you guys want from a free agent. It weighs options, choosing the best based on its preferences, and it could, in principle, choose differently each time if the possible courses of action make little difference to it. Its behavior would not be any more predictable than human behavior.
Interesting.
All of what has been described above about free will in a machine (allowing for some woolliness in the language) I can see as being entirely compatible with a deterministic world.
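For concreteness, here is a minimal Python sketch (my own construction, not the paper loseyourname read) of the machine described above. Once the random number generator is seeded, the whole agent is deterministic: Paul's "playback" test gives the same outcome on every run.

```python
import random

# Sketch of the machine described above (my construction): it scores
# candidate actions by remembered desirability, picks the best, and uses
# a random number generator only to break ties among equal options.

class Agent:
    def __init__(self, seed):
        self.rng = random.Random(seed)       # pseudo-random, hence deterministic
        self.memory = {}                     # action -> learned desirability

    def desirability(self, action):
        # Initial programming: unknown actions are neutral (0.0).
        return self.memory.get(action, 0.0)

    def learn(self, action, outcome_value):
        self.memory[action] = outcome_value  # update from experience

    def choose(self, actions):
        best = max(self.desirability(a) for a in actions)
        ties = [a for a in actions if self.desirability(a) == best]
        return self.rng.choice(ties)         # arbitrary pick among equals

def playback(seed):
    """Restore the initial conditions (seed + empty memory) and re-run."""
    agent = Agent(seed)
    return [agent.choose(["type reply", "eat lunch"]) for _ in range(5)]

# Same initial conditions -> same outcomes, every time: deterministic.
assert playback(42) == playback(42)
```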

loseyourname said:
The only thing it is lacking is consciousness. Do we really want to say that being conscious of your behavior is all that is required for free will? Does that mean a conscious rock would have free will?
It’s not suggested that being "conscious of your behaviour" is all that is required – read the “necessary conditions” posts above.

Show me a conscious rock, and if it also meets the other necessary conditions, then I'll show you a rock with free will. :biggrin:

MF

:smile:
 
  • #109
selfAdjoint said:
the point of Libet's expressed veto was that it be non-deterministic, that it have no explainable chain of causes. And as others have pointed out, that is really an incoherent desire.
That's a euphemism if ever there was one!

Let's call a spade a spade. If the veto is "non-deterministic" then this is the same as saying it is "random" or "indeterministic".

"Incoherent desire" is therefore a sugar-coated "random event".

MF
:smile:
 
  • #110
Definition of Free Will

loseyourname said:
So what about our super Mars Rover, complete with learning software and a random number generator. Let's say that it is also designed in such a way that it is conscious. Its actions are still dictated by the same set of dynamic rules and random output and its behavior is exactly the same. Is it then free?
Whether it is acting with "free will" depends on your chosen definition of "free will" and (importantly) whether that definition is self-consistent or not (ie free will defined in a non-self-consistent way simply cannot exist, no matter how intuitively "right" it feels).

I'll show you mine if you show me yours.

MF
:smile:
 
  • #111
moving finger said:
Then I believe there is a fundamental problem with your concept of free will.
Good. I am eager to examine any beliefs that challenge my own beliefs. I figure that is the best way to change my own beliefs, if they are due for a change, to bring them closer to the truth. Let's have a look.

moving finger said:
I assume that you mean "the agent knows infallibly"?
Yes.

moving finger said:
ie that the agent's knowledge is guaranteed to be 100% absolutely correct with no possibility of it being wrong?
Yes, to the 100% part and following. I am not aware of any guarantee, though. I strongly suspect that at least there is not one in writing.

moving finger said:
I believe that such infallible epistemic "knowledge" is in principle not possible for an agent. IMHO therefore this "necessary condition" could never be met.
I can understand how that belief could lead you to that opinion. And if "such infallible epistemic "knowledge" is in principle not possible for an agent" then I agree that we could logically conclude that my "necessary condition" could not be met.

But I don't share your first belief here. What exactly is the "principle" on which you seem to base it?

moving finger said:
This to me seems a contradiction.
Yes. And I think I see the reason it seems that way.

moving finger said:
Part of the "foreknowledge" of a future option is actually to "know whether it will be available or not".
The reason there seems to be a contradiction is that our definitions of 'foreknowledge' are inconsistent. Here you claim that knowledge of whether the option is available is part of foreknowledge. I specifically excluded that knowledge from being part of foreknowledge. Here's what I said:

Paul Martin said:
I agree that in a strict sense it is not possible to infallibly know much if anything about future options. But I insist that the conscious agent must know *that* there are options available in order for there to be free will.

moving finger said:
then it seems to me that it follows trivially that the agent cannot know infallibly whether any particular future option will be available or not, ie it cannot know infallibly *that* there are options available. It can "believe that it knows" (I agree), but it cannot “know infallibly”.
It seems that way to you because you are using your definition of 'foreknowledge'. Since we are working on understanding my sufficient conditions, I must respectfully ask you to consider them using my definition for 'foreknowledge'. Otherwise my intent will be hopelessly confused and lost. Using my definition, the conscious agent can "know infallibly" that it has options.

Here's how I would sum up my view of this in plain words: The conscious agent could truthfully say the following about some free-will choice: "I know I can pick. I don't exactly know how to pick, and I don't know exactly what will happen if I pick. But I know I can pick." I say that the conscious agent has free will if and only if he/she can truthfully say something like that about a particular choice of action.


moving finger said:
My same reply as above.
As is mine, although I should explain that when I said "don't exactly know" I mean that at least something must be infallibly known.

moving finger said:
OK, I believe my definition agrees completely with this.
Yes, I think we see eye-to-eye as to what determinism is.

Where we might differ is in the identification of exactly what is deterministic and what is not. As I have said many times (but since it doesn't seem to take, it bears repeating), in my view reality consists of a conscious agent which has free will, and the thoughts of that conscious agent. Those thoughts constitute the *rest of* reality; it is the mysterious Void filled with nothing and at the same time physical universes.

So, in my view, free will inheres only in the conscious agent (hence the absolute requirement for consciousness in my conditions). The "rest of" reality, the universe(s), etc. may operate deterministically in part, or some actions within it may be determined by conscious will (always and only exercised by the one conscious agent.)

You can think of this picture as a person sitting at a computer running an implementation of a cellular automaton program. The program allows the person to hit a key at any time during the evolution of the patterns and stop the action, change any cell, and then resume the action. The evolution of the automaton's patterns is deterministic except for those times in which the person deliberately and consciously changes one or more cells. I think that's how reality works.
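As an illustration only (nothing in my view depends on these details), that picture can be mocked up in a few lines of Python: a one-dimensional cellular automaton steps deterministically, and an optional intervention hook plays the role of the person hitting a key and changing cells.

```python
# Toy version of the picture above: rule 110 (an arbitrary choice) evolves
# deterministically except where an outside "person" flips cells mid-run.

RULE = 110

def step(cells):
    """One deterministic update of a 1-D cellular automaton."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def evolve(cells, steps, interventions=None):
    """interventions maps a step index to the cell indices flipped there."""
    cells = cells[:]
    interventions = interventions or {}
    for t in range(steps):
        for i in interventions.get(t, []):
            cells[i] ^= 1          # the conscious "key press"
        cells = step(cells)        # otherwise pure determinism
    return cells

start = [0] * 20 + [1] + [0] * 20
assert evolve(start, 50) == evolve(start, 50)          # replay: identical
changed = evolve(start, 50, interventions={25: [10]})  # one flip, new history
```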
 
  • #112
Paul Martin said:
if "such infallible epistemic "knowledge" is in principle not possible for an agent" then I agree that we could logically conclude that my "necessary condition" could not be met.

But I don't share your first belief here. What exactly is the "principle" on which you seem to base it?
Heisenberg’s uncertainty principle would be a good starting point – I guess you have heard of it? This principle basically says that the world is indeed epistemically indeterminable. How would you incorporate this principle into your philosophy?

Since you seem to believe that infallible knowledge of possible future options (contrary to Heisenberg) is possible, would you care to give an example of what you consider to be such infallible knowledge?

Paul Martin said:
Part of the "foreknowledge" of a future option is actually to "know whether it will be available or not".
Paul Martin said:
The reason there seems to be a contradiction is that our definitions of 'foreknowledge' are inconsistent. Here you claim that knowledge of whether the option is available is part of foreknowledge.
This issue seems trivial.
Either the choice has not yet been made, and the agent believes choice options to be available, in which case these are “future options”, hence any supposed knowledge about them is knowledge about the future, hence foreknowledge is required.
Or the choice has been made, in which case I agree no foreknowledge is involved, but neither are there any options available (the choice has been made).

Paul Martin said:
I specifically excluded that knowledge from being part of foreknowledge. Here's what I said:

I agree that in a strict sense it is not possible to infallibly know much if anything about future options. But I insist that the conscious agent must know *that* there are options available in order for there to be free will.
With respect, all this achieves is that it includes the precondition “the conscious agent must know *that* there are options available” as part of your definition of free will. You have not actually shown that the precondition “the conscious agent must know *that* there are options available” can be met; you have simply asserted that this precondition needs to be met in order to render your definition of free will consistent.

Your definition of free will may be inconsistent.

Paul Martin said:
Using my definition, the conscious agent can "know infallibly" that it has options.
Your agent “can know infallibly” only because you have defined that the agent MUST know infallibly as part of your definition of free will. But defining that the agent MUST know infallibly in order to have free will does not in fact allow us to conclude that the agent CAN know infallibly. In other words, it may be the case that your definition of free will is not consistent (eg if it is not possible for an agent to know infallibly).

An analogy : It is possible to define free will as “the ability of an agent to have chosen differently to what it did actually choose”. It follows from this definition that for an agent to have free will, it must have been able to choose differently from what it did choose. But this does NOT prove that the agent could have chosen differently. All it proves is that IF the agent could have chosen differently then it also could have had free will, whereas if the agent could not have chosen differently then free will (as defined) is not possible.

In summary : What I am suggesting is that free will according to your definition implies EITHER that an agent has infallible knowledge of possible future options (this seems to be your interpretation) OR that free will as you have defined it is not possible (my interpretation, since I do not believe that an agent can have infallible knowledge of possible future actions).

Paul Martin said:
Here's how I would sum up my view of this in plain words: The conscious agent could truthfully say the following about some free-will choice: "I know I can pick. I don't exactly know how to pick, and I don't know exactly what will happen if I pick. But I know I can pick." I say that the conscious agent has free will if and only if he/she can truthfully say something like that about a particular choice of action.

Here is how I would re-phrase your summary in plain words :
The conscious agent could truthfully say the following about some free-will choice: "I believe that I know I can pick. I don't exactly know how to pick, and I don't know exactly what will happen if I pick. But I believe that I know I can pick." I say that the conscious agent has free will if and only if he/she can truthfully say something like that about a particular choice of action.

Paul Martin said:
You can think of this picture as a person sitting at a computer running an implementation of a cellular automaton program. The program allows the person to hit a key at any time during the evolution of the patterns and stop the action, change any cell, and then resume the action. The evolution of the automaton's patterns are deterministic except for those times in which the person deliberately and consciously changes one or more cells. I think that's how reality works.
Unfortunately, though it is clear that the cellular automaton program works deterministically, this does not give a clear idea of how the “person” operates. It seems to me that you have simply moved the problem from one level to another – it is not clear whether the “person” operates deterministically or not. How this free will actually works is still (in your model) a mystery.
MF

:smile:
 
  • #113
moving finger said:
Whether it is acting with "free will" depends on your chosen definition of "free will" and (importantly) whether that definition is self-consistent or not (ie free will defined in a non-self-consistent way simply cannot exist, no matter how intuitively "right" it feels).

I'll show you mine if you show me yours.

MF
:smile:

I don't personally believe in any concept of strong free will. All it means to me for an action to be free is that it is compelled by something internal to my own psyche, rather than by external coercion or pathology.
 
  • #114
loseyourname said:
I don't personally believe in any concept of strong free will. All it means to me for an action to be free is that it is compelled by something internal to my own psyche, rather than by external coercion or pathology.
I have no idea what you mean by strong free will
(but from the rest of your post I suspect we have some similar beliefs)

May I ask - do you believe your concept of free will is compatible with determinism?
MF
:smile:
 
  • #115
moving finger said:
Heisenberg’s uncertainty principle would be a good starting point – I guess you have heard of it? This principle basically says that the world is indeed epistemically indeterminable. How would you incorporate this principle into your philosophy?
Yes, I have heard of it. I would incorporate it into my philosophy by saying that the Uncertainty Principle applies to the world, which includes the physical universe, human bodies/brains, and the information available to the bodies/brains. I would say that it does not apply to reality as a whole which includes CC in addition to the world.

moving finger said:
Since you seem to believe that infallible knowledge of possible future options (contrary to Heisenberg) is possible
No. You missed the distinction again. I said that I believe infallible knowledge *that* options are available is possible. I admitted that knowledge *of* future options is probably incomplete or wrong.

moving finger said:
would you care to give an example of what you consider to be such infallible knowledge?
The certain knowledge I have that I can continue typing this response or I can take a break and have lunch. (You should interpret my use of 'I' here as 'TEOPM'. Readers who may be baffled should see my discussions with Moving Finger in the General Philosophy thread "A Constructive Critique of Libertarianism" for a definition of 'TEOx'.)

moving finger said:
This issue seems trivial.
Yes, I agree it is a trivial issue. Nevertheless, I don't think we have successfully communicated what we each have been trying to say about the issue.

moving finger said:
Either the choice has not yet been made, and the agent believes choice options to be available, in which case these are “future options”, hence any supposed knowledge about them is knowledge about the future, hence foreknowledge is required.
OK, let's say the choice has not yet been made. I say that the conscious agent *knows* that the choice is available. If not, then this example would not qualify as a free will option. And, yes, it is a "future option" in the sense that the conscious agent knows that the option exists before the choice is made to exercise the option. This knowledge, "that the option exists", is required in my view. Moreover, I require that it be infallible knowledge. So the trivial issue is whether we include this infallible knowledge in the scope of the definition of 'foreknowledge'. I really don't care as long as you understand that I mean the infallible knowledge *that* an option exists must exist in order to have free will, even though much or all of the rest of the foreknowledge related to the option may be in doubt or unreliable.

moving finger said:
Or the choice has been made, in which case I agree no foreknowledge is involved, but neither are there any options available (the choice has been made).
I agree. Furthermore, this case has nothing to do with free will.

moving finger said:
With respect, all this achieves is that it includes the precondition “the conscious agent must know *that* there are options available” as part of your definition of free will.
The respect is graciously acknowledged, and with respect, I would submit that including preconditions is an expected part of making a definition. That is all I was attempting to achieve.

moving finger said:
You have not actually shown that the precondition “the conscious agent must know *that* there are options available” can be met, you have simply asserted that this precondition needs to be met as part in order to render your definition of free will consistent.
True.

moving finger said:
Your definition of free will may be inconsistent.
True. That is why I invite anyone to demonstrate any inconsistency. I would like to be among the first to know about it.

moving finger said:
Your agent “can know infallibly” only because you have defined that the agent MUST know infallibly as part of your definition of free will. But defining that the agent MUST know infallibly in order to have free will does not in fact allow us to conclude that the agent CAN know infallibly.
True, and true.

moving finger said:
In other words, it may be the case that your definition of free will is not consistent (eg if it is not possible for an agent to know infallibly).
I can see how this would make my definition vacuous, but I don't see any inconsistency if in fact infallible knowing were impossible.

moving finger said:
An analogy : It is possible to define free will as “the ability of an agent to have chosen differently to what it did actually choose”. It follows from this definition that for an agent to have free will, it must have been able to choose differently from what it did choose. But this does NOT prove that the agent could have chosen differently. All it proves is that IF the agent could have chosen differently then it also could have had free will, whereas if the agent could not have chosen differently then free will (as defined) is not possible.
I agree. Both this definition and mine have the same "weakness" in that we can't prove that the definition is not vacuous.

moving finger said:
In summary : What I am suggesting is that free will according to your definition implies EITHER that an agent has infallible knowledge of possible future options (this seems to be your interpretation) OR that free will as you have defined it is not possible (my interpretation, since I do not believe that an agent can have infallible knowledge of possible future actions).
I agree with this summary (except that I would insert 'some' in front of the first appearance of 'infallible'.)

I would further summarize it by saying, Either free will exists as I have defined it, or there is no such thing. You believe the latter.

moving finger said:
Here is how I would re-phrase your summary in plain words :
The conscious agent could truthfully say the following about some free-will choice: "I believe that I know I can pick. I don't exactly know how to pick, and I don't know exactly what will happen if I pick. But I believe that I know I can pick." I say that the conscious agent has free will if and only if he/she can truthfully say something like that about a particular choice of action.
We disagree here. I'd say that if this is all there were, then there is no such thing as free will.

Ummm. I think I have a choice to stop here and have lunch or to keep typing, but I'm not sure. Can I decide or not? Hmmm. I don't seem to be able to. I just keep typing for some reason. I'll bet it is because the entire history of the universe and my history as a body/brain moving about in it has set the stage so that right now I am typing away even though I am hungry. That's probably it. I probably couldn't stop and eat if I wanted to. There is no free will at all.

moving finger said:
Unfortunately, though it is clear that the cellular automaton program works deterministically, this does not give a clear idea of how the “person” operates.
**Exactly!** This is one of the main messages I was trying to get across. I think there is very little hope of getting a clear idea of *how* CC operates. But the cellular automaton example clearly shows *that* the "person" operates in a way that interferes with the otherwise deterministic evolution of the automaton.

moving finger said:
It seems to me that you have simply moved the problem from one level to another
**Exactly!** That is exactly what my world-view does. It takes the great mystery of the Hard Problem and moves it to another level which is outside the physical world. That leaves the physical world explainable and understandable and it reduces the mysteries of reality as a whole down to just this single mystery. It's like moving the mystery of music coming out of a radio back to the transmitter where it really originates and where it belongs.

moving finger said:
How this free will actually works is still (in your model) a mystery.
Yes. But then again, it is a mystery in every model.
 
  • #116
loseyourname said:
What the heck? We're discussing whether or not actions are free. Are actions not a form of behavior? Don't you agree that being free to control your behavior against deterministic outputs should be manifested somehow in your behavior? Could a being with no behavior be free? Free to do what? It couldn't do anything.
OK. OK. I should have said "relatively unimportant" rather than "very unimportant". Yes actions are a form of behavior, but by far and away most actions in this universe do not enter into the question of free will. What we are trying to figure out is the determinant for those actions which we suspect might be influenced or determined by free will. It is that determinant which I think is relatively important while the action itself (the behavior) is relatively unimportant. What the heck. I wasn't very clear. I'm sorry.

loseyourname said:
But . . . this is a discussion of free will, at least at this point. It isn't a discussion of consciousness. In order to make it a discussion of consciousness, we'll have to first conclude that no non-conscious being could ever have free will.
Good point. I have certainly jumped to that conclusion myself as is evident from my list of conditions for free will. I will be glad to retreat if someone can tell me the difference between conscious free will and non- or unconscious free will that makes any sense.

loseyourname said:
Presumably this is because consciousness in this conception is a causal agent that is non-deterministic yet not completely random.
For this to be the reason I think you would have to strengthen it by saying that consciousness is the *only* non-deterministic yet not completely random causal agent. But I agree that it is premature to make such a claim.

loseyourname said:
So what does that mean? We're just back at step one. Saying something is free because it is conscious doesn't solve anything.
I agree. I think you have to include my entire list of necessary and sufficient conditions.

loseyourname said:
Is conciousness an uncaused cause?
I think so.

loseyourname said:
Some kind of agent that makes decisions out of the blue according to no set of rules?
I think it can do that.

loseyourname said:
What is meant by 'important'?
What I meant was that I think consciousness is a necessary ingredient in any complete explanation for what goes on in reality, in particular for what goes on in the behavior of people.

loseyourname said:
So what about our super Mars Rover, complete with learning software and a random number generator. Let's say that it is also designed in such a way that it is conscious.
OK.

loseyourname said:
Its actions are still dictated by the same set of dynamic rules and random output and its behavior is exactly the same.
Not necessarily. If it is conscious, and if it met my necessary and sufficient conditions, then in different runs of my thought experiment the outcomes could be different even when the random number generator returned identical sequences in the different runs (which it must do if you run the thought experiment carefully and correctly).
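(That parenthetical is just the reproducibility of a pseudo-random generator: restoring the initial conditions includes restoring its seed. A quick check in Python, for what it is worth:)

```python
import random

# Two generators restored to the same seed emit identical "random" sequences,
# so a careful replay of the thought experiment reproduces the RNG exactly.
g1, g2 = random.Random(7), random.Random(7)
assert [g1.random() for _ in range(10)] == [g2.random() for _ in range(10)]
```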

loseyourname said:
Is it then free?
In your scenario, where its actions were still dictated by the same mechanisms, then no, it is not free. In my scenario where some actions may be determined by my necessary condition number 3, then yes, it would be free. IMHO.
 
  • #117
moving finger said:
would you care to give an example of what you consider to be such infallible knowledge?
Paul Martin said:
The certain knowledge I have that I can continue typing this response or I can take a break and have lunch.
Please let me just clarify and replay your examples here (this is important to ensure there is no misunderstanding caused by any ambiguity). I hope you do not mind if I also re-phrase your example in terms of an independent (conscious) agent rather than “I” or “me” (because of the confusion this has caused already).

What you are actually saying (correct me if I am wrong) is the following :
1 : The agent has certain knowledge that it will be able to continue typing a response (ie it has certain knowledge that an option, the option “to continue typing a response”, will be available to it, as an option, in the future).
2 : The agent has certain knowledge that it will be able to take a break and have lunch (ie it has certain knowledge that an option, the option “to take a break and have lunch”, will be available to it, as an option, in the future).

Firstly : Can you explain how it is (what is the mechanism whereby) the agent can acquire this “certain knowledge” that these options will in fact be available (as opposed to it simply BELIEVING that they will be available)?

Secondly : I suggest that the agent does not in fact have “certain knowledge” that these options (or any other options) will be available to it. In an extreme (admittedly improbable, but nevertheless possible) example, the agent could be destroyed in the next instant by an asteroid which hits its home town. This would wipe out its ability both to continue, and to take a break and have lunch, and all other options. The agent in fact does not have certain knowledge that it will not be destroyed in this way (or any other way) in the next instant, therefore it does not have “certain knowledge” that the options you have described will in fact be available to it. Generalising, I conclude that no agent can have certain knowledge that any particular future option will be available.

Paul Martin said:
I say that the conscious agent *knows* that the choice is available. If not, then this example would not qualify as a free will option. And, yes, it is a "future option" in the sense that the conscious agent knows that the option exists before the choice is made to exercise the option. This knowledge, "that the option exists", is required in my view.
I understand that you stipulate (as part of your definition of free will) it is REQUIRED that the “agent knows infallibly that the option exists” in order for the agent to have “free will” according to your definition of “free will”. With respect, this is not the issue. The issue is whether it is in fact POSSIBLE for an agent to know infallibly that an option exists. I believe that I have shown above such infallible foreknowledge is not possible. Conclusion : “Free will” according to your definition is not possible.

Paul Martin said:
I would further summarize it by saying, Either free will exists as I have defined it, or there is no such thing. You believe the latter.
I believe free will exists. But I would define free will differently to you (as indicated already by my suggested changes to your necessary conditions, which changes you do not accept).

moving finger said:
Here is how I would re-phrase your summary in plain words :
The conscious agent could truthfully say the following about some free-will choice: "I believe that I know I can pick. I don't exactly know how to pick, and I don't know exactly what will happen if I pick. But I believe that I know I can pick." I say that the conscious agent has free will if and only if he/she can truthfully say something like that about a particular choice of action.
Paul Martin said:
We disagree here. I'd say that if this is all there were, then there is no such thing as free will.
According to your definition of free will, yes. According to my definition of free will, this is exactly what free will is.

moving finger said:
Unfortunately, though it is clear that the cellular automaton program works deterministically, this does not give a clear idea of how the “person” operates.
Paul Martin said:
**Exactly!** This is one of the main messages I was trying to get across. I think there is very little hope of getting a clear idea of *how* CC operates. But the cellular automaton example clearly shows *that* the "person" operates in a way that interferes with the otherwise deterministic evolution of the automaton.
And if the person is also operating deterministically?

moving finger said:
It seems to me that you have simply moved the problem from one level to another
Paul Martin said:
**Exactly!** That is exactly what my world-view does. It takes the great mystery of the Hard Problem and moves it to another level which is outside the physical world. That leaves the physical world explainable and understandable and it reduces the mysteries of reality as a whole down to just this single mystery. It's like moving the mystery of music coming out of a radio back to the transmitter where it really originates and where it belongs.
Moving the problem around without actually addressing the problem seems (with respect) to be rather pointless?

moving finger said:
How this free will actually works is still (in your model) a mystery.
Paul Martin said:
Yes. But then again, it is a mystery in every model.
I disagree. It depends on how one defines free will.
If one takes an idealistic approach and defines free will such that free will is impossible (the intuitive feeling of free will), then explaining how such free will operates will also be impossible (this to me seems to be your approach).
If however one takes a pragmatic approach and defines free will such that free will is possible (even though it may not provide a very satisfying or intuitively “nice” result in terms of the "feeling" of free will), then explaining how free will operates is also possible (this is my approach).

MF
:smile:
 
  • #118
moving finger said:
I hope you do not mind if I also re-phrase your example in terms of an independent (conscious) agent rather than “I” or “me” (because of the confusion this has caused already).
Not at all. Sorry for contributing to the confusion.

moving finger said:
Can you explain how it is (what is the mechanism whereby) the agent can acquire this “certain knowledge” that these options will in fact be available (as opposed to it simply BELIEVING that they will be available)?
No. And after thinking more carefully, I should amend my example by saying, "The certain knowledge I have that[, barring any malfunction of the PNS of Paul Martin (PNSPM),] I can continue typing this response...".

As for the mechanism, it is probably similar to the mechanism used to acquire the certain knowledge in the agent, when, working through PNSPM, the agent knows what green looks like as reported to the agent via the sensory and perceptive mechanisms of PNSPM.

moving finger said:
I suggest that the agent does not in fact have “certain knowledge” that these options (or any other options) will be available to it.
Would you say that the agent does not have certain knowledge of what green looks like as reported by a PNS?

moving finger said:
In an extreme (admittedly improbable, but nevertheless possible) example, the agent could be destroyed in the next instant by an asteroid which hits its home town.
Not in my cosmos, it couldn't. In my cosmos the agent does not live in the home town. The asteroid could wipe out the PNS -- and I have just corrected for that eventuality -- but in my view, not the agent.

moving finger said:
I believe that I have shown above such infallible foreknowledge is not possible.
I believe you have not.

moving finger said:
The issue is whether it is in fact POSSIBLE for an agent to know infallibly that an option exists.

moving finger said:
I believe free will exists. But I would define free will differently to you (as indicated already by my suggested changes to your necessary conditions, which changes you do not accept).
I am beginning to waffle.

Your statement of the issue above got me wondering, "What does it mean 'to know infallibly'?". Simply to say "the agent knows" implies infallibility by the definition of the word 'know'. But that's hardly convincing. Your argument would say that it is never appropriate to assert "Y knows X" for any X or Y. But that would make the word 'know' useless.

But suppose the agent knows that it knows X. If indeed the agent knows X in the first place, knowing that it knows X in addition wouldn't strengthen the claim that it knows X. It would only provide additional knowledge which is outside or above the first circumstance, and which could in principle even inhere in a separate agent. We could have, for example, Agent B knows that Agent A knows X.

This led me in three or four different directions. First is to note that you and I, in this discussion, are in that circumstance. We are questioning whether we can know that Agent A knows X. That is a different question from, "Can Agent A know X". I think it may be possible that Agent A can know X while at the same time it is impossible for Agent B to know that Agent A knows X. If that possibility turns out to be the case, then we may not be able to resolve this issue here.

The second direction I am led is to extend the chain by supposing that the agent knows that it knows that it knows X. Does that help any? It seems to because now there is even more knowledge than before. What about extending the chain to a million links?

The third direction is to salt this chain with one or more 'believes': Can the agent believe it knows X? Know it believes X? Know it believes it knows? Know it knows it believes? Believe it believes it knows? Etc.

The fourth is to reintroduce Agent B to appear here and there in different versions of all those chains. For example, Can Agent B know that Agent A believes that Agent B knows X?

This is not meant to be silliness or sophistry, although it sounds like both. Instead, the point I am trying to make is that the issue you articulated is very complex. I think that to resolve it, we would need not only to identify X (the example of a fact that can be known), but we would also need to identify all the players (Agent A, Agent B, TEOMF, TEOPM, "I", "you", PNSMF, PNSPM) and the relationships among them, as well as the answers to many, if not all of those "chain" questions.
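One compact way to write these chains down is the notation of epistemic logic (the notation is standard, though the mapping onto our discussion is my own): K_A X for "Agent A knows X" and B_A X for "Agent A believes X".

```latex
% The chains in the text, in epistemic-logic shorthand:
K_A X, \qquad K_A K_A X, \qquad K_B K_A X, \qquad B_A K_A X, \qquad K_B B_A K_B X
% The "million links" question is whether the positive-introspection
% axiom holds:
K_A X \rightarrow K_A K_A X
% If it does, every longer chain of K's collapses back to K_A X and
% adds no strength to the original claim.
```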

I am not prepared even to guess at the outcome of a resolution, but at this point I am willing to concede that my requirement for infallible knowledge may be unnecessarily strong. I'm not sure your proposed substitutions are the right ones either, however. Maybe it should be a longer chain of knowing and believing.

For the record, my view of the relationships among the players I listed are,

Agent A = Agent B = TEOMF = TEOPM = CC

PNSMF and PNSPM are separate and distinct chemical vehicles being driven by CC.

"I" and "you" are used ambiguously and should be identified with each use.

moving finger said:
And if the person is also operating deterministically?
The automaton was an analogy. Little is to be gained by staking much on the details of one of the analogs. But the analogy aside, you are asking about the consequences of the case where the conscious agent operates deterministically. I'd say in that case there is no free will.

moving finger said:
Moving the problem around without actually addressing the problem seems (with respect) to be rather pointless?
I don't think it is pointless. The point is that it provides a different hypothesis from which to work. My only suggestion is that we explore the hypothesis of a single consciousness and see where it leads. My suspicions are that it will be more fruitful than the hypothesis of "PNSx contains TEOx", or even "The physical world of PNSx contains TEOx".

moving finger said:
If however one takes a pragmatic approach and defines free will such that free will is possible (even though it may not provide a very satisfying or intuitively “nice” result in terms of the "feeling" of free will), then explaining how free will operates is also possible (this is my approach).
That may be true. But unless and until you actually produce that explanation for how free will operates, the mystery remains. As of this date, I still maintain that free will is a mystery in every model.

Much fun talking with you, MF. Thanks.

Paul
 
  • #119
Hi Paul.

I just thought I would comment on this statement. I hope you don't mind.
Paul Martin said:
But that would make the word 'know' useless.
Not at all. Everyone may agree that knowing means exactly what you want it to mean, and they may even "know" some things; however, all they can really be sure of is that they think they know. That is the central issue of my work.

Have fun -- Dick
 
  • #120
Doctordick said:
Not at all. Everyone may agree that knowing means exactly what you want it to mean, and they may even "know" some things; however, all they can really be sure of is that they think they know. That is the central issue of my work.
Yes, I agree there would still be a use for the word. But that's not the issue. I think the questions here are:

1. Is "knowing" the same thing as "knowing infallibly"?

2. Is it possible in principle to know anything?

3. Is it possible in principle to know that you know anything?

4. Is it possible in principle to know that another knows anything?

I think we all agree that 1=yes.

It sounds like you are saying 2=yes.

I think Moving Finger is saying 2=no.

I think you are saying 3=no, and that that is the central issue of your work.

I think MF would have to say 3=no since 2=no.

I think both of you would have to say 4=no since 3=no.

I would say that 1=2=3=yes and that 4 is a non-question since there is only one knower.

(Good to hear from you, Dick. I started another letter to you this morning, but I didn't get it finished or sent. You give me too much homework.)

Paul
 
  • #121
moving finger said:
Can you explain how it is (what is the mechanism whereby) the agent can acquire this “certain knowledge” that these options will in fact be available (as opposed to it simply BELIEVING that they will be available)?
Paul Martin said:
No. And after thinking more carefully, I should amend my example by saying, "The certain knowledge I have that[, barring any malfunction of the PNS of Paul Martin (PNSPM),] I can continue typing this response...".
Here you agree that a malfunction of the PNSPM could render the option unavailable to the agent, hence to ensure that the agent’s knowledge is infallible you need to add the constraint that there will be no malfunction of the PNSPM. Correct?
But (by the same reasoning that the agent cannot have infallible foreknowledge that an option will be available), the agent cannot have infallible foreknowledge that there will be no malfunction of the PNSPM.
In other words, the agent cannot be sure that the option will be available, because the agent cannot be sure that the PNSPM will not malfunction.
With your amended example you have simply replaced “uncertainty that the option will be available” with “uncertainty that the PNSPM will not malfunction”. The former is conditional upon the latter. The uncertainty (the fallibility) is still there.
Conclusion : The agent cannot have infallible foreknowledge.
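The same argument in probabilistic shorthand, for anyone who prefers symbols (the notation is mine): let O = "the option is available" and M = "the PNSPM does not malfunction", and grant the premise that a malfunction removes the option, so P(O|not M) = 0.

```latex
% Conditioning only relocates the uncertainty; it cannot remove it:
P(O) = P(O \mid M)\,P(M) \;\le\; P(M) \;<\; 1
% Since P(O) < 1, the agent's foreknowledge that O obtains cannot be
% infallible -- it inherits the fallibility of M.
```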

Paul Martin said:
As for the mechanism, it is probably similar to the mechanism used to acquire the certain knowledge in the agent, when, working through PNSPM, the agent knows what green looks like as reported to the agent via the sensory and perceptive mechanisms of PNSPM.
With respect, I suggest you are trying to compare different types of knowledge.
Knowledge of “what green looks like” is not foreknowledge, it is acquired knowledge. With all due respect to Nagel, IMHO an agent cannot “know” what green looks like unless and until it has experienced seeing the colour green. Once it has had this experience, then it also has acquired the knowledge of “what green looks like”.
It should be self-evident that the agent cannot use this particular experiential mechanism to acquire such “knowledge” about future options (ie about the possibility that a particular “option” exists that has not yet “happened”).
The question as to how your agent might acquire such foreknowledge thus remains unanswered.

BTW – to try and avoid introducing additional confusion I humbly suggest it may be better to focus our debate on discussing the nature of the “free will” of a 3rd-party “agent”, rather than discussing the free will of either PM or MF. Would you agree?

Paul Martin said:
Would you say that the agent does not have certain knowledge of what green looks like as reported by a PNS?
As per above, these (the “acquired knowledge of what green looks like” and “the foreknowledge that a future option is available to it”) are different kinds of knowledge that the agent possesses, and they should not be confused with each other.
Your definition of free will is dependent on infallible foreknowledge, it is not dependent on infallible acquired knowledge.

Paul Martin said:
In an extreme (admittedly improbable, but nevertheless possible) example, the agent could be destroyed in the next instant by an asteroid which hits its home town.
Paul Martin said:
Not in my cosmos, it couldn't. In my cosmos the agent does not live in the home town. The asteroid could wipe out the PNS -- and I have just corrected for that eventuality -- but in my view, not the agent.
I do not understand your suggestion “The asteroid could wipe out the PNS -- but in my view, not the agent.”
Are you suggesting that the agent is immortal, indestructible?
That it is impossible for the agent to be destroyed?
Are you suggesting that the agent somehow exists outside of the physical world?
Can you elaborate please?

If you are indeed suggesting that an agent must necessarily be indestructible in order to have free will, then this needs to be explicit in your necessary conditions?

However, even postulating an indestructible agent does not avoid the problem. In the extreme example that I provided, the insertion of an indestructible agent simply changes the example to :

the agent’s PNS (plus associated material body and all causal contact between the agent and the physical world) could be destroyed in the next instant by an asteroid which hits its home town. This would wipe out its ability both to continue, and to take a break and have lunch, and all other options.

(explanation : even if the agent exists somehow “outside of the physical world”, the agent only acts via the physical world – the options “to continue typing a response” and “to take a break and have lunch”, are options dependent on the agent’s interaction with the physical world, and these options would no longer be available to the agent, even if the agent was somehow existing somewhere outside of the physical world and indestructible, if the agent’s PNS, body and all other associated links with the physical world were destroyed.)

Paul Martin said:
Your statement of the issue above got me wondering, "What does it mean 'to know infallibly'?". Simply to say "the agent knows" implies infallibility by the definition of the word 'know'.
That is why I inserted the word infallibly.
Because there are two interpretations of “to know” – one is the interpretation that you wish to use (which is “the agent knows infallibly”), and the other is the one I offered but which you rejected (which is “the agent believes that it knows”).
These are very different. But when most people say that they “know” something then it can mean one or the other, depending on the source of their knowledge.
When an agent says “I know that it will rain tomorrow” then it actually means “I believe that I know that it will rain tomorrow” and not “I know infallibly that it will rain tomorrow”.
The same is true (IMHO) of all foreknowledge. Infallible foreknowledge (IMHO) is not possible.

Paul Martin said:
But that's hardly convincing. Your argument would say that it is never appropriate to assert "Y knows X" for any X or Y. But that would make the word 'know' useless.
No, I did not say that no infallible knowledge is possible (but in fact it might be true that infallible knowledge is not possible). We simply need to be clear in definitions whether we are referring to infallible knowledge or not – it is important.
We are talking here specifically about foreknowledge. And IMHO infallible foreknowledge is not possible.

Paul Martin said:
But suppose the agent knows that it knows X.
Does it infallibly know that it infallibly knows X, or does it believe that it knows that it believes that it knows X? Or maybe it believes that it infallibly knows X, or maybe it infallibly knows that it believes it knows X?

Paul Martin said:
If indeed the agent knows X in the first place, knowing that it knows X in addition wouldn't strengthen the claim that it knows X.
Agreed. If the agent infallibly knows X, then that is the end of the issue.

Paul Martin said:
It would only provide additional knowledge which is outside or above the first circumstance, and which could in principle even inhere in a separate agent. We could have, for example, Agent B knows that Agent A knows X.
And we could have 4 different permutations of this based on belief and infallibility.

Paul Martin said:
This led me in three or four different directions. First is to note that you and I, in this discussion, are in that circumstance. We are questioning whether we can know that Agent A knows X. That is a different question from, "Can Agent A know X". I think it may be possible that Agent A can know X while at the same time it is impossible for Agent B to know that Agent A knows X. If that possibility turns out to be the case, then we may not be able to resolve this issue here.
And solipsism may be true. I may be the only conscious agent in the universe, and the rest of you exist in my imagination. But that leads us nowhere. We can only make sense of what is going on if we make some initial reasonable assumptions (axioms) and proceed from there.

Paul Martin said:
The second direction I am led is to extend the chain by supposing that the agent knows that it knows that it knows X. Does that help any? It seems to because now there is even more knowledge than before. What about extending the chain to a million links?
Extending the chain (IMHO) does not help. Either the agent infallibly knows X, or it does not.

Paul Martin said:
The third direction is to salt this chain with one or more 'believes': Can the agent believe it knows X?
Yes, I see no reason why an agent cannot believe anything it wishes to believe.

Paul Martin said:
Know it believes X? Know it believes it knows? Know it knows it believes? Believe it believes it knows? Etc.
Exactly.

Paul Martin said:
This is not meant to be silliness or sophistry, although it sounds like both. Instead, the point I am trying to make is that the issue you articulated is very complex.
I never thought otherwise!

Paul Martin said:
I am not prepared even to guess at the outcome of a resolution, but at this point I am willing to concede that my requirement for infallible knowledge may be unnecessarily strong. I'm not sure your proposed substitutions are the right ones either, however. Maybe it should be a longer chain of knowing and believing.
I do not see what can be gained from a longer chain. The starting point is either “the agent infallibly knows that” or “the agent believes that it knows that”, and all else (IMHO) flows from there.

Paul Martin said:
And if the person is also operating deterministically?
Paul Martin said:
The automaton was an analogy. Little is to be gained by staking much on the details of one of the analogs. But the analogy aside, you are asking about the consequences of the case where the conscious agent operates deterministically. I'd say in that case there is no free will.
Can you explain why you think your definition of free will is necessarily incompatible with determinism?

(for the record, I believe your type of free will does not exist because your definition requires infallible foreknowledge, which I do not believe is possible, in either a deterministic or an indeterministic world)

It is possible to define free will such that it is compatible with determinism.

Paul Martin said:
The point is that it provides a different hypothesis from which to work. My only suggestion is that we explore the hypothesis of a single consciousness and see where it leads. My suspicions are that it will be more fruitful than the hypothesis of "PNSx contains TEOx", or even "The physical world of PNSx contains TEOx"
I’m not sure what relevance those acronyms have to this thread. Can you explain what you mean?

Paul Martin said:
If however one takes a pragmatic approach and defines free will such that free will is possible (even though it may not provide a very satisfying or intuitively “nice” result in terms of the "feeling" of free will), then explaining how free will operates is also possible (this is my approach)
Paul Martin said:
That may be true. But unless and until you actually produce that explanation for how free will operates, the mystery remains. As of this date, I still maintain that free will is a mystery in every model.

MF definition of free will : the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action, and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable.

I am not saying that the world is necessarily deterministic, but I think you will find that the above definition is entirely consistent with determinism, and also consistent with the way that humans (who claim to have free will) actually behave. There is no mystery involved in this definition or in the way that free will operates. I agree the MF definition does not accord with the naïve conception of free will - but that is because the naïve conception of free will is based on unsound reasoning, and leads to a kind of free will which is not possible.

No mystery.

MF
:smile:
 
  • #122
moving finger said:
This is why I asked you to give an example of how your “randomness” is supposed to endow an otherwise deterministic agent with “free will”. You have not given such an example (I suspect because you cannot give one).

It is true that we would not consider an individual to 'own' an action or decision if it had nothing to do with his beliefs and aims at the time he made it -- that is, if we assume that indeterminism erupts in between everything that happened to make him the individual he is, and the act itself.

I call this the Buridan's Ass model, in which the only useful role indeterminism can have is as a 'casting vote' when there are no strong preferences one way or the other. An alternative is the Darwinian model, according to which an indeterministic process plays a role analogous to random mutation, in that it throws up ideas and potential solutions to problems which another, more rational and deterministic process selects between. This role of indeterminism places it where it can do least harm to rationality; it is only called on where creativity and imagination are required, and it does not get translated into action without being subject to a rational veto. This answers the common charge that indeterminism would lead to capricious behaviour in all circumstances, which is equivalent to saying that Darwinian evolution would be 'just random' and unable to explain the orderliness of the natural world. Both objections look only at the random process in isolation.
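
A minimal sketch of this generate-and-select arrangement (Python; the names are hypothetical, and random() merely stands in for whatever indeterministic source the model assumes):

Code:
import random

def propose_options(n=5):
    # Stand-in for the indeterministic 'mutation' stage:
    # it throws up candidate ideas and potential solutions.
    return [random.uniform(0, 1) for _ in range(n)]

def rational_veto(options, utility):
    # Deterministic selection stage: given the same options,
    # it always selects the same one (highest utility).
    return max(options, key=utility)

# The random stage only supplies candidates; the deterministic
# stage decides, so the final act is never 'just random'.
candidates = propose_options()
choice = rational_veto(candidates, utility=lambda x: -abs(x - 0.5))
print(candidates, "->", choice)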
 
  • #123
Tournesol said:
It's to do with the ability to have done otherwise.
Can you clarify please just what you mean by "the ability to have done otherwise"?

Thank you

MF
:smile:
 
  • #124
moving finger said:
With respect, I suggest you are trying to compare different types of knowledge.
Knowledge of “what green looks like” is not foreknowledge, it is acquired knowledge.
Hmmmmm.

moving finger said:
BTW – to try and avoid introducing additional confusion I humbly suggest it may be better to focus our debate on discussing the nature of the “free will” of a 3rd-party “agent”, rather than discussing the free will of either PM or MF. Would you agree?
Yes, I agree. I think I have done that.

moving finger said:
Your definition of free will is dependent on infallible foreknowledge, it is not dependent on infallible acquired knowledge.
Hmmmmmm. It does appear that way.

moving finger said:
I do not understand your suggestion “The asteroid could wipe out the PNS -- but in my view, not the agent.”
The asteroid is in the physical world; the agent is not. Thus the agent is immune from the asteroid.

moving finger said:
Are you suggesting that the agent is immortal, indestructible?
That it is impossible for the agent to be destroyed?
No. Just not by an asteroid.

moving finger said:
If you are indeed suggesting that an agent must necessarily be indestructible in order to have free will, then this needs to be explicit in your necessary conditions?
No. It's just that as soon as the agent is destroyed, it no longer has free will.

moving finger said:
Are you suggesting that the agent somehow exists outside of the physical world?
Yes, absolutely. That is one of the most significant assumptions in my view of the world. It is probably second only to my assumption of the existence of only a single consciousness, since I think "a single consciousness" implies a non-physical world.

moving finger said:
Can you elaborate please?
Yes. I'd be delighted to do so. Thank you for asking.

I don't think it would be appropriate to go into elaborate detail here so I will give you some references and then address what I think you might be getting at by asking.

If you read my recent posts to other threads in this forum, virtually all of them express some notion or other of my world view. You can also check out my essays at http://www.paulandellen.com/essays/essays.htm and if you only want to read one, start with my "World-view 2004".

Now I suspect that what you are asking about is, Where, for heaven's sake, is that other "place" which is outside the physical world? In my view it is in manifolds in higher dimensional space/time which are separate from our 4D manifold and which have more than 4 dimensions. This, I know, I know, has been a very contentious idea since it was proposed to Einstein by Kaluza, and I know that it is falling out of favor today, but I have yet to hear any argument sufficient to dismiss it IMHO.

For more elaboration, I'll let you prompt me with questions or comments.

moving finger said:
We are talking here specifically about foreknowledge. And IMHO infallible foreknowledge is not possible.
I see your point.

moving finger said:
Can you explain why you think your definition of free will is necessarily incompatible with determinism?
I tried to explain that with my thought experiment of re-running identical circumstances and getting different results. Determinism would say that the results would be identical.

moving finger said:
(for the record, I believe your type of free will does not exist because your definition requires infallible foreknowledge, which I do not believe is possible, in either a deterministic or an indeterministic world)
Your points about foreknowledge are beginning to sink in.

moving finger said:
MF definition of free will : the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action, and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable.
Except for a small quibble, I find this definition to make sense and I would accept it.

moving finger said:
I am not saying that the world is necessarily deterministic, but I think you will find that the above definition is entirely consistent with determinism, and also consistent with the way that humans (who claim to have free will) actually behave.
Yes, you are right. I did find it consistent with determinism and with human behavior.

moving finger said:
There is no mystery involved in this definition or in the way that free will operates.
OK, but there still remains the slightly nagging question of whether or not there is free will in the naïve sense. (Could I really have taken a lunch break? I just don't know.)

moving finger said:
I agree the MF definition does not accord with the naïve conception of free will - but that is because the naïve conception of free will is based on unsound reasoning, and leads to a kind of free will which is not possible.
That could very well be the reason.

moving finger said:
No mystery.
Not one worth debating anyway.

Thank you for the insights.

Paul
 
  • #125
Tournesol said:
An alternative is the Darwinian model, according to which
an indeterministic process plays a role analogous to random
mutation , in that it throws up ideas and potential solutions
to problems which another, more rational and deterministic process
selects between. This role of indeterminism places it where
it can do least harm to rationality; it is only called on
where creativity and imagination are required, and it does
not get translated into action without being being subject to a
rational veto.
Calling this a Darwinian model is IMHO (and with respect) a little insulting to Charles Darwin, and lends the mechanism suggested above a little too much scientific credibility. The processes underlying the evolution of species are completely compatible with determinism, the so-called “random mutations” need not in fact be due to any ontically indeterministic process. Out of respect to Mr Darwin I suggest the mechanism suggested above be re-named the Random Alternatives (RA) mechanism.

If I understand this RA mechanism correctly, the source of indeterminism is postulated to be introduced prior to the agent’s point of decision (prior to the agent’s moment of choice), and the agent’s choice is still intended to be a deterministic process? Indeterminism is supposed to “generate” a series of random alternative courses of action (much like a random number generator or RNG in a computer) for the agent to consider and from which to choose.

Thus, if we could “re-play” a particular choice that the agent had already made, keeping everything as it was before but allowing the RNG to generate different alternatives, then we may find that the agent “apparently” chooses differently in each re-play, depending upon the random alternative courses of action that are generated by the RNG. This “apparently” different choice by the agent in each re-play is then supposed to be a reflection of the agent’s “free will”.

In fact, if we re-play a particular choice that the agent has made, keeping everything as it was before but allowing the RNG to generate alternative courses of action apparently randomly, then we necessarily must observe one of two alternative scenarios :

EITHER (A) the RNG happens (probabilistically) to generate the same alternatives on the second “run”, in which case the agent (operating deterministically) will necessarily make the same choice as on the first run. In other words, if we could re-play the agent’s moment of choice with all of the conditions exactly as they were before including the alternatives that are generated for the agent to consider, then the agent will necessarily make the same choice as it did before. This is a completely deterministic scenario and is completely compatible with determinism (ie re-play with the same starting conditions and one obtains the same result).

OR (B) the RNG generates different alternatives on the second “run”, in which case the agent (still operating deterministically) might make a choice which is different to the choice that it made on the first run. In other words, if we could re-play the agent’s moment of choice with all of the conditions exactly as they were before EXCEPT that the alternatives for consideration are different, then the agent will not necessarily make the same choice as it did before. This (the agent’s choice) again is a completely deterministic scenario and is again completely compatible with determinism (ie re-play with different starting conditions and one may obtain a different result).

The only difference between re-play (A) and re-play (B) is that in (A) the conditions are indeed set to the way they were the first time round, whereas in (B) the conditions (at the moment of choice of the agent) are not the same as they were before. THIS FACT ALONE (and not any supposed “free will” on the part of the agent) is the source of the agent’s ability to make different choices in each run.

In fact, we do not need the RNG in the proposed mechanism to be ontically indeterministic. It need only be an RNG in the sense of a computer software RNG, which operates to generate epistemically random, but ontically deterministic, numbers. What matters in the RA mechanism (the “apparent source of free will”) is ONLY that the agent is provided with DIFFERENT ALTERNATIVES in each re-play (this will ensure that the agent will not necessarily make the same choice in each re-play, scenario B above), and NOT that these alternatives are generated by a genuinely (ontically) indeterministic process.
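
A minimal sketch of the two re-play scenarios, assuming a deterministic agent and an ordinary seeded software RNG (all names hypothetical):

Code:
import random

def agent_choose(alternatives):
    # Deterministic agent: identical alternatives always
    # yield the identical choice (here simply the maximum).
    return max(alternatives)

def re_play(seed):
    rng = random.Random(seed)                 # software RNG: epistemically random,
    alternatives = rng.sample(range(100), 3)  # but ontically deterministic
    return agent_choose(alternatives)

# Scenario (A): same seed -> same alternatives -> same choice.
assert re_play(seed=42) == re_play(seed=42)

# Scenario (B): different seed -> different alternatives -> possibly
# a different choice. Determinism is not violated; the starting
# conditions were simply not the same.
print(re_play(seed=42), re_play(seed=43))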

To show how “silly” this notion of random generation of “free will” is, consider the following :

The Libertarian Free Will Computer

I could quite easily “build” such models of “free-will” agents using computer software, incorporating an RNG to “generate” apparently random alternatives for my deterministic software agent to consider, and from which to choose. Since I am generating the computer agent’s alternatives randomly (thus ensuring that its choice need not be the same each time), does that mean my computer agent now has “free will”, where it had no “free will” before (prior to me introducing the RNG)? I think everyone would agree that this notion is very silly. And does it make any difference if the RNG is genuinely random (ontically indeterministic), or whether it simply appears to be random (epistemically indeterminable)? No, of course not. It does not matter what we do with the RNG, we cannot use indeterminism to “endow” the Libertarian version of free will onto an otherwise deterministic machine.

I think one will find that if one models the above RA mechanism and examines it rationally and logically, looking at the possible sequences generated, then one will find that the introduction of the RNG prior to the moment of choice acts in much the same way as introducing the RNG after the moment of choice. In both cases, there is a point at which a deterministic choice is made by the agent based on alternatives available, but in both cases the final result is in fact random. This is not free will. This is simply a random-choice-making mechanism.

MF
:smile:
 
Last edited:
  • #126
moving finger said:
OR (B) the RNG generates different alternatives on the second “run”, in which case the agent (still operating deterministically) might make a choice which is different to the choice that it made on the first run. In other words, if we could re-play the agent’s moment of choice with all of the conditions exactly as they were before EXCEPT that the alternatives for consideration are different, then the agent will not necessarily make the same choice as it did before. This (the agent’s choice) again is a completely deterministic scenario and is again completely compatible with determinism (ie re-play with different starting conditions and one may obtain a different result).

No, this isn't completely deterministic, because determinism requires a rigid chain of cause and effect going back to the year dot. One part of the process, the selection from options, may be deterministic, but the other part, the generation of options to be selected from, isn't. Determinism doesn't mean that causes have effects every now and then; it means everything happens with iron necessity and no exceptions.
moving finger said:
The only difference between re-play (A) and re-play (B) is that in (A) the conditions are indeed set to the way they were the first time round, whereas in (B) the conditions (at the moment of choice of the agent) are not the same as they were before. THIS FACT ALONE (and not any supposed “free will” on the part of the agent) is the source of the agent’s ability to make different choices in each run.

But they are different because of indeterminism in the chain of causes leading up to that moment, and in my naturalistic account of FW, that indeterminism is one of the things that constitutes FW. You seem to be assuming that FW is supernatural or nothing; I am not making that assumption.

moving finger said:
In fact, we do not need the RNG in the proposed mechanism to be ontically indeterministic. It need only be an RNG in the sense of a computer software RNG, which operates to generate epistemically random, but ontically deterministic, numbers. What matters in the RA mechanism (the “apparent source of free will”) is ONLY that the agent is provided with DIFFERENT ALTERNATIVES in each re-play (this will ensure that the agent will not necessarily make the same choice in each re-play, scenario B above), and NOT that these alternatives are generated by a genuinely (ontically) indeterministic process.


Pseudo-random numbers (which are really deterministic) may be used in computers, and any indeterminism the brain calls on might be only pseudo-random. But it does not have to be, and if we assume it is not, we can explain realistically why we have the sense of being able to have done otherwise. People sometimes try to explain this sense as an 'illusion', but that does not make it clear why we would have that particular illusion.


moving finger said:
I could quite easily “build” such models of “free-will” agents using computer software, incorporating an RNG to “generate” apparently random alternatives for my deterministic software agent to consider, and from which to choose. Since I am generating the computer agent’s alternatives randomly (thus ensuring that its choice need not be the same each time), does that mean my computer agent now has “free will”, where it had no “free will” before (prior to me introducing the RNG)? I think everyone would agree that this notion is very silly. And does it make any difference if the RNG is genuinely random (ontically indeterministic), or whether it simply appears to be random (epistemically indeterminable)? No, of course not. It does not matter what we do with the RNG, we cannot use indeterminism to “endow” the Libertarian version of free will onto an otherwise deterministic machine.

Naturalists think it is not impossible to artificially duplicate human mentality, which would have to include human volition, since there is no 'ghost' in the human machine. You are levelling down, saying humans have no FW and computers don't either. I am levelling up, saying humans have FW and appropriate computers could have it as well. It all depends on what you mean by FW. The contentious issue, vis-à-vis determinism, is the ability to have done otherwise, and that is explainable naturalistically in an indeterministic universe.

moving finger said:
I think one will find that if one models the above RA mechanism and examines it rationally and logically, looking at the possible sequences generated, then one will find that the introduction of the RNG prior to the moment of choice acts in much the same way as introducing the RNG after the moment of choice. In both cases, there is a point at which a deterministic choice is made by the agent based on alternatives available, but in both cases the final result is in fact random. This is not free will. This is simply a random-choice-making mechanism.

No, it isn't the same. Introducing randomness after choice removes 'ownership'. The hypothetical AI wouldn't be able to explain why it did as it did.
 
  • #127
moving finger said:
OR (B) the RNG generates different alternatives on the second “run”, in which case the agent (still operating deterministically) might make a choice which is different to the choice that it made on the first run. In other words, if we could re-play the agent’s moment of choice with all of the conditions exactly as they were before EXCEPT that the alternatives for consideration are different, then the agent will not necessarily make the same choice as it did before. This (the agent’s choice) again is a completely deterministic scenario and is again completely compatible with determinism (ie re-play with different starting conditions and one may obtain a different result).
Tournesol said:
No, this isn't completely deterministic , because determinism requires a rigid chain of cause and effect going back to the year dot.
Please read what I wrote.
“re-play with different starting conditions and one may obtain a different result”
This is completely deterministic.

Tournesol said:
One part of the process, the selection from options, may be deterministic, but the other part, the generation of options to be selected from, isn't.
Agreed - this is the point of “indeterminism”. But introducing indeterminism into the process simply introduces indeterminism into the results – how do you think it introduces free will?

moving finger said:
The only difference between re-play (A) and re-play (B) is that in (A) the conditions are indeed set to the way they were the first time round, whereas in (B) the conditions (at the moment of choice of the agent) are not the same as they were before. THIS FACT ALONE (and not any supposed “free will” on the part of the agent) is the source of the agent’s ability to make different choices in each run.
Tournesol said:
But they are different because of indeterminism in the chain of causes leading up to that moment, and in my naturalistic account of FW, that indeterminism is one of the things that constitutes FW. You seem to be assuming that FW is supernatural or nothing.
No, you seem to be assuming that introducing indeterminism also introduces free will.

Tournesol said:
Pseudo-random numbers (which are really deterministic) may be used in computers, and any indeterminism the brain calls on might be only pseudo-random. But it does not have to be, and if we assume it is not, we can explain realistically why we have the sense of being able to have done otherwise. People sometimes try to explain this sense as an 'illusion', but that does not make it clear why we would have that particular illusion.
You have not explained anything. You have assumed that indeterminism is equivalent to free will simply because indeterminism results in an indeterministic outcome.

Tournesol said:
Naturalists think it is not impossible to artificially duplicate human mentality, which would have to include human volition, since there is no 'ghost' in the human machine. You are levelling down, saying humans have no FW and computers don't either.
I am not saying that humans do not have free will, I am saying that free will as defined by you cannot exist, period.

Tournesol said:
I am levelling up, saying humans have FW and appropriate computers could have it as well. It all depends on what you mean by FW.
Do you agree that the computer I have just described has free will? The computer “could have done otherwise” since its choices were dependent on an RNG input – therefore according to your philosophy it must have free will? Yes? Or no? And if no, then why not?

moving finger said:
I think one will find that if one models the above RA mechanism and examines it rationally and logically, looking at the possible sequences generated, then one will find that the introduction of the RNG prior to the moment of choice acts in much the same way as introducing the RNG after the moment of choice. In both cases, there is a point at which a deterministic choice is made by the agent based on alternatives available, but in both cases the final result is in fact random. This is not free will. This is simply a random-choice-making mechanism.
Tournesol said:
No it isn't the same. Intorducing randomness after choice removes 'ownership'. The hypothetical AI wouldn't be able to explain why it did as it did.
Incorrect. Why do you think the AI would not be able to explain why it did as it did? It operates deterministically; there is no reason why it should not understand the reasons for its choices.

MF

:smile:
 
  • #128
moving finger said:
With respect, I suggest you are trying to compare different types of knowledge.
Knowledge of “what green looks like” is not foreknowledge, it is acquired knowledge.
Paul Martin said:
Hmmmmm.
Hmmmmm? Is that a yes or a no?

moving finger said:
Your definition of free will is dependent on infallible foreknowledge, it is not dependent on infallible acquired knowledge.
Paul Martin said:
Hmmmmmm. It does appear that way.
Thank you.

moving finger said:
I do not understand your suggestion “The asteroid could wipe out the PNS -- but in my view, not the agent.”
Paul Martin said:
The asteroid is in the physical world; the agent is not. Thus the agent is immune from the asteroid.
Interesting. Can you please say what else your “agent” is immune to? The common cold?

moving finger said:
Are you suggesting that the agent is immortal, indestructible?
That it is impossible for the agent to be destroyed?
Paul Martin said:
No. Just not by an asteroid.
“Just” not by an asteroid? Thus, your agent can be destroyed by absolutely anything else… but not by an asteroid?
Really?
Strange.

moving finger said:
If you are indeed suggesting that an agent must necessarily be indestructible in order to have free will, then this needs to be explicit in your necessary conditions?
Paul Martin said:
No. It's just that as soon as the agent is destroyed, it no longer has free will.
Well that does seem logical. You are not suggesting that your agent is necessarily indestructible then.

moving finger said:
Are you suggesting that the agent somehow exists outside of the physical world?
Paul Martin said:
Yes, absolutely. That is one of the most significant assumptions in my view of the world. It is probably second only to my assumption of the existence of only a single consciousness, since I think "a single consciousness" implies a non-physical world.
Perhaps you should therefore include this in your “necessary conditions” for free will?

Paul Martin said:
For more elaboration, I'll let you prompt me with questions or comments.
OK, maybe later.

moving finger said:
We are talking here specifically about foreknowledge. And IMHO infallible foreknowledge is not possible.
Paul Martin said:
I see your point.
Thank you. Does that mean you “agree”?

moving finger said:
Can you explain why you think your definition of free will is necessarily incompatible with determinism?
Paul Martin said:
I tried to explain that with my thought experiment of re-running identical circumstances and getting different results. Determinism would say that the results would be identical.
Sorry, I still don’t understand how you introduce “different results”, unless this is purely due to indeterminism? (but if it is indeterminism, then what has this to do with free will?)

moving finger said:
(for the record, I believe your type of free will does not exist because your definition requires infallible foreknowledge, which I do not believe is possible, in either a deterministic or an indeterministic world)
Paul Martin said:
Your points about foreknowledge are beginning to sink in.
Sink in? Does this mean you agree?

moving finger said:
MF definition of free will : the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action, and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable.
Paul Martin said:
Except for a small quibble, I find this definition to make sense and I would accept it.
Wonderful!

moving finger said:
I am not saying that the world is necessarily determinsitic, but I think you will find that the above definition is entirely consistent with determinism, and also consistent with the way that humans (who claim to have free will) actually behave.
Paul Martin said:
Yes, you are right. I did find it consistent with determinism and with human behavior.
Even more wonderful!

moving finger said:
There is no mystery involved in this definition or in the way that free will operates.
Paul Martin said:
OK, but there still remains the slightly nagging question of whether or not there is free will in the naïve sense. (Could I really have taken a lunch break? I just don't know.)
If you would define “free will in the naïve sense” then I could tell you.

moving finger said:
I agree the MF definition does not accord with the naïve conception of free will - but that is because the naïve conception of free will is based on unsound reasoning, and leads to a kind of free will which is not possible.
Paul Martin said:
That could very well be the reason.
Wonderful!

moving finger said:
No mystery.
Paul Martin said:
Not one worth debating anyway.
Even more wonderful!

Does this mean that you now accept my suggested changes to your necessary conditions? (ie that agents "believe that they have infallible knowledge" of options, rather than agents "have infallible knowledge" of options?)
MF
:smile:
 
  • #129
moving finger said:
Does this mean that you now accept my suggested changes to your necessary conditions? (ie that agents "believe that they have infallible knowledge" of options, rather than agents "have infallible knowledge" of options?)
Yes. I accept your changes. I think you have improved on my original conditions. Thank you.

I am by nature slow but persistent. It took me a while but after thinking about your arguments, I finally saw that you are right. Sorry it took so long, and thank you for your effort.

Paul
 
  • #130
Paul Martin said:
Yes. I accept your changes. I think you have improved on my original conditions. Thank you.

I am by nature slow but persistent. It took me a while but after thinking about your arguments, I finally saw that you are right. Sorry it took so long, and thank you for your effort.

Paul

You are most welcome.

We have arrived at our necessary conditions for free will :
1. The agent must be conscious.
2. The agent must believe that multiple options for action are available.
3. The agent must know (or believe that it knows) at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
4. The agent must be able to choose and execute one of the options in the folklore sense of FW.

The above conditions (IMHO) are compatible with a deterministic world; they are also compatible with my definition of free will, as well as being (IMHO) an accurate description of exactly what humans experience when they claim to be acting as free agents.

moving finger said:
MF definition of free will : the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action, and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable.

As for the concern you expressed about the existence of naïve free will :
Paul Martin said:
OK, but there still remains the slightly nagging question of whether or not there is free will in the naïve sense. (Could I really have taken a lunch break? I just don't know.)
What I believe you mean here is : “If I had my time over again, could I have done otherwise than what I did?”. This, IMHO, is the naïve concept of free will; it is the concept usually espoused by Libertarians, and it is the concept we naturally think of based on “gut feeling” and “intuition” without really thinking rigorously about the issue.

My answer : Does it really matter whether you “could” have taken a lunch break or not? The fact is that “you were able to consider the option of taking a lunch break”, and "you believed at the time that this was an option available to you", and “you were able to evaluate the advantages and disadvantages of taking a lunch break”, and at the time of your decision you were NOT coerced into NOT taking a lunch break, and (most importantly) you did what you wanted to do at the time, which was "not take a lunch break".

If you could replay that time over again, with literally everything the same way as it was before, then the same things would happen – you would consider the option, you would believe the option is available, you would evaluate advantages and disadvantages, you would not be coerced, and you would once again DO WHAT YOU WANT TO DO, which is "not take a lunch break".

What I believe most Libertarians ACTUALLY MEAN when they ask “if I could replay the same situation exactly as before, could I have done otherwise than what I actually did?” is in fact that they want the "freedom" to NOT replay it exactly as it was before, they want to be able to "choose differently" which means in turn they want to be able to "want to choose differently", which is NOT REPLAYING EXACTLY AS IT WAS BEFORE. The Libertarian who thinks he can replay and choose differently is therefore (IMHO) deceiving himself into thinking that he is actually replaying the same situation, when in fact he is not.

To the naïve question of free will expressed as “if I could replay the same situation EXACTLY as before, could I have done otherwise than what I actually did?” the answer is (IMHO) NO, YOU COULD NOT HAVE DONE OTHERWISE, BUT IT DOESN’T MATTER!

MF
:smile:
 
Last edited:
  • #131
moving finger said:
I think one will find that if one models the above RA mechanism and examines it rationally and logically, looking at the possible sequences generated, then one will find that the introduction of the RNG prior to the moment of choice acts in much the same way as introducing the RNG after the moment of choice. In both cases, there is a point at which a deterministic choice is made by the agent based on alternatives available, but in both cases the final result is in fact random. This is not free will. This is simply a random-choice-making mechanism.

Tournesol said:
it isn't the same. Introducing randomness after choice removes 'ownership'. The hypothetical AI wouldn't be able to explain why it did as it did.
Tournesol,

I just realized that I misunderstood your comment here. Apologies. Let me reply correctly this time :

I agree that in the case of the RNG after the moment of choice, the agent would not be able to explain why it chose what it did choose.

But on the other hand, in the case of the RNG before the moment of choice, the agent would not be able to explain why it considered the alternatives that it did consider – it would in fact have no control over the alternatives being considered because those alternatives are being generated, not by any rational process within the agent, but randomly.

In both cases, the outcome is random.

In both cases, the agent does not completely control what it does.
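
A minimal sketch of the two placements (hypothetical names; across runs, both variants end in a random outcome):

Code:
import random

def deliberate(alternatives):
    # The agent's deterministic choice rule.
    return max(alternatives)

def rng_before_choice(rng):
    # The alternatives arrive at random: the agent has no control
    # over what it gets to consider.
    return deliberate(rng.sample(range(100), 3))

def rng_after_choice(rng):
    # The agent deliberates, but randomness then decides the act,
    # severing the act from the agent's reasons ('ownership' lost).
    _chosen = deliberate([10, 20, 30])
    return rng.randrange(100)

rng = random.Random()
print(rng_before_choice(rng), rng_after_choice(rng))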

MF
:smile:
 
  • #132
Tournesol said:
It's to do with the ability to have done otherwise.
Seems like a harmless expression, doesn't it? Surely it stands to reason that all free will agents "have the ability to have done otherwise"?

I wish to show that this naive Libertarian concept of free will is an impossibility.

Libertarians seem to believe that "free will" is somehow associated with the fact that "if one could replay the circumstances exactly the same as before, then one must have been able to have done otherwise than what one actually did".

For example, one hour ago I could have chosen to take a lunch break, or I could have chosen to continue typing. In fact, I chose to continue typing. The Libertarian would say that if I could replay the circumstances exactly the same as before, then (if I have free will) I must have been able to choose to take a lunch break rather than to continue typing.

At first sight, this idea seems intuitively "right"; our naive impression of free will is surely that we can choose to do whatsoever we wish, and therefore (our intuition tells us), if we have free will then that also means that, given identical circumstances, we still must have been able to do otherwise than what we actually did?

Let us analyse this seemingly "obvious" statement a little more closely.

Firstly, what do we mean by "circumstances exactly the same as before"? Do we mean simply that the circumstances should be similar, but not necessarily identical? No, of course not, because obviously if the circumstances were even slightly different then that might affect our choice anyway, regardless of whether we "choose freely" or not.
Therefore, when we say "circumstances exactly the same as before" we do mean precisely the same, including our own internal wishes, desires, volitions.

Secondly, what do we mean by "able to have done otherwise"?
Do we mean "physically able", in the sense that one is physically capable of carrying out different actions? No of course not.
Do we mean "able to choose", in the sense that one is capable of selecting one of among various alternatives?
This seems closer to what we actually mean. But surely "our choice" is determined by "us"; we "freely" decide our choice based upon the prevailing circumstances.

Now combine these two. Repeat the scenario, with "circumstances exactly the same as before".

If circumstances are indeed exactly the same as before, then all of our internal wishes, desires, volitions etc will also be exactly the same as before. In which case, why on earth would we WANT to choose differently than the way we did before? Replay the scenario with exactly the same conditions, and any rational "free thinking" agent will choose exactly the same way each and every time. The only reason why it should ever "choose differently" in the carbon-copy repeat is if there is some element of indeterminism in the choice - but do Libertarians REALLY want to say that their free will choices are governed by indeterminism? I think not.

My answer to this naive Libertarian concept of free will : Does it really matter whether I “could” have taken a lunch break or not?

The fact is that “I was able to consider the option of taking a lunch break”, and in addition "I believed at the time that this was an option available to me", and even “I was able to evaluate the advantages and disadvantages of taking a lunch break”, and furthermore at the time of my decision I was NOT coerced into NOT taking a lunch break, and (most importantly) I did what I wanted to do at the time, which was "not take a lunch break".

If I could replay that time over again, with literally everything the same way as it was before, then the same things would happen – I would consider the options, I would believe the options are available, I would evaluate advantages and disadvantages, I would not be coerced, and I would once again DO WHAT I WANTED TO DO, which is (because the circumstances are identical) "not take a lunch break".

What I believe most Libertarians ACTUALLY MEAN when they say "if one could replay the circumstances exactly the same as before, then one must have been able to have done otherwise than what one actually did" is in fact that they want to have the "freedom" to NOT replay it EXACTLY as it was before, they want in fact to be able not only to "choose differently" to the way they did before, but they also want to "want to choose differently", which is then NOT REPLAYING EXACTLY AS IT WAS BEFORE.

The Libertarian who thinks he can replay perfectly and still choose differently is therefore (IMHO) deceiving himself.

The naïve concept of free will is expressed as “if one could replay the circumstances exactly the same as before, then one must have been able to have done otherwise than what one actually did”

- and the rational response is (IMHO) YOU COULD NOT HAVE DONE OTHERWISE THAN WHAT YOU DID, BUT IT REALLY DOESN’T MATTER!

MF
:smile:
 
  • #133
moving finger said:
I have no idea what you mean by strong free will
(but from the rest of your post I suspect we have some similar beliefs)

May I ask - do you believe your concept of free will is compatible with determinism?
MF
:smile:

Sure. I suppose I can formulate what I mean when I say that an action of mine is free:

Any action X is a freely willed action if, and only if, the impulse to carry it out was internal to my own psyche and I was conscious of this impulse. This basically just means that so long as I ordered the action, then it's freely willed. Even if this I is nothing more than a particular unique confluence of physical and historical forces networking to determine the behavior of my body, that's fine with me. It doesn't even matter if I couldn't have done otherwise. 'Strong' free will is just that and seems to be what everyone else wants - contracausal, non-deterministic, and could have done otherwise.

By the way, Paul Martin asked a while back what I meant by my distinction between 'experiential' and 'functionalist' knowledge. Functionalist isn't the best word to use, as it conjures up images of psychological theories that I'm not endorsing, and these aren't accepted technical terms or anything, so I probably should explain. Maybe a better distinction would be between conscious and non-conscious knowledge, since the point that I was trying to make was simply that I don't agree that knowledge is just the experiential state that one is in when one acquires knowledge. For instance, everyone in this thread likely knows that 2+37=39, even though they may not have been thinking about it at that time. Given the results of hypnotic therapy and such, it's entirely possible that you have knowledge of the past that you are not and may never be conscious of. This knowledge (suppressed memories) would fit the non-conscious knowledge mold but wouldn't fit what I meant by functional knowledge, as functional knowledge has to be usable in some way.

A good example of what I meant by functional, non-experiential knowledge is a typist's knowledge of the keyboard. I know exactly where all of the keys are on the board and use that knowledge to type out words on a screen. Rarely am I conscious of where the keys are, however. I'm certainly not thinking about it; I'm just thinking about the words I want to produce. In the same way, a good pitcher never thinks about the mechanics needed to produce a good curveball; he just throws the pitch. Nonetheless, he must have knowledge of those mechanics in order to have the ability to throw a curveball in the first place.
 
Last edited:
  • #134
And loseyourname, you could add that if some of those causes were randomly altered, that would change either the given parameters, or your desires, and you either would want to do differently because of the different causes, or else you would want differently because your desires were different, but in neither case would you be acting freely.
 
  • #135
selfAdjoint said:
And loseyourname, you could add that if some of those causes were randomly altered, that would change either the given parameters, or your desires, and you either would want to do differently because of the different causes, or else you would want differently because your desires were different, but in neither case would you be acting freely.

Well, they'd be free under my conception, but I suppose not in a strong sense. I'm kind of with Stace's language analysis, though, when he demonstrates that the common usage of the term 'free' only denotes that an action was not compelled by an external force as a proximate cause. As someone said (I can't remember who), we may be free to do as we please, but we are not free to please as we please.
 
  • #136
selfAdjoint said:
... in neither case would you be acting freely.
Can you please define what you mean by "acting freely"?

Thanks

MF
:smile:
 
  • #137
loseyourname said:
Sure. I suppose I can formulate what I mean when I say that an action of mine is free:

Any action X is a freely willed action if, and only if, the impulse to carry it out was internal to my own psyche and I was conscious of this impulse. This basically just means that so long as I ordered the action, then it's freely willed. Even if this I is nothing more than a particular unique confluence of physical and historical forces networking to determine the behavior of my body, that's fine with me. It doesn't even matter if I couldn't have done otherwise.
Agreed. And this kind of free will is indeed compatible with determinism.

My preferred definition of free will I think you will find agrees completely with your above description :
"Free will is the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable."

loseyourname said:
'Strong' free will is just that and seems to be what everyone else wants - contracausal, non-deterministic, and could have done otherwise.
This kind of “Strong” free will, IMHO, is a “will-o-the-wisp” and cannot exist. This seems to be the kind of “free will” that Libertarians want, but I have yet to find anyone who can both unambiguously define it and rationally defend it.

MF
:smile:
 
  • #138
loseyourname said:
As someone said (I can't remember who), we may be free to do as we please, but we are not free to please as we please.
Peter O'Toole, as the character T E Lawrence in the epic Lawrence of Arabia, says (in a memorable scene with Omar Sharif, where he finally comes to terms with his limited ability to change the course of history in the Arabian peninsula) :

"We are free to do what we want. But we are not free to want what we want."

MF
:smile:
 
  • #139
loseyourname said:
Sure. I suppose I can formulate what I mean when I say that an action of mine is free:

Any action X is a freely willed action if, and only if, the impulse to carry it out was internal to my own psyche and I was conscious of this impulse.

But you can be consciously aware of impulses that are not your conscious wish. A kleptomaniac is consciously aware of an impulse to steal, originating within herself, but it is not her wish or will to steal.

loseyourname said:
This basically just means that so long as I ordered the action, then it's freely willed.

What does "I ordered" mean?


MF said:
My preferred definition of free will I think you will find agrees completely with your above description :
"Free will is the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable."

That is compatible with indeterminism as well as determinism. Depending on what you mean by "choose" it might even require indeterminism.

loseyourname said:
'Strong' free will is just that and seems to be what everyone else wants - contracausal, non-deterministic, and could have done otherwise.

If the universe is indeterministic, there is nothing miraculous about the ability to have done otherwise.
 
  • #140
Tournesol said:
But you can be consciously aware of impulses that are not your conscious wish. A kleptomaniac is consciously aware of an impulse to steal, originating within herself, but it is not her wish or will to steal.

That's a compulsion, not an impulse. Subtle difference.

Tournesol said:
What does "I ordered" mean?

I made a decision to take any given particular action.

Tournesol said:
That is compatible with indeterminism as well as determinism. Depending on what you mean by "choose" it might even require indeterminism.

I don't think you were responding to me here, but I certainly don't view choices as indeterministic. They certainly can be, but don't have to be (go back to the Mars Rover example).

Tournesol said:
If the universe is indeterministic, there is nothing miraculous about the ability to have done otherwise.

There's nothing willed about it, either.
 