The so-called Darwinian model of free will

In summary, the "Darwinian model" of free will, also known as the Accidental model, proposes that indeterminacy can give an otherwise deterministic agent free will. This is achieved through a two-stage decision-making process, in which a "random idea generator" creates multiple alternate ideas and a "sensible idea selector" chooses one for action. However, this model may still result in capricious behavior if too few or too many alternate ideas are generated. It is also debatable whether this model truly endows "free will" without a clear definition of the term. When applied to machines, it implies that even a simple computer-based decision-making machine can have free will. Some consider this a flaw in the model, along with the fact
  • #36
There are two disparate assumptions here.

Assumption #1: "Free will is endowed by indeterminism."
- If you can prove that, and if you further assume that free will is a feature of consciousness, you have proven that consciousness is not computational, that it relies on quantum mechanics and that quantum mechanics is indeterminate. I say this because no other known natural phenomenon can provide for indeterminate processes.*

Assumption #2: "Free will is endowed by determinate processes."
- If you can prove that, and if you further assume that free will is a feature of consciousness, you have proven that one of the most contentious features of consciousness is determinate, which would imply, but not prove, that consciousness is computational.

Would you agree? How can one prove either case? It doesn't seem like there's a resolution to be had, because in the end, the results of what you have proven speak volumes about consciousness itself. You'll need more than a good argument if you're to prove either. You need a theory which can examine the phenomenon analytically and determine if it is possible or not.

Personally, I think the best you can do is to suggest free will is a feature of consciousness, and attempt to disprove/prove that. But that seems like an axiom as opposed to something which needs to be proven. You could also create definitions around that assumption, such as what I've suggested earlier, that free will is the sensation of making a decision, and one can then argue whether that sensation feels as if it is determinate or not, but not if it is truly determinate or not. Certainly the sensation feels 'random', but can you also say that the sensation feels indeterminate? It seems the argument is based on a gut feel regarding this sensation - more than any strict logic which can be built upon to prove either case.

*Note: Yes, MF, I know, I know. <grin> Determinism/indeterminism is beyond our ability to know because of non-local hidden variables, etc… We must however make the assumption that if we prove something is indeterminate, then we've also proven indeterminacy exists and the most likely candidate is QM.
 
  • #37
Why the Darwinian Model Does Not Work

Tournesol said:
What else is there ? When are you going to stop saying that I am wrong and start saying why I am wrong.

As succinctly as possible - here is what is wrong with the Darwinian model (in fact, here is what is wrong with the whole idea that "indeterminism endows free will").

Taking the definitions of RIG and SIS as before, let us suppose we "run" the model twice under identical conditions. In other words we are simply doing what the model claims to do, which is to "allow it to do otherwise" given identical circumstances. Let us call these two runs "Run 1" and "Run 2".

In Run 1 the RIG throws up (randomly) a finite number of possible courses of action. Let us suppose that included in the possible courses of action thrown up by the RIG are actions A and B. The SIS examines these and selects Action A as the "best choice" out of the ones made available by the RIG. Therefore, given a "free choice" between A and B, the SIS deterministically chooses A.

In Run 2 the RIG throws up (randomly) another finite number of possible courses of action. This time action B is included in the actions thrown up, but action A is not. The SIS examines these and now selects Action B as the "best choice" out of the ones made available by the RIG.

Clearly the model has indeed "done otherwise" in Run 2 compared to Run 1, just as (it would seem) CHDO requires - in Run 1 it chose A and in Run 2 it chose B.

But did the model “do otherwise” in run 2 out of “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG? The RIG remember is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!

Is this an example of “Could Have Done Otherwise”? Or would a more accurate description be “Forced to Do Otherwise”? The agent in run 2 was effectively forced to choose B rather than A (it was forced to do otherwise in run 2 than it had done in run 1) because of the limited choices afforded to it by the RIG. Perhaps a better acronym for this kind of model is thus FDO (forced to do otherwise) rather than CHDO?

The important question that we need to ask is : Since the model chose A rather than B in run 1, exactly WHY did it choose B in Run 2?

It is the answer to this question "WHY" which shows us where the Darwinian model fails.

Did the model choose B rather than A in run 2 because it was "acting freely" in its choice, and preferred B to A? No, clearly not (because in a straight choice between A and B the model always chooses A).
Or did the model choose B rather than A in run 2 because its choices were actually being RESTRICTED by the RIG, such that it was NOT POSSIBLE for it to choose to do A in Run 2, even though A was always a better choice than B? Yes, this is indeed clearly the case.
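As an aside, the two-run argument above can be sketched in a few lines of Python. Everything here (the action names, the utility values, the sample size) is an illustrative assumption of mine, not part of the model itself; the point is only that the deterministic SIS picks A whenever A is on offer, and is stuck with B when the RIG withholds A:

```python
import random

# Sketch of the single RIG -> SIS ("Darwinian") model from the two runs
# above. Action names and utilities are illustrative assumptions only.
ACTIONS = {"A": 10, "B": 7, "C": 3, "D": 1}   # utility of each action

def rig(rng, k=2):
    """Random Idea Generator: throws up a random subset of the actions."""
    return rng.sample(sorted(ACTIONS), k)

def sis(ideas):
    """Sensible Idea Selector: deterministically picks the best idea offered."""
    return max(ideas, key=ACTIONS.get)

# Run 1: suppose the RIG happens to throw up both A and B.
assert sis(["A", "B"]) == "A"   # in a straight choice, A always wins

# Run 2: the RIG did not throw up A, so A simply cannot be chosen.
assert sis(["B", "C"]) == "B"   # "forced to do otherwise" by the RIG
```

Any variation between runs comes entirely from what the RIG withholds, never from the selector changing its mind.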

Which kind of "free will" would you prefer to have?

One where you can choose rationally between all possible courses of action (the deterministic model)...
Or one where your choices are necessarily restricted by indeterminism such that you may be forced to take a non-optimum choice, whether you like it or not (the Darwinian model)?

Free will is supposed to be "acting free of restrictions"...
Please do point out any errors in interpretation or conclusion.

MF

Postscript : For the avoidance of doubt, I am not here assuming or asserting that the world is either deterministic or indeterministic. I am simply looking at some of the characteristics and implications of the so-called Darwinian model of "free will", to discover whether it has any explanatory power in the sense of endowing anything that could be described as free will in any sense of the word. I think it is quite clear from the above example that the Darwinian model (and this fact is shared by all indeterministically-driven models of free will), rather than endowing anything that we might wish to have in the form of free will, in fact "robs us" of the possibility of making optimum choices. The Darwinian model is designed to deliberately restrict (via the RIG) some of the choices available to an agent, thereby forcing the agent to make non-optimal choices under the misnomer of "CHDO".

MF
 
  • #38
Q_Goest said:
There are two disparate assumptions here.
Assumption #1: "Free will is endowed by indeterminism."
- If you can prove that, and if you further assume that free will is a feature of consciousness, you have proven that consciousness is not computational, that it relies on quantum mechanics and that quantum mechanics is indeterminate. I say this because no other known natural phenomenon can provide for indeterminate processes.*
Assumption #2: "Free will is endowed by determinate processes."
- If you can prove that, and if you further assume that free will is a feature of consciousness, you have proven that one of the most contensious features of consciousness is determinate which would imply, but not prove, that consciousness is computational.
Before we can do either, we need to agree on a definition of free will. The basic problem is that "libertarians" will not accept a definition of free will which is based on determinism (witness my debate with Tournesol above), and determinists will of course not accept a definition of free will which is based on indeterminism (or else they simply deny the existence of free will).

Q_Goest said:
Would you agree? How can one prove either case?
imho the solution to the problem is to
(1) remain open-minded on exactly "what free will is" (ie do not rule anything in or anything out), then
(2) explore the implications of various models and paradigms (as I have been examining the implications of the Darwinian model in post #37 above), to see "what kind of free will" (ie what properties of free will) these models endow, then
(3) ask oneself, for each model, "is this the kind of free will that is worth having?" (as I have done for the Darwinian model in #37 above)
(4) repeat steps 1 to 3 for all possible models (indeterministic and deterministic), and decide which one(s) is(are) best

As far as I can see, my analysis of the Darwinian model (above) can be applied generally to all indeterministic models of decision making. The general conclusion is that indeterminism does not endow anything that we would "want" to have as free will - it endows at worst only random behaviour or (at best, in the case of the Darwinian model) it necessarily restricts the choices available to us, so that we are always forced (by the random behaviour) to make non-optimum choices.

Q_Goest said:
It doesn't seem like there's a resolution to be had, because in the end, the results of what you have proven speak volumes about consciousness itself.
There is no resolution as long as people remain dogmatic and prejudiced in their definitions, along the lines of "free will MUST contain an element of CHDO, by definition!".

I am asking people to free their minds of preconceptions, free themselves of dogma, and start looking at the world objectively and scientifically. Only this way can we arrive at a true understanding.

Q_Goest said:
You'll need more than a good argument if you're to prove either.
The "proof" is in demonstrating the properties of various models - using the 4-step process I have outlined above, and eliminating those models which do not work.

Q_Goest said:
You need a theory which can examine the phenomenon analytically and determine if it is possible or not.
Personally, I think the best you can do is to suggest free will is a feature of consciousness, and attempt to disprove/prove that.
That may turn out to be the case. But I think we can do more than that. I think we can ask questions like "is indeterminism necessary for free will, and what are the consequences of this hypothesis?"

Q_Goest said:
But that seems like an axiom as opposed to something which needs to be proven. You could also create definitions around that assumption, such as what I've suggested earlier, that free will is the sensation of making a decision, and one can then argue whether that sensation feels as if it is determinate or not, but not if it is truly determinate or not. Certainly the sensation feels 'random', but can you also say that the sensation feels indeterminate? It seems the argument is based on a gut feel regarding this sensation - more than any strict logic which can be built upon to prove either case.
I feel the road to understanding free will is NOT to get locked in dogmatic definitions of "what free will is" or "what free will is not", and then get backed into a corner trying to defend those definitions. The road to understanding is to rise above definitional prejudice and dogma, and examine the real consequences of some of the proposed models.

Q_Goest said:
*Note: Yes, MF, I know, I know. <grin> Determinism/indeterminism is beyond our ability to know because of non-local hidden variables, etc… We must however make the assumption that if we prove something is indeterminate, then we've also proven indeterminacy exists and the most likely candidate is QM.
lol - yes, ok, point taken :biggrin:. I know I can be like a stuck record. The important point is that we have NOT proven that QM is indeterminate (only that it might be).

MF
 
  • #39
An agent which is captain of its fate can be said to be acting rationally and at the same time not controlled or unduly influenced by external factors

If "not controlled by" means "not causally determined by" that is
simply a re-statement of my definition of free will. If it does not
mean that...what does it mean ?
 
  • #40
Tournesol said:
If "not controlled by" means "not causally determined by" that is
simply a re-statement of my definition of free will. If it does not
mean that...what does it mean ?
I'm happy to agree that my suggested definition of "captain of one's fate" is essentially the same as your definition of free will. I have no problem with that at all.

I have pointed out why I consider the "Darwinian model" fails to endow anything that could be considered to be "free will" in post #37 of this thread, would you care to respond?

With respect

MF
 
  • #41
MF, your argument in post #37 above seems reasonable to me, and I'd agree there seem to be no obvious benefits to having a random or indeterminate ability to select between any given number of choices. But on the other hand, this doesn't seem like an argument that can provide any insight into why free will should emerge from a choice being made, or be endowed by the process of making that choice, regardless of whether that choice is deterministic or not. I don't suppose it was intended for that though; it is an attempt to dispute the claim that indeterministic processes are beneficial, which could be true. But being beneficial doesn't do anything to suggest why something would be endowed by a physical process.

If a choice is nothing more than a switch comparing two inputs and either making a determinate switch position or an indeterminate switch position, then how is this magical switch which gives rise to this feeling of free will any different from another switch? Do all switches produce the sensation of free will? (rhetorical question, don't answer <lol>)

If you are a computationalist, you might suggest that free will emerges from the sum total of all switch positions or in other words, by the sum of all computations. There is no single switch which endows anything.

If you disagree that computationalism can provide for consciousness (what is the term for that - "anti-computationalist"? hehe) then you might complain that deterministic processes in the brain can't provide for consciousness because there's no need to be aware of making a decision when the decision is simply the result of a calculation. The determinist's "free will" doesn't have any meaning whatsoever. There is no choice made and there is no such thing as a choice, so why should one expect a sensation from such a thing?

The comeback would seem to be that the determinist would say a choice WAS made, that two or more concepts emerged in the computation and a selection was made. I think this largely proves the point that it doesn't matter if you're a computationalist or an anti-computationalist, free will doesn't emerge except from conscious experience.

Before we can do either, we need to agree a definition of free will.

Def: Free will is a feature of consciousness. It is not a process, so suggesting it is determinate or indeterminate is false IMHO. Suggesting free will is endowed by determinate or indeterminate processes is no different than suggesting love, hate, curiosity or any other emotion is endowed by determinate or indeterminate processes. The question about free will being endowed by a process is nonsensical to begin with. That's my story and I'm stickin' to it! lol

I think we can ask questions like "is indeterminism necessary for free will, and what are the consequences of this hypothesis?"

I respectfully disagree. To suggest you can ask questions as you've proposed presupposes they can be answered in such terms as determinate and indeterminate processes. Since free will is a feature of consciousness, you can't answer the question without having some pre-defined concept of consciousness.

I feel the road to understanding free will is NOT to get locked in dogmatic definitions of "what free will is" or "what free will is not", and then get backed into a corner trying to defend thoise definitions. The road to understanding is to rise above definitional prejudice and dogma, and examine the real consequences of some of the proposed models.

Yep. Doing my best to sidestep any dogmatic concepts of processes endowing such things! <lol>
 
  • #42
Try to herd cats.

The answer comes out
like an arrow.
 
  • #43
meL said:
Try to herd cats.
The answer comes out
like an arrow.
that cats are unpredictable, yes :smile:

MF
 
  • #44
Modelling Decision-Making Machines

We have seen (post #37) that the simple so-called Darwinian model, which comprises a single Random Idea Generator followed by a Sensible Idea Selector, does not endow any properties to an agent which we might recognise as “properties of free will”. In particular, rather than endowing the ability of Could Have Done Otherwise (CHDO), the simple RIG-SIS combination acts to RESTRICT the number of possible courses of action, thus forcing the agent to make non-optimal choices (a feature I have termed Forced to Do Otherwise, FDO, rather than CHDO).

What now follows is a description of a slightly more complex model based on a parallel deterministic/random idea generator combination, which not only CAN endow genuine CHDO but ALSO is one in which the random idea generator creates new possible courses of action for the agent, rather than restricting possible courses of action.

Firstly let us define a Deterministic Idea Generator (DIG) as one in which alternate ideas (alternate possible courses of action) are generated according to a rational, deterministic procedure. Since it is deterministic the DIG will produce the same alternate ideas if it is re-run under identical circumstances.

Next we define a Random Idea Generator (RIG) as one in which alternate ideas (alternate possible courses of action) are generated according to an epistemically random procedure. Since it is epistemically random the RIG may produce different ideas if it is re-run under epistemically identical circumstances.
Note that the RIG may be either epistemically random and ontically deterministic (hereafter d-RIG), or it may be epistemically random and ontically indeterministic (hereafter i-RIG). Both the d-RIG and the i-RIG will produce different ideas when re-run under epistemically identical circumstances. (For clarification – a d-RIG behaves similar to a computer random number generator (RNG). The RNG produces epistemically random numbers, but if the RNG is reset then it will produce the same sequence of numbers that it did before).
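For what it's worth, the d-RIG/RNG analogy can be made concrete with a tiny Python sketch (the seeding scheme and value range are my own illustrative assumptions): a seeded pseudo-random generator looks epistemically random, yet resetting it replays the identical sequence, ie it is ontically deterministic.

```python
import random

# A d-RIG behaves like a seeded pseudo-random number generator: its output
# looks random, but "resetting" it (re-seeding) replays the same sequence.
def d_rig(seed, n=5):
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

run1 = d_rig(seed=42)
run2 = d_rig(seed=42)   # re-run under "identical circumstances"
assert run1 == run2     # epistemically random, ontically deterministic
```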

Next we define a Deterministic Idea Selector (DIS) as a deterministic mechanism for taking alternate ideas (alternate possible courses of action), evaluating these ideas in terms of payoffs, costs and benefits etc to the agent, and rationally choosing one of the ideas as the preferred course of action.

Finally we define a Random Idea Selector (RIS) as a mechanism for taking alternate ideas (alternate possible courses of action), and choosing one of the ideas as the preferred course of action according to an epistemically random procedure. Since it is epistemically random the RIS may produce a different choice if it is re-run under epistemically identical circumstances (ie with epistemically identical input ideas).

These four basic building blocks, the DIG, RIG, DIS and RIS, may then be assembled in various ways to create various forms of “idea-generating and decision-making” models with differing properties.

In what follows I shall distinguish between genuine Could Have Done Otherwise (CHDO, where the agent can rationally choose between a set of possibilities which includes at least all of the rational possibilities) and Forced to Do Otherwise (FDO, where the agent EITHER simply chooses randomly, ie the choice is not rational, OR is forced to choose from a set of random possibilities which may not include all of the rational possibilities). As we have seen in post #37, the so-called Darwinian model is an example of FDO.

Deterministic Agent
DIG -> DIS

The Deterministic agent comprises a DIG which outputs rational possible courses of action which are then input to a DIS, which rationally chooses one of the ideas as the preferred course of action.
The Deterministic agent will always make the same choice given the same (identical) circumstances.
Clearly there is no possibility of CHDO.
A Libertarian would claim that such an agent does not possess free will, but a Compatibilist might not agree.

Capricious (Random) Agent
DIG -> RIS

The Capricious agent comprises a DIG which outputs possible courses of action which are then input to a RIS.
Also known as the Buridan’s Ass model.
The Capricious agent will make epistemically random choices, even under the same (epistemically identical) circumstances.
Clearly there is the possibility for the agent to “choose otherwise” given epistemically identical circumstances, but since the choice is made randomly and not rationally this is an example of FDO.
I doubt whether even a Libertarian would claim that such an agent possesses free will.
(Note that making the agent ontically random, ie indeterministic, rather than epistemically random does not change the conclusion.)

So-called Darwinian Agent
RIG -> DIS

The so-called Darwinian agent comprises a RIG which outputs possible courses of action which are then input to a DIS.
See http://www.geocities.com/peterdjones/det_darwin.html#introduction for a more complete description of this model.
The so-called Darwinian agent will make rational choices from a random selection of possibilities.
As shown in post #37 of this thread, even if the RIG is truly indeterministic (i-RIG) the random nature of generation of the alternative possibilities means that this model forcibly restricts the choices available to the agent, such that non-optimum choices may be made. The model thus endows FDO and not CHDO.
Because of this property (ie FDO rather than CHDO) no true-thinking Libertarian should claim that such an agent possesses free will, even in the case of an i-RIG where the agent clearly behaves indeterministically.

Parallel DIG-RIG Agent
DIG -> }
       } DIS
RIG -> }

The parallel DIG-RIG agent comprises TWO separate idea generators, one deterministic and one random, working in parallel. The deterministic idea generator outputs rational possible courses of action which are then input to a DIS. Also input to the same DIS are possible courses of action generated by the random idea generator. The DIS then evaluates all of the possible courses of action, those generated deterministically and those generated randomly, and the DIS then rationally chooses one of the ideas as the preferred course of action.
Since a proportion of the possible ideas is generated randomly, the Parallel DIG-RIG agent (just like the Capricious and Darwinian agents) can appear to act unpredictably. If the RIG is deterministic (a d-RIG) then the Parallel DIG-RIG agent behaves deterministically but still unpredictably. If the RIG is indeterministic (an i-RIG) then the Parallel DIG-RIG behaves indeterministically (and therefore also unpredictably).
Since a proportion of the possible ideas is also generated deterministically and rationally (by the DIG), not only does the agent NOT behave capriciously but also the agent is NOT in any way restricted or forced by the RIG to choose a non-optimal or irrational course of action (all rational courses of action are always available as possibilities to the DIS via the DIG, even if the RIG throws up totally irrational or non-optimum possibilities).
The Parallel DIG-RIG model therefore combines the advantages of the Deterministic model (completely rational behaviour) along with the advantages of the Darwinian model (unpredictable behaviour) but with none of the drawbacks of the Darwinian model (the agent is not restricted or forced by the RIG to make non-optimal choices).
If the Parallel DIG-RIG is based on a d-RIG then it behaves deterministically but unpredictably. Importantly, it does NOT then endow CHDO (since it is deterministic), therefore presumably would not be accepted by a Libertarian as an explanatory model for free will. Interestingly though, this model (DIG plus d-RIG) explains everything that we observe in respect of free will (it produces a rational yet not necessarily predictable agent), and the model should be acceptable to Determinists and Compatibilists alike (since it is deterministic).
If the Parallel DIG-RIG is based on an i-RIG then it is both indeterministic and unpredictable. Therefore it does endow CHDO (and this time it is GENUINE CHDO, not the FDO offered by the Darwinian model), and therefore presumably would be accepted by a Libertarian (but obviously not by either a Determinist or a Compatibilist) as an explanatory model for free will.
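To make the Parallel DIG-RIG concrete, here is a minimal Python sketch. The action names and utility values are purely illustrative assumptions on my part, and `random.Random()` merely stands in for the RIG (a hardware source would be needed for a true i-RIG). The key property is in the final assertion: whatever the RIG throws up, the DIS can never be forced below the best idea the DIG supplies:

```python
import random

# Minimal sketch of the Parallel DIG-RIG agent. All names and utility
# values are illustrative assumptions, not part of the model itself.
UTILITY = {"fight": 5, "flee": 8, "hide": 6, "sing": 1, "burrow": 2}

def dig():
    """Deterministic Idea Generator: always offers the rational candidates."""
    return ["fight", "flee", "hide"]

def rig(rng, k=2):
    """Random Idea Generator: throws up extra, possibly irrational, ideas."""
    return rng.sample(sorted(UTILITY), k)

def dis(ideas):
    """Deterministic Idea Selector: rationally picks the best available idea."""
    return max(ideas, key=UTILITY.get)

rng = random.Random()          # stand-in; a true i-RIG needs a hardware source
choice = dis(dig() + rig(rng))
# The DIG guarantees every rational idea is always on the table, so the RIG
# can only ever ADD options - the agent is never forced below the optimum:
assert UTILITY[choice] >= max(UTILITY[i] for i in dig())
```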

Conclusion
We have shown that the so-called Darwinian model incorporating a single random idea generator gives rise to an indeterministic agent without endowing genuine CHDO. However, a suitable combination of deterministic and indeterministic idea generators, in the form of the Parallel DIG-RIG model, can form the basis of a model decision-making machine which does endow genuine CHDO, and is genuinely both indeterministic and rational.

Constructive criticism welcome!

MF
 
  • #45
Moving finger, I really appreciated this careful post. This essay on symbolic dynamics (http://cscs.umich.edu/~crshalizi/notebooks/symbolic-dynamics.html), which I found via Marcus's Atiyah thread on Strings, Branes and LQG and the comments on Peter Woit's blog which Marcus links to, is another way to bring stochastic behavior (even the "real" kind) out of continuous deterministic physics by means of limiting coarse-graining strategies. The essay is by Cosma Shalizi, a very respected mathematician in this area (nonlinear stochastic models), and I especially want to tout the many links he gives, which taken together amount to a copious training resource on the subject.
 
  • #46
MF, I like the way you've laid this out. Your thinking is clear, and it forms a fairly good basis to discuss advantages/disadvantages of deterministic and indeterministic processes to AI. Taking the focus of the discussion away from simply "what processes endow free will" and focusing on what advantages/disadvantages there are to AI would IMHO be of great benefit to this discussion.

I noticed you also broke out the DIS from the RIS, as up until now the logic you've used to berate indeterminism was focused on JUST the RIS and not the RIG, which is where indeterminism may actually be of use. More on this in a moment.

Finally, thanks also for putting in bold the abbreviations as my memory is as useful as a spaghetti strainer for drinking coffee. Now I only need to remind myself to scroll up!

It's interesting you've concluded that indeterminate processes can be of value in decision making. The conclusion you've reached is echoed by the "Darwin" model, here:
Objection 3: "Indeterminism would disrupt the process of rational thought, and result in a capricious, irrational kind of freedom not worth having."
Is that so ? Computer programmes can consult random-number generators where needed (including 'real' ones implemented in hardware). The rest of their operation is perfectly deterministic. Why should the brain not be able to call on indeterminism as and when required, and exclude it the rest of the time ? And if random numbers are useful for computers, why should indeterministic input be useless for brains ? Is human rationality that much more hidebound than a computer ? Even including all the stuff about creativity ? Pseudo-random numbers (which are really deterministic) may be used in computers, and any indeterminism the brain calls on might be only pseudo-random.

I'd agree that an AI computer program could potentially make use of some type of RIG, whether determinate or not.

One thought on reading this - it seems the insinuation is that, for a RIG, the ideas generated may or may not be of any value at all:
As shown in post #37 of this thread, even if the RIG is truly indeterministic (i-RIG) the random nature of generation of the alternative possibilities means that this model forcibly restricts the choices available to the agent, such that non-optimum choices may be made. The model thus endows FDO and not CHDO.

Similarly, you've insinuated that the DIG only provides useful solutions:
The Deterministic agent comprises a DIG which outputs rational possible courses of action which are then input to a DIS, which rationally chooses one of the ideas as the preferred course of action.

Note also you've similarly implied the DIS chooses optimal solutions and the RIS chooses poor ones, or at best, random ones. This seems fairly reasonable and you may have some good logic behind why you hold these views. However, it seems to me the reasoning you may have is that anything indeterministic or random will weigh every choice (RIS) or create every solution (RIG) on equal footing. That is, a RIS for example, won't try to determine the best, it will pick one solution at random. Similarly the RIG will simply bubble up ideas and provide as many useless ones as useful ones. The thing is, I don't see why that should necessarily be the case. Certainly a RIG or RIS could be designed to do so, but that doesn't mean it is an optimal solution for its function.

Take for example radioactive decay. What is indeterministic (or random, if you don't like the i-word) is WHEN it decays. Despite that, the decay process still follows a well-defined probability distribution. That feature of indeterminism can also be made use of, and if I'm not mistaken it largely is, for computer programs that require a true random number generator. One can 'design' a RIG that incorporates a process that minimizes useless ideas and maximizes useful ones, while at the same time coming up with ideas a DIG might not. Similarly, a RIS could be designed to incorporate a process that minimizes useless choices and maximizes useful ones. The benefit of such a RIS would be the ability to keep a predator guessing, so to speak. If we always made the most logical choice, wouldn't a predator use that to its advantage, by assuming for example that you will always run away instead of fight? It would seem to me that there are benefits to doing non-optimal things at times, since honestly we can't and don't always (or even usually) try to weigh choices as logically the best or worst. People and animals often do what is 'not optimal' and in so doing glean an advantage of surprise.
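A RIS along these lines can be sketched as a softmax-weighted random selector, a standard mixed-strategy trick (the payoffs and temperature parameter here are purely illustrative assumptions): the agent usually takes the best action, but not always, so its behavior cannot be perfectly anticipated.

```python
import math
import random

# A "designed" RIS: random, but biased toward better ideas rather than
# weighing all options equally. Payoffs and temperature are illustrative
# assumptions; this is the standard softmax / mixed-strategy trick.
def weighted_ris(ideas, utility, rng, temperature=1.0):
    weights = [math.exp(utility[i] / temperature) for i in ideas]
    return rng.choices(ideas, weights=weights, k=1)[0]

utility = {"flee": 8, "hide": 6, "fight": 5}   # illustrative payoffs
rng = random.Random(0)
picks = [weighted_ris(list(utility), utility, rng) for _ in range(1000)]
# "flee" dominates, but the agent occasionally surprises the predator:
assert picks.count("flee") > picks.count("fight")
```

Lowering the temperature makes the agent more predictable; raising it makes it more capricious.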

Hope that was constructive. :smile:

PS: The link in your last post doesn't work, you might want to check it.
 
  • #48
Q_Goest said:
It's interesting you've concluded that indeterminate processes can be of value in decision making.
Yes, and that conclusion was totally counter-intuitive to me! :blushing:
In fact the conclusion is more than just “indeterminate processes can be of value in decision making”, it’s that “random (both epistemically and ontically random) processes can be of value in decision making”. It’s no secret that up until that post I had taken the position that randomness simply makes decision-making random, and that’s it. I have to “eat humble pie”, but I’m happy to do so because I now feel that I have a much better understanding of what’s going on.
Q_Goest said:
The conclusion you've reached is echoed by the "Darwin" model, here:
Quote:
Objection 3: "Indeterminism would disrupt the process of rational thought, and result in a capricious, irrational kind of freedom not worth having."
Is that so? Computer programmes can consult random-number generators where needed (including 'real' ones implemented in hardware). The rest of their operation is perfectly deterministic. Why should the brain not be able to call on indeterminism as and when required, and exclude it the rest of the time? And if random numbers are useful for computers, why should indeterministic input be useless for brains? Is human rationality that much more hidebound than a computer? Even including all the stuff about creativity? Pseudo-random numbers (which are really deterministic) may be used in computers, and any indeterminism the brain calls on might be only pseudo-random.
Yes, I agree.
The reason why randomness can “add value” to an otherwise deterministic decision-making machine is simply because the random idea generator may be able to throw up possible solutions which are not included in the “set of possible solutions” afforded by a deterministic idea generator – the total set of possible solutions available to the agent is therefore possibly greater if it uses both deterministic and random idea generators.
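As a rough sketch of this point (the function names `dig`, `rig`, `dis` and the toy scoring problem are illustrative assumptions, not anything specified in the thread), the two generators and the selector might be modelled like this:

```python
import random

def dig(problem):
    """Deterministic idea generator: always the same 'rational' candidates."""
    return [problem - 1, problem, problem + 1]

def rig(rng, low=-50, high=50, n=5):
    """Random idea generator: samples candidates the DIG would never propose."""
    return [rng.randint(low, high) for _ in range(n)]

def dis(candidates, score):
    """Deterministic idea selector: rationally picks the best candidate."""
    return max(candidates, key=score)

# Toy problem (illustrative only): the true optimum (42) lies far outside
# the DIG's narrow candidate set around `problem` = 10.
score = lambda x: -abs(x - 42)
rng = random.Random(1)
combined = dis(dig(10) + rig(rng), score)   # deterministic + random ideas
dig_only = dis(dig(10), score)              # deterministic ideas only
```

Because the selector evaluates the union of both candidate sets, the combined agent can never score worse than the purely deterministic one, and may score better whenever the RIG happens to throw up an idea outside the DIG's set.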
Q_Goest said:
I'd agree that for an AI computer program, some type of RIG, whether determinate or not could potentially make use of such a feature.
One thought on reading this - the insinuation you've provided seems to be that the ideas generated by a RIG may or may not be of value at all:
moving finger said:
As shown in post #37 of this thread, even if the RIG is truly indeterministic (i-RIG) the random nature of generation of the alternative possibilities means that this model forcibly restricts the choices available to the agent, such that non-optimum choices may be made. The model thus endows FDO and not CHDO.
I think this follows. If the RIG is simply “throwing out random ideas” then it is quite possible (in one particular run) that all of the ideas generated in that run may be of no value; equally it is possible that some of the ideas may be of value, hence it is true that “the ideas generated may or may not be of value at all”.
Q_Goest said:
Similarly, you've insinuated that the DIG only provides useful solutions:
moving finger said:
The Deterministic agent comprises a DIG which outputs rational possible courses of action which are then input to a DIS, which rationally chooses one of the ideas as the preferred course of action.
I didn’t actually say “useful solutions” – I said “rational solutions” – and this seems quite reasonable to me. It is quite possible that, for a particular problem, the DIG will produce no “useful” solutions at all (even though it still provides rational possible solutions). I am assuming of course that the deterministic idea generator is operating according to a rational deterministic algorithm – given this it seems reasonable that it will produce rational possible solutions. It is exactly because the DIG produces only rational solutions that the RIG might add value – by providing random or non-rational solutions which might be more useful than the rational solutions of the DIG.
Q_Goest said:
Note also you've similarly implied the DIS chooses optimal solutions and the RIS chooses poor ones, or at best, random ones. This seems fairly reasonable and you may have some good logic behind why you hold these views. However, it seems to me the reasoning you may have is that anything indeterministic or random will weigh every choice (RIS) or create every solution (RIG) on equal footing. That is, a RIS for example, won't try to determine the best, it will pick one solution at random.
I guess this is correct – because this is how I define the RIS (it picks a solution at random – it does not try to evaluate the solutions).
Q_Goest said:
Similarly the RIG will simply bubble up ideas and provide as many useless ones as useful ones. The thing is, I don't see why that should necessarily be the case. Certainly a RIG or RIS could be designed to do so, but that doesn't mean it is an optimal solution for its function.
I completely agree that we could explore more complex models, mixing random and deterministic behaviour in the idea generators and idea selectors for example – my intention here was to take the simplest possible cases to see if they provide an agent with the properties we are looking for (eg CHDO and unpredictability combined with rational behaviour), and I think I have done that. We can develop more complex models of course, but the conclusion stays the same – indeterminism and randomness can add value for decision making agents.
Q_Goest said:
Take for example radioactive decay. What is indeterministic (or random if you don't like the i word) is WHEN it decays. Despite that, the decay process must still remain probabilistic. That feature of indeterminism can also be made use of, and if I'm not mistaken it largely is for computer programs that require the use of a random number generator. One can 'design' a RIG that incorporates a process that minimizes useless ideas and maximizes useful ones while at the same time coming up with ideas a DIG might not. Similarly, a RIS could be designed to incorporate a process that minimizes useless ideas and maximizes useful ones. The benefit of such a RIS would be the ability to keep a predator guessing, so to speak. If we always made the most logical choice, wouldn't a predator use that to its advantage by assuming for example, you will always run away instead of fight? It would seem to me that there are benefits to doing non-optimal things at times, since honestly we can't and don't always (or even usually) try to weigh choices as logically the best or worst. People and animals often do what is 'not optimal' and in so doing glean an advantage of surprise.
Agreed. I am not suggesting that my simple model of parallel and pure DIG-RIG is the way that people and animals actually behave (that never was my intention), only that this simple model is an example of indeterminism and randomness “adding value” for decision making agents.
Q_Goest said:
Hope that was constructive.
Very much so! Thanks :smile:
MF
 
  • #49
moving finger said:
But did the model “do otherwise” in run 2 out of “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG? The RIG remember is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!
It was not constrained by the RIG, because the RIG is not external. Whatever the internal causal basis of your actions is, it is not something external to you that is overriding your wishes and pushing you around.
Is this an example of “Could Have Done Otherwise”? Or would a more accurate description be “Forced to Do Otherwise”? The agent in run 2 was effectively forced to choose B rather than A (it was forced to do otherwise in run 2 than it had done in run 1) because of the limited choices afforded to it by the RIG. Perhaps a better acronym for this kind of model is thus FDO (forced to do otherwise) rather than CHDO?
That agent is the totality of SIS, RIG and everything else. One part of you does not constrain or force another.
Did the model choose B rather than A in run 2 because it was "acting freely" in its choice, and preferred B to A? No clearly not (because in a straight choice between A and B the model always chooses A).
Or did the model choose B rather than A in run 2 because its choices were actually being RESTRICTED by the RIG, such that it was NOT POSSIBLE for it to choose to do A in Run 2, even though A was always a better choice than B? Yes, this is indeed clearly the case.
Which kind of "free will" would you prefer to have?
The kind you are describing does not sound very attractive, but I can always amend the model so that "If the RIG succeeds in coming up with an option on one occasion, it will always include it on subsequent occasions".
After all, I only have to come up with a model that works.
 
  • #50
moving finger said:
We have seen (post #37) that the simple so-called Darwinian model, which comprises a single Random Idea Generator followed by a Sensible Idea Selector, does not endow any properties to an agent which we might recognise as “properties of free will”. In particular, rather than endowing the ability of Could Have Done Otherwise (CHDO), the simple RIG-SIS combination acts to RESTRICT the number of possible courses of action, thus forcing the agent to make non-optimal choices (a feature I have termed Forced to Do Otherwise, FDO, rather than CHDO).

Of course RIG+SIS is not restricted compared to pure determinism, because under pure determinism there is always exactly one (physically) possible choice. Other options may be considered as theories or ideas, but they will inevitably be rejected.




In what follows I shall distinguish between genuine Could Have Done Otherwise (CHDO, where the agent can rationally choose between a set of possibilities which includes at least all of the rational possibilities) and Forced to Do Otherwise (FDO, where the agent EITHER simply chooses randomly, ie the choice is not rational, OR is forced to choose from a set of random possibilities which may not include all of the rational possibilities).

I think that is misleading. It would be better to talk about sub-optimal CHDO and irrational CHDO. In particular it is a conceptual error to talk about agents being "forced" by internal processes that constitute them.


As we have seen in post #37, the so-called Darwinian model is an example of FDO.

It turns out to be sub-optimal CHDO only by making unnecessary assumptions.
So-called Darwinian Agent
RIG -> DIS

The so-called Darwinian agent comprises a RIG which outputs possible courses of action which are then input to a DIS.
See http://www.geocities.com/peterdjones/det_darwin.html#introduction for a more complete description of this model.
The so-called Darwinian agent will make rational choices from a random selection of possibilities.
As shown in post #37 of this thread, even if the RIG is truly indeterministic (i-RIG) the random nature of generation of the alternative possibilities means that this model forcibly restricts the choices available to the agent, such that non-optimum choices may be made. The model thus endows FDO and not CHDO.
Because of this property (ie FDO rather than CHDO) no true-thinking Libertarian should claim that such an agent possesses free will, even in the case of an i-RIG where the agent clearly behaves indeterministically.

There is no reason to suppose that the RIG, having succeeded in coming up with option A at time t, will fail to come up with it again -- this objection is based on an arbitrary limitation.

Parallel DIG-RIG Agent
DIG -> }
..... -> } DIS
RIG –> }

The parallel DIG-RIG agent comprises TWO separate idea generators, one deterministic and one random, working in parallel. The deterministic idea generator outputs rational possible courses of action which are then input to a DIS. Also input to the same DIS are possible courses of action generated by the random idea generator. The DIS then evaluates all of the possible courses of action, those generated deterministically and those generated randomly, and the DIS then rationally chooses one of the ideas as the preferred course of action.
Since a proportion of the possible ideas is generated randomly, the Parallel DIG-RIG agent (just like the Capricious and Darwinian agents) can appear to act unpredictably. If the RIG is deterministic (a d-RIG) then the Parallel DIG-RIG agent behaves deterministically but still unpredictably. If the RIG is indeterministic (an i-RIG) then the Parallel DIG-RIG behaves indeterministically (and therefore also unpredictably).
Since a proportion of the possible ideas is also generated deterministically and rationally (by the DIG), not only does the agent NOT behave capriciously but also the agent is NOT in any way restricted or forced by the RIG to choose a non-optimal or irrational course of action (all rational courses of action are always available as possibilities to the DIS via the DIG, even if the RIG throws up totally irrational or non-optimum possibilities).
The Parallel DIG-RIG model therefore combines the advantages of the Deterministic model (completely rational behaviour) along with the advantages of the Darwinian model (unpredictable behaviour) but with none of the drawbacks of the Darwinian model (the agent is not restricted or forced by the RIG to make non-optimal choices).
If the Parallel DIG-RIG is based on a d-RIG then it behaves deterministically but unpredictably. Importantly, it does NOT then endow CHDO (since it is deterministic) therefore presumably would not be accepted by a Libertarian as an explanatory model for free will. Interestingly though, this model (DIG plus d-RIG) explains everything that we observe in respect of free will (it produces a rational yet not necessarily predictable agent), and the model should be acceptable both to Determinists and Compatibilists alike (since it is deterministic).

It doesn't explain the subjective sensation of having multiple possibilities that are open to you at the present moment, nor the phenomenon of regret, which implies CHDO. Of course, I have argued against compatibilism (and hence against d-RIG as adequate for a fully-fledged idea of FW) in my article.
 
  • #51
More thoughts about “CHDO"

I now agree (see post #44) that incorporating random elements into a rational and otherwise deterministic decision making agent may provide a wider “set of alternate possibilities” for the agent to choose from.
But I was mistaken in thinking (as stated in my post #44) that such random elements could somehow give rise to genuine CHDO.
What exactly do we mean by CHDO in the context of an agent’s will?
Do we mean simply that “things could have turned out differently, whether I wanted them to or not”? This is effectively the kind of CHDO that we have in the case of a RIG. The RIG is throwing up random possible courses of action, and the idea selector is (via the RIG) being restricted from choosing certain courses of action. This is precisely how the RIG is supposed to endow the so-called CHDO. But I suggest that this is not what we “free will agents” really mean when we say we “Could Have Done Otherwise”.

What is GENUINE CHDO?
I humbly suggest that what an agent really means by CHDO in the context of free will is the following :
CHDO Definition : What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the “possibility to do otherwise”, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.

In other words, given a free choice between action A or action B, I will select action A if and only if I CHOOSE to select action A. If the situation is re-run under identical circumstances, both choices A and B must be available to me once again, and I would then “do otherwise than select A” if and only if I then CHOOSE to select action B rather than A. This, to me, is what we mean when we say that we choose freely. It is a choice free of constraint. After all, why would I WANT to do B unless I freely CHOOSE to do B? Being “forced” to do B because the option of doing A “is no longer available to me” (this is what the RIG does) is NOT an example of free will.
Incorporating a random idea generator (RIG) (even in parallel with a DIG) does NOT result in an agent which possesses the above GENUINE CHDO properties. The RIG acts to RESTRICT the choices available to the DIS. The DIS is therefore once again FORCED to do otherwise. The DIS can only make a FREE CHOICE between A and B if the idea generator offers up both A and B as alternate possibilities. If the idea generator does not throw up both A and B then the “choice” is effectively being forced on the DIS by the random nature of the RIG, rather than the DIS “choosing freely”.
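The "forced to do otherwise" point can be made concrete with a toy sketch (the `sis` function and the A/B preference table here are hypothetical illustrations, not a model anyone in the thread has specified). The selector always prefers A when offered it; when the idea generator fails to offer A, the agent "does otherwise" only because the menu changed, not because it chose differently:

```python
def sis(candidates, preference):
    """Sensible Idea Selector: rationally picks its most-preferred candidate."""
    return max(candidates, key=preference)

prefer = {"A": 2, "B": 1}.get   # the agent always rates A above B

run1 = sis(["A", "B"], prefer)  # both options on the menu: agent picks A
run2 = sis(["B"], prefer)       # generator failed to offer A: agent 'picks' B
```

Nothing about the agent's preferences changed between the two runs; only the set of offered possibilities did.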
moving finger said:
But did the model “do otherwise” in run 2 out of “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG? The RIG remember is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!
Tournesol said:
It was not constrained by the R.I.G. because the RIG is not external.
It makes no difference whether one places the RIG “external” or “internal” to the agent – the simple fact remains that the way the RIG works is to offer up a limited and random number of alternate possibilities to the idea selector – this is true even if one places the RIG “internal to the agent”. Given a choice between A and B, the agent can only “choose freely” between A and B if both A and B are offered up (externally or internally) as alternate possibilities. If either A or B is not offered up (because the RIG only throws up one and not the other) then the agent is making a restricted or constrained choice, and NOT a free choice.
Tournesol said:
Whatever the internal causal basis of your actions is, it is not
something external to you that is overriding your wishes and pushing you around.
Agreed. But whatever the causal basis of my actions, I would rather have a free will which is based on a rational evaluation of all possible alternatives, rather than one which is somehow forced to make decisions because the choice is restricted by some random element. Thus to suggest that a random idea generator which acts to restrict choices somehow endows the ability to “have freely done otherwise” is incoherent and false. The “doing otherwise” in the case of the RIG is compelled upon the agent by the random nature of the RIG, it is not something the agent rationally and freely chooses to do.
moving finger said:
Is this an example of “Could Have Done Otherwise”? Or would a more accurate description be “Forced to Do Otherwise”? The agent in run 2 was effectively forced to choose B rather than A (it was forced to do otherwise in run 2 than it had done in run 1) because of the limited choices afforded to it by the RIG. Perhaps a better acronym for this kind of model is thus FDO (forced to do otherwise) rather than CHDO?
Tournesol said:
That agent is the totality of SIS, RIG and everything else. One part
of you does not constrain or force another.
We are trying here to “model” the causal basis of our actions. It has been suggested that a random element may somehow be the source of CHDO. But as discussed at the beginning of this post, REAL CHDO would be an agent choosing freely between two alternate possibilities A and B in both runs.
As I showed above, it makes no difference whether that random element is external or internal to the agent: if the random element acts to “restrict the possibilities being considered by the agent”, such that the agent no longer has a free choice between A and B, then it is no longer a case of CHDO, it is instead FDO.
moving finger said:
Did the model choose B rather than A in run 2 because it was "acting freely" in its choice, and preferred B to A? No clearly not (because in a straight choice between A and B the model always chooses A).
Or did the model choose B rather than A in run 2 because its choices were actually being RESTRICTED by the RIG, such that it was NOT POSSIBLE for it to choose to do A in Run 2, even though A was always a better choice than B? Yes, this is indeed clearly the case.
Which kind of "free will" would you prefer to have?
Tournesol said:
The kind you are describing does not sound very attractive, but I can always amend the model so that "If the RIG succeeds in coming up with an option on one occasion, it will always include it on subsequent occasions".
But we are not talking about “subsequent occasions” in a normal linear timeline, are we? What a Libertarian means by free will is “if I could have the choice all over again, with conditions EXACTLY as they were before, in other words if we re-run the model again under identical circumstances, then I could still choose to do otherwise”.
If we “re-run the model under identical circumstances” this is equivalent to rewinding the clock back to the original start point – the RIG will have absolutely no “memory” of its earlier selection of choices, there will be no possible mechanism for ensuring that it comes up with the same choices as before (unless it is in fact deterministic)……
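The point about "memory" across re-runs can be illustrated with pseudo-random versus system-entropy generators in Python (a loose analogy only: re-seeding is the programmer's stand-in for "rewinding the clock", and `rig_run` is a hypothetical name). A deterministic d-RIG, re-seeded identically, offers exactly the same options on every re-run; a generator drawing on fresh entropy has no mechanism to reproduce its earlier offerings:

```python
import random

def rig_run(rng, menu, n=2):
    """One 'run' of a RIG: offer n candidate options sampled from the menu."""
    return rng.sample(menu, n)

menu = ["A", "B", "C", "D"]

# d-RIG analogue: identical re-seeding is 'rewinding the clock' --
# the same options are offered on every re-run.
run1 = rig_run(random.Random(7), menu)
run2 = rig_run(random.Random(7), menu)

# i-RIG analogue: drawing on OS entropy, with no shared seed (no 'memory'),
# nothing guarantees the re-run offers the same options, so an optimal
# option available in one run may simply be absent from the next.
run3 = rig_run(random.SystemRandom(), menu)
```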
Tournesol said:
After all, I only have to come up with a model that works.
Yep. And I think I have shown that so far you haven’t come up with a model that works for genuine CHDO.

MF
 
  • #52
moving finger said:
In what follows I shall distinguish between genuine Could Have Done Otherwise (CHDO, where the agent can rationally choose between a set of possibilities which includes at least all of the rational possibilities) and Forced to Do Otherwise (FDO, where the agent EITHER simply chooses randomly, ie the choice is not rational, OR is forced to choose from a set of random possibilities which may not include all of the rational possibilities).
Tournesol said:
I think that is misleading. It would be better to talk about sub-optimal CHDO and irrational CHDO.
With respect, I think suggesting that “being forced to do otherwise results in some kind of CHDO” is misleading.
We need to focus on whether ANY kind of model can endow genuine CHDO, which is where the agent says : “What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the possibility to do otherwise, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.”
I do not believe any model based on indeterminism can do this. The Darwinian model does not. The Parallel DIG-RIG does not. If you think you have such a model then I would love to see the details.
Tournesol said:
it is a conceptual error to talk about agents being "forced" by internal processes that constitute them.
OK. Therefore it follows from your statement that we cannot say a completely deterministic agent is lacking in free will?
EITHER an agent “chooses freely” between alternate possible courses of action, A and B, or it does not. If one of two possible courses of action, either A or B, is not available to the agent then the agent is not choosing freely.
moving finger said:
As we have seen in post #37, the so-called Darwinian model is an example of FDO.
Tournesol said:
It turns out to be sub-optimal CHDO only by making unnecessary assumptions.
It turns out NOT to be genuine CHDO. The agent did not CHOOSE to do otherwise, it was constrained to do otherwise because it no longer had a free choice.
Tournesol said:
There is no reason to suppose that the RIG, having succeeded in coming up with option A at time t will fail to come up with it again -- this objection is based on an arbitrary limitation.
But we are not talking about “subsequent occasions” in a normal linear timeline, are we? What a Libertarian means by free will is “if I could have the choice all over again, if I could reset the clock, if I could have my time over again, with conditions EXACTLY as they were before, in other words if we re-run the model again under identical circumstances, then I could still choose to do otherwise”.
If we “re-run the model under identical circumstances” this is equivalent to rewinding the clock back to the original start point – the SAME time t - the RIG will have absolutely no “memory” of its earlier selection of choices, there will be no possible mechanism for ensuring that it comes up with the same choices as before (unless it is in fact deterministic)……
If this does not convince you, then let us just reverse the sense of the argument – let us say that in the first run the RIG throws up only B, and in the second run it throws up both A and B. The DIS chooses A as better than B. Why then did it choose B in the first run? Certainly not because it “selected B from a free choice between A and B”.
Tournesol said:
It doesn't explain the subjective sensation of having multiple possibilities that are open to you at the present moment,
The sensation of “having multiple possibilities available” (which you rightly say is subjective) is very easily explained through epistemic uncertainty. No agent has certain knowledge of the future, every agent has an epistemic horizon beyond which it cannot see, therefore it may simply have the illusion that there are multiple possible futures. There is no way the agent can ever know for sure that multiple possible futures actually existed.
Tournesol said:
nor the phenomenon of regret, which implies CHDO
It implies nothing of the sort.
“regret” is simply the feeling that we may have made a “bad” decision or a “bad” choice. But the choice we made at the time was (or should have been) the best we could have made given the circumstances. It does not follow from this that we would then genuinely “choose differently” if we could turn the clock back – because turning the clock back would simply reset everything to exactly the same way it was before, and we would then make the same “bad choice” again. We can only learn from our bad choices in a linear timeline – resetting the clock from run 1 to run 2 does not allow for any learning to be carried over from run 1 to run 2.
Tournesol said:
Of course, I have argued against compatibilism (and hence against d-RIG as adequate for a fully-fledged idea of FW) in my article.
I did not expect that Tournesol would accept a d-RIG. But a determinist or compatibilist might.

An agent which believes it is acting freely will say : “What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the possibility to do otherwise, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.”
“genuine CHDO” is an incoherent concept in face of the above. So far none of the random or indeterministic models considered would result in an agent with genuine CHDO.
MF
 
Last edited:
  • #53
Why CHDO is an empty concept

A Libertarian agent which believes in CHDO, having selected option A over option B, would say “if I could have the chance all over again, with conditions EXACTLY as they were before, if I could turn the clock back, in other words if we re-run the exact same situation again under identical circumstances, then I would still be free to choose option B, and I could still select option B – in other words I could have done otherwise to what I actually did”

But any agent which believes it is acting freely will say : “What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the possibility to do otherwise, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.” This applies to Libertarian free will agents just as much as any other free will agent.

It follows that in “re-running” the selection, the free will Libertarian agent will select option A ONLY if it chooses to select option A; it will select option B ONLY if it chooses to select option B. But IF the circumstances are identical in the re-run then it follows that the agent (if it is behaving rationally and not capriciously) will wish to choose the same way that it chose in the first run. Nothing has changed in the second run – by definition it is a precise re-run of the first run - WHY would the agent therefore WANT to choose any differently than it did in the first run? What possible reason would the agent have for wanting to choose any differently – unless of course its very choice is somehow influenced by random or indeterministic behaviour…… But adding indeterminism to the selection process simply adds capriciousness to the agent – it simply detracts from the rational behaviour of the agent – it is equivalent once again to the Buridan’s ass model - adding indeterminism into the selection process has nothing to do with a free will choice.

CHDO implies “if we re-run the exact same situation again under identical circumstances, then I would be free to select option B in the second run, even though I selected option A in the first run”

Free will implies “If I have free will and I have the possibility to do otherwise, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.”

From free will we can see that when we re-run the selection between A and B, there is no reason why my choice should be any different to what it was before – the circumstances are exactly the same as before, therefore why (unless I am a random or capricious agent) would I wish to choose any differently? It follows that I would NOT do otherwise than what I did before, because (if I am free) I do what I choose to do, and there is no rational reason why my choice should be any different to what it was in the first run.

Conclusion
Thus, it matters not whether or not “I could really have done otherwise”. What happened is that I was free to choose, and I chose to do what I wished to do, without constraint. This is free will. If I re-run the situation there is absolutely no rational reason why my wishes or my choice should be any different to the way it was before.

MF
 
Last edited:
  • #54
What is GENUINE CHDO?
I humbly suggest that what an agent really means by CHDO in the context of free will is the following :
CHDO Definition : What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the “possibility to do otherwise”, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.

I suggest that this is already part of the analysis of FW I am using under the rubric of "ultimate origination" or "ultimate responsibility". Since it is already part of the definition of FW, it does not need to be added again as part of the definition of CHDO (which is of course itself already part of the definition of FW).



In other words, given a free choice between action A or action B, I will select action A if and only if I CHOOSE to select action A. If the situation is re-run under identical circumstances, both choices A and B must be available to me once again, and I would then “do otherwise than select A” if and only if I then CHOOSE to select action B rather than A. This, to me, is what we mean when we say that we choose freely. It is a choice free of constraint. After all, why would I WANT to do B unless I freely CHOOSE to do B? Being “forced” to do B because the option of doing A “is no longer available to me” (this is what the RIG does) is NOT an example of free will.

It is not an example of constraint either, since it is not external. What is the alternative? Having every possible option available to you at all times? As I have pointed out before, that is a kind of god-like omniscience.


Incorporating a random idea generator (RIG) (even in parallel with a DIG) does NOT result in an agent which possesses the above GENUINE CHDO properties. The RIG acts to RESTRICT the choices available to the DIS.

No it doesn't. A deterministic mechanism cannot come up with the rich and original set of choices that an indeterministic mechanism can come up with. Unplugging the RIG does not unleash some hidden creativity in the SIS.


The DIS is therefore once again FORCED to do otherwise. The DIS can only make a FREE CHOICE between A and B if the idea generator offers up both A and B as alternate possibilities.

If the RIG comes up with more than one option, the SIS can make a free (in the compatibilist sense -- no external constraint) choice between them. What the RIG adds to the SIS is the extra, incompatibilist, freedom of CHDO.

The SIS cannot choose an option that is not presented to it by the RIG.
It can only choose from what is on the menu -- that is what we normally
mean by a free choice. The alternative -- an ultra-genius level of insight
and innovation for every possible situation -- may be worth wanting,
but is not naturalistically plausible.



If the idea generator does not throw up both A and B then the “choice” is effectively being forced on the DIS by the random nature of the RIG, rather than the DIS “choosing freely”.

1) The fact that the SIS (or any other component) of the agent is causally
influenced by other components does not constitute freedom-negating constraint
because it is not external.

2) Not being able to choose an option that is not presented to
you is not lack of free choice. A finite, natural, agent will
have internal limitations; they are not limitations on freedom
because they are not external. A caged bird is unfree because it
cannot fly; a pig cannot fly either, but that is not an example
of unfreedom because it is an inherent, internal limitation.

But did the model “do otherwise” in run 2 out of “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG?

Well, the RIG is part of the model, and you can't be constrained by something
internal.


The RIG remember is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!

The RIG did not block choice A -- choice A was never on the menu. It
certainly *failed* to come up with choice A. Failures and limitations
are part of being a finite, natural being.



It was not constrained by the R.I.G. because the RIG is not external.
It makes no difference whether one places the RIG “external” or “internal” to the agent – the simple fact remains that the way the RIG works is to offer up a limited and random number of alternate possibilities to the idea selector – this is true even if one places the RIG “internal to the agent”. Given a choice between A and B, the agent can only “choose freely” between A and B if both A and B are offered up (externally or internally) as alternate possibilities. If either A or B is not offered up (because the RIG only throws up one and not the other) then the agent is making a restricted or constrained choice, and NOT a free choice.

It is not a constrained choice because nothing is doing the constraining. All
realistic choices are from a finite, limited, list of options. You are asking
for god-like omnipotence.

Whatever the internal causal basis of your actions is, it is not
something external to you that is overriding your wishes and pushing you around.

Agreed. But whatever the causal basis of my actions, I would rather have a free will which is based on a rational evaluation of all possible alternatives,

What natural mechanism can provide all possible choices ex nihilo ? How is Ug
the caveman to know that rubbing two sticks together and starting a fire
is the way to keep warm ? I don't doubt for a minute that what you want is
desirable; but how do *you* think it is possible ?


rather than one which is somehow forced to make decisions because the choice is restricted by some random element. Thus to suggest that a random idea generator which acts to restrict choices somehow endows the ability to “have freely done otherwise” is incoherent and false.

It doesn't restrict choices, because the choices don't exist a priori to be
restricted. The RIG is a GENERATOR not a filter. It does endow CHDO
(absent your modifications) by generating choices. It doesn't generate
all optimal choices, but optimality in all situations is not part
of any definition of FW except your own.

The “doing otherwise” in the case of the RIG is compelled upon the agent by the random nature of the RIG, it is not something the agent rationally and freely chooses to do.


The RIG is not separate from the agent.


Is this an example of “Could Have Done Otherwise”? Or would a more accurate description be “Forced to Do Otherwise”? The agent in run 2 was effectively forced to choose B rather than A (it was forced to do otherwise in run 2 than it had done in run 1) because of the limited choices afforded to it by the RIG. Perhaps a better acronym for this kind of model is thus FDO (forced to do otherwise) rather than CHDO?

That agent is the totality of SIS, RIG and everything else. One part
of you does not constrain or force another.


We are trying here to “model” the causal basis of our actions. It has been suggested that a random element may somehow be the source of CHDO. But as discussed at the beginning of this post, REAL CHDO would be an agent choosing freely between two alternate possibilities A and B in both runs.
As I showed above, it makes no difference whether that random element is external or internal to the agent, if the random element acts to “restrict the possibilities being considered by the agent”, such that the agent no longer has a free choice between A and B, then it is no longer a case of CHDO, it is instead FDO.

1) The internality or externality does make a difference
2) The RIG does not restrict the SIS, it provides a range of possibilities
which the SIS is not able to provide itself
3) Failure by the RIG to provide an option which looks desirable with 20:20
hindsight is not failure of CHDO, or of FW, it is failure of omniscience.


Did the model choose B rather than A in run 2 because it was "acting freely" in its choice, and preferred B to A? No, clearly not (because in a straight choice between A and B the model always chooses A).
Or did the model choose B rather than A in run 2 because its choices were actually being RESTRICTED by the RIG, such that it was NOT POSSIBLE for it to choose to do A in Run 2, even though A was always a better choice than B? Yes, this is indeed clearly the case.
Which kind of "free will" would you prefer to have?

Originally Posted by Tournesol
The kind you are describing does not sound very attractive, but I can always amend the model so that "If the RIG succeeds in coming up with an option on one occasion, it will always include it on subsequent occasions".

But we are not talking about “subsequent occasions” in a normal linear timeline, are we? What a Libertarian means by free will is “if I could have the choice all over again, with conditions EXACTLY as they were before, in other words if we re-run the model again under identical circumstances, then I could still choose to do otherwise”.


We are talking about both. The idea that the RIG might fail to come up with an
option it succeeded in coming up with before, is an engineering issue.

Fixing it does not affect CHDO; the agent could have done otherwise because
the RIG could have come up with a different, and preferable option;
amending it so that it does not "forget" options does not affect that.

If we “re-run the model under identical circumstances” this is equivalent to rewinding the clock back to the original start point – the RIG will have absolutely no “memory” of its earlier selection of choices, there will be no possible mechanism for ensuring that it comes up with the same choices as before (unless it is in fact deterministic)……

The objection being what ? That it isn't guaranteed to come up with the best
possible option out of all the options every time ? True, but that is human
frailty, not constraint.


Yep. And I think I have shown that so far you haven’t come up with a model that works for genuine CHDO.

It works for CHDO as standardly defined.
 
  • #55
In what follows I shall distinguish between genuine Could Have Done Otherwise (CHDO, where the agent can rationally choose between a set of possibilities which includes at least all of the rational possibilities) and Forced to Do Otherwise (FDO, where the agent EITHER simply chooses randomly, ie the choice is not rational, OR is forced to choose from a set of random possibilities which may not include all of the rational possibilities).

I think that is misleading. It would be better to talk about sub-optimal CHDO and irrational CHDO.


With respect, I think suggesting that “being forced to do otherwise results in some kind of CHDO” is misleading.


And I am suggesting that making sub-optimal choices is not the same thing as
being forced.

We need to focus on whether ANY kind of model can endow genuine CHDO, which is where the agent says : “What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the possibility to do otherwise, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.”
I do not believe any model based on indeterminism can do this. The Darwinian model does not. The Parallel DIG-RIG does not. If you think you have such a model then I would love to see the details.


You haven't genuinely established this. You are just appealing to your
favourite -- if not sole -- manoeuvre of tendentious redefinition.
Sub-optimal choice is not compulsion.

it is a conceptual error to talk about agents being "forced" by internal processes that constitute them.

OK. Therefore it follows from your statement that we cannot say a completely deterministic agent is lacking in free will?
EITHER an agent “chooses freely” between alternate possible courses of action, A and B, or it does not. If one of two possible courses of action, either A or B, is not available to the agent then the agent is not choosing freely.


Choosing freely means choosing without compulsion between options that are actually available.
The absence of options is not the same as the presence of external compulsion.

There is no reason to suppose that the RIG, having succeeded in coming up with option A at time t will fail to come up with it again -- this objection is based on an arbitrary limitation.

But we are not talking about “subsequent occasions” in a normal linear timeline, are we? What a Libertarian means by free will is “if I could have the choice all over again, if I could reset the clock, if I could have my time over again, with conditions EXACTLY as they were before, in other words if we re-run the model again under identical circumstances, then I could still choose to do otherwise”.
If we “re-run the model under identical circumstances” this is equivalent to rewinding the clock back to the original start point – the SAME time t - the RIG will have absolutely no “memory” of its earlier selection of choices, there will be no possible mechanism for ensuring that it comes up with the same choices as before (unless it is in fact deterministic)……
If this does not convince you, then let us just reverse the sense of the argument – let us say that in the first run the RIG throws up only B, and in the second run it throws up both A and B. The DIS chooses A as better than B. Why then did it choose B in the first run? Certainly not because it “selected B from a free choice between A and B”.

Maybe it chose B from a free choice between B, C and D. What's the alternative,
anyway? When Ug wants to cross a stream, his RIG throws up "Ug swim" and
"Ug float on log", not "Ug build suspension bridge" and "Ug fly
in helicopter".


It doesn't explain the subjective sensation of having multiple possibilities that are open to you at the present moment,
The sensation of “having multiple possibilities available” (which you rightly say is subjective) is very easily explained through epistemic uncertainty. No agent has certain knowledge of the future, every agent has an epistemic horizon beyond which it cannot see, therefore it may simply have the illusion that there are multiple possible futures. There is no way the agent can ever know for sure that multiple possible futures actually existed.

But why that particular illusion? Why don't we see our decisions as random,
or caused by forces beyond our control ?


nor the phenomenon of regret, which implies CHDO

It implies nothing of the sort.
“regret” is simply the feeling that we may have made a “bad” decision or a “bad” choice. But the choice we made at the time was (or should have been) the best we could have made given the circumstances.

There would be nothing to regret if it were.

It does not follow from this that we would then genuinely “choose differently” if we could turn the clock back – because turning the clock back would simply reset everything to exactly the same way it was before, and we would then make the same “bad choice” again.

If determinism is true, we would make the same choice. But then what does the
determinist regret? The inevitable ?


It follows that in “re-running” the selection, the free will Libertarian agent will select option A ONLY if it chooses to select option A; it will select option B ONLY if it chooses to select option B. But IF the circumstances are identical in the re-run then it follows that the agent (if it is behaving rationally and not capriciously) will wish to choose the same way that it chose in the first run.

The agent will want to come up with the best solution, as judged by her personal
SIS, to the problem. If her RIG comes up with a better solution on the re-run
the agent would wish to choose that.

Nothing has changed in the second run – by definition it is a precise re-run of the first run

Only if determinism is true. By the definition of indeterminism, a re-run
situation will probably turn out different.

- WHY would the agent therefore WANT to choose any differently than it did in the first run?


Who wouldn't want a better solution to a problem ?

What possible reason would the agent have for wanting to choose any differently – unless of course its very choice is somehow influenced by random or indeterministic behaviour…… But adding indeterminism to the selection process simply adds capriciousness to the agent – it simply detracts from the rational behaviour of the agent – it is equivalent once again to the Buridan’s ass model - adding indeterminism into the selection process has nothing to do with a free will choice.

Which is why I put the randomness into the RIG; the RIG can come up with
different inspirations, and the SIS would be motivated to choose differently
if different choices are available, so long as the new choices are
better by its weightings.

From free will we can see that when we re-run the selection between A and B, there is no reason why my choice should be any different to what it was before – the circumstances are exactly the same as before, therefore why (unless I am a random or capricious agent) would I wish to choose any differently? It follows that I would NOT do otherwise than what I did before, because (if I am free) I do what I choose to do, and there is no rational reason why my choice should be any different to what it was in the first run.

Assuming that the RIG will come up with the same options. But why should it ?
 
  • #56
moving finger said:
CHDO Definition : What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the “possibility to do otherwise”, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.
Tournesol said:
I suggest that this is already part of the analysis of FW I am using under the
rubric of "ultimate origination" or "ultimate responsibility". Since it is
already part of the definition of FW, it does not need to be added again as
part of the definition of CHDO (which is of course itself already part of the
definition of FW).
If it is already part of your definition of FW, then there surely can be no objection to reinforcing this in the definition of CHDO? Or do you think this would invalidate your arguments?
moving finger said:
In other words, given a free choice between action A or action B, I will select action A if and only if I CHOOSE to select action A. If the situation is re-run under identical circumstances, both choices A and B must be available to me once again, and I would then “do otherwise than select A” if and only if I then CHOOSE to select action B rather than A. This, to me, is what we mean when we say that we choose freely. It is a choice free of constraint. After all, why would I WANT to do B unless I freely CHOOSE to do B? Being “forced” to do B because the option of doing A “is no longer available to me” (this is what the RIG does) is NOT an example of free will.
Tournesol said:
It is not an example of constraint either, since it is not external.
(see end of post for response)
Tournesol said:
What is the alternative ? Having every possible option available to you at all
times ? As I have pointed out before, that is a kind of god-like omniscience.
I have not suggested “all possible options are available”.
But to suggest that our options to choose are necessarily limited by some “indeterministic idea generator”, and that this is the source of our free will and “CHDO”, is a gross misconception and misrepresentation of both free will and CHDO.
moving finger said:
Incorporating a random idea generator (RIG) (even in parallel with a DIG) does NOT result in an agent which possesses the above GENUINE CHDO properties. The RIG acts to RESTRICT the choices available to the DIS.
Tournesol said:
No it doesn't. A deterministic mechanism cannot come up with the rich and
original set of choices that an indeterministic mechanism can come up with.
Unplugging the RIG does not unleash some hidden creativity in the SIS.
I have never suggested that a deterministic mechanism does!
A deterministic idea generator does not endow CHDO. But my point is that neither does a random idea generator! BOTH generators endow “forced to do otherwise” – and not “could have done otherwise”
A random idea generator gives the “possibility that things could turn out differently”, NOT because we FREELY CHOOSE them to turn out differently, of our free will, but because the RIG FORCES them to turn out differently! The RIG constrains the choices available, just as much as the DIG does, regardless of our rational will. And this is true regardless of whether the RIG is internal or external.
moving finger said:
The DIS is therefore once again FORCED to do otherwise. The DIS can only make a FREE CHOICE between A and B if the idea generator offers up both A and B as alternate possibilities.
Tournesol said:
If the RIG comes up with more than one option, the SIS can make a free (in the
compatibilist sense -- no external constraint) choice between them
The key here is the “If”……
What if the RIG does not come up with option A, when the DIS prefers A to B?
Tournesol said:
. What the
RIG adds to the SIS is the extra, incompatibilist, freedom of CHDO.
If the RIG offers up both A and B, then let us say the DIS chooses A.
The only reason the DIS would choose B rather than A is NOT from some “free will choice of the agent”, but because it is CONSTRAINED by the RIG, because the RIG does not offer up A as a choice in the first place!
Do you call this free will?
Do you call this “could have done otherwise”?
I call it “forced to do otherwise”.
Tournesol said:
The SIS cannot choose an option that is not presented to it by the RIG.
It can only choose from what is on the menu -- that is what we normally
mean by a free choice. The alternative -- an ultra-genius level of insight
and innovation for every possible situation -- may be worth wanting,
but is not naturalistically plausible.
My point is that it is precisely CHDO that “may be worth wanting”, but is not naturalistically plausible. CHDO does not exist.
I can ALWAYS do what I wish to do if I have free will. Why would I then want to have some kind of random idea generator which constrains the choices available to me, just so that it can provide the artificial conditions necessary for your alleged “CHDO”, which is actually FDO after all?
moving finger said:
If the idea generator does not throw up both A and B then the “choice” is effectively being forced on the DIS by the random nature of the RIG, rather than the DIS “choosing freely”.
Tournesol said:
1) The fact that the SIS (or any other component) of the agent is causally
influenced by other components does not constitute freedom-negating constraint
because it is not external.
(see reply at end of post)
Tournesol said:
2) Not being able to choose an option that is not presented to
you is not lack of free choice.
Of course it is!
“not presenting an option” is a constraint on my free will to choose. The whole function of the RIG is to randomly present options – some will be available and some will not – on a random basis.
Tournesol said:
A finite, natural, agent will
have internal limitations; they are not limitations on freedom
because they are not external. A caged bird is unfree because it
cannot fly; a pig cannot fly either, but that is not an example
of unfreedom because it is an inherent, internal limitation.
In our example of A and B, both A and B are possible choices that the agent might make. The agent always chooses A rather than B if given a free will choice between A and B. The ONLY reason the agent would choose otherwise (ie the only reason the agent would choose B) is simply because the option of doing A is NOT MADE AVAILABLE (by the RIG). Whether the RIG is internal to the agent or external the effect is the same - the agent chooses B simply because A is not "deemed to be available", and NOT because the agent prefers (freely chooses) B compared to A.
Do you call this a free will choice between A and B? Do you call this CHDO?
I call it an artificial constraint (albeit maybe internal) which is FORCING the agent to choose B rather than A. The agent is not choosing B rather than A because it freely wishes to choose B rather than A. This is FDO, not CHDO.
moving finger said:
But did the model “do otherwise” in run 2 out of “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG?
Tournesol said:
Well, the RIG is part of the model, and you can't be constrained by something
internal.
(see reply at end of post)
moving finger said:
The RIG remember is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!
Tournesol said:
The RIG did not block choice A -- choice A was never on the menu.
Choice A was certainly on the menu in the first run. Why not on the second run?
Tournesol said:
It
certainly *failed* to come up with choice A. Failures and limitations
are part of being a finite, natural being.
Thus you are saying that the agent “chose to do B rather than A simply because it failed to come up with the option of doing A”, and NOT because “it chose to do B rather than A out of a free will choice to do B rather than A”?
This is what you understand by free will?
“I freely choose to do B rather than A, not because I WANT to do B rather than A, but simply because I did not think of doing A in the first place”?
This is a very strange kind of free will, and not one that many would recognise!
Tournesol said:
It was not constrained by the R.I.G. because the RIG is not external.
(see reply at end of post)
Tournesol said:
It is not a constrained choice because nothing is doing the constraining. All
realistic choices are from a finite, limited, list of options. You are asking
for god-like omnipotence.
No. I am asking whether CHDO genuinely exists. It clearly does not.
Tournesol said:
What natural mechanism can provide all possible choices ex nihilo ? How is Ug
the caveman to know that rubbing two sticks together and starting a fire
is the way to keep warm ? I don't doubt for a minute that what you want is
desirable; but how do *you* think it is possible ?
I am not asking for omniscience. I am asking whether CHDO exists.
I will never know if I have considered all possible alternatives, that is why I have already acknowledged that a RIG CAN add value to a decision-making agent by perhaps throwing up some additional possible alternatives.
But that is ALL it does. The RIG does NOT endow CHDO, the most it can ever endow is FDO.
Tournesol said:
It doesn't restrict choices, because the choices don't exist a priori to be
restricted. The RIG is a GENERATOR not a filter.
In one run the RIG might throw up A and B. In another run it might throw up only B. Thus the RIG controls whether A is made available to the agent or not. Whether you look upon this as a filter or as a generator makes no difference – the fact is that in one run A is made available, in another it is not.
This is what you understand by free will?
“I freely choose to do B rather than A, not because I WANT to do B rather than A, but simply because I did not think of doing A in the first place”?
This is a very strange kind of free will, and not one that many would recognise!
The agent is not constrained by the RIG because the RIG is not external.
Most of your objections to my argument seem to be based on the idea that the RIG is not external to the agent – it is supposed to be internal. Therefore the RIG cannot be looked upon as an external “constraint” to the agent’s free will. Correct?
OK. But then you are saying the indeterminism (in the RIG) is internal to the agent. That the source of the agent’s free will is based on some kind of internal indeterminism in the agent’s decision-making process.
But if the indeterminism is supposed to be internal to the agent, this must surely undermine the rationality of the agent. How can an agent believe that it is acting rationally if it at the same time thinks its choices are somehow controlled by an indeterministic mechanism?
Speaking for myself, I certainly would not like to think that my rational decision-making processes were based on an indeterministic mechanism. How on Earth could I believe that such a thing is the source of my free will?
The net result is the same. In your model, free will means the following :
“I freely choose to do B rather than A, not because I WANT to do B rather than A, but simply because I did not think of doing A in the first place”?
This is a very strange kind of free will, and not one that many would recognise!
MF
 
  • #57
moving finger said:
We have seen (post #37) that the simple so-called Darwinian model, which comprises a single Random Idea Generator followed by a Sensible Idea Selector, does not endow any properties to an agent which we might recognise as “properties of free will”. In particular, rather than endowing the ability of Could Have Done Otherwise (CHDO), the simple RIG-SIS combination acts to RESTRICT the number of possible courses of action, thus forcing the agent to make non-optimal choices (a feature I have termed Forced to Do Otherwise, FDO, rather than CHDO).

What now follows is a description of a slightly more complex model based on a parallel deterministic/random idea generator combination, which not only CAN endow genuine CHDO but ALSO is one in which the random idea generator creates new possible courses of action for the agent, rather than restricting possible courses of action.

Firstly let us define a Deterministic Idea Generator (DIG) as one in which alternate ideas (alternate possible courses of action) are generated according to a rational, deterministic procedure. Since it is deterministic the DIG will produce the same alternate ideas if it is re-run under identical circumstances.

Next we define a Random Idea Generator (RIG) as one in which alternate ideas (alternate possible courses of action) are generated according to an epistemically random procedure. Since it is epistemically random the RIG may produce different ideas if it is re-run under epistemically identical circumstances.
Note that the RIG may be either epistemically random and ontically deterministic (hereafter d-RIG), or it may be epistemically random and ontically indeterministic (hereafter i-RIG). Both the d-RIG and the i-RIG may produce different ideas when re-run under epistemically identical circumstances. (For clarification – a d-RIG behaves similarly to a computer random number generator (RNG). The RNG produces epistemically random numbers, but if the RNG is reset then it will produce the same sequence of numbers that it did before).

Next we define a Deterministic Idea Selector (DIS) as a deterministic mechanism for taking alternate ideas (alternate possible courses of action), evaluating these ideas in terms of payoffs, costs and benefits etc to the agent, and rationally choosing one of the ideas as the preferred course of action.

Finally we define a Random Idea Selector (RIS) as a mechanism for taking alternate ideas (alternate possible courses of action), and choosing one of the ideas as the preferred course of action according to an epistemically random procedure. Since it is epistemically random the RIS may produce a different choice if it is re-run under epistemically identical circumstances (ie with epistemically identical input ideas).

These four basic building blocks, the DIG, RIG, DIS and RIS, may then be assembled in various ways to create various forms of “idea-generating and decision-making” models with differing properties.
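By way of illustration, the four building blocks might be sketched as simple Python functions (a minimal, hypothetical sketch: the function names, the "situation" dictionary and the toy payoff scheme are mine, not part of the models as defined above):

```python
import random

def dig(situation):
    """Deterministic Idea Generator: the same rational ideas on every re-run."""
    return sorted(situation["rational_options"])

def rig(situation, rng):
    """Random Idea Generator: throws up an unpredictable subset of ideas.
    Passing rng = random.Random(seed) makes this a d-RIG (epistemically
    random but ontically deterministic: resetting the seed replays the
    same sequence, like resetting a computer RNG). An i-RIG would draw
    from a genuinely indeterministic source instead."""
    options = situation["conceivable_options"]
    return rng.sample(options, rng.randint(1, len(options)))

def dis(ideas, payoff):
    """Deterministic Idea Selector: rationally picks the highest-payoff idea."""
    return max(ideas, key=payoff)

def ris(ideas, rng):
    """Random Idea Selector: picks capriciously, with no rational weighing."""
    return rng.choice(ideas)

situation = {"rational_options": ["A", "B"],
             "conceivable_options": ["A", "B", "C", "D"]}

# Resetting a d-RIG replays its "random" ideas, just like a reset RNG:
assert rig(situation, random.Random(7)) == rig(situation, random.Random(7))

# Under the toy payoffs, the DIS always prefers A to B:
assert dis(["A", "B"], {"A": 2, "B": 1}.get) == "A"
```

Each of the agent models below is then just a way of wiring a generator into a selector.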

In what follows I shall distinguish between genuine Could Have Done Otherwise (CHDO, where the agent can rationally choose between a set of possibilities which includes at least all of the rational possibilities) and Forced to Do Otherwise (FDO, where the agent EITHER simply chooses randomly, ie the choice is not rational, OR is forced to choose from a set of random possibilities which may not include all of the rational possibilities). As we have seen in post #37, the so-called Darwinian model is an example of FDO.

Deterministic Agent
DIG -> DIS

The Deterministic agent comprises a DIG which outputs rational possible courses of action which are then input to a DIS, which rationally chooses one of the ideas as the preferred course of action.
The Deterministic agent will always make the same choice given the same (identical) circumstances.
Clearly there is no possibility of CHDO.
A Libertarian would claim that such an agent does not possess free will, but a Compatibilist might not agree.

Capricious (Random) Agent
DIG -> RIS

The Capricious agent comprises a DIG which outputs possible courses of action which are then input to a RIS.
Also known as the Buridan’s Ass model.
The Capricious agent will make epistemically random choices, even under the same (epistemically identical) circumstances.
Clearly there is the possibility for the agent to “choose otherwise” given epistemically identical circumstances, but since the choice is made randomly and not rationally this is an example of FDO.
I doubt whether even a Libertarian would claim that such an agent possesses free will.
(Note that making the agent ontically random, ie indeterministic, rather than epistemically random does not change the conclusion.)
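The Buridan's-ass behaviour can be seen in a minimal sketch (hypothetical names and menu; differently seeded draws stand in for randomness that is epistemically invisible to the agent):

```python
import random

def dig():
    """Deterministic Idea Generator: the same rational options on every re-run."""
    return ["A", "B"]

def ris(menu, rng):
    """Random Idea Selector: picks among the options with no rational weighing."""
    return rng.choice(menu)

# Epistemically identical re-runs (same menu, no visible difference between
# runs) can still yield different choices -- but the variation is caprice,
# not a freely willed preference, hence FDO rather than CHDO:
choices = [ris(dig(), random.Random(seed)) for seed in range(10)]
assert set(choices) <= {"A", "B"}  # every choice comes off the same menu
```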

So-called Darwinian Agent
RIG -> DIS

The so-called Darwinian agent comprises a RIG which outputs possible courses of action which are then input to a DIS.
See http://www.geocities.com/peterdjones/det_darwin.html#introduction for a more complete description of this model.
The so-called Darwinian agent will make rational choices from a random selection of possibilities.
As shown in post #37 of this thread, even if the RIG is truly indeterministic (i-RIG) the random nature of generation of the alternative possibilities means that this model forcibly restricts the choices available to the agent, such that non-optimum choices may be made. The model thus endows FDO and not CHDO.
Because of this property (ie FDO rather than CHDO) no true-thinking Libertarian should claim that such an agent possesses free will, even in the case of an i-RIG where the agent clearly behaves indeterministically.
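The FDO behaviour claimed here can be made concrete with a short sketch (names, payoffs and seeds are all hypothetical; a seeded d-RIG is used purely so the two "runs" are reproducible):

```python
import random

PAYOFF = {"A": 2, "B": 1}  # the DIS always prefers A when both are offered

def rig(rng):
    """Random Idea Generator: each option has only a 50% chance of being
    thrown up on any given run."""
    menu = [opt for opt in ("A", "B") if rng.random() < 0.5]
    return menu or ["B"]  # the agent must still do something

def dis(menu):
    """Deterministic Idea Selector: rationally picks the best idea offered."""
    return max(menu, key=PAYOFF.get)

run1 = dis(rig(random.Random(1)))  # this draw throws up A: the DIS chooses A
run2 = dis(rig(random.Random(2)))  # this draw fails to throw up A at all
assert (run1, run2) == ("A", "B")  # the agent "did otherwise", but only
                                   # because A was never on the menu in run 2
```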

Parallel DIG-RIG Agent
DIG -> }
..... -> } DIS
RIG -> }

The parallel DIG-RIG agent comprises TWO separate idea generators, one deterministic and one random, working in parallel. The deterministic idea generator outputs rational possible courses of action which are then input to a DIS. Also input to the same DIS are possible courses of action generated by the random idea generator. The DIS then evaluates all of the possible courses of action, those generated deterministically and those generated randomly, and the DIS then rationally chooses one of the ideas as the preferred course of action.
Since a proportion of the possible ideas is generated randomly, the Parallel DIG-RIG agent (just like the Capricious and Darwinian agents) can appear to act unpredictably. If the RIG is deterministic (a d-RIG) then the Parallel DIG-RIG agent behaves deterministically but still unpredictably. If the RIG is indeterministic (an i-RIG) then the Parallel DIG-RIG behaves indeterministically (and therefore also unpredictably).
Since a proportion of the possible ideas is also generated deterministically and rationally (by the DIG), not only does the agent NOT behave capriciously but also the agent is NOT in any way restricted or forced by the RIG to choose a non-optimal or irrational course of action (all rational courses of action are always available as possibilities to the DIS via the DIG, even if the RIG throws up totally irrational or non-optimum possibilities).
The Parallel DIG-RIG model therefore combines the advantages of the Deterministic model (completely rational behaviour) along with the advantages of the Darwinian model (unpredictable behaviour) but with none of the drawbacks of the Darwinian model (the agent is not restricted or forced by the RIG to make non-optimal choices).
If the Parallel DIG-RIG is based on a d-RIG then it behaves deterministically but unpredictably. Importantly, it does NOT then endow CHDO (since it is deterministic), therefore presumably it would not be accepted by a Libertarian as an explanatory model for free will. Interestingly though, this model (DIG plus d-RIG) explains everything that we observe in respect of free will (it produces a rational yet not necessarily predictable agent), and the model should be acceptable to Determinists and Compatibilists alike (since it is deterministic).
If the Parallel DIG-RIG is based on an i-RIG then it is both indeterministic and unpredictable. Therefore it does endow CHDO (and this time it is GENUINE CHDO, not the FDO offered by the Darwinian model), therefore presumably it would be accepted by a Libertarian (but obviously not by either a Determinist or a Compatibilist) as an explanatory model for free will.
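The parallel architecture can be sketched as follows (my own illustration: the option pools and the utility table are assumptions, chosen only to show the claimed property):

```python
import random

# Illustrative sketch of the Parallel DIG-RIG agent: a deterministic
# and a random idea generator feeding one deterministic selector.

RATIONAL = ["A", "B"]        # always supplied by the DIG
NOVEL = ["C", "D", "E"]      # may or may not be dreamt up by the RIG
UTILITY = {"A": 3, "B": 2, "C": 1, "D": 4, "E": 0}  # assumed preferences

def dig():
    return list(RATIONAL)

def rig():
    k = random.randint(0, len(NOVEL))
    return random.sample(NOVEL, k)

def dis(ideas):
    return max(ideas, key=UTILITY.get)

def parallel_agent():
    # Both generators feed the same selector; the RIG can only ADD to
    # the menu, it can never remove the rational options.
    return dis(dig() + rig())
```

Over many runs the output is always "A" or "D": the agent never does worse than its best rational option A, but may do better when the RIG happens to propose the novel idea D. That is the claimed advantage over the Darwinian model.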

Conclusion
We have shown that the so-called Darwinian model, incorporating a single random idea generator, gives rise to an indeterministic agent without endowing genuine CHDO. However, a suitable combination of deterministic and indeterministic idea generators, in the form of the Parallel DIG-RIG model, can form the basis of a model decision-making machine which does endow genuine CHDO, and is genuinely both indeterministic and rational.

Constructive criticism welcome!

MF
Good grief!

Anything said in the impermanence
about existence is a lie...of course, you know how to make toast...

We talk about something
that doesn't exist.

Be simple.
Don't faint.
It's OK.
The universe is perfect...go figure:-p

JC said we are satan.
What does THAT really mean?
satan is a liar.
satan is NOT EVIL.

i am not an xtian or any
other bs type of believer.
:shy:
 
  • #58
Let me illustrate my argument that "CHDO is an empty concept" with a simple example.

Suppose that Mary is a Libertarian faced with a simple binary decision – let us say either “to have an egg for breakfast” (let us call this choice A) or “NOT to have an egg for breakfast” (let us call this choice B). Clearly in this case Mary must choose either to have an egg for breakfast, or not to have an egg for breakfast. There are no other possibilities.

Suppose in our example that Mary chooses A.

Having enjoyed her breakfast, Mary (believing as she does in CHDO) would presumably claim that “if I could have the chance to make that decision again, if I could rewind the clock and set everything exactly back as it was before, then I would still have the free and unconstrained ability to choose either A or B, and I could indeed freely choose to do B rather than A”

Mary’s belief seems to be that she possesses “genuine CHDO”. Mary believes that she could have freely chosen NOT to have an egg for breakfast if she could choose again and everything was reset exactly the way it was before.

Would you agree this is what CHDO actually means? It certainly looks like CHDO to me.

Now let us look at how the “Darwinian model” would work in this scenario.

Presumably the “first” time the model is run, the RIG throws up both A and B as possible courses of action, and Mary selects A rather than B via the deterministic DIS. This indeed explains why Mary actually chose to have an egg for breakfast.

But what about Mary’s claim that she “could have done otherwise” – in other words, that if she could have the chance to make that decision again, if she could rewind the clock and set everything exactly back as it was before, then she would still have the free and unconstrained ability to choose either A or B, and could indeed freely choose to do B rather than A?

How could we re-run the Darwinian model and generate the outcome that Mary chooses B instead of A, to support Mary’s claim that she could indeed have done otherwise?

The DIS is deterministic – it will always choose A given a straight choice between A and B. So no solution here.

The only way to generate the desired outcome from the Darwinian model so that Mary could indeed “do otherwise” is to suggest that the RIG must come up with only one possible course of action – B. If we want Mary to “be able to do otherwise”, the RIG must NOT throw up the possibility of doing A in the second run, such that Mary then has no choice but to do B. But in this case, Mary is NOT choosing rationally and freely between the two options A and B – the “choice” of doing B is actually already made for her – by the random nature of the RIG (which is not under her control).

We thus have three possible ways that the Darwinian model could play itself out in this example –
EITHER the RIG throws up both A and B (in which case, as we have seen, Mary will always deterministically choose A),
OR the RIG throws up only A (in which case Mary must choose A)
OR the RIG throws up only B (in which case Mary must choose B).

There are no other possibilities.

Thus the outcome “whether Mary chooses A or B” is actually precisely determined by the random nature of the RIG. If the RIG throws up “only A” or “A and B” then Mary will choose A; if the RIG throws up “only B” then Mary will choose B.
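The three cases can be enumerated directly. Here `dis` plays the role of Mary's deterministic selector, which always prefers A ("have an egg") when it is on the menu (an illustrative sketch, not the thread's own code):

```python
# Mary's deterministic idea selector: A is always preferred when available.
def dis(menu):
    return "A" if "A" in menu else "B"

# The three possible RIG outputs fully determine her "choice":
for menu in (["A", "B"], ["A"], ["B"]):
    print(menu, "->", dis(menu))
```

Only the third case yields B, and then only because the RIG withheld A; nothing about Mary's selector changed between runs.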

Whether the RIG is “internal” to Mary or not makes no difference to the way it all works.
(Having an "internal RIG" actually means that there are random processes involved in our internal decision-making - ie our decision making is neither completely rational nor under our complete control)

By suggesting that the RIG is the source of “CHDO” we are actually saying that the ultimate “choice” of whether to do A or B is purely random.

Is this free will?
Is this CHDO?

MF
 
  • #59
MF

You have choice...you move away from pain toward pleasure.

Fush the logic. Be simple.

ENJOY
.
 
  • #60
meL said:
MF
You have choice...you move away from pain toward pleasure.
Fush the logic. Be simple.
ENJOY
.
yes all agents have choice

even a thermostat chooses whether and when to switch on or off

this is not the question here

the question is "could we have freely done otherwise than what we actually did"

the world is only as simple as some would like it to be if one wears rose-tinted spectacles

:smile:

MF
 
  • #61
I suggest that this is already part of the analysis of FW I am using under the
rubric of "ultimate origination" or "ultimate responsibility". Since it is
already part of the definition of FW, it does not need to be added again as
part of the definition of CHDO (which is of course itself already part of the
definition of FW).


If it is already part of your definition of FW, then there surely can be no objection to reinforcing this in the definition of CHDO? Or do you think this would invalidate your arguments?

The point of analysing a concept is to "divide and conquer" -- show that it is
valid because all of its constituents can be satisfied (or to show that it is
invalid because they cannot be combined).


What is the alternative ? Having every possible option available to you at all
times ? As I have pointed out before, that is a kind of god-like omniscience.

I have not suggested “all possible options are available”.
But to suggest that our options to choose are necessarily limited by some “indeterministic idea generator”, and that this is the source of our free will and “CHDO”, is a gross misconception and misrepresentation of both free will and CHDO.

Why? If you agree that FW is not god-like omniscience and omnipotence, there
has to be some internal limitation. Why should not the limitations of the
RIG+SIS model actually constitute that necessary and inevitable limitation?
You have said that they do not, without explaining why.


Incorporating a random idea generator (RIG) (even in parallel with a DIG) does NOT result in an agent which possesses the above GENUINE CHDO properties. The RIG acts to RESTRICT the choices available to the DIS.
No it doesn't. A deterministic mechanism cannot come up with the rich and
original set of choices that an indeterministic mechanism can come up with.
Unplugging the RIG does not unleash some hidden creativity in the SIS.

I have never suggested that a deterministic mechanism does!
A deterministic idea generator does not endow CHDO. But my point is that neither does a random idea generator! BOTH generators endow “forced to do otherwise” – and not “could have done otherwise”

Internal limitation (due to lack of omnipotence and omniscience) is not
the same as being forced externally.
One part of a system (in this case the RIG) having a causal effect on another
is not the same as the constraint of the system as a whole by external forces.
The SIS is not a mini-agent with its own wants, needs, and decisions; it
is a filtration mechanism.

A random idea generator gives the “possibility that things could turn out differently”, NOT because we FREELY CHOOSE them to turn out differently, of our free will, but because the RIG FORCES them to turn out differently!

One part of a system (in this case the RIG) having a causal effect on another
is not the same as the constraint of the system as a whole by external forces.

The RIG constrains the choices available, just as much as the DIG does, regardless of our rational will. And this is true regardless of whether the RIG is internal or external.

Internal limitation (due to lack of omnipotence and omniscience) is not
the same as being forced externally.


The DIS is therefore once again FORCED to do otherwise. The DIS can only make a FREE CHOICE between A and B if the idea generator offers up both A and B as alternate possibilities.

The SIS does not make choices as an agent does; it is not a mini-agent with
its own wants, needs, and decisions; it is a filtration mechanism.

If the RIG comes up with more than one option, the SIS can make a free (in the
compatibilist sense -- no external constraint) choice between them.

The key here is the “If”……
What if the RIG does not come up with option A, when the DIS prefers A to B?

Then there is a failure of imagination, creativity and ingenuity -- such
as is inevitable in a natural, finite, non-god-like agent.

What the RIG adds to the SIS is the extra, incompatibilist, freedom of CHDO.
If the RIG offers up both A and B, then let us say the DIS chooses A.

The only reason the DIS would choose B rather than A is NOT from some “free will choice of the agent”, but because it is CONSTRAINED by the RIG, because the RIG does not offer up A as a choice in the first place!
Do you call this free will?
Do you call this “could have done otherwise”?
I call it “forced to do otherwise”.

I call it a failure of imagination, creativity and ingenuity -- such
as is inevitable in a natural, finite, non-god-like agent.



The SIS cannot choose an option that is not presented to it by the RIG.
It can only choose from what is on the menu -- that is what we normally
mean by a free choice. The alternative -- an ultra-genius level of insight
and innovation for every possible situation -- may be worth wanting,
but is not naturalistically plausible.

My point is that it is precisely CHDO that “may be worth wanting”, but is not naturalistically plausible. CHDO does not exist.
I can ALWAYS do what I wish to do if I have free will.

Naturalistically , you cannot expect to be able to think of every possible
solution to a problem the very first time you encounter it. Ug the caveman
cannot think of crossing the river by suspension bridge. The fact that some
solution might appear to have been desirable
with 20:20 hindsight does not mean your will was frustrated
on all the occasions you failed to come up with it.
On the occasions you failed to come up with the suspension bridge solution,
you were not thinking to yourself "Oh, I really wish I could dream
up a suspension bridge, but some mysterious force is preventing me",
the idea of a suspension bridge just isn't in your head at all.

Why would I then want to have some kind of random idea generator which constrains the choices available to me, just so that it can provide the artificial conditions necessary for your alleged “CHDO”, which is actually FDO after all?

The RIG doesn't constrain the options available to the SIS (NB: not "you" --
the SIS is not you, nor is it a mini-agent inside you); it allows more than
one option to be available to the SIS, in general. The SIS cannot be
pre-loaded with preferences for every possible decision, since that
would entail omniscience. Therefore, when the RIG fails to come
up with an idea the SIS *would have* preferred, had it
been available, that does not mean the SIS already has
the idea, plus a preference for it. The SIS "recognises" the preferability
of an idea using an algorithm, rather than a look-up table,
as it were.


A finite, natural agent will
have internal limitations; they are not limitations on freedom
because they are not external. A caged bird is unfree because it
cannot fly; a pig cannot fly either, but that is not an example
of unfreedom because it is an inherent, internal limitation.

In our example of A and B, both A and B are possible choices that the agent might make. The agent always chooses A rather than B if given a free will choice between A and B. The ONLY reason the agent would choose otherwise (ie the only reason the agent would choose B) is simply because the option of doing A is NOT MADE AVAILABLE (by the RIG). Whether the RIG is internal to the agent or external the effect is the same - the agent chooses B simply because A is not "deemed to be available", and NOT because the agent prefers (freely chooses) B compared to A.

I dare say Ug would always choose to keep warm by making a fire rather than
shivering, once the fire-making idea had occurred to him.

But did the model “do otherwise” in run 2 out of “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG?

It had an internal limitation, as naturalistic systems must.


The RIG remember is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!

The RIG did not block choice A -- choice A was never on the menu.


Choice A was certainly on the menu in the first run. Why not on the second run?

Naturalistic limitations (actually I have already addressed this issue: I can
amend the RIG so that once it has succeeded in throwing up a
possibility, it continues to do so -- that is within naturalistic limitations).
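The amendment described here -- a RIG that remembers the ideas it has already generated -- could be sketched like this (my own illustration; the idea pool and the per-run generation probability are assumptions):

```python
import random

class PersistentRIG:
    """A RIG with memory: once an idea has been thrown up, it stays
    available on every later run (the amendment described above)."""

    def __init__(self, pool, p=0.5):
        self.pool = pool
        self.p = p                  # chance of dreaming up each idea per run
        self.remembered = set()

    def generate(self):
        fresh = {idea for idea in self.pool if random.random() < self.p}
        self.remembered |= fresh    # ideas, once generated, never vanish
        return sorted(self.remembered)
```

With this amendment, once A has appeared on the menu it can never disappear again, so the scenario where Mary "chooses" B only because A was withheld on the second run is ruled out for any later run.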

It
certainly *failed* to come up with choice A. Failures and limitations
are part of being a finite, natural being.

Thus you are saying that the agent “chose to do B rather than A simply because it failed to come up with the option of doing A”, and NOT because “it chose to do B rather than A out of a free will choice to do B rather than A”?

I am saying FW is the whole process of RIGation and SISection. The SIS is not
a mini-agent with its own FW.



This is what you understand by free will?
“I freely choose to do B rather than A, not because I WANT to do B rather than A, but simply because I did not think of doing A in the first place”?
This a very strange kind of free will, and not one that many would recognise!

The SIS cannot be pre-loaded with preferences for (you cannot have wants
relating to) options the RIG has never dreamt up ITFP.


What natural mechanism can provide all possible choices ex nihilo ? How is Ug
the caveman to know that rubbing two sticks together and starting a fire
is the way to keep warm? I don't doubt for a minute that what you want is
desirable; but how do *you* think it is possible?

I am not asking for omniscience. I am asking whether CHDO exists.
I will never know if I have considered all possible alternatives, that is why I have already acknowledged that a RIG CAN add value to a decision-making agent by perhaps throwing up some additional possible alternatives.
But that is ALL it does. The RIG does NOT endow CHDO, the most it can ever endow is FDO.

It does endow CHDO in the standard definition (eg Robert Kane's Alternative
Possibilities).


It doesn't restrict choices, because the choices don't exist a priori to be
restricted. The RIG is a GENERATOR not a filter.

In one run the RIG might throw up A and B. In another run it might throw up only B. Thus the RIG controls whether A is made available to the agent or not. Whether you look upon this as a filter or as a generator makes no difference – the fact is that in one run A is made available, in another it is not.

The RIG can only be a filter if it filters all possible ideas. But all possible
ideas cannot be naturalistically available a priori; therefore the RIG is not a filter
but a generator, as I said.

This is what you understand by free will?
“I freely choose to do B rather than A, not because I WANT to do B rather than A, but simply because I did not think of doing A in the first place”?

What is the alternative ? To be able to think of every possible idea
ITFP ? That is omniscience, not FW.


This a very strange kind of free will, and not one that many would recognise!

You are very far from establishing that the failures of the RIG constrain the
SIS in all circumstances. For instance, you keep failing to consider
that the RIG can succeed in coming up with preferable ideas.

The agent is not constrained by the RIG because the RIG is not external.

Most of your objections to my argument seem to be based on the idea that the RIG is not external to the agent – it is supposed to be internal. Therefore the RIG cannot be looked upon as an external “constraint” to the agent’s free will. Correct?
OK. But then you are saying the indeterminism (in the RIG) is internal to the agent. That the source of the agent’s free will is based on some kind of internal indeterminism in the agent’s decision-making process.
But if the indeterminism is supposed to be internal to the agent, this must surely undermine the rationality of the agent. How can an agent believe that it is acting rationally if it at the same time thinks its choices are somehow controlled by an indeterministic mechanism?

The short answer is that there is not a pre-determined set of rules for
solving every particular problem. Agents have to depart from
rule-following in order to guess, create and innovate.

The long answer is found in my original article.

Speaking for myself, I certainly would not like to think that my rational decision-making processes were based on an indeterministic mechanism. How on Earth could I believe that such a thing is the source of my free will?
The net result is the same. In your model, free will means the following :
“I freely choose to do B rather than A, not because I WANT to do B rather than A, but simply because I did not think of doing A in the first place”?

Again, this is one-sided. In many cases you freely choose, C, which you have
never thought of before, because it is a better solution to the traditional
options A and B.
 