Weak and Strong Emergence: What Are They?

In summary, the two definitions of emergence seem to be heavily reliant on local causation. Weak emergence reduces a system to its parts and assumes that the microstates of the parts are determined by the microstates of nearby parts. Strong emergence, on the other hand, assumes that higher-level phenomena are irreducible and exert a causal efficacy over the system.
  • #1
Q_Goest
The idea of weak and strong emergence seems to be one that is easily confused. I've seen numerous mentions of "emergence" in the literature without the authors classifying which kind they are talking about. Many times it seems the author wants to imply strong emergence but is really only looking at a weakly emergent phenomenon. In this thread I'd like to have a discussion about the two definitions.

I see Bedau's paper (http://www.reed.edu/~mab/papers/weak.emergence.pdf) has been cited 46 times according to Google Scholar. He defines weak emergence this way:
Weak emergence applies in contexts in which there is a system, call it S, composed out of "micro-level" parts; the number and identity of these parts might change over time. S has various "macro-level" states (macrostates) and various "micro-level" states (microstates). S's microstates are the intrinsic states of its parts and its macrostates are structural properties constituted wholly out of microstates. Interesting macrostates typically average over microstates and so compress microstate information. Further, there is a microdynamic, call it D, which governs the time evolution of S's microstates. Usually the microstate of a given part of a system at a given time is a result of the microstates of "nearby" parts of the system at preceding times; in this sense, D is "local".
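For concreteness, here is a minimal sketch of Bedau's schema as code (my own illustration, not from his paper): the system S is a one-dimensional cellular automaton, the microdynamic D is local (each cell's next state depends only on "nearby" cells), and a macrostate is an average over microstates, which compresses the microstate information.

[code]
# A minimal illustration of Bedau's schema (my sketch, not from the paper):
# S is a 1-D cellular automaton, D is a local update rule, and a macrostate
# averages over microstates (so it compresses microstate information).

def step(micro):
    """Microdynamic D: each cell's next state depends only on 'nearby' cells."""
    n = len(micro)
    # Elementary CA rule 110 applied to each (left, self, right) neighborhood.
    rule = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    return [rule[(micro[(i - 1) % n], micro[i], micro[(i + 1) % n])]
            for i in range(n)]

def macrostate(micro):
    """A macrostate 'constituted wholly out of microstates': here, their mean."""
    return sum(micro) / len(micro)

micro = [0] * 30 + [1] + [0] * 30            # initial microstate of S
for t in range(10):
    print(t, round(macrostate(micro), 3))    # macro history is derived, not fundamental
    micro = step(micro)
[/code]

The point of the sketch is that nothing over and above D ever runs: the macrostate history is simply read off from the microstates, which is exactly what makes this kind of emergence "weak".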

Weak emergence is essentially a reductionist philosophy in which local causes create local effects. What exactly is a cause and what exactly is an effect seems intuitive enough for most folks to grasp; however, I'd also like to better define cause and effect, so I've also started a separate thread on that: https://www.physicsforums.com/showthread.php?p=1051592#post1051592

Similarly, strong emergence is defined by Chalmers (http://consc.net/papers/emergence.pdf) this way:
We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain.

In this paper, Chalmers suggests there are higher level physical laws:
We can think of strongly emergent phenomena as being systematically determined by low-level facts without being deducible from those facts. In philosophical language, they are naturally but not logically supervenient on low-level facts. In any case like this, fundamental physical laws need to be supplemented with further fundamental laws to ground the connection between low-level properties and high-level properties.

Chalmers also resorts to "downward causation", and even to weak and strong downward causation, though exactly why is a bit unclear to me. He says:
Downward causation means that higher-level phenomena are not only irreducible but also exert a causal efficacy of some sort. Such causation requires the formulation of basic principles which state that when certain high-level configurations occur, certain consequences will follow. …

With strong downward causation, the causal impact of a high-level phenomenon on low-level processes is not deducible even in principle from initial conditions and low-level laws. With weak downward causation, the causal impact of the high-level phenomenon is deducible in principle, but is nevertheless unexpected.

Note that without some sort of downward causation, we could have strongly emergent phenomena which have no causal efficacy. They would exist but not have any way of interacting with the world. For example, a computer interacts at a local level exactly as Bedau points out. Each switch in a chip acts only because of some electrical signal provided to its control. It does not act for any other reason. Thus, we can say the computer switch is a "micro-level" part in a system S. The macrostate of the computer exists, and is "constituted wholly out of microstates". Further, there is a microdynamic which we can call D which governs the time evolution of the microstate. This microdynamic is the application of voltage to the switch which makes it change state.
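To make the switch example concrete, here is a toy sketch (again my own illustration, purely for discussion): each "switch" changes state only as a function of the signals at its own inputs, and the machine's macro-level behavior is constituted wholly out of those switch states.

[code]
# Toy illustration (mine, not Bedau's): a computer as a system S of switches.
# Each switch acts only because of the signal at its inputs (a local D);
# the macro-level result (the sum bit and carry bit of an adder) is wholly
# constituted out of the microstates of the switches.

def nand(a, b):
    """One 'switch': its output depends only on its own inputs, nothing else."""
    return 0 if (a and b) else 1

def half_adder(a, b):
    """A macro-level function built entirely from local switch behavior."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from NAND switches
    c = nand(n1, n1)                    # AND built from NAND switches
    return s, c                         # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))   # macro behavior follows from microdynamics alone
[/code]

Nothing in the listing appeals to anything beyond the local rule, which is the sense in which a strongly emergent phenomenon arising in such a device would have no causal work left to do.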

If we assume then that there is some kind of 'strongly emergent' phenomenon which arises in a computational device, such as subjective experience, that phenomenon has no causal efficacy over any portion of the system. One need not theorize additional physical laws as Chalmers proposes. The laws governing the action of each switch are necessary and sufficient, and no further description is needed. Thus, if any strongly emergent phenomena were to arise, it would seem that downward causation is the only way such phenomena could have any kind of causal efficacy over the system.

I believe computationalism sidesteps this issue by simply suggesting that strongly emergent phenomena are 'like the weight of a polar bear's coat'. The purpose of the coat is to keep the polar bear warm, not to create weight. Yet it creates weight as a by-product: hair is made of matter, matter has weight, and much of it is needed to provide the insulation. Similarly, subjective experience to a computationalist is the weight of the coat. It is not needed, and it serves no direct purpose; it is simply there. I'd found that example somewhere on the net, but it really doesn't strike me as a decent argument. Nevertheless, I suppose it will have to do. Perhaps someone else has seen a better argument?

It should be fairly clear that strong emergence and downward causation (strong or weak) can't be taken lightly. The only cases of strong emergence that should be taken seriously are molecular interactions IMO. Even there, it seems most interactions don't need anything like a strongly emergent theory to support them. They can be explained in terms of energy balance, bonds and so forth.
 
  • #2
octelcogopod
The problem as I see it, is that we haven't really defined in what context we want to apply emergence.

For instance, it seems to me that emergence is actually just an emergent property of our minds, that is, we categorize things systematically, and try to make sense of them by themselves as individual objects.

For example, determining the emergence of a dog would not only be a matter of scale and point of view, but also of how the atoms that make up the dog interact, to make an object called a dog.
A dog is a weakly emergent phenomenon, not taking into account its mind, should it have one.

To me, every object in the universe is weakly emergent; it can all be reduced down to its pure fundamental interactions. But then again, when you think about it, can it really?
I mean, it all depends on how you look at it.
For instance if we were to calculate with a computer every atom in a dog, and its interactions, we would automatically get a dog, even if we didn't realize it.

What this implies to me is that all objects in the universe are side effects of smaller interactions; why these interactions happen the way they do, and why objects exist in the first place, can maybe be explained only by fully understanding the smaller interactions.

However, the problem arises when we get other things, that may seem irreducible, at least at this time.
One problem is of course the whole subjective side of the human mind.
If we were to create an exact brain and body replica in a computer, that had everything down to the smallest quark (or string :P), would all the subjective stuff that we experience right now in our minds arise simply as machine code?

I mean, if we were able to create such a complicated computer program, we would most likely already have solved the problem of consciousness, but let's set that aside for the purpose of this discussion.

I won't go too deeply into this, but this also somewhat ties in with determinism.
The problem lies in the fact that IF we created a program like above, and we could get hard output on the monitor as numbers that would represent every facet of the subjective mind, then that would also show that the universe is deterministic.

But even if it was, would that exclude any chance of strongly emergent phenomena?
It's kind of hard when we don't really have an example of a strongly emergent phenomenon.
 
  • #3
Q_Goest
Hi Octelcogopod, I'd agree with everything you've said. The conclusion is that it is hard to find, and even more difficult to have people agree on, whether or not some phenomenon is emergent. That really is one of the main thrusts of this thread: to shake out potential strongly emergent phenomena and see if there is ANYTHING that can be termed strongly emergent. Along with that would be to propose what downward causal actions that phenomenon may have.

I think the reason such phenomena are difficult to identify as being strongly emergent is that there is no conceptual or logical tool with which we can make the determination. To advance such a tool would require some agreement as to what weak emergence entails, and I think Bedau has a very nice definition. Unfortunately it's only a definition, not a tool. But engineering uses such tools as Bedau is referring to all the time. They're conceptual tools called finite element analysis, control volumes, and many other things. Problem is, no one has recognized them as being applicable to weak and strong emergence. To do that I've started another thread (https://www.physicsforums.com/showthread.php?p=1052467#post1052467), so feel free to comment in that one also.
octelcogopod said:
If we were to create an exact brain and body replica in a computer, that had everything down to the smallest quark (or string :P), would all the subjective stuff that we experience right now in our minds arise simply as machine code?
If we modeled the brain using finite element analysis or the equivalent of it, we would be calculating what the physical system is doing using symbols. Do the symbols represent what is actually occurring to the actual gray matter? In the sense that we are able to interpret them, I believe the answer is yes. In the sense that the COMPUTER is able to interpret them, I believe the answer is no (ie: the computer is a p-zombie). But that conclusion must rest on some logical tool as I've previously mentioned.

octelcogopod said:
. . . this also somewhat ties in with determinism.
Yes, I fully agree. That conclusion though may be hard for some to see. Weak emergence seems to imply a kind of 'bottom up' determinism. It implies that the system S, as Bedau calls it, is completely determined by the microdynamics. Further, those microdynamics operate at a local level. We can think of a system as being broken down into small microscopic parts, larger than a molecule, such that we can examine interactions at the classical level. If we do this, the classical interactions are essentially determinate and calculable. A switch in a computer, for example, is completely deterministic, and any system made of them similarly is.

The only thing I'd like to emphasize regarding the modeling of classical phenomena using computational means is that the computer is strictly a symbol manipulator, and does not have the same physical properties as the classical phenomena being modeled. However, if we made a 'physical computer' and considered it in terms of microstates similar to a finite element analysis (following Bedau's line of reasoning), then the conclusion is that many phenomena we perceive as strongly emergent are actually weakly emergent.
 
  • #4
Doctordick
David J. Chalmers said:
If there are phenomena whose existence is not deducible from the facts about the exact distribution of particles and fields throughout space and time (along with the laws of physics), then this suggests that new fundamental laws of nature are needed to explain these phenomena.
If this is indeed what "strong emergence" means to the academic community, then I think one can confidently conclude that "strong emergence" does not exist; see my paper http://home.jam.rr.com/dicksfiles/Explain/Explain.htm .

With regard to "weak emergence" (that is with regard to the definition of "weak emergence") I feel it can also be dispensed with via the following proof. That is, emergence is emergence and there is nothing either weak or strong about it!
Doctordick said:
On the other hand, that result [that every explanation of anything must be mappable into my Analytical Model] certainly implies that all explanations must be "emergent" phenomena based upon the laws of physics. Finally, with regard to "emergent" phenomena, either the concepts being used are based on fundamental concepts (in which case they must directly obey my equation) or they are not. If the concepts being used to explain a phenomenon are not fundamental, they must be explainable in terms of more fundamental concepts, and that is the very definition of "emergent" phenomena.
In that regard, the following proof is of great interest regarding "emergent" phenomena. The proof concerns a careful examination of the projection of a trivial geometric structure on a one dimensional line element.

The underlying structure will be a solid defined by a collection of n+1 points connected by lines (edges) of unit length embedded in an n dimensional Euclidean space (an n dimensional equilateral polyhedron). The universe of interest will be the projection of the vertices of that polyhedron on a one dimensional line element. The logic of the analysis will follow the standard inductive approach: i.e., prove a result for the cases n=0, 1, 2 and 3. Thereafter prove that if the description of the consequence is true for n-1 dimensions, it is also true for n dimensions. The result bears very strongly on the possible complexity of "emergent" phenomena.

First of all, the projection will consist of a collection of points (one for each vertex of that polyhedron) on the line segment. Since motion of that polyhedron parallel to the given line segment is no more than uniform movement of every projected point, we can define the projection of the center of the polyhedron to be the center of the line segment. Furthermore, as the projection will be orthogonal to that line segment and the n dimensional space is Euclidean, any motion orthogonal to that line segment introduces no change in the projection. It follows that the only motion of the polyhedron which changes the distribution of points on the line segment will be rotations of the polyhedron in the n dimensional space.

The assertion which will be proved is that every conceivable distribution of points on the line segment is achievable by specifying a particular rotational orientation of the polyhedron. Before we proceed to the proof, one issue of significance must be brought up. That issue concerns the scalability of the distribution. I referred to the collection of points on the line segment as the "universe of interest" as I want the student to think of that distribution of points as a universe: i.e., any definition of length must be arrived at via some defined characteristic of the distribution itself or a subset of the distribution.

Case n=0 is trivial as the polyhedron consists of one point (with no edges) and resides in a zero dimensional space. Its projection on the line segment is but one point (which is at the center of the line segment by definition) and no variations in the distribution of any kind are possible. Neither is it possible to define length. It follows trivially that every conceivable distribution of a point centered on a line segment (which is one which can be used to define the origin of the line segment) is achievable by a particular rotational orientation of the polyhedron (of which there are none). Thus the theorem is valid for n=0 (or at least can be interpreted in a way which makes it valid).

Case n=1 is also trivial as the polyhedron consists of two points and one edge residing in a one dimensional space. Since the edge is to have unit length, one point must be a half unit from the center of the polyhedron and the other must be a half unit from the center in the opposite direction. Since rotation is defined as the trigonometric conversion of one axis of reference into another, rotation can not exist in a one dimensional space. It follows that our projection will consist of two points on our line segment. We can now define both a center (defined as the midpoint between the two points) and a length (define it to be the distance between the two points) in this universe but there is utterly no use for our length definition because there are no other lengths to measure. It follows trivially that every conceivable distribution of two points on a line segment (which is one) is achievable by a particular rotational orientation of the polyhedron (of which there are none). Thus the theorem is valid for n=1.

Case n=2 is the first case which is not utterly trivial. Fabrication of an equilateral n dimensional polyhedron is not a trivial endeavor. In order to keep our life simple, let us construct our equilateral polyhedron in such a manner so as to make the initial orientation of the lower order polyhedron orthogonal to the added dimension: move the lower order entity up from the center of our coordinate system and add a new point on the new axis below the center. In this case, the coordinates of the previous polyhedron remain exactly what they were for the established coordinates and are all shifted by the same distance in the new dimension. The new point has a position zero in all the old coordinates (it is on the new axis) and an easily calculated position in the negative direction on the new axis (its magnitude must be equal to the new radius of the vertices of the old polyhedron).

The proper movement is quite easy to calculate. Consider a plane through the new axis and a line through any vertex on the lower order polyhedron. If we call the new axis the x-axis and the line through the chosen vertex the y axis, the y position of that vertex will be the old radius of the vertex in the old polyhedron. The new radius will be given by the square root of the sum of the old radius squared and the distance the old polyhedron was moved up in the new dimension squared. That is exactly the same distance the new point must be from the new center. Assuring the new edge length will be unity imposes a second Pythagorean constraint consisting of the fact that the old radius squared plus (the new radius plus the distance the old polyhedron was moved up) squared must be unity.

[tex]r_n = \sqrt{x_{up}^2 + r_{n-1}^2} \mbox{ and } 1 = \sqrt{r_{n-1}^2 + (x_{up} + r_n )^2 }[/tex]
The solution of this pair of equations is given by

[tex]r_n = \sqrt{\frac{n}{2(n+1)}} \mbox{ and } x_{up} = \frac{1}{\sqrt{2n(n+1)}}[/tex]
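One way to obtain this solution (this intermediate algebra is my addition, using the inductive value [itex]r_{n-1}^2 = \frac{n-1}{2n}[/itex] from the previous case): expanding the second equation and substituting the first gives

[tex]1 = r_{n-1}^2 + x_{up}^2 + 2 x_{up} r_n + r_n^2 = 2 r_n (r_n + x_{up}) \quad \Rightarrow \quad x_{up} = \frac{1}{2 r_n} - r_n[/tex]

and substituting that back into the first equation yields

[tex]r_n^2 = \left( \frac{1}{2 r_n} - r_n \right)^2 + r_{n-1}^2 \quad \Rightarrow \quad r_n^2 = \frac{1}{4\left(1 - r_{n-1}^2\right)} = \frac{1}{4\left(1 - \frac{n-1}{2n}\right)} = \frac{n}{2(n+1)}[/tex]

which checks against the base case [itex]r_1 = \frac{1}{2}[/itex].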

The case n=0 was a single point in a zero dimensional space. The case n=1 can be seen as an addition of one dimension x_1 (orthogonal to nothing) where point #1 was moved up one half unit in the new dimension and a point #2 was added at minus one half in the new dimension (both the new radius and "distance to be moved up" are one half). The case n=2 changes the radius to one over the square root of three and the line segment (the result of case n=1) must be moved up exactly one half that amount. A little geometry should convince you that the result is exactly an equilateral triangle with a unit edge length. Projection of this entity upon a line segment yields three points and the relative positions of the three points are changed by rotation of that triangle.

In this case, we have two points to use as a length reference and a third point whose distance from the center of the other two can be specified in terms of that defined length reference. Using those definitions, two of the points can be defined to be one unit apart and the third point's position can vary from any specific position from plus infinity to minus infinity. The infinities occur when the edge defined by the two vertices being used as our length reference is orthogonal to the line segment upon which the triangle is being projected (in which case the defining unit of measure falls to zero). Plus infinity when the third point is on the right (by convention) and minus infinity when the third point is on the left (by common convention, right is usually taken to be positive and left to be negative). It thus follows that every conceivable distribution of three points on a line segment is achievable by a particular rotational orientation of the polyhedron (our triangle). Thus the theorem is valid for n=2.

Case n=3 consists of a three dimensional equilateral polyhedron consisting of four points, six unit edges and four triangular faces: i.e., what is commonly called a tetrahedron. If you wish you may show that the radius of the vertices is given by one half the square root of three halves and the altitude by the radius plus one over two times the square root of six (as per the equations given above). To make life easy, begin by considering a configuration where a line between the center of our tetrahedron and one vertex is parallel to the axis of projection on our reference line segment. Any and all rotations around that axis will leave that vertex at the center of our line segment. Essentially, except for that particular point, we obtain exactly the same results which were obtained in case n=2 (that would be projection of the triangle face opposite the chosen vertex). Using two of the points on that face to specify length, we can find an orientation which will yield the third point in any position from minus infinity to plus infinity while the fourth point remains at the center of the reference segment.

Having performed that rotation, we can rotate the tetrahedron around an axis orthogonal to the first rotational axis and orthogonal to the line on which the projection is being made. This rotation will end up doing nothing to the projection of the first three points except to uniformly scale their distance from the center. Since we have defined length in terms of two of those points, the referenced configuration obtained from the first rotation does not change at all. On the other hand, the fourth point (which was projected to the center point) will move from the center towards plus or minus infinity depending on the rotation direction (the infinite positions will correspond to the orientation where the line of projection lies in the face opposite the fourth point). It follows that all possible configurations of points in our projection can be reached via rotations of the tetrahedron and the theorem is valid for n=3.

Since the space in which the n dimensional polyhedron is embedded is Euclidean, we can specify a particular orientation of that polyhedron by listing the n coordinates of each vertex. That coordinate system may have any orientation with respect to the orientation of the polyhedron. That being the case, we are free to set our coordinate system to have one axis (we can call it the x axis) parallel to the line on which the projection is to be made. In that case, except for scale, a list of the x coordinates correspond exactly to the apparent positions of the projected points on our reference line.

If the theorem is true for an n-1 dimensional polyhedron, there exists an orientation of that polyhedron which will correspond to any specific distribution of n points on a line (where scale is established via some procedure internal to that distribution of points). If that is the case, we can add another axis orthogonal to all n-1 axes already established, move that polyhedron up along that new axis a distance equal to [itex]\frac{1}{\sqrt{2n(n+1)}}[/itex] and add a new point at zero for every coordinate axis except the nth axis where the coordinate is set at [itex] - \sqrt{\frac{n}{2(n+1)}}[/itex]. The result will be an n dimensional equilateral polyhedron with unit edge which will project to exactly the same distribution of points obtained from the previous n-1 dimensional polyhedron with one additional point at the center of our reference line segment.

If our n dimensional polyhedron is rotated on an axis perpendicular to both the reference line segment and the nth axis just added, the only effect on the original distribution will be to adjust the scale of every point via the relationship [itex]x\cos\theta[/itex], where theta is the angle of rotation. Meanwhile, the position of the added point will be given by [itex]r_n \sin\theta[/itex]. Once again, the added point may be moved to any position between plus and minus infinity, which occurs at ninety degrees. Once again the length scale is established via some procedure internal to the distribution of points. It follows that the theorem is valid for all possible n.

QED
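As a numerical sanity check of the construction (my own sketch, not part of the proof above), one can build the vertex coordinates by the recursion just described, confirm that every edge has unit length, and watch a single rotation sweep the projected points along the reference line:

[code]
# Numerical check (my sketch): build the n-dimensional equilateral polyhedron
# (an n-simplex) by the recursion r_n = sqrt(n/(2(n+1))), x_up = 1/sqrt(2n(n+1)),
# verify unit edges, then rotate in the plane of the first and last axes and
# project the vertices onto the first axis.
from math import sqrt, sin, cos, dist, pi

def simplex(n):
    """Vertices of an n-dimensional equilateral polyhedron with unit edges."""
    verts = [[0.0] * n]                      # case n=0: a single point
    for k in range(1, n + 1):
        r_k = sqrt(k / (2 * (k + 1)))
        x_up = 1 / sqrt(2 * k * (k + 1))
        for v in verts:
            v[k - 1] = x_up                  # move the old polyhedron 'up'
        new = [0.0] * n
        new[k - 1] = -r_k                    # add the new vertex below center
        verts.append(new)
    return verts

n = 5
V = simplex(n)
assert all(abs(dist(V[i], V[j]) - 1.0) < 1e-12      # every edge is unit length
           for i in range(n + 1) for j in range(i + 1, n + 1))

theta = pi / 3                               # one rotation in the (x, nth-axis) plane
proj = [v[0] * cos(theta) - v[n - 1] * sin(theta) for v in V]
print([round(x, 3) for x in proj])           # the projected 'universe' of points
[/code]

Varying theta (and composing rotations in other planes) moves the projected points around exactly as the induction describes.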

There is an interesting corollary to the above proof. Notice that the rotation specified in the final paragraph changes only the components of the collection of vertices along the x-axis and the nth axis. All other components of that collection of vertices remain exactly as they were. Since the order used to establish the coordinates of our polyhedron is immaterial to the resultant construct, the nth axis can be a line through the center of the polyhedron and any point except the first and second (which essentially establish the x-axis under our current perspective). It follows that for any such n dimensional polyhedron for n greater than three (any x projection universe containing more than four points) there always exist n-2 axes orthogonal to both the x and y axes. These n-2 axes may be established in any orientation of interest so long as they are orthogonal to each other and the x,y plane. For any point (excepting the first and the second which establish the x axis) there exists an orientation of these n-2 axes such that one will be parallel to the line between that point and the center of the polyhedron. Any rotation in the plane of that axis and the y-axis will do nothing but scale the y components of all the points and move that point through the collection, making no change whatsoever in the projection on the x axis.

We can go one step further. Within those n-2 axes orthogonal to the x and y axes, one can choose one to be the z axis and still have n-3 definable planes orthogonal to both the x and the y axes. That provides one with n-3 possible rotations which will leave the projections on the x and y axes unchanged. Since, in the construction of our polyhedron, no consequences of rotation had any effect until we got to rotations after the addition of the third point, these n-3 possible rotations are sufficient to obtain any distribution of projected points on the z axis without altering the established projections on the x and y axes.

Thus it is seen that absolutely any three dimensional universe consisting of n+1 points for n greater than four can be seen as an n dimensional equilateral polyhedron with unit edges projected on a three dimensional space. That "any" means absolutely any configuration of points conceivable. Talk about "emergent" phenomena: this picture is totally open-ended. Any collection of points can be so represented! Consider the Republican convention at noon of the second day (together with the rest of the world with all the people and all the plants and all the planets and all the galaxies), where the collection of the positions of all the fundamental particles in the universe is no more than a projection of some n dimensional equilateral polyhedron on a three dimensional space.

On top of that, if nothing in the universe can move instantaneously from one position to another, it follows that the future (another distribution of that collection of positions of all the fundamental particles in the universe) is no more than another orientation of that n dimensional polyhedron. Think about that view of that rather simple construct and the complex phenomena which are directly emergent from the fundamental picture.

Have fun -- Dick
 
  • #5
Rade
But Dr. Dick, there are many properties of the whole of that group of folks present at 12:00 noon at the convention that cannot be predicted from knowledge of their positions, thus your example does not explain why emergent properties are not a fundamental reality of cybernetic systems--in fact, the exact opposite is true, for when a system becomes large the properties of the whole are very different from the properties of the parts.
 
  • #6
Doctordick
I simply cannot comprehend your inability to fathom the consequences of what I just proved. The Republican convention has nothing to do with the proof at all. I put it the way I did to express the fact that the evolution of the most complex phenomena conceivable, all the way from the exact detailed behavior of an entire collection of individuals and all their intimate environment (amounting to a complex community of human beings), all the way to the behavior of the entire universe, can be seen as no more than a projection of the vertices of a rotating n-dimensional equilateral polyhedron on a three dimensional space. And all you say is "when a system becomes large the properties of the whole are very different from the properties of the parts."

I do not know how to reach you -- Dick
 
  • #7
Q_Goest
Hey Dick,
I'd agree strong emergence and downward causation are highly contentious issues, and I'd only seriously consider them at a molecular level, as it's here we find a discontinuity between quantum theory and classical physics.

I read over your reference as well as other things you've posted at that site. You said:
... behavior of the entire universe can be seen as no more than a projection of the vertices of a rotating n-dimensional equilateral polyhedron on a three dimensional space.*
That seems like a nice summary of what you're trying to accomplish. Correct me if I'm wrong, but your proof shows that any n dimensional structure can be seen as a projection of an n+1 dimensional structure onto an n dimensional space. Sorry if that's an oversimplification or if I've gotten something mixed up.

Would you agree that if some explanation can be shown to match reality, we still haven't proven that it does in fact match reality? String theory has this issue if I'm not mistaken. How would you prove that the universe is in fact a multidimensional structure? I like the idea and believe such a possibility might hold promise in explaining something about the world, but from what I understand such theories aren't able to predict anything, and therefore they are no better than a strongly emergent phenomenon without downward causation <grin>. That is: additional dimensions may exist, but if there is no benefit derived from theorizing them, if everything can be explained without invoking the additional dimensions, then it seems these additional dimensions serve no physical purpose, just as computationalism supposes conscious phenomena exist where such phenomena serve no physical purpose.

Have you created a thread to discuss your work? If so, can you provide a link? I'd rather keep discussions regarding your work out of this thread and retain this one for discussions regarding weak and strong emergence.

*Questions for another thread: What causes the "rotation" and is the cause deterministic? Can all sets of dimensions be known or measured with respect to any other set of dimensions? If not, this might result in some very interesting phenomena that might help explain gaps in our understanding.
 
  • #8
Rade
Doctordick said:
I do not know how to reach you
Well, it sure would help if you would explain how you came to conclude this: "any three dimensional universe consisting of n+1 points for n greater than four can be seen as an n dimensional equilateral polyhedron with unit edges projected on a three dimensional space". Well, here is a crackpot who would find seven dimensions to the universe (http://homepages.ihug.co.nz/~brandon1/resources/dim3.htm), not your three. Since you state that the correct number of dimensions in the universe MUST BE 3 -- what use is all your explanation when in fact the correct number is found to be 4, as suggested by Einstein's general relativity (http://en.wikipedia.org/wiki/General_relativity), or many, as suggested by string theory (http://en.wikipedia.org/wiki/String_theory)?
 
  • #9
Rade
Doctordick said:
..the evolution of the most complex phenomena conceivable all the way from the exact detailed behavior of an entire collection of individuals... can be seen as no more than a projection of the vertices of a rotating n-dimensional equilateral polyhedron on a three dimensional space.
So, you are saying that your projection allows one to "see" the "exact detailed behavior" of the simultaneous position and momentum of a collection of quantum particles--is that correct ?
 
  • #10
moving finger
OK, someone will need to help me out here, maybe I’m just being dense.

Chalmers said:
We can think of strongly emergent phenomena as being systematically determined by low-level facts without being deducible from those facts. In philosophical language, they are naturally but not logically supervenient on low-level facts. In any case like this, fundamental physical laws need to be supplemented with further fundamental laws to ground the connection between low-level properties and high-level properties.

How can one set of phenomena be “determined by” another set of phenomena, and yet not be logically supervenient on that other set?

Can Chalmers, or anyone else, give examples of such strongly emergent phenomena (ones which fit his description)?

octelcogopod said:
IF we created a program like above, and we could get hard output on the monitor as numbers that would represent every facet of the subjective mind
The problem here is that by looking at the monitor output as external observers, we have destroyed or circumvented the subjectivity (if there is any) within the machine. Subjective experience, by definition, is 1st person, and it cannot (by definition) be displayed on a monitor. That’s what people like Chalmers cannot accept, and the reason (imho) that they keep tilting at windmills trying to say that we need a whole new physics to explain subjective experience. We don’t.


Q_Goest said:
I think the reason such phenomena are difficult to identify as being strongly emergent is that there is no conceptual or logical tool with which we can make the determination. To advance such a tool would require some agreement as to what weak emergence entails, and I think Bedau has a very nice definition. Unfortunately it's only a definition, not a tool.
To turn a definition into a tool, we just need to identify the necessary and jointly sufficient conditions for emergence – then investigate alleged emergent phenomena to see if they satisfy those conditions. So step one would be to identify the necessary and jointly sufficient conditions...

Interesting that we all (Q_Goest, octelcogopod, Doctordick & myself) seem to doubt that strongly emergent phenomena actually exist (Rade has not declared in this thread any beliefs one way or another). Is there anyone who wants to defend the notion that strongly emergent phenomena exist?

Best Regards
 
  • #11
selfAdjoint
Finger, I am another non-believer in strong emergence, but I just wanted to comment on this

moving finger said:
How can one set of phenomena be “determined by” another set of phenomena, and yet not be logically supervenient on that other set?

Can Chalmers, or anyone else, give examples of such strongly emergent phenomena (ones which fit his description)?

This is a good point, and after reading a lot of defenses of strong emergence and downward causation, not just within the consciousness arena, I have yet to see any defender of SE really grapple with it. Either they just present it as a gulp-and-accept primary fact, with handwaving toward sand piles or such, or else they argue in effect that it's technically very difficult to derive the SE phenomena from the lower level ones and personally THEY can't imagine any way to do it.

Generally speaking I consider folks like that, including Searle, and perhaps Chalmers, to be lacking in imagination and comprehension of the big complexity of the world.
 
  • #12
moving finger
selfAdjoint said:
This is a good point, and after reading a lot of defenses of strong emergence and downward causation, not just within the consciousness arena, I have yet to see any defender of SE really grapple with it. Either they just present it as a gulp-and-accept primary fact, with handwaving toward sand piles or such, or else they argue in effect that it's technically very difficult to derive the SE phenomena from the lower level ones and personally THEY can't imagine any way to do it.
I agree 100% - and I think you've highlighted the real "hard problem" here - the fact that it is indeed often very difficult in practice to derive the emergent phenomena from lower level properties, and some people then jump to the conclusion that "oh! there must be a whole new physics in here!".

Basically the same problems underlie the understanding of causation vs correlation, and of understanding the "emergence" of responsibility within so-called "free agents" - as exemplified in the Quantum Mechanics and Determinism thread here : https://www.physicsforums.com/showthread.php?p=1056559#post1056559


There is no need for any new physics. There's just a need to let go of false intuitions and use common sense.

Best Regards
 
  • #13
Q_Goest
Hi MF.

MF said: How can one set of phenomena be “determined by” another set of phenomena, and yet not be logically supervenient on that other set?
Chalmers said: In philosophical language, they are naturally but not logically supervenient on low-level facts.
I'm a bit confused by the use of the term "supervenient", but it seems understandable to me when read in context. I interpret Chalmers as saying that strong emergence postulates there being phenomena that can't, even in principle, be determined by the low level facts or the microstates as Bedau puts it. If this is true, then to maintain physicalism I guess we must postulate additional laws that might govern the interrelationship between the microstates and the system. Chalmers gives an example of what he means:

One might also in principle have both strongly emergent qualities and strong downward causation together. If so, one has a situation in which a new fundamental quality is involved in new fundamental causal laws. This last option can be illustrated by combining the cases of consciousness and quantum mechanics discussed above. In the familiar interpretations of quantum mechanics according to which it is consciousness itself that is responsible for wavefunction collapse, the emergent quality of consciousness is not epiphenomenal but plays a crucial causal role.
Note: I've included Chalmers' reference to strong downward causation because I honestly don't see a need to invoke strong emergence without it.

I think a potential explanation for strong emergence might arise from a discussion of multiple dimensions. The concept of more than 4 dimensions is a common one. From the perspective of the proverbial 2 dimensional ant crawling on a 2 dimensional plane, the 3rd dimension intersects that plane at an orthogonal angle - such that from the ant's perspective, there is no 3rd dimension and he has no reason to consider it.* The dimension makes no impact on the world. At least, that's what the ant thinks. The ant can not see nor measure any 3rd dimension as it crawls around on this plane and there is no way for the ant to detect this dimension, even in principle.

The fact one can not measure a dimension in any way may make some sense of strong emergence and also of strong downward causation. If your yardstick is made of n dimensions, it can't measure n+1 dimensions. If however, there is another dimension, it is conceivable that it affects or is related somehow to the others.

Chalmers doesn't support this concept of course; he's only suggesting that there may exist higher level configurations which may require new physical laws, but I don't see that at any level above the quantum level. I could potentially accept such a concept at a molecular level. That is, perhaps some additional dimensions have a causal effect on molecules, and potentially those molecules then affect the overall system, but once we have a statistically large group of molecules that interact at a classical level, the outcome is essentially deterministic and governed only by weak emergence.

*The 2 dimensional ant exists in 2 linear dimensions and a time dimension, so actually it is a 3 dimensional ant, but here I've used length as a dimension as is often done for the ant analogy.
 
  • #14
Rade
Consider the concept "cat". The concept can be viewed either as a "set" (e.g., the set of all cats) or as your pet cat Fluffy. IMO, strong emergence is nothing more than the common sense fact that what may be true about a set may be false (even meaningless) when applied to any element of the set. Thus, consider this statement about the concept "cat" -- it is one million years old. Is this not an example of strong emergence, a higher order phenomenon not possible for any single element of the set? -- for Fluffy may be old, but not that old. An example of weak emergence using the cat concept is this statement -- one half are female, for Fluffy must be either male or female. In this example the higher order phenomenon is thus deduced from basic principles concerning X and Y chromosomes and meets the definition of weak emergence. But perhaps I do not understand the motive for the division -- strong vs weak.
 
  • #15
moving finger
Q_Goest said:
I'm a bit confused by the use of the term "supervenient", but it seems understandable to me when read in context. I interpret Chalmers as saying that strong emergence postulates there being phenomena that can't, even in principle, be determined by the low level facts
Hold on. This seems to directly contradict your earlier quote from Chalmers.

We can think of strongly emergent phenomena as being systematically determined by low-level facts without being deducible from those facts. In philosophical language, they are naturally but not logically supervenient on low-level facts.

Unless there is some other strange interpretation of the verb “determined” that Chalmers is using here, this means that given antecedent “low-level facts” the “strongly emergent phenomena” arise as nomologically (if not logically) necessary consequences.

Your statement “strong emergence postulates there being phenomena that can't, even in principle, be determined by the low level facts” is thus in contradiction to Chalmers’ statement. If you actually mean “strong emergence postulates there being phenomena that can't, even in principle, be determinable by knowledge of the low level facts” then I would agree this is perhaps correct (but arguable) – because determinability (an epistemic property) is NOT the same as determinism (an ontic property). This once again gets back to the fundamental difference between ontic determinism and epistemic determinability – a recurring theme in so many threads!

One might also in principle have both strongly emergent qualities and strong downward causation together. If so, one has a situation in which a new fundamental quality is involved in new fundamental causal laws. This last option can be illustrated by combining the cases of consciousness and quantum mechanics discussed above. In the familiar interpretations of quantum mechanics according to which it is consciousness itself that is responsible for wavefunction collapse, the emergent quality of consciousness is not epiphenomenal but plays a crucial causal role.
This assumes the premise that consciousness “causes’ wave function collapse is true – I don’t believe it is.

Q_Goest said:
Note: I've included Chalmers' reference to strong downward causation because I honestly don't see a need to invoke strong emergence without it.
I tend to agree. Epiphenomena are pretty useless (hence may be ignored as any part of an explanation) by definition.

Q_Goest said:
I think a potential explanation for strong emergence might arise from a discussion of multiple dimensions. The concept of more than 4 dimensions is a common one. From the perspective of the proverbial 2 dimensional ant crawling on a 2 dimensional plane, the 3rd dimension intersects that plane at an orthogonal angle - such that from the ant's perspective, there is no 3rd dimension and he has no reason to consider it.* The dimension makes no impact on the world. At least, that's what the ant thinks.
OK. What you’re saying here is basically that “there may be more laws of nature/physics than we are currently aware of” – and I wouldn’t disagree. But I wouldn’t call this any form of emergence – it only appears like emergence because we have limited knowledge of the underlying physics. If one were to educate the ant about the existence of this 3rd dimension he would presumably say (assuming ants are sentient and can communicate) “ahhhh, I see! That’s how it works” – he wouldn’t say “ohhh, that’s an emergent phenomenon”.

Q_Goest said:
The ant can not see nor measure any 3rd dimension as it crawls around on this plane and there is no way for the ant to detect this dimension, even in principle.
This is not in fact correct. Firstly, in reality the ant is aware of that third dimension. If it starts to measure distances (btw – it has been shown that ants CAN measure distances!), then it will find some very strange geometrical properties of its world (unless it is living on a truly flat plane with no topography), from which it could infer that there exists a 3rd dimension. Secondly, if you wish to imagine truly 2D beings then these beings would have no 3rd dimension at all – thus it would be impossible for them to physically “exist” in any real sense of the word existence.

Even if I were to allow that the ant is aware of only 2 dimensions, if there is no way for the ant even in principle to determine the existence of the 3rd dimension then in what possible way can the 3rd dimension have any impact (via downward causation) upon the ant?

Q_Goest said:
The fact one can not measure a dimension in any way may make some sense of strong emergence and also of strong downward causation. If your yardstick is made of n dimensions, it can't measure n+1 dimensions. If however, there is another dimension, it is conceivable that it affects or is related somehow to the others.
How can it affect the others and at the same time we cannot in principle be aware of its existence? Could you give an example?

Q_Goest said:
Chalmers doesn't support this concept of course; he's only suggesting that there may exist higher level configurations which may require new physical laws, but I don't see that at any level above the quantum level. I could potentially accept such a concept at a molecular level. That is, perhaps some additional dimensions have a causal effect on molecules, and potentially those molecules then affect the overall system, but once we have a statistically large group of molecules that interact at a classical level, the outcome is essentially deterministic and governed only by weak emergence.
I accept there may be some “laws” of nature that we have not yet discovered. If this is all that Chalmers is getting at then I don’t disagree. But to jump from this to “strong emergence” or “downward causation” is (imho) an irrational and unwarranted “wrong-headed” approach. It’s not that higher dimensions have “causal effects” on lower dimensions, it’s that if there are higher dimensions then we will need additional “laws of physics” to explain how all dimensions (lower and higher) interact. To my mind these laws are in principle no more “inaccessible” than laws of quantum physics or relativity or cosmology.

Rade said:
Consider the concept "cat". The concept can be viewed either as a "set" (e.g., the set of all cats) or as your pet cat Fluffy. IMO, strong emergence is nothing more than the common sense fact that what may be true about a set may be false (even meaningless) when applied to any element of the set. Thus, consider this statement about the concept "cat" -- it is one million years old. Is this not an example of strong emergence, a higher order phenomenon not possible for any single element of the set? -- for Fluffy may be old, but not that old.
I don’t see why this is “strong” emergence. The concept cat may be 1 million years old, I may not be able to determine how the concept cat arose in the first place (ie where the concept came from is not epistemically determinable), but if I believe in determinism then I simply say that this concept arose as a necessary consequence of the outworking of laws of nature plus antecedent states. I don’t see what emergence has to do with it.

Rade said:
An example of weak emergence using the cat concept is this statement -- one half are female, for Fluffy must be either male or female.
It is logically possible that there be an unequal split in genders – indeed it is logically possible that (ie there exist logically possible worlds where) 99.999999% of cats are female. There even exist logically possible worlds where cats reproduce asexually.

Best Regards
 
  • #16
Doctordick
Rade said:
But Dr. Dick, there are many properties of the whole of that group of folks present at 12:00 noon at the convention that cannot be predicted from knowledge of their positions, thus your example does not explain why emergent properties are not a fundamental reality of cybernetic systems--in fact, the exact opposite is true, for when a system becomes large the properties of the whole are very different from the properties of the parts.
I simply cannot comprehend your failure to fathom what I said. I merely stated my example as I did to emphasize that the "complex distribution of a collection of positions" can display the specific details of absolutely anything, ranging from the exact details of every aspect concerning the intimate behavior of all arbitrary macroscopic groups of human beings together with their surroundings all the way to the very extent of the universe. And the behavior of it all can be represented by rotation of that polyhedron. From a very simple view emerge extremely complex phenomena.
Q_Goest said:
I'd agree strong emergence and downward causation are highly contentious issues, and I'd only seriously consider them at a molecular level, as it's here we find a discontinuity between quantum theory and classical physics.
Is that discontinuity real or merely a figment of your imagination?
Q_Goest said:
Correct me if I'm wrong, but your proof shows that any n dimensional structure can be seen as a projection of an n+1 dimensional structure onto an n dimensional space. Sorry if that's an oversimplification or if I've gotten something mixed up.
I think you have gotten some very important things mixed up. You should have said, "your proof shows that absolutely any collection of three dimensional structures can be seen as a projection of the vertices of an n dimensional equilateral polyhedron with unit edges (the n dimensional version of an equilateral triangle) onto a three dimensional space." Think about what that sentence says carefully.
Q_Goest said:
Would you agree that if some explanation can be shown to match reality, we still haven't proven that it does in fact match reality?
You would have to explain to me exactly where you find a difference in meaning between "shown to match reality" and "in fact" "match reality". I would normally take "shown to match" to mean that the match is a fact.
Q_Goest said:
String theory has this issue if I'm not mistaken.
The problem with "string theory", as I understand it, is that, although it can produce mathematical relationships found in the experimental results, these relationships can not be uniquely tied to real experiments. My simple constraints can be directly related to real experiments through analytical definition. I might comment that, in my opinion, if one cannot provide analytical definitions of the terms they use, they do not know what they are talking about; an analytic statement itself. That is exactly why I begin with "undefined sets" A, B, C and D: i.e., working explicitly with undefined things is the only way to talk about something without knowing what you are talking about.
Q_Goest said:
How would you prove that the universe is in fact a multidimensional structure?
I wouldn't! "IS" is a very strong statement no matter what it refers to and only serves a real purpose in an analytic truth (as per Kant's definition).
Q_Goest said:
Have you created a thread to discuss your work? If so can you provide a link? I'd rather not have discussions regarding your work in this thread and retain this one for discussions regarding weak and strong emergence.
I have many times tried to create a little interest in my work and have yet to find anyone both educationally capable of following my arguments and emotionally interested in following them.
Q_Goest said:
*Questions for another thread: What causes the "rotation" and is the cause deterministic?
You must first define and defend the concept of "cause" before intelligently discussing a cause of any kind. In my opinion, "cause" is no more than the event preceding the event being explained by that cause: i.e., explanations (the methods of obtaining your expectations) introduce the concept of cause. Without explanations, the concept "cause" serves no purpose whatsoever.
Q_Goest said:
Can all sets of dimensions be known or measured with respect to any other set of dimensions? If not, this might result in some very interesting phenomena that might help explain gaps in our understanding.
I think the gaps in your understanding are a simple consequence of not thinking things out carefully. In particular, chasing off after poorly defined concepts as if they are facets of reality which require explanation. You need first to be very careful as to what you are talking about.
Rade said:
Well, it sure would help if you would explain how you came to conclude this: "any three dimensional universe consisting of n+1 points for n greater than four can be seen as an n dimensional equilateral polyhedron with unit edges projected on a three dimensional space".
That is exactly what is presented in the post; however, you seem not to be able to follow the steps of the proof.
Rade said:
So, you are saying that your projection allows one to "see" the "exact detailed behavior" of the simultaneous position and momentum of a collection of quantum particles--is that correct ?
You "see" things in your imagination. Whatever it is that you see, in most normal human beings, it is rendered as things dispersed in a three dimensional space which change in various ways as time passes. Theories are hypotheses as to "why" things appear as they do, not proofs of what is! You are a very confused person.
moving finger said:
How can one set of phenomena be “determined by” another set of phenomena, and yet not be logically supervenient on that other set?
It is quite simple. The presumption is that there is a fundamental law of the universe which requires many many variables to express. First, it is a "fundamental" law in the sense that it expresses a relationship inherent in the universe which is not a consequence of the collection of other fundamental "laws". And second, as expression of this relationship requires many many variables, the existence of the law has no observable consequences until that required collection of variables are under consideration. Paul and others would like to define consciousness to be such a collection, thus introducing a new "fundamental law" to explain the observed behavior.

The fundamental problem with such a concept is that it must be possible to communicate an explanation of the concept to another or it is useless. That is why my analysis of "an explanation" in terms of undefined fundamental entities A, B, C and D still applies. And further, as utterly no causality is required to explain any distribution of fundamental entities (other than "they must be different", enforced by the Dirac delta function, and the set D, what is hypothesized to exist) in order that the observed physical laws between two fundamental elements be what is physically observed, there exists no evidence for any physical laws outside our imagination.
selfAdjoint said:
Generally speaking I consider folks like that, including Searle, and perhaps Chalmers, to be lacking in imagination and comprehension of the big complexity of the world.
And I agree with you one hundred percent.
moving finger said:
I agree 100% - and I think you've highlighted the real "hard problem" here - the fact that it is indeed often very difficult in practice to derive the emergent phenomena from lower level properties, and some people then jump to the conclusion that "oh! there must be a whole new physics in here!".
You are exactly right. The "hard problem" is solving any many-body problem. I would point out to you that physics is notoriously lacking in analysis of many-variable systems. Newtonian mechanics is quite easy to solve for "one"-body problems (so long as the forces on that lone body can be expressed) and for "two"-body problems so long as those two bodies are the sources of all significant forces (i.e., cases where the problem can be reduced to a one-body problem via conservation of center-of-mass momentum), but general three-body problems can only be solved through numerical approximation or for very special cases. What I am trying to point out is that many-variable systems are, in general, very difficult to solve, and determining the correct emergent behavior (except for something as simple as a random gas) is actually very, very difficult.
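To make that concrete, here is a minimal sketch of what "solving" a general three-body problem amounts to in practice: naive numerical stepping of the microdynamic, in toy units (G = 1, unit masses; all values below are arbitrary illustrations, not anything from the literature):

import numpy as np

def accelerations(pos, masses):
    """Pairwise Newtonian gravity with G = 1 (toy units)."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += masses[j] * r / np.linalg.norm(r) ** 3
    return acc

# Three unit masses at the corners of a triangle, initially at rest.
masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
vel = np.zeros_like(pos)

dt = 1e-4
for _ in range(100000):  # step the microdynamic forward, one tiny increment at a time
    vel += accelerations(pos, masses) * dt
    pos += vel * dt

There is no closed-form answer being consulted here; the "solution" is just the local rule iterated, with numerical error accumulating at every step.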

By the way, I can show that all one-body problems (and that would include reducible two-body problems) and random gas problems can be accurately modeled by that revolving n-dimensional polyhedron, so what evidence is there that the observed "emergent" behavior is not also so modeled?
Q_Goest said:
If your yardstick is made of n dimensions, it can't measure n+1 dimensions.
You are quite correct. In the same vein, everyone seems to miss the fact that "every" physical measure (as opposed to selfAdjoint's reference to Lebesgue measure, which is an analytic concept) must be established via references to defined "physical" phenomena internal to the universe under consideration. That fact has some very profound consequences usually missed by everyone.

Have fun -- Dick
 
  • #17
Doctordick said:
..You "see" things in your imagination. Whatever it is that you see, in most normal human beings, it is rendered as things dispersed in a three dimensional space which change in various ways as time passes. Theories are hypotheses as to "why" things appear as they do, not proofs of what is! You are a very confused person...
Well, no, theories are not hypotheses; a theory, FYI, is an "explanation" (of facts, hypotheses, laws), not a hypothesis (we can call this Dr Dick's confused lapse #1); I see many things (as do most normal human beings) with my eyes (there is a name for this phenomenon btw--perception), not my imagination (DD confused lapse #2); I cannot see with eyes nor imagine the simultaneous phenomenon of position and momentum of a quantum particle--if you can, please share, as then you can publish your falsification of the Heisenberg Uncertainty Principle (DD confused lapse #3); I never stated that a theory was a "proof"--in fact I did not even use the word in my question to you, which you have no idea how to answer (DD confused lapse #4). Confusion indeed in this thread.
 
  • #18
Hi MF.
One definition of "strong emergence" per Chalmers.
We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain.
Chalmers also states numerous times that he believes consciousness is strongly emergent. Chalmers is also a computationalist. Computationalism holds that the mechanism producing the "high-level phenomenon", as Chalmers puts it, is completely deterministic (any computational mechanism is completely deterministic), yet he is suggesting that the phenomenon of consciousness is something which can't be deduced in any way from examining the operation of the computer's parts. So I don't think I've misquoted him when I say:
I interpret Chalmers as saying that strong emergence postulates there being phenomena that can't, even in principle, be determined by the low level facts or the microstates, as Bedau puts it.
Perhaps the term "determined" is underspecified for a philosophical discussion like this. I mean they can't be figured out or understood, not that they aren't deterministic. We can have perfect knowledge of all the parts, and the phenomena could still not be understandable, even in principal. In the case of a computational machine, strongly emergent phenomena can obviously be created by the deterministic actions of some mechanism.

The really crazy part of all that is someone wanting to suggest there's something more, an add-on, something that is created which we have no need to suspect exists and no way of measuring. Further, it exists without any causal efficacy. This is the computationalist's position. It is a belief, not unlike a religion: we are asked to accept that something along the lines of a strongly emergent phenomenon must be created which has no causal efficacy, can't be understood even in principle, and arises from the actions of numerous, deterministic, knowable parts which we can easily duplicate, simulate, and know everything about--except that we can never understand anything about the subjective experience it creates. If one accepts computationalism, one has little choice but to believe in strong emergence. I see no way around it, as Chalmers evidently has also concluded, and I'd like to know how one can avoid that conclusion.

What you’re saying here is basically that “there may be more laws of nature/physics than we are currently aware of” – and I wouldn’t disagree. But I wouldn’t call this any form of emergence – it only appears like emergence because we have limited knowledge of the underlying physics.
I'm suggesting that in order for any kind of strong emergence to make sense, we need to step away from our common perceptions of the "laws" of nature and physics. Here you've suggested we can know them. However, if we can't measure something, I'm suggesting we can't know what makes it work, even in principle. Not from this perspective anyway, the perspective of a conscious human living in a 4-dimensional world. Thus, the ant would never be able to say, “ahhhh, I see! That’s how it works”. He would say, "ohhh, that’s an emergent phenomenon”. Why? Because he can't measure a dimension he isn't privy to, despite the fact it may affect him in some way.

Regarding the two dimensional ant analogy, my apologies, I thought you'd have heard it before. It's a very common analogy used in physics to describe additional dimensions: because we live in a 3-d world, it's easier for us to visualize a world with one less dimension as opposed to one more dimension. So yes, the ant is a 2-d ant, not a real 3-d ant that crawls around your yard. Here are a few examples:
http://d0server1.fnal.gov/users/gll/public/edpublic.htm
http://www.sciam.com/askexpert_question.cfm?articleID=000EF410-7C84-1C72-9EB7809EC588F2D7

Even if I were to allow that the ant is aware of only 2 dimensions, if there is no way for the ant even in principle to determine the existence of the 3rd dimension then in what possible way can the 3rd dimension have any impact (via downward causation) upon the ant?
This gets back to the one example given by Chalmers where he suggests wave function collapse might be an example of downward causation. I'm actually modifying Chalmers a bit and suggesting that this wave function collapse might be the action of another dimension which acts through conscious phenomena. Downward causation in this case is the influence of this other dimension, which has causal efficacy through the strongly emergent phenomenon of consciousness. I don't think this is totally untalked about in the physics community. I might have to look for specific examples, but I'd say this concept is not new - although my use of terminology may be a bit unique here (i.e., the use of the terms "downward causation" and "strong emergence" in conjunction with the more common discussions along these lines, which only point to consciousness possibly having some causal efficacy over wave function collapse - about which there have been many discussions within the physics community).

I accept there may be some “laws” of nature that we have not yet discovered. If this is all that Chalmers is getting at then I don’t disagree.
Yes, he's saying there are additional 'natural laws' which we might potentially uncover, but note that these are NOT reductionist type laws like the ones we've already uncovered. I opened the other thread regarding FEA to point out that physical laws at the classical level can all be seen to be reductionist type laws: laws of cause and effect at a local level. That's exactly what strong emergence and downward causation are NOT. We can model anything at a classical level assuming only cause and effect or reductionism. We can't do this at the molecular level, but I don't think anyone can really say why.

It’s not that higher dimensions have “causal effects” on lower dimensions, it’s that if there are higher dimensions then we will need additional “laws of physics” to explain how all dimensions (lower and higher) interact. To my mind these laws are in principle no more “inaccessible” than laws of quantum physics or relativity or cosmology.
I think this has been the approach all along, but as far along as we are in being able to 'calculate' quantum phenomena, we have absolutely no philosophy for what it means, and no way to exactly determine some phenomena such as radioactive decay. It may be such things are impossible to determine because we simply do not occupy and thus do not have tools with which to measure the other dimension. I'm sure you'll think that if we DID then we COULD and everything would be DETERMINISTIC. That's overly simplistic though. It doesn't look at the reality of trying to measure an orthogonal dimension if you don't have tools which can reach that dimension. We can't create ideas or knowledge around things which are not accessible to us.

The problem here is that by looking at the monitor output as external observers, we have destroyed or circumvented the subjectivity (if there is any) within the machine. Subjective experience, by definition, is 1st person, and it cannot (by definition) be displayed on a monitor.
Where did you get this? I've not heard of anyone suggesting this before.
 
  • #19
Doctordick said:
It is quite simple. The presumption is that there is a fundamental law of the universe which requires many, many variables to express. First, it is a "fundamental" law in the sense that it expresses a relationship inherent in the universe which is not a consequence of the collection of other fundamental "laws". And second, as the expression of this relationship requires many, many variables, the existence of the law has no observable consequences until that required collection of variables is under consideration. Paul and others would like to define consciousness to be such a collection, thus introducing a new "fundamental law" to explain the observed behavior.
I don’t see how you get from this to the conclusion that determination does not entail supervenience. By definition, if X determines Y, then Y is supervenient on X. In other words:

Supervenience is the relationship between two sets X and Y (usually sets of properties or propositions), where fixing one set -- the supervenience base -- fixes the other -- the supervening set.

Determinism is the relationship between two states of the world X and Y where fixing one state – the antecedent state – fixes the other – the consequent state.

How can it be that determinism does not entail supervenience?
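Spelled out (a rough formalization in possible-worlds terms, just to make the schema explicit; the notation here is mine for this post, not a quotation from anywhere):

\text{Y supervenes on X:} \quad \forall w_1, w_2 :\ X(w_1) = X(w_2) \;\Rightarrow\; Y(w_1) = Y(w_2)

\text{Determinism:} \quad \forall w_1, w_2 :\ S_t(w_1) = S_t(w_2) \;\Rightarrow\; S_{t'}(w_1) = S_{t'}(w_2) \quad (t' > t)

Reading X as the antecedent state S_t and Y as the consequent state S_{t'}, determinism is simply an instance of the supervenience schema.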

Q_Goest said:
We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain.
OK, I can go with this.

Q_Goest said:
Chalmers also states numerous times that he believes consciousness is strongly emergent.
OK, and I can go with this too, up to a point (but not because I agree with Chalmers’ ideas about consciousness). Any particular conscious state S is a unique system configuration, with no other conscious state perfectly identical to it, and as such S will have many “unique” properties which are determined by the particular configuration of S. The more interesting properties as far as consciousness is concerned are the self-referential ones – the properties of the conscious state S as judged by the consciousness itself. Now since each conscious state is unique it follows that there is indeed in principle no way (as long as we analyse finely enough) that an external observer can deduce the self-referential properties of any particular conscious state S from knowledge of the microstates. This simple fact (ie that the content of consciousness is not epistemically determinable by an external observer) is the one that Chalmers cannot accept, and upon which his argument for a "whole new physics" is based. He's misguided.

Q_Goest said:
Chalmers is also a computationalist. Computationalism holds that the mechanism producing the "high-level phenomenon", as Chalmers puts it, is completely deterministic (any computational mechanism is completely deterministic), yet he is suggesting that the phenomenon of consciousness is something which can't be deduced in any way from examining the operation of the computer's parts.
I think this agrees pretty much with what I have said above.

Q_Goest said:
I interpret Chalmers as saying that strong emergence postulates there being phenomena that can't, even in principle, be determined by the low level facts or the microstates, as Bedau puts it.
The use of the phrase “can't be determined by” in the above is misleading and strictly incorrect, since it possibly implies a reference to lack of ontic determinism. If determinism is true (and we do not know if it is or not), then all phenomena are determined. This applies to my example above of the impossibility of deducing the self-referential properties of any one conscious state S when one is an observer external to that conscious state S. Just because we cannot deduce the self-referential properties, it does not follow that these properties are not (ontically) determined by low-level facts – it simply means that the phenomena are not (epistemically) determinable.

I think it would be correct (therefore better) to use “determinable from” rather than “determined by”, in which case the wording becomes:

“strong emergence postulates there are phenomena that are not, even in principle, determinable from knowledge of the low level facts or the microstates”

(if we include in phenomena “subjective conscious experiences” and we assume that “determinable by” means epistemically determinable by an agent external to the conscious experience in question.)

This reflects the fact that the lack of determinability is an epistemic obstacle rather than an ontic one.

Q_Goest said:
Perhaps the term "determined" is underspecified for a philosophical discussion like this. I mean they can't be figured out or understood, not that they aren't deterministic.
I answered the above before I read this bit. We think alike.

In which case I suggest that instead of using “determined by” (which may be incorrectly interpreted in the strict ontic causal determinism sense), it would be better to use “determinable from” – to ensure that we all understand we are talking of limits to epistemic determinability here as opposed to limits to ontic determinism.

Q_Goest said:
We can have perfect knowledge of all the parts, and the phenomena could still not be understandable, even in principal.
Agreed, if we substitute “completely knowable” for “understandable” – there are senses in which I can claim to understand something without knowing all the details of that thing. (btw – I think the word you want is principle – a principal is something different). Thus we need to make sure we are clear in referring not to determinism but to determinability.

Q_Goest said:
In the case of a computational machine, strongly emergent phenomena can obviously be created by the deterministic actions of some mechanism.
Agreed

Q_Goest said:
The really crazy part of all that is someone wanting to suggest there's something more, an add-on, something that is created which we have no need to suspect exists and no way of measuring.
Is this Chalmers’ position?

Q_Goest said:
Further, it exists without any causal efficacy. This is the computationalist's position. It is a belief, not unlike a religion: we are asked to accept that something along the lines of a strongly emergent phenomenon must be created which has no causal efficacy, can't be understood even in principle, and arises from the actions of numerous, deterministic, knowable parts which we can easily duplicate, simulate, and know everything about--except that we can never understand anything about the subjective experience it creates. If one accepts computationalism, one has little choice but to believe in strong emergence. I see no way around it, as Chalmers evidently has also concluded, and I'd like to know how one can avoid that conclusion.
With some slight changes in the wording, I would agree with all of the above EXCEPT the part about lack of causal efficacy. Is the computationalist necessarily committed to believing that certain phenomena are epiphenomena? I’m not sure that follows.

What I am saying is that I agree certain phenomena, such as subjective conscious experience, can be classed as strongly emergent, by virtue of the fact that the precise details of those phenomena are not determinable by any external agent. But it does not follow from this that the phenomena in question are not causally determined by “low level facts”, and it also does not follow that they are epiphenomena.

Q_Goest said:
I'm suggesting that in order for any kind of strong emergence to make sense, we need to step away from our common perceptions of the "laws" of nature and physics. Here you've suggested we can know them. However, if we can't measure something, I'm suggesting we can't know what makes it work, even in principle. Not from this perspective anyway, the perspective of a conscious human living in a 4-dimensional world.
Everything we think we “know” about the world is based on inferences made from assumptions. We can in principle infer properties of other dimensions even when we have no direct access to those other dimensions. It’s really no different in principle to inferring the structure of the atom when we have no direct (in the normal sense of the word) access to the interior of the atom, or inferring the temperature of the interior of the sun when we have no direct access to the interior of the sun. Just because we cannot measure the interior temperature of the sun directly does not mean that this temperature is an emergent phenomenon.

Q_Goest said:
Thus, the ant would never be able to say, “ahhhh, I see! That’s how it works”. He would say, "ohhh, that’s an emergent phenomenon”. Why? Because he can't measure a dimension he isn't privy to, despite the fact it may affect him in some way.
I disagree. Imagine the ant is living on the surface of a very large sphere (but doesn’t know it), and imagine the ant is intelligent and starts investigating trigonometry. He eventually discovers a "law" which says that the internal angles of a triangle always add up to 180 degrees. As long as his triangles are very small in relation to the sphere, he won’t find any significant discrepancy with this "law". But if he makes a very large triangle, one that is of the same order as the radius of the sphere, he will find some very strange results. If he is a very intelligent ant, he may be able to deduce that the "triangle law" assumes flat Euclidean space, and one possible explanation for his strange results is that he is not in fact living in such a flat space. He could even estimate the radius of the sphere on which he is living from his measurements, even though he is restricted to working in (and experiencing directly) just two dimensions. Having done all this, the ant would indeed say “ahhhhh, That’s how it works!”. Nothing at all to do with strong emergence.
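To make the ant's inference concrete (a back-of-envelope sketch with made-up numbers; the only real content is the spherical-excess relation E = area / R^2):

import math

area = 1.0e6                     # measured area of the ant's large triangle (ant-units squared)
angles_deg = (60.4, 60.3, 60.3)  # measured interior angles, in degrees

excess = math.radians(sum(angles_deg) - 180.0)  # spherical excess E, in radians
radius = math.sqrt(area / excess)               # from E = area / R**2
print(f"inferred radius of the ant's world: {radius:.0f} ant-units")

The ant never leaves the surface, yet ends up with a quantitative model of a dimension she cannot directly experience.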

Q_Goest said:
This gets back to the one example given by Chalmers where he suggests wave function collapse might be an example of downward causation. I'm actually modifying Chalmers a bit and suggesting that this wave function collapse might be the action of another dimension which acts through conscious phenomena. Downward causation in this case is the influence of this other dimension, which has causal efficacy through the strongly emergent phenomenon of consciousness. I don't think this is totally untalked about in the physics community. I might have to look for specific examples, but I'd say this concept is not new - although my use of terminology may be a bit unique here (i.e., the use of the terms "downward causation" and "strong emergence" in conjunction with the more common discussions along these lines, which only point to consciousness possibly having some causal efficacy over wave function collapse - about which there have been many discussions within the physics community).
OK. But I don’t accept that consciousness causes wave function “collapse”, either in our familiar 3 spatial dimensions or via any extra dimensions. How can consciousness be emergent and at the same time be responsible for wave function collapse? You have a chicken-and-egg situation there - for consciousness to emerge in the first place there must be wave function collapse, but there also needs to be consciousness around in order to cause wave function collapse...? Which comes first? It seems to me that one can postulate either that consciousness is emergent, or that consciousness causes wave function collapse, but the two together is a contradiction.

Q_Goest said:
Yes, he's saying there are additional 'natural laws' which we might potentially uncover, but note that these are NOT reductionist type laws like the ones we've already uncovered. I opened the other thread regarding FEA to point out that physical laws at the classical level can all be seen to be reductionist type laws: laws of cause and effect at a local level. That's exactly what strong emergence and downward causation are NOT.
I disagree. Strong emergence (as discussed above) has nothing necessarily to do with “non-reductionist type laws”. I believe we can have strongly emergent phenomena (consciousness is an example we have discussed) in the presence of strict determinism (obeying reductionist laws), but I don’t see any evidence here for downward causation in the sense of “non-reductionist type laws”.

We therefore need to be very careful when lumping strong emergence and downward causation together – because they are very different things.

Q_Goest said:
We can model anything at a classical level assuming only cause and effect or reductionism. We can't do this at the molecular level, but I don't think anyone can really say why.
Agreed – but this may simply be due to lack of epistemic determinability, not necessarily lack of ontic determinism.

Q_Goest said:
I think this has been the approach all along, but as far along as we are in being able to 'calculate' quantum phenomena, we have absolutely no philosophy for what it means, and no way to exactly determine some phenomena such as radioactive decay.
We have plenty of philosophies (which attempt to explain what is going on in QM) – the problem is in deciding which one to choose. The physicist says that we cannot know which one is correct, so just shut up and calculate (SUAC). The philosopher is not happy with that state of affairs, but she cannot find any way to show which one, if any, of the many different interpretations might be the correct one.

Q_Goest said:
It may be such things are impossible to determine because we simply do not occupy and thus do not have tools with which to measure the other dimension. I'm sure you'll think that if we DID then we COULD and everything would be DETERMINISTIC.
Either everything is deterministic or it is not – the truth of determinism is not a function of what we do know or even can know about the world. I think the word you mean to use here is “determinable” – and no, I don’t think that everything would or even could be determinable, even if we could occupy additional dimensions. The issue is not that we are restricted to particular dimensions, the issue is that we are part of the problem we are trying to solve. We cannot “step outside” the system and examine it objectively from the outside.

Max Planck summed it up very well when he said:
Planck said:
Science cannot solve the ultimate mystery of nature. And it is because in the last analysis we ourselves are part of the mystery we are trying to solve.

Q_Goest said:
That's overly simplistic though. It doesn't look at the reality of trying to measure an orthogonal dimension if you don't have tools which can reach that dimension.
I’ve shown exactly how it can be done with the ant example above.

Q_Goest said:
We can't create ideas or knowledge around things which are not accessible to us.
Of course we can (we just don’t know for certain if our ideas are correct or not – but that’s true of almost everything). The interior of the sun is not accessible to us, but we can create a very detailed theory of what is going on in there. The interior of the atom is not accessible to us, but we can create a very detailed theory of the sub-atomic realm. The "past" is not accessible to us, but we can construct detailed histories right up to the Big Bang. The important point here is that one does not need direct access to something in order to model it - one needs only to be able to obtain some information relevant to that thing so that one can construct models. This is just what the ant on the sphere does - she does not have direct access to the 3rd dimension, but she can still construct a mathematical model of the shape of her world in that 3rd dimension. We could do the same for the 4th dimension if our 3D world was curved in the 4th dimension.

The main point I would like to make is that it is NOT simple black and white. It is not the case that we cannot model realms which are not directly accessible to us (even the interior of consciousness). Instead it is the case that we have access to a limited amount of information that allows us to construct certain models describing those realms (the interior of the atom, interior of the sun, history, consciousness, and the ant's 3rd dimension). Those models are not complete, I agree, they are missing some information. But this is not the same as saying categorically that we can't create ideas about those realms.

Q_Goest said:
Where did you get this? I've not heard of anyone suggesting this before.
This follows quite logically from the observations above about the emergence of consciousness. Any particular conscious state S is a unique system configuration, with no other conscious state perfectly identical to it, and as such S will have many “unique” properties which are determined by the particular configuration of S. The more interesting properties as far as consciousness is concerned are the self-referential ones – the properties of the conscious state S as judged by the consciousness itself. Now since each conscious state is unique it follows that there is indeed in principle no way (as long as we analyse finely enough) to deduce the self-referential properties of any one conscious state S when one is an observer external to that conscious state S. Thus we (as external observers) cannot hope to deduce the subjective properties of S by simply displaying some objectively measured information about S on a monitor.

Best Regards
 
Last edited:
  • #20
Computationalism requires Strong Emergence

I believe Chalmers is correct in arguing that consciousness is a strongly emergent phenomenon. He also correctly concludes that if computationalism is to be accepted as the paradigm for consciousness, computationalism must assume strong emergence. Further, it seems as if strong emergence must imply irreducible laws of physics are at work, laws very much unlike the laws we're accustomed to seeing at a classical level. These seem to be laws that govern large assemblages of matter and not the local cause and effect we see at the micro level. It's a rather unsettling view, one I feel shines a rather disheartening light on computationalism.

First, we have to understand weak emergence. Bedau provides an excellent discussion on this, and the quote from his paper in the OP should be sufficient to provide us a definition. In that paper, he provides an example, the game of Life, and explains in detail how this game is weakly emergent. We can know everything about the game of Life if we simulate it. We can see "gliders" take off, we can see other patterns emerge, and every phenomenon that comes out of these patterns is understandable and predictable (in principle) from knowledge of the rules of the game and the actions of the parts. There is nothing "extra", nothing unknowable or not understandable about this game, even though it may be highly unpredictable. We can see how each pixel changes and how an image of a "glider" appears from the action of the microdynamics of the elements, which are completely dependent on the rules of the game. We wouldn't say the game of Life is "conscious" or has subjective experience, nor would we say there is some phenomenon about this game that can't be understood. To do so would be to propose that the game had some EXTRA quality or property which had nothing to do with the microdynamics of the microstates and would also be completely unknowable even in principle. If someone were to suggest the game creates the phenomenon of zortnore, but we can't see this phenomenon simply by looking at the parts, we might suggest this person seek professional help!
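For concreteness, the entire microdynamic D of the game fits in a few lines. Here is a minimal sketch (standard Conway rules on a wrapping grid; the glider below is the usual pattern, nothing taken from Bedau's paper itself):

import numpy as np

def step(grid):
    """One synchronous update of Conway's Game of Life on a wrapping grid."""
    # Count the 8 neighbours of every cell at once by shifting the grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # A cell lives next tick iff it has 3 neighbours, or 2 and is already alive.
    return ((n == 3) | ((n == 2) & (grid == 1))).astype(int)

grid = np.zeros((20, 20), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # a glider
    grid[y, x] = 1
for _ in range(4):  # after 4 steps the glider has moved one cell diagonally
    grid = step(grid)

Every "glider" is generated by, and deducible from, that one local rule - weak emergence in Bedau's sense.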

Chalmers recognizes Bedau's work toward the end of his paper when he writes:
It will help to focus on a few core examples of weak emergence:

(A) The game of Life: high-level patterns and structure emerge from simple low-level rules.

This is key to understanding the problem, because here we see that Life is completely knowable and has no additional features, nor does it create any phenomenon which isn't completely understandable by examining the rules and creating a simulation.

But then Chalmers notes another example of "weak emergence":

(B) Connectionist networks: high-level 'cognitive' behavior emerges from simple interactions between simple threshold logic units.

He doesn't say much about this, but what he's implying, I believe, is fairly straightforward. Any computer network has "simple interactions between simple threshold logic units", which would, for example, be the interactions of switches: logic units that operate only above a certain threshold voltage. The interaction of any large set of switches is determined strictly by the actuation of power to a control wire on the switch, which then operates the switch. Thus, the high-level 'cognitive' behavior which emerges is seen to depend on the interaction of the switches. The microdynamics of any set of switches certainly is knowable, and thus how that set of switches behaves is understandable by examining the microdynamics. Everything we want to know about the seemingly 'cognitive' behavior which emerges from computations, which may or may not be consciously aware, can be understood simply by examining the microdynamics of these "threshold logic units".
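To keep "threshold logic unit" concrete, here is a toy sketch (the weights and threshold are arbitrary choices of mine that happen to realize NAND; this is not code from Chalmers or anyone else):

def unit(inputs, weights, threshold):
    # A threshold logic unit: fire (output 1) iff the weighted input sum
    # reaches the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def nand(a, b):
    # Weights chosen so the unit fires unless both inputs are on.
    return unit((a, b), weights=(-1, -1), threshold=-1)

def xor(a, b):
    # NAND is universal, so higher-level behavior like XOR is just units wired to units.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

assert [xor(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [0, 1, 1, 0]

Everything the assembled circuit does is fixed by these local firings; the 'cognitive' behavior is the switches, en masse.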

Note the strong similarity between Chalmers' examples A and B. The game of Life has pixels, the computer has switches. The pixels operate depending on the local interaction, just as a switch operates from local interactions. Both are reducible.

We don't need to suggest that the computer has anything extra, some special phenomena which we can't understand from observing the interaction of the switches. The behavior of such a computer is specified completely by examining the microdynamics of the system. If we didn't claim there was anything more, we might call such a computer a p-zombie, because although it might behave exactly as a person would, it would have no subjective experience; it would not have that "something more", some phenomenon which can't be deduced, even in principle, by examining the parts. Why can't we understand it? That should be fairly obvious now: because subjective experience requires something more than simply observing this interaction of the parts. We are at a loss to say what subjective experience the computer is having, and we can't invoke such things as "self-referential properties", because to suggest such properties explain anything presumes such properties exist because of the interaction of these switches. It is impossible to understand or deduce in any way the subjective experiences of "red", "pain", or "zortnore" from the interactions of the switches.

Chalmers is a computationalist, and if you've read much of his work, you'll find he has a passion that forces him to find a way around the difficulties and allow computationalism to move forward as the paradigm for consciousness, very much like Dennett. And so, once he recognizes that computationalism is in trouble because it can't explain subjective experience through reductionist type laws, he looks to "strong emergence" to find a way.

Chalmers recognizes the problems such an issue may raise, and tries to find a way around it:
Strong emergence has much more radical consequences than weak emergence. If there are phenomena that are strongly emergent with respect to the domain of physics, then our conception of nature needs to be expanded to accommodate them. That is, if there are phenomena whose existence is not deducible from the facts about the exact distribution of particles and fields throughout space and time (along with the laws of physics), then this suggests that new fundamental laws of nature are needed to explain these phenomena.

Further, Chalmers contrasts the game of life with the COBAL system and notes "all the complexity of the high-level behavior is due to the complex structure that is given to the low-level mechanisms." Throughout the paper, Chalmers refers to some connection between low and high level properties, and this is consistent with others who discuss strong emergence. Here's another quote, "Still, this suggests the possibility of an intermediate but still radical sort of emergence, in which high-level facts and laws are not deducible from low level laws."

Certainly, one has to accept that strong emergence of the type proposed by Chalmers and others cannot be seen in relatively small and simple systems. Strong emergence is characterized by large and very complex systems, and does not and cannot come about in smaller, less complex systems. Chalmers provides ample discussion on this using his COBAL example.

So if these new physical laws can't find any place in simple systems, and only emerge within complex systems, then any strongly emergent phenomena must be characterized by physical laws which govern large, complex systems, not the local interactions of numerous parts. Such laws sound like irreducible laws to me, ones that only apply to large, complex, high-level systems of particles; thus they would not be reducible, and reductionism would have to fail.
 
  • #21
Q_Goest said:
I believe Chalmers is correct in arguing that consciousness is a strongly emergent phenomenon. He also correctly concludes that if computationalism is to be accepted as the paradigm for consciousness, computationalism must assume strong emergence.
I agree with the first sentence, and the second follows from the first.

Q_Goest said:
Further, it seems as if strong emergence must imply irreducible laws of physics are at work, laws very much unlike the laws we're accustomed to seeing at a classical level. These seem to be laws that govern large assemblages of matter and not the local cause and effect we see at the micro level. It's a rather unsettling view, one I feel shines a rather disheartening light on computationalism.
I disagree. Why do you think strong emergence implies irreducible laws of physics? I think it implies only that not all laws are epistemically determinable. If this is what you mean by irreducible then OK I agree (but again we must be careful not to confuse epistemic irreducibility with ontic irreducibility).

Q_Goest said:
First, we have to understand weak emergence. Bedau provides an excellent discussion on this, and the quote from his paper in the OP should be sufficient to provide us a definition. In that paper, he provides an example, the game of Life, and explains in detail how this game is weakly emergent. We can know everything about the game of Life if we simulate it. We can see "gliders" take off, we can see other patterns emerge, and every phenomenon that comes out of these patterns is understandable and predictable (in principle) from knowledge of the rules of the game and the actions of the parts. There is nothing "extra", nothing unknowable or not understandable about this game, even though it may be highly unpredictable. We can see how each pixel changes and how an image of a "glider" appears from the action of the microdynamics of the elements, which are completely dependent on the rules of the game. We wouldn't say the game of Life is "conscious" or has subjective experience, nor would we say there is some phenomenon about this game that can't be understood. To do so would be to propose that the game had some EXTRA quality or property which had nothing to do with the microdynamics of the microstates and would also be completely unknowable even in principle. If someone were to suggest the game creates the phenomenon of zortnore, but we can't see this phenomenon simply by looking at the parts, we might suggest this person seek professional help!
This is true only up to a point. Games of life (GoL) that we can conceive of are relatively simplistic. But imho it is possible in principle to have a GoL which is sufficiently complex that it does give rise to consciousness – as an emergent property within the game. Once this happens then it opens the door also to strong emergence – because there are properties of that GoL (subjective properties as experienced from the perspective of the emergent consciousness within the GoL) which are in principle inaccessible from our perspective “on the outside looking in”.

Q_Goest said:
This is key to understanding the problem, because here we see that Life is completely knowable and has no additional features, nor does it create any phenomenon which isn't completely understandable by examining the rules and creating a simulation.
No. It is only completely knowable as long as there is no agency emergent within the game which can have a subjective perspective (hence observe properties) from within the game rather than from outside the game. Such an agency would have a perspective which is inaccessible to us as external observers. And this is exactly what consciousness is, and is exactly the reason why we say that consciousness is a strongly emergent phenomenon – because it provides a unique perspective which is in principle inaccessible from any other perspective.

Q_Goest said:
Any computer network has "simple interactions between simple threshold logic units", which would, for example, be the interactions of switches: logic units that operate only above a certain threshold voltage. The interaction of any large set of switches is determined strictly by the actuation of power to a control wire on the switch, which then operates the switch. Thus, the high-level 'cognitive' behavior which emerges is seen to depend on the interaction of the switches. The microdynamics of any set of switches certainly is knowable, and thus how that set of switches behaves is understandable by examining the microdynamics. Everything we want to know about the seemingly 'cognitive' behavior which emerges from computations, which may or may not be consciously aware, can be understood simply by examining the microdynamics of these "threshold logic units".
I disagree. Your description assumes that the “network” does not possess perceptual/cognitive abilities. If it does, then the property of “what it feels like” for that network to perceive or experience something is a property which is in principle inaccessible to us; it is a property which cannot be understood by us simply by examining the microdynamics of the components.

Q_Goest said:
We don't need to suggest that the computer has anything extra, some special phenomena which we can't understand from observing the interaction of the switches. The behavior of such a computer is specified completely by examining the microdynamics of the system. If we didn't claim there was anything more, we might call such a computer a p-zombie, because although it might behave exactly as a person would, it would have no subjective experience; it would not have that "something more", some phenomenon which can't be deduced, even in principle, by examining the parts. Why can't we understand it? That should be fairly obvious now: because subjective experience requires something more than simply observing this interaction of the parts. We are at a loss to say what subjective experience the computer is having, and we can't invoke such things as "self-referential properties", because to suggest such properties explain anything presumes such properties exist because of the interaction of these switches. It is impossible to understand or deduce in any way the subjective experiences of "red", "pain", or "zortnore" from the interactions of the switches.
But subjective properties ARE self-referential – by definition! That is precisely the reason why we cannot experience the same properties – because we have a completely different frame of (conscious) reference. I cannot know (exactly) what it is like to be a perceiving computer (any more than I can know what it is like to be a bat) unless I actually BECOME a perceiving computer (or a bat) – but then by definition it wouldn’t be “me” knowing it – it would be the computer (or the bat). You simply “can’t get there from here”.

Objective science assumes that the results of any experiment are independent of the observer. Science tells us that if Q_Goest mixes A and B and gets C, then exactly the same thing should happen if Moving Finger mixes A and B. The whole of science is predicated on this assumption that the result of an experiment is independent of who carries out the experiment. Just imagine how weird scientific experimentation would be if the results of each experiment depended on who carried out the experiment! If that were the case, we might quite naturally insist that the laws of physics are “irreducible” and “emergent”, and (like Chalmers) that we need a whole new physics to understand what is going on. But that would be misguided. We would just need to remember that one of the variables in any experiment is always the experimenter. That’s all. And that is exactly what happens in the case of consciousness. Each “experiment” of consciousness is unique and different to every other “experiment” of consciousness, and there is no way in principle that we can precisely replicate one agent’s conscious experiment within another agent - because the precise make-up of the agent is one of the variables in the experiment. Simple as that. It’s all in the perspective. No new laws needed.

Q_Goest said:
Chalmers is a computationalist, and if you've read much of his work, you'll find he has a passion that forces him to find a way around the difficulties and allow computationalism to move forward as the paradigm for consciousness, very much like Dennett. And so, once he recognizes that computationalism is in trouble because it can't explain subjective experience through reductionist type laws, he looks to "strong emergence" to find a way.
Computationalism isn’t in trouble at all. You just have to recognise that a perception implies a perspective – and there ain’t no way to get the true perspective of a “perceiving computer” from the perspective of a human being. Just like there ain’t no way to get the true perspective of Q-Goest from the perspective of Moving Finger – it’s impossible by definition. None of this is at odds with computationalism.

Q_Goest said:
Strong emergence has much more radical consequences than weak emergence. If there are phenomena that are strongly emergent with respect to the domain of physics, then our conception of nature needs to be expanded to accommodate them. That is, if there are phenomena whose existence is not deducible from the facts about the exact distribution of particles and fields throughout space and time (along with the laws of physics), then this suggests that new fundamental laws of nature are needed to explain these phenomena.
This is Chalmers’ going off the deep end with “gosh, maybe we need a whole new physics!”. Absolute poppycock. We don’t need any new physics. Chalmers’ ideas are strangely reminiscent of the 18th & 19th century ideas of vitalists – that band of pessimists who insisted that there is something special within living beings which can never be explained by the inanimate sciences, that we maybe need “a whole new physics of vitalism” to understand. We can see how quaintly naïve and misguided such ideas are now – and one day we’ll say the same about Chalmers’ ideas.

There is nothing we have discussed here which cannot be explained based on a perspectival account of subjective perception. Remember that “not deducible” simply means “not epistemically accessible”. Just because I have no (epistemic) access to the “inside” of your consciousness (I cannot see the world precisely as you see it) does NOT mean that there are new laws of physics at work, and it does NOT mean that determinism or reductionism (in the ontic sense) has failed. There is no way in principle that Moving Finger can see the world in exactly the same way that Q_Goest sees it, hence Q_Goest’s world is epistemically “not deducible” by Moving Finger. No new laws needed – that’s just Chalmers’ hyperbole because he can’t or won’t see the simple truth.

Q_Goest said:
Further, Chalmers contrasts the game of life with the COBAL system and notes "all the complexity of the high-level behavior is due to the complex structure that is given to the low-level mechanisms." Throughout the paper, Chalmers refers to some connection between low and high level properties, and this is consistent with others who discuss strong emergence. Here's another quote, "Still, this suggests the possibility of an intermediate but still radical sort of emergence, in which high-level facts and laws are not deducible from low level laws."
Where is the evidence for this? I thought the GoL was supposed to be weak emergence – and now you’re referring to intermediate and strong emergence?

Q_Goest said:
Certainly, one has to accept that strong emergence of the type proposed by Chalmers and others cannot be seen in relatively small and simple systems. Strong emergence is characterized by large and very complex systems, and does not and cannot come about in smaller, less complex systems. Chalmers provides ample discussion on this using his COBAL example.
I agree – because the strong emergence exemplified by consciousness requires an agent within the system in which consciousness emerges – which obviously cannot happen in simple systems.

Q_Goest said:
So if these new physical laws can't find any place in simple systems, and only emerge within complex systems, then any strongly emergent phenomena must be characterized by physical laws which govern large, complex systems, not the local interactions of numerous parts. Such laws sound like irreducible laws to me, ones that only apply to large, complex, high-level systems of particles; thus they would not be reducible, and reductionism would have to fail.
What new physical laws? Don’t swallow the Chalmers’ hyperbole hook, line and sinker. There are no new laws, and none are necessary. Everything can be understood and explained based on “it all depends on your perspective”.

Best Regards
 
  • #22
Hi MF,
I disagree. Why do you think strong emergence implies irreducible laws of physics?
Note that this was the statement used to tell you what I was about to explain. I'd suggest reading the entire post before commenting. What you're arguing is along the same lines as what Chalmers is suggesting, but you need to better understand three basic points which lead to his conclusion.

This is true only up to a point. Games of life (GoL) that we can conceive of are relatively simplistic. But imho it is possible in principle to have a GoL which is sufficiently complex that it does give rise to consciousness – as an emergent property within the game.
We are not talking about ANY GoL, we are talking about THE GoL. Your observation regarding emergence in a more complex GoL has already been considered and agreed to by Chalmers. This, in fact, is his point; see his discussion regarding the COBAL example.

No. It is only completely knowable as long as there is no agency emergent within the game which can have a subjective perspective (hence observe properties) from within the game rather than from outside the game.
Again, we are not talking about ANY GoL, we are talking about THE GoL. THE GoL is totally consistent with weak emergence, and this is the first point which needs to be better understood. To reiterate my prior post: "This is key to understanding the problem, because here we see that Life is completely knowable and has no additional features, nor does it create any phenomenon which isn't completely understandable by examining the rules and creating a simulation." That is, we don't need to ever suggest there is something more (ex: the phenomenon of zortnore) which exists as a byproduct of the actions of the game. Note that zortnore is a fictitious phenomenon about which one may make any claims, which are completely unverifiable and unknowable, even in principle.

I disagree. Your description assumes that the “network” does not possess perceptual/cognitive abilities. If it does, then the property of “what it feels like” for that network to perceive or experience something is a property which is in principle inaccessible to us; it is a property which cannot be understood by us simply by examining the microdynamics of the components.
You've misunderstood what Chalmers is saying. He's not saying the network doesn't possess perceptual/cognitive abilities. In fact, he's saying it DOES! Please read that portion more carefully and you'll see the distinction he's making is that the BEHAVIOR is what can be considered weakly emergent, analogous to the behavior of the GoL. The BEHAVIOR of the GoL is the resulting "gliders" and other patterns which emerge. The BEHAVIOR of a computational system is the computation itself, and what is knowable by examining the microdynamics of the components. This is the second key to what Chalmers is getting at, and you need to understand it prior to making the next step.

We need to make a case which suggests something else exists which can explain the phenomena of consciousness within a computer, despite the fact we can understand the BEHAVIOR without resorting to anything else. The BEHAVIOR of the computational device does NOT need anything BUT weak emergence to understand it. The behavior of a computational device is perfectly understandable without resorting to strong emergence. This is the second key to understanding the problem. Behavior is what we can measure and know. We can measure and know exactly what a computer is doing, right down to the individual switches. This is ALL that needs to be understood in order to understand its behavior, regardless of whether we postulate the computer is conscious and has this extra phenomenon of zortnore, or it is not conscious and is only a p-zombie. There is no difference between a conscious computer and a p-zombie computer that we can detect or measure. They are identical, and furthermore, we can know everything about either one, and through weak emergence alone we can understand everything it is doing, just like THE GoL.

So the second issue says that we should, in principle, be able to know everything about an allegedly conscious computer through weak emergence alone. We should not need to resort to anything else, but of course we will, because we are stuck with this fanciful concept of computationalism!

But subjective properties ARE self-referential – by definition!
Your discussion about these alleged properties is a problem for this discussion. You've said:

The more interesting properties as far as consciousness is concerned are the self-referential ones – the properties of the conscious state S as judged by the consciousness itself.
Note this is a perfectly circular argument. You can't claim there are self-referential properties in order to explain the phenomenon of self-awareness! That's circular. We need to examine how knowledge of the microdynamics can be had for the case of an allegedly conscious computer (which it can, and therefore the computer should, by definition, be weakly emergent).

Computationalism isn’t in trouble at all.
I disagree. Understanding the second key to this puts computationalism in a lot of trouble, which is why Chalmers tries to address this issue (the second issue) directly.

This is Chalmers’ going off the deep end with “gosh, maybe we need a whole new physics!”. Absolute poppycock.
Yes, this is a perfect lead into the third issue. We need to explain how a computational device, which can be perfectly understood in terms of behavior and what it is doing, can possibly create some phenomena which we cannot understand or know or measure, in any way, even in principle. This, despite the fact that we don't NEED to. We don't NEED to suggest there is something more, because everything this allegedly conscious machine is doing is perfectly understandable from a weak emergence perspective. A computational machine, conscious or not, performs what it does without the need to resort to strong emergence whatsoever. We don't need strong emergence here, but Chalmers is a computationalist and would like to understand how it could be that such things emerge, so he proposes there are things we don't understand that may need some kind of new physics to explain; thus he resorts to strong emergence.

It seems clear to me from reading your response that you're not understanding what Chalmers is saying, yet you DO seem to agree with much of what he says. Chalmers has found a fine line within the arguments that you're not understanding, though, and if you're to see the problems with computationalism, you'll need to understand his work better. ;)
 
  • #23
MF said:
How can it be that determinism does not entail supervenience?

It is at least implicit in most versions that the two sets of properties exist simultaneously.


OK, I can go with this.

OK, and I can go with this too, up to a point (but not because I agree with Chalmers’ ideas about consciousness). Any particular conscious state S is a unique system configuration, with no other conscious state perfectly identical to it,

What does this actually mean? (1) That conscious states are unique as conscious states? (2) That physical brain states are unique as such? (3) Or that conscious states are differently realized in different individuals? (4) Or that no conscious state is ever exactly replicated across time?


and as such S will have many “unique” properties which are determined by the particular configuration of S. The more interesting properties as far as consciousness is concerned are the self-referential ones – the properties of the conscious state S as judged by the consciousness itself. Now since each conscious state is unique it follows that there is indeed in principle no way (as long as we analyse finely enough) that an external observer can deduce the self-referential properties of any particular conscious state S from knowledge of the microstates.

Meaning what? That arbitrarily small differences in the fine-grained brain-state may lead to arbitrarily large differences in the self-referential conscious state?

I suppose that is logically possible, but issues like the grain problem, and the successes of fMRI, tend to go against it.

This simple fact (ie that the content of consciousness is not epistemically determinable by an external observer) is the one that Chalmers cannot accept, and upon which his argument for a "whole new physics" is based. He's misguided.

The qualiaphile's argument can also be stated in terms
of the in-principle incommunicability of qualia.


With some slight changes in the wording, I would agree with all of the above EXCEPT the part about lack of causal efficacy. Is the computationalist necessarily committed to believing that certain phenomena are epiphenomena? I’m not sure that follows.


The usual mistake is to think that computational accounts are causal accounts. Computers "inherit" their causal efficacy from their physical implementations. If some kind of identity theory is true, then mental states will be as causally efficacious as physical ones.

What I am saying is that I agree certain phenomena, such as subjective conscious experience, can be classed as strongly emergent, by virtue of the fact that the precise details of those phenomena are not determinable by any external agent. But it does not follow from this that the phenomena in question are not causally determined by “low level facts”, and it also does not follow that they are epiphenomena.

As a matter of definition, properties that are entirely determined by low-level properties, whether knowably or not, are not strongly emergent.


I disagree. Strong emergence (as discussed above) has nothing necessarily to do with “non-reductionist type laws”.

Emergence and reduction are opposites as a matter of definition.

I believe we can have strongly emergent phenomena (consciousness is an example we have discussed) in the presence of strict determinism (obeying reductionist laws),

Only if micro-events are causally underdetermined.

but I don’t see any evidence here for downward causation in the sense of “non-reductionist type laws”.

http://users.california.com/~mcmf/causeweb.html


We therefore need to be very careful when lumping strong emergence and downward causation together – because they are very different things.

There are more than two varieties of emergentism. Downward causation
is needed to define the strongest varieties.


We have plenty of philosophies (which attempt to explain what is going on in QM) – the problem is in deciding which one to choose. The physicist says that we cannot know which one is correct, so just shut up and calculate (SUAC). The philosopher is not happy with that state of affairs, but she cannot find any way to show which one, if any, of the many different interpretations might be the correct one.

Either everything is deterministic or it is not – the truth of determinism is not a function of what we do know or even can know about the world.

If you can predict everything, indeterminism is false.
 
  • #24
Q_Goest said:
I believe Chalmers is correct in arguing that consciousness is a strongly emergent phenomenon.

There are more than two varieties of emergentism. The strongest
varieties involve downward causation, which Chalmers rejects.

He also correctly concludes that if computationalism is to be accepted as the paradigm for consciousness, computationalism must assume strong emergence.

Computationalism cannot meaningfully assume downward causation,
if that is what you mean.
Further, it seems as if strong emergence must imply irreducible laws of physics are at work, laws very much unlike the laws we're accustomed to seeing at a classical level.

As Teed Rockwell

http://users.california.com/~mcmf/causeweb.html

would point out, we don't know which level is fundamental, so our "fundamental" laws may in fact be higher-level ones.
 
  • #25
moving finger said:
This is true only up to a point. Games of life (GoL) that we can conceive of are relatively simplistic. But imho it is possible in principle to have a GoL which is sufficiently complex that it does give rise to consciousness – as an emergent property within the game.

In what sense of "emergent"? Would it then have properties beyond physical and computational ones?

Once this happens then it opens the door also to strong emergence – because there are properties of that GoL (subjective properties as experienced from the perspective of the emergent consciousness within the GoL) which properties are in principle inaccessible from our perspective “on the outside looking in”.

Why would they be inaccessible? Your argument that conscious states of humans are inaccessible seems to hinge on their complexity. But whatever goes on in a GoL is comprehensible and predictable in principle, no matter how complex. Is this an in-principle inaccessibility, or an in-practice inaccessibility?

But subjective properties ARE self-referential – by definition! That is precisely the reason why we cannot experience the same properties – because we have a completely different frame of (conscious) reference.

If physicalism is true, "frames of reference" are as third-person comprehensible as anything else.

I cannot know (exactly) what it is like to be a perceiving computer (any more than I can know what it is like to be a bat) unless I actually BECOME a perceiving computer (or a bat)

IOW, there are irreducibly 1st-personal facts and physicalism is false. You seem to be trying to have your cake and eat it.

– but then by definition it wouldn’t be “me” knowing it – it would be the computer (or the bat). You simply “can’t get there from here”.

If physicalism is true, everything is entirely comprehensible, in principle, from a 3rd person POV, and it therefore doesn't matter where you start from.

Each “experiment” of consciousness is unique and different to every other “experiment” of consciousness, and there is no way in principle that we can precisely replicate one agent’s conscious experiment within another agent - because the precise make-up of the agent is one of the variables in the experiment. Simple as that. It’s all in the perspective. No new laws needed.

That doesn't quite follow. A smarter agent could replicate the structure of a dumber one, however unique it is. (Unless there are irreducibly 1st-personal properties, and physicalism is false).

Computationalism isn’t in trouble at all. You just have to recognise that a perception implies a perspective – and there ain’t no way to get the true perspective of a “perceiving computer” from the perspective of a human being.

There wouldn't be if there are irreducibly 1st-personal properties. But computationalism implies that mentality is entirely comprehensible, in principle, from a 3rd-person perspective, since all computer programmes are.

Just like there ain’t no way to get the true perspective of Q-Goest from the perspective of Moving Finger – it’s impossible by definition.

By whose definition? Calculating literal perspectives is just geometry. Physicalism means everything is 3rd personal, including all "frames" and "perspectives".

None of this is at odds with computationalism.

Yes it is, as demonstrated.

There is nothing we have discussed here which cannot be explained based on a perspectival account of subjective perception.

There are no irreducible perspectives under physicalism and computationalism.

Remember that “not deducible” simply means “not epistemically accessible”. Just because I have no (epistemic) access to the “inside” of your consciousness (I cannot see the world precisely as you see it) does NOT mean that there are new laws of physics at work,

If things have "insides" in some irreducible sense, physicalism is false.

and it does NOT mean that determinism or reductionism (in the ontic sense) has failed. There is no way in principle that Moving Finger can see the world in exactly the same way that Q_Goest sees it,

According to whose principle? According to physicalism there is such a way. Just understand Q_Goest from a 3rd-person perspective.

What new physical laws? Don’t swallow the Chalmers’ hyperbole hook, line and sinker. There are no new laws, and none are necessary. Everything can be understood and explained based on “it all depends on your perspective”.

Once you have abandoned the central claim of physicalism,
there is not much point worrying about the laws.
 
  • #26
Q_Goest said:
Again, we are not talking about ANY GoL, we are talking about THE GoL. THE GoL is totally consistent with weak emergence,

Of course, just about everything is!
 
  • #27
Hi Q_Goest

moving finger said:
I disagree. Why do you think strong emergence implies irreducible laws of physics?
Q_Goest said:
Note that this was the statement used to tell you what I was about to explain. I'd suggest reading the entire post before commenting.
Q_Goest, as you know, I did read your entire post (and responded to it). Nowhere in your post can I find any explanation (which stands up to rational scrutiny) as to why you think strong emergence implies irreducible laws of physics (unless you mean only epistemically, and not ontically, irreducible – in which case, as I said in my last post, I agree).

Q_Goest said:
What you're arguing is along the same lines as what Chalmers is suggesting, but you need to better understand three basic points which lead to his conclusion.
And with respect, having read the whole of your post, you need to understand that I am replying to (and arguing with) YOUR comments in YOUR posts – whether that means I agree with Chalmers or not is irrelevant – I am taking issue with YOUR comments.

moving finger said:
This is true only up to a point. Games of life (GoL) that we can conceive of are relatively simplistic. But imho it is possible in principle to have a GoL which is sufficiently complex that it does give rise to consciousness – as an emergent property within the game.
Q_Goest said:
We are not talking about ANY GoL, we are talking about THE GoL. Your observation regarding emergence in a more complex GoL has already been considered and agreed to by Chalmers. This in fact is his point; see his discussion regarding the COBOL example.
Q_Goest, I am responding to your posts in this thread. If you make reference to Chalmers that is up to you, but I’m not going to check everything you claim is said by Chalmers to verify that it is actually said by Chalmers. I trust you. But when YOU make comments (as you do) that

Q_Goest said:
We can know everything about the game of Life if we simulate it.
and
Q_Goest said:
There is nothing "extra", nothing unknowable or not understandable about this game
and
Q_Goest said:
nor would we say there is some phenomena about this game that can't be understood
Then I disagree (with YOU, not necessarily with Chalmers), for exactly the reasons explained in my last post. Making references to Chalmers doesn't change the fact that I disagree with your statements. If you wish to defend your statements with rational and logical argument then please do so, but simply making oblique references to Chalmers does nothing to defend your statements in this thread.

moving finger said:
No. It is only completely knowable as long as there is no agency emergent within the game which can have a subjective perspective (hence observe properties) from within the game rather than from outside the game.
Q_Goest said:
Again, we are not talking about ANY GoL, we are talking about THE GoL. THE GoL is totally consistent with weak emergence, and this is the first point which needs to be better understood.
I do not agree with you. THE GoL is not necessarily totally consistent with weak emergence. You are assuming it is.

Q_Goest said:
To reiterate my prior post: "This is key to understanding the problem, because here we see that Life is completely knowable and has no additional features, nor does it create any phenomena which aren't completely understandable by examining the rules and creating a simulation." That is, we don't need to ever suggest there is something more (ex: the phenomenon of zortnore) which exists as a byproduct of the actions of the game. Note that zortnore is a fictitious phenomenon about which one may make any claims which are completely unverifiable and unknowable, even in principle.
I do not agree. How do you know that all of the properties of THE GoL are completely knowable by an entity external to the game? How do you know that there are not some properties which are internal self-referential properties of the game, which would not be (epistemically) accessible to an external observer?

moving finger said:
I disagree. Your description assumes that the “network” does not possess perceptual/cognitive abilities. If it does, then the property of “what it feels like” for that network to perceive or experience something is a property which is in principle inaccessible to us; it is a property which cannot be understood by us simply by examining the microdynamics of the components.
Q_Goest said:
You've misunderstood what Chalmers is saying.
Q_Goest, you keep referring to Chalmers, but I am responding to your posts in this thread. If you wish to defend Chalmers' position then please do so and I will gladly argue against it, but it is no defence to simply say “you’ve misunderstood what Chalmers is saying” – if you have not explained your/his argument clearly enough such that I might understand what you are saying then (with deepest respect), that’s your problem, not mine.

Q_Goest said:
He's not saying the network doesn't possess perceptual/cognitive abilities. In fact, he's saying it DOES! Please read that portion more carefully and you'll see the distinction he's making is that the BEHAVIOR is what can be considered weakly emergent, analogous to the behavior of the GoL. The BEHAVIOR of the GoL is the resulting "gliders" and other patterns which emerge. The BEHAVIOR of a computational system is the computation itself, and what is knowable by examining the microdynamics of the components. This is the second key to what Chalmers is getting at, and you need to understand it prior to making the next step.
The thing YOU need to understand, Q_Goest, is that YOUR comment in YOUR post was :
Q_Goest said:
Everything we want to know about the seemingly 'cognitive' behavior which emerges from computations which may or may not be consciously aware can be understood simply by examining the microdynamics of these "threshold logic units".
And it is YOUR comment that I am disagreeing with! We cannot necessarily “know” everything about the system simply by examining the microdynamics, because external knowledge of the microdynamics tells us nothing about self-referential properties of the system! This is the whole point of the argument – the point that YOU are making in YOUR posts – and you referring to Chalmers here is irrelevant, because I am arguing with YOUR statements here!
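
As an aside, it may help to pin down what a "threshold logic unit" actually is, since the term is carrying weight here. Below is a minimal sketch in Python (the weights and thresholds are illustrative choices, not taken from Chalmers or from anyone's post); the microdynamics of such a unit are exhaustively fixed by a handful of numbers:

Code:
# A McCulloch-Pitts style threshold logic unit: fires (outputs 1) iff the
# weighted sum of its inputs reaches the threshold.
def tlu(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With suitable (illustrative) weights, single units compute logic functions.
def AND(a, b):
    return tlu((a, b), (1, 1), 2)

def OR(a, b):
    return tlu((a, b), (1, 1), 1)

# Every input-output fact about these units is deducible from the microdynamics.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b))

Whether examining such microdynamics exhausts what there is to know about a network built from them is, of course, exactly what is in dispute here.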

Q_Goest said:
We need to make a case which suggests something else exists which can explain the phenomena of consciousness within a computer, despite the fact we can understand the BEHAVIOR without resorting to anything else.
Behaviourism is dead! Understanding the behaviour of a system does not mean that you understand everything about the system! This is the 3rd person science fallacy – that we can completely understand a system “from the 3rd person perspective”, that there is nothing at all about the system which is “not knowable” from that perspective. But it’s false! You CANNOT, in principle, know exactly what it is like to be me by examining me from the outside. This does not mean (a la Chalmers) that we need new laws of physics, it only means that we must understand that many properties depend upon perspective, and self-referential properties require a self-referential perspective which NO external agent can perfectly simulate.

Q_Goest said:
The BEHAVIOR of the computational device does NOT need anything BUT weak emergence to understand it. The behavior of a computational device is perfectly understandable without resorting to strong emergence. This is the second key to understanding the problem. Behavior is what we can measure and know. We can measure and know exactly what a computer is doing, right down to the individual switches. This is ALL that needs to be understood in order to understand its behavior, regardless of whether or not we postulate the computer is conscious and has this extra phenomenon of zortnore; or it is not conscious and is only a p-zombie. There is no difference between a conscious computer and a p-zombie computer that we can detect or measure.
I disagree. The zombie hypothesis assumes consciousness is epiphenomenal, and I do not agree that this assumption is true. If you want to go into discussion of the incoherent nature of the zombie argument I am quite happy to get into that.

Q_Goest said:
They are identical, and furthermore, we can know everything about either one and through weak emergence alone, we can understand everything it is doing, just like THE GoL.
You are assuming that a conscious agent and a zombie agent are in principle indistinguishable to an external agent, but you have not shown that they are. I believe your assumption is false.

Q_Goest said:
So the second issue says that we should, in principal, be able to know everything about an allegedly conscious computer through weak emergence alone. We should not need to resort to anything else, but of course, we will because we are stuck with this fanciful concept of computationalism!
(check spelling of principle). That we should be able to know everything about an allegedly conscious computer is a premise you have assumed to be true, but I dispute the truth of this premise.

Q_Goest said:
Your discussion of these alleged properties is a problem here. You've said:
moving finger said:
The more interesting properties as far as consciousness is concerned are the self-referential ones – the properties of the conscious state S as judged by the consciousness itself.
Q_Goest said:
Note this is a perfectly circular argument. You can't claim there are self-referential properties in order to explain the phenomenon of self-awareness! That's circular.
It’s no more circular than the proposition “all bachelors are unmarried”. Self-awareness entails self-reference – by definition! It’s an analytical truth that self-awareness entails self-reference (in the same way, saying that “all bachelors are unmarried” is an analytical truth by definition)! And it is precisely the properties of self-reference which make any particular self-referential state unique, and impossible in principle to be modeled by an external agent. Agent X cannot have access to the self-referential properties of agent Y, and since self-awareness entails self-reference, agent X cannot be self-aware in precisely the same way that agent Y is self-aware. It’s transparently obvious.

Another way to look at this is in the way of perspective. Agent Y has a unique perspective on agent Y, a perspective which no other agent can achieve, because agent Y knows agent Y “from the inside” – the self-referential perspective of agent Y upon Y cannot be modeled precisely by any other agent. The only self-referential perspective that agent X can have is of agent X, agent X can never perfectly model a self-referential perspective of agent Y upon Y.

Q_Goest said:
We need to examine how knowing the microdynamics can be had for the case of an allegedly conscious computer (which we can, and therefore it should be by definition, weakly emergent).
I don’t understand your expression:
Q_Goest said:
We need to examine how knowing the microdynamics can be had for the case of an allegedly conscious computer

moving finger said:
Computationalism isn’t in trouble at all.
Q_Goest said:
I disagree. Understanding the second key to this puts computationalism in a lot of trouble, which is why Chalmers tries to address this issue (the second issue) directly.
Your argument is based on the premise that consciousness is epiphenomenal, which I believe is false.

moving finger said:
This is Chalmers’ going off the deep end with “gosh, maybe we need a whole new physics!”. Absolute poppycock.
Q_Goest said:
Yes, this is a perfect lead into the third issue. We need to explain how a computational device, which can be perfectly understood in terms of behavior and what it is doing, can possibly create some phenomena which we cannot understand or know or measure, in any way, even in principle.
Two premises seem to be fundamental to your argument, and I disagree with them both:
(1) that we can perfectly understand all devices (including knowing exactly the self-referential properties of those devices)
(2) that consciousness is epiphenomenal

Q_Goest said:
This, despite the fact that we don't NEED to. We don't NEED to suggest there is something more because everything this allegedly conscious machine is doing is perfectly understandable from a weak emergence perspective.
Not necessarily. You are assuming consciousness is epiphenomenal.

Q_Goest said:
A computational machine, conscious or not, performs what it does without the need to resort to strong emergence whatsoever.
No, it does not. Consciousness entails self-reference, and self-reference is a strongly emergent property – it cannot be modeled completely by simply knowing the microdynamics as an external observer.

Q_Goest said:
We don't need strong emergence here,
Yes we do.

Q_Goest said:
but Chalmers is a computationalist and would like to understand how it could be that such things emerge and thus he proposes there are things we don't understand that may need some kind of new physics to explain, and so he resorts to strong emergence.
Strong emergence is NOT the same as saying we need new laws of physics.

Q_Goest said:
It seems clear to me from reading your response that you're not understanding what Chalmers is saying, yet you DO seem to agree with much of what he says. Chalmers has found a fine line within the arguments that you're not understanding, though, and if you're to see the problems with computationalism, you'll need to understand his work better. ;)
It seems clear to me from reading your responses that you think I am arguing against Chalmers much of the time, when in fact I am arguing against YOUR statements. If you would respond in this thread as Q_Goest, instead of responding apparently as an apologist for Chalmers, we might make some progress. :wink:

Q_Goest, I suggest, if we continue, that we argue the actual issues, instead of continually making references to what Chalmers says or does not say, because what I am arguing against here are YOUR statements in this thread.

Best Regards
 
  • #28
Hi Tournesol,
Q_Goest said: I believe Chalmers is correct in arguing that consciousness is a strongly emergent phenomenon.
Tournesol said: There are more than two varieties of emergentism. The strongest varieties involve downward causation, which Chalmers rejects.
To be clear, I think Chalmers correctly concludes that consciousness must be strongly emergent given computationalism is true. I think his conclusion is valid from that point of view.

What Chalmers believes is laid out in his paper. He says, "My own view is that, relative to the physical domain, there is just one sort of strongly emergent quality, namely, consciousness. I do not know whether there is any strong downward causation, but it seems to me that if there is any strong downward causation, quantum mechanics is the most likely locus for it." He would say this, I believe, because computationalism precludes strong downward causation.

Personally, I would disagree that ANY computation or series of computations creates strongly emergent phenomena. However, I'm not interested in discussing my own views, so please ignore that statement; I only mean to point out that I'm not debating my own views – this thread is intended to discuss the definition of, the implications of, and the validity of weak and strong emergence, with Bedau's and Chalmers' work as a foundation. I'd welcome any other work presented which may have bearing on this, such as what you've proposed, especially a discussion on "innocent emergence" if you'd like to tackle that definition.

Tournesol said: Why would they be inaccessible? Your argument that conscious states of humans are inaccessible seems to hinge on their complexity. But whatever goes on in a GoL is comprehensible and predictable in principle, no matter how complex. Is this an in-principle inaccessibility, or an in-practice inaccessibility?
Yes, this is exactly it, isn't it? I think you would agree then that the actions of any computational system SHOULD be weakly emergent only, is that correct?

Tournesol said: But computationalism implies that mentality is entirely comprehensible, in principle, from a 3rd-person perspective, since all computer programmes are.
Do you agree that Chalmers argues against this? He's suggesting there is something more, a phenomenon which isn't comprehensible, even in principle.

Ignoring your own beliefs: if you assume computationalism, do you see any alternative other than to suggest there is such a thing as strong emergence?

Tournesol said: If things have "insides" in some irreducible sense, physicalism is false.
Please elaborate; references would be great. I don't necessarily disagree, but I can't see how this could be proven.

Q_Goest said: Again, we are not talking about ANY GoL, we are talking about THE GoL. THE GoL is totally consistent with weak emergence,
Tournesol said: Of course, just about everything is!
Are you suggesting everything is weakly emergent? Please elaborate.
 
  • #29
Q_Goest said:
Please elaborate; references would be great. I don't necessarily disagree, but I can't see how this could be proven.


Physicalism says everything has only physical
properties.

Mathematics is the language of physics.

Therefore, physicalism says everything has only
mathematical properties.

Mathematical properties are entirely objective and third-person.

Therefore, physicalism says everything has only properties that are entirely objective and third-person.

Are you suggesting everything is weakly emergent? Please elaborate.

Weak emergence just means that there is some high-level behaviour that is
interesting to an observer, as in Life.
 
  • #30
Tournesol said:
Physicalism says everything has only physical
properties.

Mathematics is the language of physics.

Therefore, physicalism says everything has only
mathematical properties.

Mathematical properties are entirely objective and third-person.
I disagree. Mathematics does not necessarily have any particular perspective, third person or first person.
Tournesol said:
Therefore, physicalism says everything has only properties that are entirely objective and third-person.
Since I disagree with your premise that mathematics necessarily assumes a third-person perspective, it follows that I disagree with your conclusion.

Best Regards
 
  • #31
Hi Moving Finger.
My apologies for not responding to your last post. I honestly couldn't think of how best to respond & then went on vacation for a week. Had a very nice time, but back now. I'll see if I can respond to a few things you wrote.

Regarding Tournesol's last post, I honestly think he's hit the proverbial nail on the head, but Tournesol's response is confusing from certain perspectives because it's not well laid out, and because he's not provided proof along with the statements made.

Physicalism says everything has only physical
properties.
Could be better written: Physicalism says everything has measurable properties.

Mathematics is the language of physics.
Could be better written: Anything measurable is calculable using mathematics.

Therefore, physicalism says everything has only
mathematical properties.
Could be better written: Therefore, physicalism says that all properties are calculable.

Mathematical properties are entirely objective and third-person.
Could be better written: Calculable properties are entirely objective and third person.

Therefore, physicalism says everything has only properties that are entirely objective and third-person.
Could be better written: Therefore, physicalism says everything has only properties that are entirely calculable to any third-person.

The point is that subjective experience cannot be calculated in the classical sense. I inserted "classical" to indicate that there is no specific calculation that can be done to determine the magnitude or amplitude, or any other feature or property, of anything which might be remotely defined as a "subjective experience".

We can't say, for example, that a "subjective experience" (ie: seeing the color 'red') has some type of measurable property, analogous to the properties an electron has, or the properties a liquid has when compared to other phases of matter, or the hardness an object has, or the emissivity a reflective surface has. I think that's all the point Tournesol is making, but I think it's a very insightful one.

I won't offer proof for this series of statements, but I think that proof exists.

How do you know that all of the properties of THE GoL are completely knowable by an entity external to the game? How do you know that there are not some properties which are internal self-referential properties of the game, which would not be (epistemically) accessible to an external observer?

What properties regarding the game of Life are we trying to measure? We see the 'gliders' and other phenomena. These are all perfectly definable. A glider is defined as: <insert definition here>. But what you've pointed out is the question we need to answer regarding consciousness. What properties regarding the game of Life can be had by the game but which are not measurable or calculable? If we say the game of Life has some properties which are measurable, such as 'gliders', but there are also some properties which are not measurable, such as 'subjective experience', then we make a distinction between these two phenomena such that one is measurable and calculable, but the other is neither measurable nor calculable.
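
To make "measurable and calculable" concrete, here is a minimal sketch of Life's microdynamics in Python (the coordinates and helper names are illustrative choices, not taken from Bedau's or Chalmers' papers). The update rule is purely local, yet the macro-level fact that the glider recurs one cell diagonally every four steps can be verified from the microstates alone, by simulation:

Code:
# Minimal Conway's Game of Life on an unbounded grid.
# A state is simply the set of live-cell coordinates.
def step(live):
    # Count live neighbours for every cell adjacent to a live cell.
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The glider: a macro-level pattern wholly constituted out of microstates.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)
# After 4 purely local updates the glider reappears, shifted by (1, 1).
assert state == {(x + 1, y + 1) for (x, y) in glider}

The open question raised above is whether this kind of simulation exhausts the properties of the game, or whether an agency emergent within the game would have self-referential properties that no such external simulation captures.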

Punchline: The Jehovah's Witnesses have knocked on my door for decades in an attempt to convince me that I should convert to their religion because . . . . . <insert unmeasurable, uncalculable reasoning here>.

Why should anyone accept an unknowable, uncalculable theory? If you read through Bedau's paper, you realize he's talking about features or properties of a phenomenon which are both measurable and calculable. What he insinuates is that no other properties or phenomena of any type exist! That's beautiful! It exactly follows what Tournesol outlined.

If you read through Chalmers' paper, you find that computationalism forces us to believe in something which is not measurable, nor calculable without creating unknown laws which may or may not depend on local interactions and may or may not depend on causal relationships. It forces us to believe that something exists which we can't measure, and therefore we can't predict. If we can't predict it, it is certainly not calculable. If it's not calculable, it is not a physicalist explanation. Thus, computationalism forces us to believe in "strong emergence".
 
  • #32
Moving Finger said:
This is true only up to a point. Games of life (GoL) that we can conceive of are relatively simplistic. But imho it is possible in principle to have a GoL which is sufficiently complex that it does give rise to consciousness – as an emergent property within the game.
Tournesol said:
In what sense of "emergent"? Would it then have properties beyond physical and computational ones?
I do not believe emergent properties are necessarily non-physical or non-computational. They may simply be perspectival.

Moving Finger said:
Once this happens then it opens the door also to strong emergence – because there are properties of that GoL (subjective properties as experienced from the perspective of the emergent consciousness within the GoL) which properties are in principle inaccessible from our perspective “on the outside looking in”.
Tournesol said:
Why would they be inaccessible? Your argument that conscious states of humans are inaccessible seems to hinge on their complexity. But whatever goes on in a GoL is comprehensible and predictable in principle, no matter how complex. Is this an in-principle inaccessibility, or an in-practice inaccessibility?
No, it does not hinge simply on complexity. It hinges on perspective. Every “observation” assumes a perspective. 3rd person science assumes we can ignore (or compensate for) effects of perspective – but this assumption is an approximation and is not necessarily true under all conditions. The predictability of the GoL is predictability of its properties based on an external perspective (from outside the GoL) – there is no way, using data from an external perspective, to accurately predict the properties based on a perspective from within the GoL. It is an in-principle inaccessibility.

Moving Finger said:
But subjective properties ARE self-referential – by definition! That is precisely the reason why we cannot experience the same properties – because we have a completely different frame of (conscious) reference.
Tournesol said:
If physicalism is true, "frames of reference" are as third-person comprehensible as anything else.
Subjective properties are bound up with (convolved with) the frame of reference. 3rd person science assumes properties can be measured independently of the frame of reference, hence 3rd person science cannot be applied (in principle) to the explanation of subjective properties.

Moving Finger said:
I cannot know (exactly) what it is like to be a perceiving computer (any more than I can know what it is like to be a bat) unless I actually BECOME a perceiving computer (or a bat)
Tournesol said:
IOW, there are irreducibly 1st-personal facts and physicalism is false. You seem to be trying to have your cake and eat it.
Not at all – you seem to be using a different definition of physicalism to me. I do not define physicalism to exclude 1st person subjective properties. Physicalism is the thesis that everything supervenes on the physical – it is NOT the thesis that all properties are explainable from a 3rd person perspective.

Moving Finger said:
– but then by definition it wouldn’t be “me” knowing it – it would be the computer (or the bat). You simply “can’t get there from here”.
Tournesol said:
If physicalism is true, everything is entirely comprehensible, in principle, from a 3rd person POV, and it therefore doesn't matter where you start from.
Again, you seem to be using a very strange and restricted definition of physicalism which entails that everything must be comprehensible from a 3rd person. Could you provide a link to where you get this definition from?

Moving Finger said:
Each “experiment” of consciousness is unique and different to every other “experiment” of consciousness, and there is no way in principle that we can precisely replicate one agent’s conscious experiment within another agent - because the precise make-up of the agent is one of the variables in the experiment. Simple as that. It’s all in the perspective. No new laws needed.
Tournesol said:
That doesn't quite follow. A smarter agent could replicate the structure of a dumber one, however unique it is. (Unless there are irreducibly 1st-personal properties, and physicalism is false).
It’s not simply about structure – it’s also about perspective – that’s the point you are missing. Two identical agents (from a structural point of view) can have different perspectives because they occupy different positions and orientations in space. If you want to perfectly replicate an agent’s PoV, you must replicate its perspective as well as its structure.

Moving Finger said:
Computationalism isn’t in trouble at all. You just have to recognise that a perception implies a perspective – and there ain’t no way to get the true perspective of a “perceiving computer” from the perspective of a human being.
Tournesol said:
There wouldn't be if there are irreducibly 1st-personal properties. But computationalism implies that mentality is entirely comprehensible, in principle, from a 3rd-person perspective, since all computer programmes are.
No, computationalism does not imply such a thing (again unless you are using a very strange definition of computationalism). Computationalism is simply the thesis that cognition is a form of computation – it does not necessarily entail a 3rd person perspective comprehension of cognition.

Moving Finger said:
Just like there ain’t no way to get the true perspective of Q-Goest from the perspective of Moving Finger – it’s impossible by definition.
Tournesol said:
By whose definition? Calculating literal perspectives is just geometry. Physicalism means everything is 3rd personal, including all "frames" and "perspectives".
Again, you seem to be using a strange definition of physicalism.

Moving Finger said:
None of this is at odds with computationalism.
Tournesol said:
Yes it is, as demonstrated.
No it’s not, as shown above. Your definitions of computationalism and physicalism seem strange. Could you perhaps explain what definitions you are using?

Moving Finger said:
There is nothing we have discussed here which cannot be explained based on a perspectival account of subjective perception.
Tournesol said:
There are no irreducible perspectives under physicalism and computationalism.
Where do you get this from?

Moving Finger said:
Remember that “not deducible” simply means “not epistemically accessible”. Just because I have no (epistemic) access to the “inside” of your consciousness (I cannot see the world precisely as you see it) does NOT mean that there are new laws of physics at work,
Tournesol said:
If things have "insides" in some irreducible sense, physicalism is false.
Not at all. Again, I’ll need to see your definition of physicalism, because I suspect it is different to mine. The thesis of physicalism simply says that everything supervenes on the physical – it does not say that things do not have “insides”, and it does not say that everything is comprehensible from a 3rd person perspective.

Moving Finger said:
and it does NOT mean that determinism or reductionism (in the ontic sense) has failed. There is no way in principle that Moving Finger can see the world in exactly the same way that Q_Goest sees it,
Tournesol said:
According to whose principle? According to physicalism there is such a way. Just understand Q_Goest from a 3rd-person perspective.
Once again, physicalism does not entail that everything is comprehensible from a 3rd person perspective.

Moving Finger said:
What new physical laws? Don’t swallow the Chalmers’ hyperbole hook, line and sinker. There are no new laws, and none are necessary. Everything can be understood and explained based on “it all depends on your perspective”.
Tournesol said:
Once you have abandoned the central claim of physicalism, there is not much point worrying about the laws.
The only central claim to physicalism is the thesis that everything supervenes on the physical. A “3rd person perspective account of all phenomena” is certainly not a central claim of physicalism.

Best Regards
 
  • #33
Hi Q_Goest

Q_Goest said:
Hi Moving Finger.
My apologies for not responding to your last post. I honestly couldn't think of how best to respond & then went on vacation for a week. Had a very nice time, but back now. I'll see if I can respond to a few things you wrote.
Glad you had a nice time – I also took 2 weeks off and am now catching up.

Q_Goest said:
Regarding Tournesol's last post, I honestly think he's hit the proverbial nail on the head, but Tournesol's response is confusing from certain perspectives because it's not well laid out, and because he's not provided proof along with the statements made.
The proverbial nail seems very rusty and bent to me, as you will see from my responses above :wink:

Q_Goest said:
Physicalism says everything has measurable properties.
I don’t agree that it does – physicalism simply says that everything supervenes on the physical. Period. It does NOT say that everything is measurable. But even if I accept your statement, we must ask “Measurable by whom?” What you measure often depends on your perspective – thus the properties measured may not be independent of the measurer.

Q_Goest said:
Therefore, physicalism says that all properties are calculable.
Again, calculable by whom? If measured properties depend on perspective, then so do calculated properties.

Q_Goest said:
Calculable properties are entirely objective and third person.
Absolutely not. Since properties are perspectival (depend on the frame of reference) it follows that the calculated properties are also perspectival.

Q_Goest said:
Therefore, physicalism says everything has only properties that are entirely calculable to any third-person.
Physicalism says only that everything supervenes on the physical – it is silent on the question of whether all properties are entirely calculable to any third-person.

Q_Goest said:
The point is that subjective experience cannot be calculated in the classical sense.
Subjective experience cannot be “calculated” using 3rd person science – I agree. But physicalism does not entail that all properties are calculable from a 3rd person perspective.

Q_Goest said:
I inserted "classical" to indicate that there is no specific calculation that can be done to determine the magnitude or amplitude, or any other feature or property, of anything which might be remotely defined as a "subjective experience".
The inability to calculate the magnitude of a subjective experience using 3rd person science does not imply that subjective experience is incompatible with physicalism.

Q_Goest said:
We can't say, for example, that a "subjective experience" (ie: seeing the color 'red') has some type of measurable property, analogous to the properties an electron has, or the properties a liquid has when compared to other phases of matter, or the hardness an object has, or the emissivity a reflective surface has. I think that's all the point Tournesol is making, but I think it's a very insightful one.
It’s a misdirected point. All this shows is that subjective experience cannot be completely understood (comprehended) from a purely 3rd person perspective – and I agree with this – but this is not incompatible with physicalism.

Q_Goest said:
What properties regarding the game of Life are we trying to measure? We see the 'gliders' and other phenomena. These are all perfectly definable. A glider is defined as: <insert definition here>. But what you've pointed out is the question we need to answer regarding consciousness. What properties regarding the game of Life can be had by the game but which are not measurable or calculable? If we say the game of Life has some properties which are measurable, such as 'gliders', but there are also some properties which are not measurable, such as 'subjective experience', then we make a distinction between these two phenomena such that one is measurable and calculable, but the other is neither measurable nor calculable.
Depending on perspective. There may be “internal properties” of the GoL which are accessible only to a consciousness within the GoL. Such properties would not necessarily be accessible to an agent observing the GoL from the outside.

Q_Goest said:
Why should anyone accept an unknowable, uncalculable theory?
A "theory that some things are unknowable" is NOT the same as an "unknowable theory". Gödel's theorem says that any sufficiently complex formal system cannot be both complete (ie knowable) and consistent; and the HUP says that not everything is knowable. These are accepted principles.
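
(For reference, the HUP appealed to here is the standard uncertainty relation, \Delta x \, \Delta p \ge \hbar/2: conjugate quantities such as position and momentum cannot both be known to arbitrary precision.)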

It makes perfect sense that an observation is convolved with (depends on) perspective. And one can perfectly replicate an observation only if one can also perfectly replicate that perspective. Completely logical and rational. The theory that knowledge depends on perspective is not an unknowable theory - but it IS a theory that not everything is knowable from one single perspective (which is not the same thing).

Q_Goest said:
If we can't predict it, it is certainly not calculable. If it's not calculable, it is not a physicalist explanation.
Incorrect. Physicalism is the thesis that everything supervenes on the physical – it is NOT the thesis that everything is calculable. If this were correct, then an indeterministic world would be non-physical by definition. And even some deterministic systems can be unpredictable – depending on perspective – but it does not follow that they are therefore non-physical.

Best Regards
 
  • #34
MF said:
Tournesol said:
Mathematical properties are entirely objective and third-person.

I disagree. Mathematics does not necessarily have any particular perspective, third person or first person.

It isn't subjective. Whether you call that third-personal or impersonal is a matter of taste. (Of course it does contain *literal* perspective, as a branch of geometry. Presumably your usage of "perspective" is metaphorical here).

Therefore, physicalism says everything has only properties that are entirely objective and third-person.

Since I disagree with your premise that mathematics necessarily assumes a third-person perspective, it follows that I disagree with your conclusion.

The distinction you have drawn doesn't make a difference.
 
  • #35
QG said:
Physicalism says everything has only physical
properties.
Could be better written: Physicalism says everything has measurable properties.

No, that wouldn't be better. Not everything is directly measurable in physics, but everything has a mathematical representation.
 
