Is Consciousness an Emergent Property of a Master Algorithm?

In summary, the posters discuss changes in the forum since their last visit and describe an emergent property related to consciousness that may explain why it is not perfectly reducible. They note that this property has been recognized by scientists and philosophers, and argue against the concept of "subjective experience" on the grounds that its definition is circular and carries no real meaning. They also mention their own theories of consciousness and how those theories satisfy certain conditions, but do not explain the concept of "subjective experience".
  • #71
Let me touch on the relevance of this particular definition of zombie. Isn't the conceivability argument about the idea that consciousness does not follow from the laws of the universe as we understand them? Is this the whole point? The whole idea to me seems to be claiming that consciousness is not explained or accounted for by a purely physical explanation. At least no one has been able to do it yet. So let's say we can create (or conceive of) a being in which none of the laws of nature are broken. Is it conscious? We do not know, because we do not understand how the laws of nature can lead to such a thing. This seems like the whole point to me. Correct me if I have misunderstood.

So why is it so relevant that this being must exhibit the exact same behaviour that I do? So what if he doesn't see the hard problem, because he doesn't see anything that cannot be explained via the laws of nature? We cannot know for certain whether this being is conscious (he could just be lying), and we have no reason to believe it is by simply looking at its physical makeup, which is still the point. What am I missing?
 
  • #72
Fliption said:
Let me touch on the relevance of this particular definition of zombie. Isn't the conceivability argument about the idea that consciousness does not follow from the laws of the universe as we understand them? Is this the whole point? The whole idea to me seems to be claiming that consciousness is not explained or accounted for by a purely physical explanation. At least no one has been able to do it yet. So let's say we can create (or conceive of) a being in which none of the laws of nature are broken. Is it conscious? We do not know, because we do not understand how the laws of nature can lead to such a thing. This seems like the whole point to me. Correct me if I have misunderstood.

You are referring to the problem of other minds vis-à-vis the problem of the metaphysical possibility (conceivability) of zombies. You are correct in pointing out that these two problems are intimately tied together. The zombie argument (as used by Chalmers) just goes a little further in making some metaphysical claims about the relationship between consciousness and physics. But yes, even if we suppose that the idea of a zombie is logically incoherent (metaphysically impossible), we are still left with the familiar core problems: the existence and nature of P-consciousness, asymmetry of access to P-conscious states, and so on.

So why is it so relevant that this being must exhibit the exact same behaviour that I do?

Zombies are used in this way to illustrate the seeming dissociation between reality as science/physics describes it and reality as it presents itself in P-consciousness. If there is a systematic difference in the behavior of zombies and of humans, then this systematic difference must be explicable in terms of physics (since 3rd person behavior is presumably explicable entirely in terms of physics). As you are stipulating that this difference must be due to lack of P-consciousness, it must follow then that P-consciousness is explicable entirely in terms of physics. In this case P-consciousness would literally be those physical processes missing from zombies such that they behave as if they have no P-consciousness.

So what if he doesn't see the hard problem, because he doesn't see anything that cannot be explained via the laws of nature?

My Chalmerian zombie twin must believe in the hard problem just as much as I do. To frame it again: If he doesn't, then there is a difference in his 3rd person behavior which is explicable entirely in terms of physics. So if we say that this difference is due to his lack of P-consciousness, and that this difference is explicable in terms of physics, it follows that P-consciousness exists entirely in terms of extrinsic physics, contradicting the ontological intuition behind the hard problem.

We cannot know for certain whether this being is conscious (he could just be lying), and we have no reason to believe it is by simply looking at its physical makeup, which is still the point. What am I missing?

You are correct; the central problems remain as real as ever.

For instance, let us suppose that zombies (in Chalmers' strong sense) are logically impossible, and therefore that P-consciousness exists entirely in virtue of physical laws. If this is the case, there is no ontological gap between physical reality and P-consciousness: they are the same thing. However, in this scenario, we are still left with massive epistemological gaps between the two: the problem of other minds, asymmetry of access, etc. Furthermore, we are left with a further mystifying problem: why should an epistemological gap exist if there is no ontological gap?
 
  • #73
Maybe because the physics explanation of reality is incomplete?
 
  • #74
hypnagogue said:
For instance, let us suppose that zombies (in Chalmers' strong sense) are logically impossible, and therefore that P-consciousness exists entirely in virtue of physical laws. If this is the case, there is no ontological gap between physical reality and P-consciousness: they are the same thing.

Can we consider that possibility for a moment, without falling into the materialist trap? I'm a monist and not a materialist, and even though I find my position difficult to explain, I see a lot of people share it.

However, in this scenario, we are still left with massive epistemological gaps between the two: the problem of other minds, asymmetry of access, etc. Furthermore, we are left with a further mystifying problem: why should an epistemological gap exist if there is no ontological gap?

All those problems have a simple explanation that's not mystifying at all. Our misunderstanding of language constrains our ability to understand things, because most of what we know we learn through language, yet we know very little about language itself. Our situation is not unlike that of a man who travels to a foreign country, hires an incompetent interpreter, and finds himself having trouble communicating with everyone. Until he realizes his interpreter is the source of the problem, he will be led to think the locals don't make any sense. Our languages often stand between us and reality, and they are not good at interpreting facts.

From that view, the source of your epistemological gap is the fact that any statement about anything must always include three distinct elements. In English those are the subject, the object, and the verb. In math, it's two quantities and an operation, or two sides of an equation and the equal sign. Whenever you look at anything from the point of view of language, you will always see two distinct entities and a relationship between them. Very often the two entities are exactly the same, and the relationship is just a fictitious linguistic device.
 
  • #75
If the epistemological gap is only linguistic, then it should be possible in principle for me to literally see your 'red' and see to what extent it is similar to my 'red.' Are you proposing the only reason I can't do this is because of some linguistic confusion? It seems to run much deeper to me.
 
  • #76
We still don't know if it's really impossible to look from another person's subjective perspective. We cannot exclude that some future technology could be developed to wire two brains together (some animals can do it; ants, for instance). I wonder what the two people would experience then. I suppose that they would keep individual consciousness if the dualists are right, and would "meld" into one consciousness if the materialists are right.
 
  • #77
hypnagogue said:
If the epistemological gap is only linguistic, then it should be possible in principle for me to literally see your 'red' and see to what extent it is similar to my 'red.'

All you are saying above is that you can't know everything. And the idea that there's anything wrong with that is an erroneous notion that's purely linguistic in nature.

Are you proposing the only reason I can't do this is because of some linguistic confusion?

No, the reason you can't know what's on my mind is because you are not omniscient. The linguistic confusion is involved in the fact that you think your limited, imperfect knowledge is an aspect of reality.
 
  • #78
confutatis said:
All you are saying above is that you can't know everything. And the idea that there's anything wrong with that is an erroneous notion that's purely linguistic in nature.

There's more to it than that. If P-consciousness exists entirely in virtue of physical phenomena, it should admit itself to physical analysis. We should be able to know which systems are P-conscious, and in exactly what way they are P-conscious. But it appears as if we cannot-- it appears as if it is impossible in principle to do this with any degree of certainty. If it is a physical phenomenon, why should it be opaque to physical analysis even in principle?

I'm not saying we should be able to know everything. I am saying that in principle, we should be able to objectively observe all phenomena that are rightly called physical-- there should be nothing asymmetric or hidden about consciousness from a 3rd person view. But there is, and there are strong arguments that it cannot be otherwise, even in principle.
 
  • #79
Fliption said:
I think you have misunderstood. I don't have an issue with understanding the easy problem. I just found it amusing that Mentat (who claims to not understand what the hard problem is all about) used the term "easy problem" as if he understood the distinction. Which he admittedly doesn't. When he labels a set of activities as "the easy problem", he can't be sure he is correct because he doesn't understand the hard problem.


Thanks for clearing that up for me. Can you direct me to a link where Mentat made this mistake? I must have skipped over it; if it is in this thread, I surely missed it, since I only read selected posts.

Thanks, by the way.
 
  • #80
Thanks for this response to my question.


hypnagogue said:
If there is a systematic difference in the behavior of zombies and of humans, then this systematic difference must be explicable in terms of physics (since 3rd person behavior is presumably explicable entirely in terms of physics).

I understand what you're saying. But why is this the case? It seems as if there is an assumption that something non-physical cannot influence the behavior of something physical. Why is this assumption being made? I don't understand why it's necessary to make this assumption, because it is obviously not true in this case. If it were true, then it would prove that consciousness is purely physical. Otherwise we wouldn't be talking about this issue right now. Surely this conversation is influenced by the fact that we have consciousness and can't explain why, and not because God is pulling strings?


For instance, let us suppose that zombies (in Chalmers' strong sense) are logically impossible, and therefore that P-consciousness exists entirely in virtue of physical laws. If this is the case, there is no ontological gap between physical reality and P-consciousness: they are the same thing. However, in this scenario, we are still left with massive epistemological gaps between the two: the problem of other minds, asymmetry of access, etc. Furthermore, we are left with a further mystifying problem: why should an epistemological gap exist if there is no ontological gap?

Exactly. I just thought that this was the main point of the zombie thought exercise to begin with. So I didn't see the definition clarification of zombie as being very relevant to the solution of the hard problem, like some people seem to be saying.
 
  • #81
Jeebus said:
Thanks for clearing that up for me. Can you direct me to a link where Mentat made this mistake? I must have skipped over it; if it is in this thread, I surely missed it, since I only read selected posts.

Thanks, by the way.


Well, I don't know if I'd call it a mistake. I just thought it was interesting. His quote is on page 3 of this thread, and here is the paragraph:

I hate to pick at words (though, as you well know, I think it is necessary that the words be correct, so as to avoid the possibility of confusion), but I too see a difference between "measuring" a particular wavelength of light and experiencing the color. What I don't see is the difference between being stimulated by a particular wavelength of light, which you then process in terms of previous stimulations and remember, and "experiencing" a certain color. I don't see what's left to explain, and those things that I mention are all part of the "easy problem".

So he is listing out all the things he thinks the easy problem encompasses, while he admits to not understanding the hard problem. Contrary to what he has said here, I would argue that some of the things he listed are indeed part of the hard problem and not the easy problem. The word "experience" is the key. He just assumes that all the physical brain activity equals experience. He just glossed right over the hard problem. I realize that he thinks it's all easy problems, but he said that these were easy problems according to Chalmers, and that doesn't seem true at all.
 
  • #82
Fliption said:
I understand what you're saying. But why is this the case? It seems as if there is an assumption that something non-physical cannot influence the behavior of something physical. Why is this assumption being made?

Let me introduce a new term here to make discussion a little easier: a C-zombie is a zombie in Chalmers' sense, i.e. it is a creature physically identical to a human and existing in a metaphysically possible world physically identical to our own, such that its A-consciousness is identical to that of a human but it has no P-consciousness.

You postulated that no C-zombie should be able to behave as if it appreciates the hard problem, due to its lack of P-consciousness. If this is the case, then P-consciousness must be necessary for the existence of A-conscious behaviors indicating P-conscious beliefs (or "A as if P" for short). But, there is no problem in principle for physics to completely explain A-conscious properties of any kind. Therefore, if A-conscious properties indicating beliefs in P-consciousness must be caused by P-consciousness, and if such A-conscious properties are entirely in the domain of physics, then P-consciousness must be entirely in the domain of physics as well. There is no dissociation here between P-conscious properties and A-conscious properties, and so they wind up becoming the same thing: whenever there is A as if P, on this view, it must follow that there is P, and from this it looks as if physics' ability to explain all A implies that it can explain P.

(Otherwise, we would have to explain why certain physical phenomena-- those embodied by A as if P-- cannot occur without some non-physical component, even though there is every reason to believe that they should be able to occur quite naturally underneath the wing of purely physical laws. A far more natural and less ad hoc interpretation under this condition of necessity is to simply assume that P is A and nothing more, that there is no difference between the two. I don't think this is the view you want to take.)

We don't run into this problem if we suppose that P-consciousness is sufficient, but not necessary, to produce A-conscious properties indicating belief in P-conscious properties. If this is the case, then we can have A as if P but not P. In this scenario, physics does not automatically ensnare all the phenomena involved. Despite its ability to exhaustively explain A, there is something more about P that eludes the grasp of physics. So there is then a dissociation between the two that itself needs explanation.

If we say that P is sufficient but not necessary to produce A as if P, then that means A as if P can be produced via several mechanisms. One mechanism might be the kind of 'dead,' robotic, P-less production of A that you have alluded to before; we can imagine that a computer emulating a human brain's functional properties might be such an instance, where A is duplicated but there is no P. Another mechanism for generating A as if P would be that instantiated in P-conscious human brains, where the window could be open for the kind of interactionist dualism you refer to in your post (although this route is not a necessary one to take for a proponent of the hard problem).

Exactly. I just thought that this was the main point of the zombie thought exercise to begin with. So I didn't see the definition clarification of zombie as being very relevant to the solution of the hard problem, like some people seem to be saying.

The clarification just brings things into sharper focus. It's not surprising that opponents of the hard problem have found more problems with this interpretation of zombies than with yours, since your interpretation of zombies (with the necessity of P for A as if P) actually turns out to be closer to their views of strict and complete identity between mind and brain, as I hope I showed successfully above.
 
  • #83
hypnagogue said:
It's not surprising that opponents of the hard problem have found more problems with this interpretation of zombies than with yours, since your interpretation of zombies (with the necessity of P for A as if P) actually turns out to be closer to their views of strict and complete identity between mind and brain

Just for the record, I want to clarify that not everyone who opposes the hard problem does so because they hold a view of strict and complete identity between mind and brain. I do not hold such a view, but I still oppose the hard problem.

What Chalmers is trying to sell is nothing but good old Cartesian dualism. He certainly deserves the merit of finding a way of expressing Descartes' ideas in a more modern/scientific framework, but the central issue is the same. The "hard problem" is just a modern replacement for the cogito. And that means, with all due respect to the parties involved, that Chalmers and his followers are lagging some 350 years behind when it comes to philosophy. Cartesian dualism is not a tenable philosophical position; that has been shown by people far more qualified than I am, so I won't dwell on it.

That said, dualism, in some form or another, is part of anyone's worldview. Even die-hard materialists such as Dennett and his followers do not really believe in their theories when it comes to an understanding of themselves; their claims to deny the supremacy of a first-person worldview are betrayed by the language they use to describe their own world. Perhaps the only difference between the Cartesian dualist and the materialist monist is their attitude towards what they can't understand: the former accepts it, the latter rejects it. That's all there is to the debate as far as I can tell; the bottom line is neither side really understands why things are the way they appear to be.

However, there is an alternative. It's not well explored because it is somewhat new, at least compared to the two other currents of thought, but my study of the subject has revealed that it is at least about a century old. There is no clear label for the philosophy yet; the best name I've seen for it is "dual-aspect monism". It is a form of monism that successfully incorporates dualism as an attribute of perception rather than an attribute of reality. Central to the idea is an understanding of the role knowledge plays in perception, which also requires an understanding of the role language plays in knowledge. The idea is far from simple, but to those who understand it, it makes far more sense than the other two competing views.

I believe anyone who understands dual-aspect monism will reject both Chalmers' and Dennett's ideas, while still acknowledging that both positions have some truth to them. That is my position, but I realize it sounds paradoxical to those who are not familiar with it. Fliption has been kind enough to point that out, even though I can only see his criticisms as a failure to see past his current philosophical framework.

As a side note, according to dual-aspect monism the identity between mind and brain can be explained by asserting that, while it's true that the brain contains the mind, it's also true that the mind contains the brain, and that both mind and brain are equally real. It's their mutual containment relationship which makes it possible for both to exist, but it's not correct to say, as the materialists do, that the brain must evolve before the mind appears. Due to their mutual containment, they must necessarily evolve together, as the absence of one would imply the absence of the other.
 
  • #84
confutatis said:
What Chalmers is trying to sell is nothing but good old Cartesian dualism. He certainly deserves the merit of finding a way of expressing Descartes' ideas in a more modern/scientific framework, but the central issue is the same. The "hard problem" is just a modern replacement for the cogito. And that means, with all due respect to the parties involved, that Chalmers and his followers are lagging some 350 years behind when it comes to philosophy. Cartesian dualism is not a tenable philosophical position; that has been shown by people far more qualified than I am, so I won't dwell on it.

You are misreading Chalmers. Descartes was an interactionist substance dualist, and Chalmers is committed neither to interactionism nor to a 'mind substance.' Chalmers leaves the door open for epiphenomenalism, and actually prefers monism over dualism.

As I see things, the best options for a nonreductionist are type-D dualism, type-E dualism, or type-F monism: that is, interactionism, epiphenomenalism, or panprotopsychism. If we acknowledge the epistemic gap between the physical and the phenomenal, and we rule out primitive identities and strong necessities, then we are led to a disjunction of these three views. Each of the views has at least some promise, and none have clear fatal flaws. For my part, I give some credence to each of them. I think that in some ways the type-F view is the most appealing, but this sense is largely grounded in aesthetic considerations whose force is unclear.

- Chalmers, http://jamaica.u.arizona.edu/~chalmers/papers/nature.html

Note also that there is no clear distinction between 'dual aspect monism' and 'aspect dualism.' Both pick out the same general concept of different ontological aspects or properties ultimately belonging to the same entity. And in fact, in formulating a tentative theory of consciousness in his paper http://jamaica.u.arizona.edu/~chalmers/papers/facing.html , Chalmers embraces precisely such an aspect dichotomy rather than one of substance:

This leads to a natural hypothesis: that information (or at least some information) has two basic aspects, a physical aspect and a phenomenal aspect. This has the status of a basic principle that might underlie and explain the emergence of experience from the physical. Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing.

So Chalmers is most certainly not a redux of Descartes, and in fact is probably best described as an aspect dualist, or dual aspect monist if you prefer. Forgive me if I am using 'dual aspect monist' in a different manner from you, but if I am, I would like to know what distinguishes dual aspect monism from aspect dualism. In any case, it is clear that aspect dualism can be easily recast as at least some kind of monism, and this is the position that Chalmers seems to prefer.

The type-F monism that Chalmers describes may also be useful in illuminating our recent discussion of zombies. Again from "Consciousness and its Place in Nature":

A type-F monist may have one of a number of attitudes to the zombie argument against materialism. Some type-F monists may hold that a complete physical description must be expanded to include an intrinsic description, and may consequently deny that zombies are conceivable. (We only think we are conceiving of a physically identical system because we overlook intrinsic properties.) Others could maintain that existing physical concepts refer via dispositions to those intrinsic properties that ground the dispositions. If so, these concepts have different primary and secondary intensions, and a type-F monist could correspondingly accept conceivability but deny possibility: we misdescribe the conceived world as physically identical to ours, when in fact it is just structurally identical.[*] Finally, a type-F monist might hold that physical concepts refer to dispositional properties, so that zombies are both conceivable and possible, and the intrinsic properties are not physical properties. The differences between these three attitudes seem to be ultimately terminological rather than substantive. (emphasis mine)

I myself also find this type-F monism the most attractive solution to the problem of consciousness, and as I maintain the metaphysical possibility of zombies, I would fall under the third category Chalmers describes above. That is, I maintain that physical properties refer to dispositional (extrinsic) properties only, and therefore there could exist a world with the same physical (extrinsic) properties as our world but different intrinsic properties. On the other hand, you may find yourself siding with one of the first two categories, thus accounting for your rejection of the possibility of zombies (although it should be pointed out that rejecting the possibility of zombies does not entail rejecting the hard problem).
 
  • #85
hypnagogue said:
Therefore, if A-conscious properties indicating beliefs in P-consciousness must be caused by P-consciousness, and if such A-conscious properties are entirely in the domain of physics, then P-consciousness must be entirely in the domain of physics as well.

I decided to leave this for a day because I felt there was the potential that I wasn't seeing the forest for the trees. As usual, things look different when I come back. On a technical note, I'm not sure I understand why the domain of physics should cover everything that interacts with the physical. If this were really true, then I would reasonably conclude that consciousness is physical, because the fact that we are talking about this strongly suggests an interaction. But I will accept it for now, because I think I see more clearly the intent of the definition, and perhaps I have been too picky.

I do understand what you mean when you say that a zombie can say things like "That object is red" and even "I believe in the hard problem". I understand that there is a certain A-consciousness state that relates to every single behavior that can be exhibited by a person with P-consciousness. I understand why this is an important point to make in the definition of zombie. But while I agree that the A-consciousness state that allows a zombie to say it believes in the hard problem is possible, I do not believe that such a state would ever occur if we assume the zombie is calculating in a causal, logical way. This was the point I was making. Obviously, I agree that the state is possible in principle. I just don't believe the zombie would ever causally arrive at such a state. Now that I think about it, I'm not sure my point is all that different from saying sufficient but not necessary.

The clarification just brings things into sharper focus. It's not surprising that opponents of the hard problem have found more problems with this interpretation of zombies than with yours, since your interpretation of zombies (with the necessity of P for A as if P) actually turns out to be closer to their views of strict and complete identity between mind and brain, as I hope I showed successfully above.

Even though I do not believe that a zombie would ever believe in the hard problem, Chalmers' point stands, because the A-consciousness state that allows me to say the hard problem exists can be mimicked in a zombie in principle. But I still struggle a bit with the issue from above about the domain of physics encompassing all that interacts with the physical. I still feel the most a scientist could ever get is to say "the belief in the hard problem is equivalent to the differences in these two A-consciousness states." To then conclude that a belief in the hard problem is equivalent to P-consciousness itself doesn't seem like very good logic.
 
  • #86
hypnagogue said:
You are misreading Chalmers. Descartes was an interactionist substance dualist, and Chalmers is committed neither to interactionism nor to a 'mind substance.' Chalmers leaves the door open for epiphenomenalism, and actually prefers monism over dualism.

I agree I may have oversimplified things a bit. The point I was trying to make is that Chalmers' view is similar to Descartes' in the sense that it raises problems that are unsolvable in principle. Or, as Chalmers calls it, "hard".

Note also that there is no clear distinction between 'dual aspect monism' and 'aspect dualism.' Both pick out the same general concept of different ontological aspects or properties ultimately belonging to the same entity.

I disagree. In the philosophy I'm calling dual-aspect monism language plays an extremely important role. Whatever it is that Chalmers has in mind, I do not see language being given enough emphasis to qualify his view as anything resembling the view I'm talking about.

And in fact, in formulating a tentative theory of consciousness in his paper, Chalmers embraces precisely such an aspect dichotomy rather than one of substance

But, again, he does not refer to language as playing any fundamental role.

Forgive me if I am using 'dual aspect monist' in a different manner from you, but if I am, I would like to know what distinguishes dual aspect monism from aspect dualism.

I'm quite positive you're using 'dual aspect monism' in a different manner, but there really isn't a standard vocabulary to talk about what I have in mind. It's just something I thought I came up with by myself, and later realized a lot of other people came up with very similar ideas.

I think the key difference from aspect dualism is that aspect dualism refers to reality made of some "substance" which takes different "aspects" depending on... depending on what? I'm not sure what in the philosophy gives rise to the dichotomy in perception. Dual-aspect monism, as I'm defining it anyway, makes it clear that the dichotomy is just an illusion caused by our misunderstanding of the nature of our knowledge.

Another key difference, I suppose, is that in dual-aspect monism there is no hard problem. The supposed inability to explain subjective experience in terms of objective knowledge is a misperception - objective knowledge itself is the explanation of subjective experience, because the world is perfectly isomorphic to the mind that observes it. It's just that our language tends to conceal that isomorphism. The reason that happens is because we tend to assign meaning to words, rather than to their relationships with other words. Just like you think the meaning of the word 'red' is this, whereas the real meaning of the word 'red' is defined by its relationship with all other words in the language.

you may find yourself siding with one of the first two categories, thus accounting for your rejection of the possibility of zombies (although it should be pointed out that rejecting the possibility of zombies does not entail rejecting the hard problem).

Actually, I reject the conceivability of zombies, and that does entail rejecting the hard problem, as I'm sure you'd agree (with the entailment, not the rejection).
 
  • #87
Fliption--

It's a difficult and subtle point, but I believe it stands. I think I can state it a little more clearly now, and it may be helpful to do so even though you seemed to have relaxed your conceptual requirement for the necessity of P for A as if P.

Our problem from earlier in this thread arises from the tension among the following statements:

1. P is necessary for A as if P
2. P is non-physical
3. all A is physical
4. all physical entities/states can be described entirely by the laws of physics

(By "physical" I mean extrinsic/relational/dispositional properties only.)

From premises 1-3, we conclude that a non-physical entity is necessary for the existence of certain physical entities. But this contradicts premise 4, which says essentially that no physical entity requires a non-physical cause. So either we must abandon premise 4, or we must abandon one of 1, 2, or 3.
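The clash can be laid out schematically (this shorthand is my own, not hypnagogue's: Phys is the set of physical entities/states, and Nec(x, y) reads "x is necessary for y"):

```latex
\begin{align*}
&\text{(1)}\quad \mathrm{Nec}\bigl(P,\ A_{\text{as if }P}\bigr)\\
&\text{(2)}\quad P \notin \mathrm{Phys}\\
&\text{(3)}\quad A_{\text{as if }P} \in \mathrm{Phys}\\
&\text{(4)}\quad \forall x \in \mathrm{Phys}:\ \text{the laws of physics fully describe } x\\
&\text{(1)--(3)}\ \Rightarrow\ \exists\, x \in \mathrm{Phys} \text{ that requires the non-physical } P,\\
&\text{which contradicts (4), so at least one of (1)--(4) must go.}
\end{align*}
```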

To abandon premise 4, we would have to show that, for instance, my disposition to say things such as "The sky looks so blue today" is inexplicable, in principle, by physics. But this seems like an impossible task. Science can straightforwardly tell a causal story about light wavelengths striking my retina, being transduced into neural signals, and causing a cascade of neural events in my brain terminating in a set of motoric signals that move my mouth/tongue/throat/etc such that I utter "The sky looks so blue today." So abandoning premise 4 is off limits, and we must abandon one of the other premises.

Premise 3 is safe, since A-consciousness is defined in such a way that it is a purely physical phenomenon. That leaves 1 and 2. If we refuse to reject 1 (as you seemed reluctant to do previously), then we must reject premise 2. But rejecting 2 essentially makes us materialists and leaves us with all the familiar problems, so it becomes clear that we should reject premise 1.

But while I agree that the A-consciousness state that allows a zombie to say and believe in the hard problem is possible, I do not believe that such a state would ever occur if we assume the zombie is calculating in a causal, logical way. This was the point I was making. Obviously, I agree that the state is possible in principle. I just don't believe the zombie would ever causally arrive in such a state. Now that I think about it, I'm not sure my point is all that different from saying sufficient but not necessary.

Possibility in principle is all that is needed, since the possibility in principle for A as if P but not P implies that P is not necessary for A as if P.

There are a number of ways we could imagine this possibility in principle to be realized. If P is epiphenomenal and plays no causal role, then we can easily imagine a world with identical physical laws to our own which followed a course of history identical to our own. In this world, you and I are having the same discussion as we are in our own world, but we in fact do not have P. This is possible since all the causal agents in our world are duplicated in this particular zombie world, leading to the same events.

On the other hand, if P does play some causal role, then perhaps we can imagine a world with a surrogate causal agent for P, such that it replicates P's causal role without having P's phenomenal properties.

But I still struggle a bit with the issue from above about the domain of physics encompassing all that interacts with the physical. I still feel the most a scientist could ever get is to say "the belief in the hard problem is equivalent to the differences in these two A-consciousness states." To then conclude that a belief in P-Consciousness is equivalent to P-Consciousness itself doesn't seem like very good logic.

We indeed do not have to conclude that P is physical based on what you have presented here. But in order to do this, we must suppose that P is not necessary for A as if P, as discussed above.
 
  • #88
hypnagogue said:
Fliption--
On the other hand, if P does play some causal role, then perhaps we can imagine a world with a surrogate causal agent for P, such that it replicates P's causal role without having P's phenomenal properties.

I do understand what you're saying, but I still found myself cringing every once in a while until I read this one paragraph above. It has put what you're saying into perspective, and I now think I fully understand your point and agree with what you're saying. I do personally believe beyond doubt that there is a causal relationship here, so I continued to struggle, but this point about a surrogate helped me to see exactly where you're coming from.

I think that while I agree that in principle a zombie can believe in the hard problem whether P-consciousness is causal or not, I still think these events are unlikely to happen in practice, which is why we will probably continue to be tempted to call people like Mentat a zombie. :smile:

Thanks for the clarification.
 
  • #89
confutatis said:
I agree I may have oversimplified things a bit. The point I was trying to make is that Chalmers' view is similar to Descartes' in the sense that it raises problems that are unsolvable in principle. Or, as Chalmers calls it, "hard".

Chalmers does not hold that the hard problem is unsolvable even in principle (although some philosophers do, such as Colin McGinn with his concept of cognitive closure). If Chalmers thought the hard problem were literally unsolvable, I imagine he wouldn't bother trying to solve it.

Besides, materialist viewpoints seem to raise problems that are just as hard. If my visual percepts literally are just an illusion, how could they possibly have the illusory characteristics that they have in virtue of a purely materialist ontology?

I think the key difference from aspect dualism is that aspect dualism refers to reality made of some "substance" which takes different "aspects" depending on... depending on what?

In Chalmers' interpretation, the dual aspects arise as a result of the difference between intrinsic properties and extrinsic properties. As he puts it:

This view holds the promise of integrating phenomenal and physical properties very tightly in the natural world. Here, nature consists of entities with intrinsic (proto)phenomenal qualities standing in causal relations within a spacetime manifold. Physics as we know it emerges from the relations between these entities, whereas consciousness as we know it emerges from their intrinsic nature.

Another key difference, I suppose, is that in dual-aspect monism there is no hard problem. The supposed inability to explain subjective experience in terms of objective knowledge is a misperception - objective knowledge itself is the explanation of subjective experience, because the world is perfectly isomorphic to the mind that observes it. It's just that our language tends to conceal that isomorphism.

Thus far I don't see how your position is any different from Dennett's materialist eliminativism.

The reason that happens is because we tend to assign meaning to words, rather than to their relationships with other words. Just like you think the meaning of the word 'red' is this, whereas the real meaning of the word 'red' is defined by its relationship with all other words in the language.

To this I say, nonsense. Suppose I devise my own new language. My language has only one word, "unga." "Unga" refers precisely to my visual experience as of this. There are no other words to which unga may refer, and yet it has a clear referent.

Besides, if all words are defined by their relationship to other words, then it is impossible for a word to refer to reality (except that part of reality corresponding to these words). If this is the case, then words should have syntax but no semantics. I should be able to study the syntax of a Chinese dictionary and some Chinese texts and eventually come to as complete an understanding of Chinese as a citizen of China, without ever speaking to a Chinese speaker or seeing externally grounded diagrams such as 'table --> (picture of a table)'. By the same token, linguists should not have needed the Rosetta Stone to decode hieroglyphics. But obviously this cannot be the case; words acquire meaning in virtue of their relationship to the world. They must be grounded in something external to the linguistic system itself. In the case of "red" (as well as "unga"), the word is grounded in (refers to) my visual experience of this color.

Actually, I reject the conceivability of zombies, and that does entail rejecting the hard problem, as I'm sure you'd agree (with the entailment, not the rejection).

It depends on the type of zombies you're talking about. Arguably one could reject the conceivability of Chalmers' zombies without rejecting the conceivability of Block's Chinese Gym zombie.
 
  • #90
hypnagogue said:
Chalmers does not hold that the hard problem is unsolvable even in principle (although some philosophers do, such as Colin McGinn with his concept of cognitive closure).

He does hold that the hard problem is unsolvable within a materialistic context.

Besides, materialist viewpoints seem to raise problems that are just as hard.

I don't want to argue this point because I would seem to be arguing for materialism, which I'm not. I just want to point out that materialism does not, in principle, pose any unsolvable problem. To find unsolvable problems you have to transcend the materialist perspective.

Thus far I don't see how your position is any different from Dennett's materialist eliminativism.

It is different on a very fundamental point: from my perspective, "matter causes mind" is just as true as "mind causes matter". You seem to be overlooking the importance of the second assertion.

To this I say, nonsense. Suppose I devise my own new language. My language has only one word, "unga." "Unga" refers precisely to my visual experience as of this. There are no other words to which unga may refer, and yet it has a clear referent.

Your language doesn't allow you to make any true statements about 'unga'. It's not the kind of language I'm talking about.

Besides, if all words are defined by their relationship to other words, then it is impossible for a word to refer to reality

To this I say, nonsense :smile:

It's perfectly possible for words to refer to reality based on their relationships to other words alone, as long as those relationships reflect asymmetries in reality. When reality exhibits symmetries, words cannot refer to it, which is why we have all those inverted spectrum scenarios. For instance, all you know about 'right' is that it is 'not left', but you have no way to know if my right is the same as your right; for all you know, it could be your left. But it doesn't stop there, it goes into higher levels. For instance, all you know about 'top and bottom' is that it is 'neither right nor left', but again you have no way to know if what I experience as 'top and bottom' is what you experience as 'right and left'.

If you take that to the highest level possible, of the language as a whole, then you clearly see that language is far less connected to reality than you currently dream of. I have a very simple argument for this: if language really reflected reality, then all semantically correct statements would correspond to facts about reality. As you surely know, that is far from being the case.

If this is the case, then words should have syntax but no semantics. I should be able to study the syntax of a Chinese dictionary and some Chinese texts and eventually come to as complete an understanding of Chinese as a citizen of China

Dictionaries do not define a language; they don't expose enough word relationships. In order to learn Chinese, you need to be exposed to an awful lot of it, certainly far more than just a dictionary. But it is not true that you can't learn Chinese by studying the language alone. How do you suppose those geniuses at the army crack enemy codes?
 
  • #91
confutatis said:
I don't want to argue this point because I would seem to be arguing for materialism, which I'm not. I just want to point out that materialism does not, in principle, pose any unsolvable problem. To find unsolvable problems you have to transcend the materialist perspective.

I would agree, in a way. Materialism, when applied to its own domain, poses no unsolvable problems. But P-consciousness does not appear to be in the domain of materialism, and it appears as if materialism is not suited to solving the problem of P-consciousness. Even if we choose to label P-consciousness an illusion, it is still paradoxical how it could even have the illusory properties that it does if materialism is true.

It is different on a very fundamental point: from my perspective, "matter causes mind" is just as true as "mind causes matter". You seem to be overlooking the importance of the second assertion.

I don't think you've gone into enough detail on this point.

Your language doesn't allow you to make any true statements about 'unga'. It's not the kind of language I'm talking about.

What if 'unga' means 'I see this color' (or if you prefer, 'there is this color')? Then clearly it can have a truth value, despite it being the only word in my language.

Here is what you said initially:

The reason that happens is because we tend to assign meaning to words, rather than to their relationships with other words. Just like you think the meaning of the word 'red' is this, whereas the real meaning of the word 'red' is defined by its relationship with all other words in the language.

What of a child who learns his first word? His father points to his mother and says "momma," and eventually the child learns to refer to his mother as "momma" himself. The child knows no other words, so there are no other words for his "momma" to achieve meaning from, and yet clearly the word "momma" now has meaning for the child. How can this be if the meaning of the word "momma" is strictly contingent upon other words?

Another scenario: before a hypothesized experimental result is determined empirically, what determines the truth value of the hypothesis? Does it not yet have a truth value? When does it attain a truth value, when the experimenters observe that it has been verified (or falsified), or when the experimenters think internally/speak/write about the empirical results?

If you take that to the highest level possible, of the language as a whole, then you clearly see that language is far less connected to reality than you currently dream of.

I'm not necessarily making claims about the connections between language and reality. What I am making claims about is the connection between language and perceptual experience.

Suppose there is a 5 year old child, A, who has seen and can perceptually distinguish between cats and dogs, but suppose that his limited vocabulary only allows him to make the crudest of linguistic distinctions regarding what makes a dog a dog, such that these distinctions alone are not sufficient to tell dogs and cats apart. To this end we might imagine that A would say "a dog is a furry animal with 4 legs and a tail, a snout, two eyes, a nose," etc.-- a description that agrees perfectly with any description of a cat. So A can perceptually discriminate between cats and dogs, even if he cannot say precisely what it is about dogs that makes them different from cats.

Now suppose that there is another child, B, with the same vocabulary set as A, except for words referring to furry, four legged animals (although he knows what furry, four, legged, and animal mean). Not only does B have no words for furry, four legged animals, he has never seen one. Suppose that B learns what dogs and cats are only in virtue of reading a simple, linguistic description of what they are-- perhaps A has written him a letter telling him about dogs, which matches precisely the description of cats in a children's book (with no pictures). What will B label a cat if one is presented to him? He may call it either a dog or a cat, since it is a furry, four legged animal, or he may claim that he doesn't know which one it is. Why can A distinguish between the two whereas B cannot, if they are working with the same linguistic tools? Because A's linguistic notions of cat and dog are associated with his past perceptual experiences of cats and dogs, whereas B has no such perceptual experience of cats or dogs to ground the semantics of these terms.

Dictionaries do not define a language; they don't expose enough word relationships. In order to learn Chinese, you need to be exposed to an awful lot of it, certainly far more than just a dictionary. But it is not true that you can't learn Chinese by studying the language alone. How do you suppose those geniuses at the army crack enemy codes?

They crack them by finding systematic relationships between the code and a natural language. But such schemes are made much easier due to the fact that symbols in a code stand for letters in an alphabet. Chinese has no alphabet; it has distinct symbols for each concept.

Even putting that objection aside-- to borrow from your example, how would the interpreter, going only by syntax, differentiate between the words for 'left' and 'right'? Even if he manages to narrow things down enough such that he knows one word must mean 'left' and the other 'right,' how is he to differentiate between these without ultimately making some inference grounded in facts about the external world? For instance, if he finds that one word refers to the dominant hand of most people in China, he may conclude that this word means 'right,' but this inference is drawn via reference to an externally existing fact about Chinese people; or he may find that one word means 'left' by roundabout reference to the direction in which the sun sets, but this again relies on an empirical fact. (e.g., if the text of some human-like alien civilization fell to Earth tomorrow, we would not know which of their hands tends to be dominant, nor would we know in which direction their sun sets, and so we could not make sense of any of these.)
 
  • #92
hypnagogue said:
What if 'unga' means 'I see this color' (or if you prefer, 'there is this color')? Then clearly it can have a truth value, despite it being the only word in my language.

If your language only had that one word, could you think about other things? For instance, could you think about not-unga? And if you can think about not-unga, can you invent a word for it? If you can come up with a new word, that means you already have the concept in your mind. When I'm referring to language here, I'm referring to the totality of concepts you have in your mind, not the totality of arbitrary symbols which may or may not exist as expressions of those concepts.

What of a child who learns his first word? His father points to his mother and says "momma," and eventually the child learns to refer to his mother as "momma" himself.

The child may only know one word, but his/her head must already be full of concepts before the first word is learned. It's one thing to know that 'momma' is the sound that goes together with a particular concept; it's another thing to become aware of the concept in the first place. I'm talking about the latter, not the former.

Let me use a notation to make things easier: I will append a '+' sign whenever I'm talking about a concept a word refers to, and '-' when I'm talking about the word itself (eg: mother-, mère-, madre-, mutter-, are different words in different languages for the concept mother+)

The child knows no other words, so there are no other words for his "momma" to achieve meaning from, and yet clearly the word "momma" now has meaning for the child. How can this be if the meaning of the word "momma" is strictly contingent upon other words?

The meaning of momma- is momma+. The meaning of momma+ is contingent upon concepts such as object+, room+, person+, face+, eyes+, and so on. Even though it may take years for the child to learn the words object-, room-, person-, face-, eyes-, those concepts must be in place from a very early age.

Another scenario: before a hypothesized experimental result is determined empirically, what determines the truth value of the hypothesis?

Semantics.

When does it attain a truth value, when the experimenters observe that it has been verified (or falsified), or when the experimenters think internally/speak/write about the empirical results?

That depends. The experimenter learns something by observing the experiment, and that knowledge becomes true to him as concepts (eg: this+ causes+ that+). But concepts as such cannot be communicated, so the experimenter must choose some words in his vocabulary, and create a relationship between the words that mirrors the relationship between the concepts in his mind. And here is where semantics rears its ugly head: how can the experimenter choose words that perfectly recreate the concept "this+ causes+ that+" in the mind of everyone else?

I'm not necessarily making claims about the connections between language and reality. What I am making claims about is the connection between language and perceptual experience.

The connection may be clear for the speaker, but for the listener/reader it must be reconstructed. It's one thing to explain what momma- means by pointing your fingers at momma+. It's quite another thing to explain what "consciousness- is- an- epiphenomenon- of- the- brain-"; it's really difficult for anyone to figure out what concepts a person has in mind when uttering that sentence. However, no one is born a speaker, which means our knowledge of what words mean is always imperfect. Which means not everything we learn from other people is true, in the sense that it would be true if we had learned it from personal experience.

Suppose there is a 5 year old child, A, who has seen and can perceptually distinguish between cats and dogs, but suppose that his limited vocabulary only allows him to make the crudest of linguistic distinctions regarding what makes a dog a dog...

To cut a long story short, learning about cats+ and dogs+ is not the same thing as learning about cats- and dogs-. If you know nothing about cats- you can still think about cats+. If you know about cats- but do not know about cats+, then you may be tempted to think cats- is just another word for something you already know (such as dogs+). You may, in fact, enter into a long philosophical discussion as to whether dogs- really exist as every dog- can be shown to be a cat+ (which is of course nonsense if you know that dogs+ are not cats+).

They crack them by finding systematic relationships between the code and a natural language. But such schemes are made much easier due to the fact that symbols in a code stand for letters in an alphabet.

I'm sorry but you're wrong on this. Those forms of encryption (letter substitution) are no longer used since, as you said, they are so easy to crack. What makes cracking codes possible is that people usually know what a coded message probably means - there aren't many things one can talk about during war. But this is a side issue anyway.

Even putting that objection aside-- to borrow from your example, how would the interpreter, going only by syntax, differentiate between the words for 'left' and 'right'? Even if he manages to narrow things down enough such that he knows one word must mean 'left' and the other 'right,' how is he to differentiate between these without ultimately making some inference grounded in facts about the external world? For instance, if he finds that one word refers to the dominant hand of most people in China, he may conclude that this word means 'right,' but this inference is drawn via reference to an externally existing fact about Chinese people; or he may find that one word means 'left' by roundabout reference to the direction in which the sun sets, but this again relies on an empirical fact. (e.g., if the text of some human-like alien civilization fell to Earth tomorrow, we would not know which of their hands tends to be dominant, nor would we know in which direction their sun sets, and so we could not make sense of any of these.)

Even though my example was trying to address something different, I will comment on that as it touches on the same issue. The issue is what I referred to as symmetries. There is a symmetry between 'left' and 'right' that prevents you from knowing what other people mean by it, except that if something is on the right then it can't be on the left. That's all you know about left and right; for all you know your left+ might be my right+ and we'd still agree that most people prefer to use their right- hand. So the meaning of right- is not right+, it's something else close to "not left-". But of course there's more, because there are things that are neither right- nor left-. Even so, things that are neither right- nor left- tell you very little about what right+ and left+ could possibly be.

In the end, we can only discover what right- and left- mean to the extent that we can perceive asymmetries. And this has some very important consequences:

- the entirety of our perceptions cannot possibly exhibit any kind of asymmetry
- as such, any description of our perceptions that implies asymmetry (eg: mind vs. body) is an artificial construct
- since descriptions are made of abstract symbols, the dichotomy between the description of our perceptions and the perceptions themselves must have been introduced by the symbols, not by our perceptions themselves

I'm not sure exactly how language, as expressed by symbols, creates this false dichotomy, but I'm sure that it does. The reason I'm so sure is because there is no dichotomy between any aspect of my perceptions and the entirety of them; in other words, I never experience anything that I believe I should not be experiencing. Clearly it is our theories that must be wrong, not our perceptions.
 
  • #93
If an infant has a concept of 'mother' before learning to say 'momma,' then surely a dog has a concept of 'master' despite never learning any words at all. Is a dog, then, a linguistic animal despite never speaking or writing?
 
  • #94
hypnagogue said:
If an infant has a concept of 'mother' before learning to say 'momma,' then surely a dog has a concept of 'master' despite never learning any words at all. Is a dog, then, a linguistic animal despite never speaking or writing?

Parrots can speak many words. I guess that makes them linguistic animals then :mad:
 
  • #95
Some parrots have shown the ability to use language relatively intelligently.

Anyway, my point is that you seem to refer to much more than is normally referred to by 'language.' Concepts of the kind you refer to can exist without linguistic tokens, and are probably best characterized as perceptual concepts (baby's concept of momma, pre-language, is defined by baby's visual perception/recognition of its mother's face). That was my point at the outset-- perception is not a purely linguistic phenomenon, although you seem to be trying to paint it as such.
 
  • #96
Confutatis

I've been following along here, taking advantage of the dialogue you're having with Hypnagogue to once again try to understand your view. The last 2 or 3 posts seem to be some of the most comprehensive I've seen in describing the heart of your view, and they present some arguments that are crucial to understanding it. I have read these posts several times trying to make sure I understand them before I post any questions or develop any opinions. I do not have an opinion right now. I need a little more clarification.

It seems to me a crucial thing to understand is what you mean by "symmetry" and "asymmetry". While I know what these words mean, I'm not sure how you're applying them here. We have one example of -left and -right that you say has symmetry, which leads to the same problem that we have in inverted spectrum scenarios. Sometimes you used the "symmetry/asymmetry" concept when referring to the relationship between words. And other times you referred to these concepts as something that reality would exhibit. Exactly what is it that has symmetry or does not have symmetry? And what criteria classify it as having symmetry? I just need a little more clarification/examples of what you mean by these concepts.
 
  • #97
hypnagogue said:
you seem to refer to much more than is normally referred to by 'language.' Concepts of the kind you refer to can exist without linguistic tokens

That doesn't change the fact that we can assign tokens to those concepts, and apply the same rules as we do for all other concepts. There's nothing particularly different about a concept that currently lacks a word, except the fact that it currently lacks a word.

perception is not a purely linguistic phenomenon, although you seem to be trying to paint it as such.

Perception is a purely linguistic phenomenon as far as our theories go, because our theories are also purely linguistic phenomena. There are far more things in the world than things we can talk about, but there's nothing we can say about those other things, except in the languages of art, myth, folklore, etc.
 
  • #98
Fliption said:
It seems to me a crucial thing to understand is what you mean by "symmetry" and "asymmetry".

It certainly is, because ultimately it can be shown that there is a symmetry between "mental" and "physical", and because of that symmetry we have no way to know exactly what is different about them. But let's save that for a future discussion.

While I know what these words mean, I'm not sure how you're applying them here. We have one example of -left and -right that you say has symmetry which leads to the same problem that we have in inverted spectrum scenarios.

I believe the left vs. right problem is also classified as an inverted spectrum scenario, but I'm not sure. In any case, the idea is the same: flip everything, and nothing in our descriptions changes.

Sometimes you used the "symmetry/asymmetry" concept when referring to the relationship between words. And other times you referred to these concepts as something that reality would exhibit.

Concepts certainly exhibit symmetry, as in left/right. You can replace every single instance of one word with the other, and your knowledge still remains intact. You can't do that with 'left' and 'top', for instance, so left and top are asymmetrical. Still, taken together, left and right are symmetrical with top and bottom.
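That substitution test can be made concrete. The following is my own toy model, not anything from the posts above: knowledge is a set of relational statements, and a renaming of words counts as a symmetry when it maps the set of statements onto itself.

```python
# Toy model of word symmetry: 'left'/'right' are symmetric because swapping
# them everywhere maps the body of knowledge onto itself; 'left'/'top' are
# not; and the pair left/right is symmetric with the pair top/bottom.

# Relational statements every speaker accepts about the four direction words.
knowledge = {
    ("left", "opposite_of", "right"), ("right", "opposite_of", "left"),
    ("top", "opposite_of", "bottom"), ("bottom", "opposite_of", "top"),
    # every horizontal word is perpendicular to every vertical word
    ("left", "perpendicular_to", "top"), ("top", "perpendicular_to", "left"),
    ("left", "perpendicular_to", "bottom"), ("bottom", "perpendicular_to", "left"),
    ("right", "perpendicular_to", "top"), ("top", "perpendicular_to", "right"),
    ("right", "perpendicular_to", "bottom"), ("bottom", "perpendicular_to", "right"),
}

def permute(facts, mapping):
    """Rename words according to mapping, leaving the relations untouched."""
    sub = lambda w: mapping.get(w, w)
    return {(sub(x), rel, sub(y)) for (x, rel, y) in facts}

# Swapping left/right changes nothing we can state: a symmetry.
print(permute(knowledge, {"left": "right", "right": "left"}) == knowledge)  # True

# Swapping left/top yields statements we reject (e.g. 'top opposite of right'): an asymmetry.
print(permute(knowledge, {"left": "top", "top": "left"}) == knowledge)  # False

# Taken together, left/right is symmetric with top/bottom.
print(permute(knowledge, {"left": "top", "top": "left",
                          "right": "bottom", "bottom": "right"}) == knowledge)  # True
```

The same construction illustrates the inverted spectrum worry: as long as every accepted statement survives the swap, nothing sayable in the language can tell you whether my 'left+' is your 'right+'.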

As to whether reality exhibits symmetries, the answer is a bit more complex. The existence of a certain symmetry between concepts implies that we have no way to know which aspects of reality the concepts refer to. For instance, if the words 'red' and 'green' are really symmetrical as some people think, then all you can know about reality is the relationship between 'red' and 'green', not what they really are. You can't know if 'red' means this or this.

This is where things start to get interesting, because things that appear different to different observers are not considered real; we usually call them 'illusions'. For instance, if there is no objective way to assert if grass looks like this or like this, then it necessarily follows that grass is neither this nor this, and our perception of color is an illusion. Still we do perceive something, so what is it that we perceive after all?

Let's not argue that last bit for now. First, we can't be sure that 'red' and 'green' are really symmetrical. Second, we're not yet ready to discuss what 'illusion' means in the context of the kind of monism I'm talking about.

Exactly what is it that has symmetry or does not have symmetry?

Language definitely has it. Reality exhibits symmetry to the extent that we are ignorant of some of its aspects. For instance, suppose we have two perfectly identical cards placed side by side on a table. We call the card on the left 'card A', the one on the right 'card B'. We leave them on the table, go away to get something, and when we come back we find the wind has blown them away. We can no longer tell which card is which, even though we are sure both are still there. So we say there is a symmetry between card A and card B by virtue of their identical appearance.

What criteria classify it as having symmetry?

We find symmetries by using thought experiments, such as the one above about two identical cards.
 
  • #99
Unfortunately, I'm still not clear on exactly what it means for things to be symmetric or asymmetric. It sounds as if the criteria for being symmetric have something to do with our ability, or lack thereof, to know. Know what? It sounds as if it means we can't know what aspect of reality a word refers to. Is that close? I'm just not clear. I'll try to be more specific below.



confutatis said:
Concepts certainly exhibit symmetry, as in left/right. You can replace every single instance of one word with the other, and your knowledge still remains intact. You can't do that with 'left' and 'top', for instance, so left and top are asymmetrical. Still, taken together, left and right are symmetrical with top and bottom.

Why are 'left' and 'right' symmetric while 'top' and 'left' are not?

This is where things start to get interesting, because things that appear different to different observers are not considered real; we usually call them 'illusions'. For instance, if there is no objective way to assert if grass looks like this or like this, then it necessarily follows that grass is neither this nor this, and our perception of color is an illusion. Still we do perceive something, so what is it that we perceive after all?

People seeing different things is different from the inability to objectively prove that people are seeing the same thing. Inverted spectrum scenarios are a statement about our ability to know whether we are referring to the same thing, with the color red for example. This doesn't mean that we necessarily DO see different things, thus making it an illusion. But this may be getting too far ahead. I'm not sure I'm prepared to move this far until I understand symmetry better.

First, we can't be sure that 'red' and 'green' are really symmetrical.

Why can't we be sure? It certainly seems that we have an inverted spectrum scenario with them, so why would they not be symmetric? I'm hoping your answer will shed more light on what it means to be symmetric.

Language definitely has it. Reality exhibits symmetry to the extent that we are ignorant of some of its aspects. For instance, suppose we have two perfectly identical cards placed side by side on a table. We call the card on the left 'card A', the one on the right 'card B'. We leave them on the table, go away to get something, and when we come back we find the wind has blown them away. We can no longer tell which card is which, even though we are sure both are still there. So we say there is a symmetry between card A and card B by virtue of their identical appearance.

Does this symmetry exist if we had not originally labeled them as 'card A' and 'card B'? If not, then again it seems symmetry only applies to concepts and not reality.

The reason I'm trying to understand this distinction is because it seems symmetry is applied to both concepts and external objects differently and it makes the definition of symmetry more confusing to me. And I'm hoping to make it as simple as I can. At least at first. There should only be one definition of symmetry that can be applied to both concepts and reality but I'm not sure what that single definition is yet.
 
Last edited:
  • #100
confutatis said:
That doesn't change the fact that we can assign tokens to those concepts, and apply the same rules as we do for all other concepts. There's nothing particularly different about a concept that currently lacks a word, except the fact that it currently lacks a word.

Perhaps you can find a better word to use than language. The way you are using it, we can easily speak of mice having language, but that doesn't square with the way the word 'language' is used.

Perhaps we might say that a language is some set of concepts existing within an organism's mind/brain that can be expressed externally by a systematic set of abstract symbols. Symbols as such may not be sufficient for language, as in the case of parrots (even if it is arguable that a parrot's 'speech' truly constitutes a symbol of a concept in the first place), but surely they are necessary. If I never speak or write a word or have internal mental chatter, but have at least some set of concepts in my mind, then surely I cannot be said to have any linguistic properties.

Perception is a purely linguistic phenomenon as far as our theories go, because our theories are also purely linguistic phenomena. There are far more things than things that we can talk about, but there's nothing we can say about those things, except in the languages of art, myth, folklore, etc.

Depends what you mean by theory. Is my perception of what differentiates this color from this a theory? If so, then all animals with red/green color perception can be said to have such theories. If not, then you cannot say that subjective redness is a merely linguistic phenomenon.
 
Last edited:
  • #101
Fliption said:
Unfortunately, I'm still not clear on exactly what it means for things to be symmetric or asymmetric.

Some abstract concepts can be difficult to grasp.

It sounds as if it means we can't know what aspect of reality a word refers to? Is that close?

Sort of. Think of inverted spectrum scenarios: the same word may refer to different experiences, yet that difference never shows up in their usage of the word.

Why are 'left' and 'right' symmetric while 'top' and 'left' are not?

Look at your computer. It is true that there is one pixel on the left side of the screen for every pixel on the right, but it's not true that there's one pixel at the top for each pixel at the left. The image on your computer can be rotated around an imaginary vertical line in the middle of the screen, around an imaginary horizontal line, and around a point in the middle. But there is no form of rotation that can exchange 'top' with 'left' without also exchanging 'bottom' with 'right'. So 'left' is symmetrical with 'right', 'top' is symmetrical with 'bottom', and 'left and right' are symmetrical with 'top and bottom'.
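The screen example above can be made concrete with a small sketch (invented here for illustration; the grid and function names are mine). A left-right mirror flip exchanges the 'left' and 'right' labels while leaving 'top' and 'bottom' fixed, and a quarter turn that sends 'left' to the top necessarily sends 'right' to the bottom at the same time.

```python
# Toy 3x3 "screen": each cell labeled by its position
# (TL = top-left, T = top, C = centre, and so on).
grid = [["TL", "T", "TR"],
        ["L",  "C", "R"],
        ["BL", "B", "BR"]]

def mirror_lr(g):
    """Flip around the vertical centre line: left and right exchange."""
    return [list(reversed(row)) for row in g]

def rotate90(g):
    """Rotate a quarter turn clockwise."""
    return [list(row) for row in zip(*g[::-1])]

flipped = mirror_lr(grid)
# The left and right columns have traded places...
assert flipped[1][0] == "R" and flipped[1][2] == "L"
# ...while top and bottom stay where they were.
assert flipped[0][1] == "T" and flipped[2][1] == "B"

rotated = rotate90(grid)
# A quarter turn sends 'left' to the top edge, but it necessarily
# sends 'right' to the bottom edge at the same time.
assert rotated[0][1] == "L" and rotated[2][1] == "R"
```

No rigid motion of the grid exchanges 'top' with 'left' alone, which is the sense in which 'left'/'right' and 'top'/'bottom' are symmetrical pairs while 'top'/'left' is not.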

Does this symmetry exist if we had not originally labeled them as 'card A' and 'card B'?

The symmetry exists because you perceive two identical cards as two objects rather than one. What name you give them is immaterial; what matters is that you can give them names, and the names must be arbitrary. There is nothing about one card that makes it different from the other, except the fact that one card is not the other.

If not then again it seems symmetry only applies to concepts and not reality.

It applies to both. Is 'red' a word or a colour? It is both. Same idea.
 
  • #102
hypnagogue said:
Perhaps you can find a better word to use than language. The way you are using it, we can easily speak of mice having language, but that doesn't square with the way the word 'language' is used.

I didn't come up with this idea myself. The first time I saw it I balked at the notion just as you are doing now. It took me a couple of years to understand why 'language' is the right word.

Symbols as such may not be sufficient for language, as in the case of parrots, but surely they are necessary.

Would you say primitive humans had language skills before they invented the first word? If so, how could they have language skills before language existed?

Language encompasses more than symbols; syntax and semantics are far more important and far more relevant than which words we choose to express which concepts. Syntax and semantics reflect aspects of our consciousness as well as of reality; lexicon is of no interest to any metaphysical inquiry and therefore can be completely ignored.

Is my perception of what differentiates this color from this a theory?

If I ask you what differentiates this color from this, then your answer will be a theory.

then you cannot say that subjective redness is a merely linguistic phenomenon.

In your mind, is there something about 'redness' which the word 'redness' leaves out? I'm not talking about what 'redness' means to other people, I'm talking about what it means to you. If the word 'redness' encompasses every aspect of your concept of redness, what exactly is different between the word and the concept? What does the concept of redness bring to your mind that the word 'redness' does not?
 
  • #103
confutatis said:
Would you say primitive humans had language skills before they invented the first word? If so, how could they have language skills before language existed?

Perhaps they had a latent ability for language, but it doesn't follow that they literally had language. I may have a latent ability for scuba diving, but I am not a scuba diver.

Language encompasses more than symbols; syntax and semantics are far more important and far more relevant than which words we choose to express which concepts. Syntax and semantics reflect aspects of our consciousness as well as of reality; lexicon is of no interest to any metaphysical inquiry and therefore can be completely ignored.

I mostly agree here, but it doesn't make sense to refer to syntax in the absence of tokens to be ordered according to that syntax. If you don't have the tokens, you don't have syntax.

If I ask you what differentiates this color from this, then your answer will be a theory.

I agree that this is trivially the case. However, I maintain that my answer in this instance will be an abstract representation of the process by which I distinguish the colors, not the actual process itself.

In your mind, is there something about 'redness' which the word 'redness' leaves out? I'm not talking about what 'redness' means to other people, I'm talking about what it means to you. If the word 'redness' encompasses every aspect of your concept of redness, what exactly is different between the word and the concept? What does the concept of redness bring to your mind that the word 'redness' does not?

You try to eliminate a distinction between the public and private meanings of the word, but that cannot be done. My word 'redness' refers to everything there is about redness in my subjective space. But not everything in my subjective space can be shared with other subjective spaces. (If it could, an eye doctor would not have to ask me which glasses suited me best-- he would just slip into my mind, literally see through my eyes, and make the determination from there.) So there is necessarily a dichotomy of reference between public and private referents.

Besides this, there is still a distinction between the word and the concept. The word refers to the concept. It is a pointer. Equating the two is like equating my finger with the moon. The moon is not biological, nor is this linguistic.
 
  • #104
confutatis said:
Sort of. Think of inverted spectrum scenarios: the same word may refer to different experiences, yet that difference never shows up in their usage of the word.

The differences may not show up in the abstract language itself, but they most assuredly show up once it is made evident to what these words refer. That is, once they have been solidly grounded in their referents, they can be compared and distinguished.

Assume I use 'right' and 'left' the standard way, and you use them the inverted way. Say you, me, and another 'normal' English speaker are standing in a line playing Simon Says. The instruction comes, "Raise your left hand." I raise what I call my left, my normal partner raises what I call his left, and you raise what I call your right. An analogous situation holds for the command "Raise your right hand." The referents of these words have been exposed and made evident to all, and on this basis I can distinguish my meanings of 'left' and 'right' from yours.

The same would hold for the inverted spectrum scenario, if only I could observe the subjective referents of your words 'red' and 'green.' But whereas I can see you raising your right hand upon the command "Raise your left hand," I cannot see what you imagine upon the command "Imagine the color green."
 
  • #105
confutatis said:
Look at your computer. It is true that there is one pixel on the left side of the screen for every pixel on the right, but it's not true that there's one pixel at the top for each pixel at the left. The image on your computer can be rotated around an imaginary vertical line in the middle of the screen, around an imaginary horizontal line, and around a point in the middle. But there is no form of rotation that can exchange 'top' with 'left' without also exchanging 'bottom' with 'right'. So 'left' is symmetrical with 'right', 'top' is symmetrical with 'bottom', and 'left and right' are symmetrical with 'top and bottom'.

Yes, I understand this. This is just the traditional usage of the word 'symmetry', but what I'm having trouble with is connecting this traditional symmetry property to what I can know about your experiences. Let me see if I can explain what I mean.

Hypnagogue seems to be having a similar issue when he says this:

Assume I use 'right' and 'left' the standard way, and you use them the inverted way. Say you, me, and another 'normal' English speaker are standing in a line playing Simon Says. The instruction comes, "Raise your left hand." I raise what I call my left, my normal partner raises what I call his left, and you raise what I call your right. An analogous situation holds for the command "Raise your right hand." The referents of these words have been exposed and made evident to all, and on this basis I can distinguish my meanings of 'left' and 'right' from yours.

You see how he thinks that we are comparing the words 'right' and 'left' to an actual referent in the outside world? Namely the arms. But in this quote from you it seems different:

For instance, all you know about 'top and bottom' is that it is 'neither right nor left', but again you have no way to know if what I experience as 'top and bottom' is what you experience as 'right and left'.

So here the referent of the words 'left' and 'right' isn't the actual external arm; it is the experience of right and left. Used in this way, we do indeed have the same problem that we have with color. So I'm not really clear which method is the correct one: the one Hypnagogue used or this one from you above.

If it's the one from you, then I'm still having a problem with the symmetry concept, as I stated above. If I cannot know that my experience of 'left' is the same as your experience of 'left', and you may actually be experiencing what I would call 'right', then how is it that I know your experience of 'left' isn't what I would call 'top'? I'm just not understanding how the fact that two words are opposites, or symmetrical, has anything to do with what I can know about your experiences.
 
Last edited: