The Implications of Materialist Consciousness on Telepathy

In summary, the conversation discusses the concept of consciousness and whether telepathy can occur within a materialist framework. The idea of an internal mind and a stream of consciousness is explored, with one participant arguing that they do not exist due to logical flaws. Subconscious reactions are also discussed, and it is suggested that they are a function of the brain rather than of an internal mind. The conversation ends with a question about what the universe would be like under this presumption.
  • #36
Originally posted by Mentat
Not if I postulate from the beginning that subjective phenomena were never generated in the first place. If I can explain why we might believe that subjective phenomena are generated, when they really are not, then I would have circumvented the "central issue", since it doesn't need resolution if it's asking the wrong question.

ARGH! :smile: This goes back to the "illusory" consciousness thing. You are denying that which you know inarguably to be true. It doesn't matter if qualia are "real" or are "illusions" that we are tricking ourselves into believing we see. The simple fact remains that either way, we do see them, hear them, touch them, etc, and this is all that is relevant. Qualia exist, and must be explained if we are to understand consciousness. Unless of course you're one of those philosophical zombies, in which case I sincerely apologize for your inherent inability to understand this dilemma. :wink:

Sure. However, there is no subjective "wood" in the brain, merely the processing itself. IOW, you never really produced a house, you just processed that it was there, and that was enough (no wood required). This occurs (most likely) at a synaptic level, but is really only a question of how we process, encode, and remember information.

I am with you on this. Really. I am assuming for the sake of this argument (and this assumption also doubles as my usual 'belief') that consciousness is brain processing. But there is still a problem here. Again, how do we make the subjective experience intelligible in terms of the functioning of the brain? It is entirely intelligible how memory processes etc. might lay the foundation for seeing the purple cow-- what elements from memory the brain puts together, and so forth. It is not really intelligible how those same processes can account for the subjective experience of seeing the purple cow in all its glorious purpleness. There is no bridge principle.

And here you hit at the same point that I tried to make before with my computer analogy. Yes, the computer never has subjective experience, but we don't either in the sense that you think we do.

Let's clarify things. In what sense do you think that I think we have subjective experience? All I have done is to refuse to deny the fact that I subjectively experience qualia. I have made no further claims about subjective experience; this assertion alone is enough to throw a wrench in the traditional materialist explanation of consciousness.

Remember my analogy? Basically I was saying that a computer doesn't have a monitor or speakers for it's own benefit, since it doesn't process in terms of pictures or text or sound. It processes only in binary code, which is how it encodes all external stimuli, and how it remembers them. Since our brains are just organic computers, there is no need for us to have "monitors" to visibly display "color" in our brains, since there is no "viewer" there to see this display ITFP.
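The binary-encoding analogy in the quoted paragraph can be made concrete with a short sketch (an illustration of my own, not from the post): a machine's stored record of a stimulus is just numbers, with nothing picture-like or sound-like inside that would need an internal viewer.

```python
# Illustration (not from the post): a computer's "memory" of a stimulus
# is only binary code -- a sequence of numbers -- with no internal display.
stimulus = "red rose"                 # stands in for any external stimulus
encoded = stimulus.encode("utf-8")    # the machine keeps only binary code

# The stored form is just byte values; nothing rose-like is in there.
print(list(encoded)[:4])              # [114, 101, 100, 32]
```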

I'm not saying the brain processing displays colors to a homunculan viewer. I'm saying that the brain processing itself is the awareness of that color. But this equation is still problematic. How do we reconcile the objective fact with the subjective experience without a suitable bridge principle?

Think of this, if I were to draw a red rose on a piece of paper, I would be inducing physical stimuli, so that your brain would recognize that which you had seen before, and identify it (all of this occurring in the brain's own code (the synaptic code)). The subjective experience is not of the picture of the rose, but of the subsequent mental processes that occur as a result of that physical stimulus.

Yes, but it is still a subjective experience. If there were no subjective experience, I would not be aware of these underlying mental processes in any way.

The example of blindsight is useful to consider here. A person with blindsight is 'blind' in a certain portion of his/her visual field, insofar as s/he has no subjective experience of vision in that area of the visual field. But the person will still be able to tell you details about the environment in the 'blind' portion of sight. This implies that there is still low level visual processing going on in the blind area; what is missing is the proper higher visual processing necessary to generate (or accompany, if you find that term less offensive) the subjective experience of vision. So tell me, what is it about the higher-level visual processing that differentiates it from the lower level, such that the higher level is correlated with subjective awareness and the lower is not? From your account the two should be indistinguishable, but they clearly are not.

edit: Let me clarify this last sentence. What I mean to say is that your account of 'consciousness' is satisfactory for explaining how the 'blind' part of blindsight operates. This is precisely because your account is in fact just an explanation of all the neural mechanisms underlying cognition except consciousness itself. You are explaining everything but how neural processes can intelligibly account for the subjective experience of qualia. So if your account were true to reality, we would all have "complete" blindsight-- we would be able to process information about our environment but we wouldn't be aware of it. But we just can't deny that we do in fact have awareness-- the contrast between what is subjectively known to be awareness of qualia vs. the non-awareness apparent in blindsight is the very thing that makes blindsight an interesting phenomenon in the first place.
 
  • #37
Originally posted by Mentat
(SNIP) The error is on your part, since you are equating "self" with "conscious self". The "self" is the whole organism, but the "conscious self" evolved from the "self" as our brains' abilities developed. (SNoP)
An entirely subjective attestation if I have ever heard one, including the 'Ego' expression of a Judgment!

Simply put, prove that! (Cause you cannot!)

How do you know it wasn't "conscious self" learning to (evolving into) operate "self", slowly learning to become "operator of self", that is the demonstrable development of the brain's abilities?? (and that "self" was operator-inherent)
 
  • #38
“phoenixthoth, out of curiosity, are you a believer in a priori knowledge?”

i had some inkling of what a priori knowledge was but i consulted a site on latin phrases just to be sure. “deductive; relating to or derived by reasoning from self-evident propositions; presupposed by experience (this seems to be a contradiction in terms to me); being without examination or analysis (i doubt that); presumptive; formed or conceived beforehand (then it should be open to modification).”

it seems that we are taking it as a self-evident fact that deduction using the classic rules of logic can lead to previously unobserved facts derived from self-evident premises. for example, “all men are mortal / Socrates is a man / ergo, Socrates is mortal.” i assume that the conclusion was a fact before it was observed by this argument; the argument seems to make an appeal to the logical side of your intuition. but what if you show this to someone and they simply don’t buy into aristotelian rules of logic at all? you then have no tools for convincing them of anything using deduction. what happens when two people disagree on what is self-evident, true without requiring proof? for example, the self-evident nature of the utility of deduction.
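The syllogism above can be written out as a formal deduction. A minimal sketch in Lean (my own illustration, not from the post; `Person`, `Man`, `Mortal`, and `socrates` are hypothetical names introduced just for this example), granting, as the post notes one must, the classical rules of inference:

```lean
-- Hypothetical names for the sake of the example.
variable (Person : Type) (Man Mortal : Person → Prop) (socrates : Person)

-- "all men are mortal / Socrates is a man / ergo, Socrates is mortal"
example (h1 : ∀ p, Man p → Mortal p) (h2 : Man socrates) : Mortal socrates :=
  h1 socrates h2
```

Of course, this only pushes the question back a step: the proof convinces you only if you already accept the inference rules it uses, which is exactly the post's point.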

the law of the excluded middle comes to mind now and i’d like to point out that long before fuzzy logic/fuzzy set theory accounted for “shades of grey” in a mathematical way, some eastern cultures already had their answer to the dubious notion that statements are either true or false: a third choice, “mu.” i’d really like to know what the definitions of “true”, “false,” and “mu” are. if there is a mu, then we can’t simply define false to mean non-true, because non-true might mean false and it might mean mu. i’d like to say true means real, but who is to say what reality is.

just to give the tip of the iceberg, van gogh (i hope i spelled that right) might give us a different *description* of reality. i heard this story of someone driving a car with his friends who suddenly stopped the car and no one else knew why. the driver said, quite filled with anxiety, that he had just run over a midget standing in the middle of the road. was the midget really there? perhaps surrounding each consciousness there is a not entirely unique reality, each with its own rules. however, it seems like most of these independent or interrelated realities *often* agree on most things. most people don’t even consider that when you step off your bed, there might be a pool of lava, or that when you turn on your faucet, you'd better put a cup above and below because you have no idea where the water will go, up or down.
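The contrast between two-valued logic and “shades of grey” can be sketched numerically. A minimal illustration (my own, using the common max/complement fuzzy operators as an assumption, not anything from the post): in classical logic “P or not-P” is always fully true, while a fuzzy truth value in between breaks the excluded middle.

```python
# Illustration (not from the post): fuzzy truth values in [0, 1],
# using the standard complement and max-style "or" as an assumption.
def fuzzy_not(t):
    return 1.0 - t

def fuzzy_or(a, b):
    return max(a, b)

# Classical (two-valued) case: the excluded middle holds.
for p in (0.0, 1.0):
    assert fuzzy_or(p, fuzzy_not(p)) == 1.0

# A "shade of grey": the statement is 40% true.
p = 0.4
print(fuzzy_or(p, fuzzy_not(p)))   # 0.6 -- neither fully true nor fully false
```

With a third value in play, “non-true” no longer pins down “false”, which is just the worry the post raises about defining false as non-true.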

“I do not believe that there is anything that some form of science will not be able to at least inquire into, even if it is in a more non-material realm. For the things that you are talking about to actually be, even if you claim they are non-material, from your position it seems they still have an effect on the material world. This is something which science can measure.”

science can inquire into anything. so can art, literature, and philosophy, just to name a few. i believe that (para)psychology has the best hope of examining the non-material things i’m talking about, things such as the higher self and true self. i have only my own experiences to draw from that indicate, if not prove, that there is a higher self and true self. the basic theory is this. the true self (soul) has a greater understanding from a universal perspective than we know. the little self (ego) is suddenly aware of something it calls inspiration or insight (or even genius) and creativity, and the source of these abnormal thoughts is not known. certainly it seems like these insights can’t be summoned at will. we can’t say with much success, “ok, self, tell me the theory of everything.”

i believe, to reiterate, that the source of the insights is the higher self, the bridge between the little self and the true self. i’ve had dialogues with my higher self and i’ve noted that its writing style is vastly different from mine. it also has a tendency to be quite long-winded, so to speak. despite the superficial appearance of rambling, it is actually quite structured and no words are wasted. the exercise, the basic tool, is this: write down the following, “higher self, what do i need to know at this time?” and see what “floats in.” the best results come when one doesn’t concentrate too hard on the fact that one is writing without thinking or planning what to say. don’t wonder too much about whether this is the higher self or some roaming spirit (i doubt there are such things, but i have no evidence either way). also, don’t jump to the conclusion that God is talking to you, either. we’ll get to God later. i had no idea that this could be used as a tool to contact God, that just happened by accident and without much warning, though it was by my conscious direction (i was instructed on how to do this by my higher self). the intent has to be unmotivated by desire, no expectations of what the answers might be.

i would tend to think that everyone has a higher self probably not too far off from how freud suspected that everyone had a subconscious. it is possible that the higher self and true self are within the subconscious. they say interesting things about the subconscious. that it is aware of far more than what our little selves are aware of and that it is likely what is responsible for dreams. perhaps we are in direct link with it when we sleep, a time when our little self relaxes.

it seems like once we have enough practice having dialogues with our higher selves that our seat of consciousness can be entirely shifted to it. instead of there being a middle man, the little self, that asks questions, the higher self is allowed to operate on a conscious level. at this point, the ego virtually vanishes. at least, one could say that “i own my ego” instead of either “i am my ego” or “my ego owns me.” one also feels obliged to say “i own my higher self” and “i am my true self.” who i thought was me is a fading memory.

may your journey be graceful,
phoenix
 
  • #39
Is something discrete possible?

First, the meaning of discrete (cf. http://www.wikipedia.org/wiki/Discrete ).

Isn't everything part of a larger system? Can you say that a human is discrete?

Other point: brain activity is only one of the communication systems in our body. We also have membrane causality (since our basic (first) cell subdivided into three generally specialized layers: ectoderm, mesoderm, and endoderm).
Next, we also have genes, and within them specific DNA, which are kinds of collective experience.
 
  • #40
Discrete is the key to the debate of the entire forum. Once this becomes understood by the individual (it cannot be understood by the masses, because it is an individual effort), all that is possible and how it works becomes understood. That is, if you're interested.
 
  • #41
Originally posted by TENYEARS
Discrete is the key to the debate of the entire forum. Once this becomes understood by the individual (it cannot be understood by the masses, because it is an individual effort), all that is possible and how it works becomes understood. That is, if you're interested.

Then how could one ask for such personal enlightenment? After all, everything one posts is public knowledge. You're not asking us to PM you and sacrifice the all-important rise in post counts, are you?
 
  • #42
Wrong discrete, as in: discrete particle or no particle, which would it be? The answer to that question is what I was alluding to. This, not understood in words but as an experience, is the foundation for uncovering any question one can possibly come up with. It is a grounding, but you will not have to go to your room.
 
  • #43
Originally posted by hypnagogue
ARGH! :smile: This goes back to the "illusory" consciousness thing. You are denying that which you know inarguably to be true. It doesn't matter if qualia are "real" or are "illusions" that we are tricking ourselves into believing we see. The simple fact remains that either way, we do see them, hear them, touch them, etc, and this is all that is relevant. Qualia exist, and must be explained if we are to understand consciousness. Unless of course you're one of those philosophical zombies, in which case I sincerely apologize for your inherent inability to understand this dilemma. :wink:

LOL!

I have a more comprehensible explanation (or, rather, one that better fits what you (and I) are trying to explain) for these "qualia", but I will reserve it for later in the post. Note: it is the same explanation as before, but with further clarification of points that don't seem to have come across to readers of my previous posts.

I am with you on this. Really. I am assuming for the sake of this argument (and this assumption also doubles as my usual 'belief') that consciousness is brain processing. But there is still a problem here. Again, how do we make the subjective experience intelligible in terms of the functioning of the brain? It is entirely intelligible how memory processes etc. might lay the foundation for seeing the purple cow-- what elements from memory the brain puts together, and so forth. It is not really intelligible how those same processes can account for the subjective experience of seeing the purple cow in all its glorious purpleness. There is no bridge principle.

My new good buddy, hypnagogue! :smile:

I had the idea that we might be working on exactly the same thing (though I have a proposal for a resolution, that may or may not be right), and that you were just asking the questions you were asking to resolve the problem (almost devil's advocate... with my help, you could be the best).

Anyway, let me ask you something (though this may be premature, as I don't know what you've said in the rest of your post): What do you think happens when I see an actual cow? You see, there needn't be phenomenological "qualia" then, since the cow is really there in front of me. All that need happen is that the motion-sensors, color-sensors, texture-sensors, pattern-recognizers, etc... in my brain must be stimulated just as they were the first time I saw a cow. Obviously I don't see the cow as though it were a flattened image on my retina, or as a photonic beam, but I see the cow as it is, because all of those "vision" and "pattern" parts of my brain are functioning correctly. So, when I need to "imagine" a cow in my brain, all that my memory needs to do is stimulate those same areas of the brain, in very nearly the same manner as they were when I actually saw a cow, and then I will process one just as I do when there actually is one.

Really, if there is nothing more to vision than processing, then there is nothing more to imagining than that same processing.

Let's clarify things. In what sense do you think that I think we have subjective experience? All I have done is to refuse to deny the fact that I subjectively experience qualia. I have made no further claims about subjective experience; this assertion alone is enough to throw a wrench in the traditional materialist explanation of consciousness.

Most of the time that is true, but I'm still persisting because I think I can overcome that.

Also, I'd like to clear one thing up: I don't deny that we experience things subjectively, only that "subjectively" means something other than physically. IOW, "subjective experience", to me, is just something that happens within my brain.

I'm not saying the brain processing displays colors to a homunculan viewer. I'm saying that the brain processing itself is the awareness of that color. But this equation is still problematic. How do we reconcile the objective fact with the subjective experience without a suitable bridge principle?

By examining how we actually saw that color ITFP.

The example of blindsight is useful to consider here. A person with blindsight is 'blind' in a certain portion of his/her visual field, insofar as s/he has no subjective experience of vision in that area of the visual field. But the person will still be able to tell you details about the environment in the 'blind' portion of sight. This implies that there is still low level visual processing going on in the blind area; what is missing is the proper higher visual processing necessary to generate (or accompany, if you find that term less offensive) the subjective experience of vision. So tell me, what is it about the higher-level visual processing that differentiates it from the lower level, such that the higher level is correlated with subjective awareness and the lower is not? From your account the two should be indistinguishable, but they clearly are not.

edit: Let me clarify this last sentence. What I mean to say is that your account of 'consciousness' is satisfactory for explaining how the 'blind' part of blindsight operates. This is precisely because your account is in fact just an explanation of all the neural mechanisms underlying cognition except consciousness itself. You are explaining everything but how neural processes can intelligibly account for the subjective experience of qualia. So if your account were true to reality, we would all have "complete" blindsight-- we would be able to process information about our environment but we wouldn't be aware of it. But we just can't deny that we do in fact have awareness-- the contrast between what is subjectively known to be awareness of qualia vs. the non-awareness apparent in blindsight is the very thing that makes blindsight an interesting phenomenon in the first place.

Alright, well, first off, "blindsight" itself (the phenomenon of being able to describe something that you aren't really "looking at") can be explained by the constant saccades of our eyes.

However, I do see what you are saying, and it is the reason for the "Multiple Drafts Theory" that Dennett proposed in Consciousness Explained. After I see something once, I have it stored in my memory, and don't need to see it again. However, every time I see something similar to it, a new "draft" of the same thing is processed, and thus I can notice more and more detail about something the longer I look at it. IOW, the longer I look at something, the more "drafts" are processed of the same thing, and thus more detail can be analyzed.
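As a toy illustration (my own, and only a loose caricature of Dennett's actual theory), the "more looks, more drafts, more detail" idea described above might be sketched like this: each glance yields another partial draft, and detail accumulates across the drafts with no single central copy or viewer.

```python
# Toy caricature (not Dennett's actual model): each look produces a
# partial "draft", and perceived detail is spread across the drafts.
def perceive(glances):
    drafts = [set(g) for g in glances]   # one partial draft per look
    detail = set().union(*drafts)        # no central copy; content is distributed
    return drafts, detail

glances = [
    {"large", "four-legged"},            # first quick look
    {"four-legged", "spotted"},          # a second look adds detail
    {"spotted", "horns"},                # and a third adds more
]
drafts, detail = perceive(glances)
print(len(drafts), sorted(detail))       # 3 drafts; detail accumulates
```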

So, when "blindsight" occurs, it is probably (though I can offer nothing absolutely conclusive, since I know very little about the actual case studies of "blindsight") just that they only caught one "quick draft" of whatever their eyes saccaded toward (is "saccaded" a word?).

What do you think?
 
  • #44
Originally posted by Mentat
My new good buddy, hypnagogue! :smile:

I had the idea that we might be working on exactly the same thing (though I have a proposal for a resolution, that may or may not be right), and that you were just asking the questions you were asking to resolve the problem (almost devil's advocate... with my help, you could be the best).

You're pretty groovy too Mentat. :wink:

I think we are both stuck at the same brick wall. Not surprising given the history of confusion over the idea. The catch is that I'm not at all sure how to scale the wall without adopting some new climbing equipment! (That is, questioning our assumptions of material reality.) If conventional materialism can give a satisfactory explanation of qualia I'll be the first to embrace it. Unfortunately the prospects seem pretty dim thus far (to me at least).

Anyway, let me ask you something (though this may be premature, as I don't know what you've said in the rest of your post): What do you think happens when I see an actual cow? You see, there needn't be phenomenological "qualia" then, since the cow is really there in front of me. All that need happen is that the motion-sensors, color-sensors, texture-sensors, pattern-recognizers, etc... in my brain must be stimulated just as they were the first time I saw a cow. Obviously I don't see the cow as though it were a flattened image on my retina, or as a photonic beam, but I see the cow as it is, because all of those "vision" and "pattern" parts of my brain are functioning correctly. So, when I need to "imagine" a cow in my brain, all that my memory needs to do is stimulate those same areas of the brain, in very nearly the same manner as they were when I actually saw a cow, and then I will process one just as I do when there actually is one.

Well, claiming to see the cow "as it is" is already opening a big can of philosophical worms, but I'll leave that assertion alone. I agree that all that is important in seeing the cow is having the proper portions of your brain functioning in the right way, so as to elicit subjective awareness of said cow, whether the cow is really there or not. But this still doesn't address the problem of qualia.

I think this statement might be indicative of a misunderstanding between us: "What do you think happens when I see an actual cow? You see, there needn't be phenomenological "qualia" then, since the cow is really there in front of me." This isn't really the case. Qualia are not something like little subatomic particles of awareness. Qualia are simply those sensory percepts that you are aware of-- redness, wetness, a flowery smell, etc. As such, the definition of qualia is quite independent of any underlying cause of the appropriate brain activity, in the sense that I can experience the set of qualia that I call a cow whether a cow is really there or not (provided I have a vivid enough imagination, or am having a dream, etc).

So we can approach sensory awareness from two angles: 1) from the point of view of appropriate brain activity and 2) from the point of view of the corresponding sensory awareness (qualia). We directly perceive the latter, and go on to observe that it is highly correlated with the former. Now the question is, how do we get from the former to the latter, ie, how do we make qualia intelligible in terms of brain activity? How can electrons swirling around in our brains account for the visual awareness of the color red? Another way to put this is that if we did understand how qualia are intelligible in terms of physical processes, we should theoretically be able to predict whether or not a given system (such as a computer) has some sort of awareness corresponding to its dynamic physical properties.

Really, if there is nothing more to vision than processing, then there is nothing more to imagining than that same processing.


Agreed, but the question is exactly how that processing can account for the corresponding visual awareness, whether it is 'real' or imagined.

Also, I'd like to clear one thing up: I don't deny that we experience things subjectively, only that "subjectively" means something other than physically. IOW, "subjective experience", to me, is just something that happens within my brain.

We need not make a distinction between subjective awareness and physical existence. When I say "subjective" in this thread I only mean that which is consciously experienced-- eg qualia.

Alright, well, first off, "blindsight" itself (the phenomenon of being able to describe something that you aren't really "looking at") can be explained by the constant saccades of our eyes.

Blindsight occurs in patients with particular brain lesions, although the underlying mechanisms explaining blindsight may be similar to what you're talking about.

Here's a very informative link that also reiterates some of the points I was trying to make by appealing to blindsight:

http://serendip.brynmawr.edu/bb/blindsight.html

(The Java demonstration doesn't seem to be working for me, though.)

However, I do see what you are saying, and it is the reason for the "Multiple Drafts Theory" that Dennett proposed in Consciousness Explained. After I see something once, I have it stored in my memory, and don't need to see it again. However, every time I see something similar to it, a new "draft" of the same thing is processed, and thus I can notice more and more detail about something the longer I look at it. IOW, the longer I look at something, the more "drafts" are processed of the same thing, and thus more detail can be analyzed.

This again explains the underlying mechanisms for consciousness without really making the phenomenon itself intelligible. It might be a good place to start, but as it stands it still doesn't address the problem. In particular, why should having multiple drafts of information in the brain suddenly 'ignite' into some sort of conscious awareness? I can see how the multiple drafts theory explains information processing, but not how that information processing somehow culminates into a conscious experience.

Put another way: as it stands I can imagine a 'multiple drafts' kind of processing occurring unconsciously in someone's brain without that person having any corresponding subjective awareness. What part of the theory can explain to me why my imagined hypothetical situation can be false, ie, how can multiple draft theory explain to me how and why it intelligibly accounts for subjective awareness of qualia in terms of information processing in the brain?

[edit for grammar]
 
Last edited:
  • #45
Originally posted by hypnagogue
Well, claiming to see the cow "as it is" is already opening a big can of philosophical worms, but I'll leave that assertion alone.

While you are, of course, right, we do need to ignore such philosophical problems as are not directly related to the subject at hand.

I agree that all that is important in seeing the cow is having the proper portions of your brain functioning in the right way, so as to elicit subjective awareness of said cow, whether the cow is really there or not. But this still doesn't address the problem of qualia.

You made the mistake of saying "so as to elicit subjective awareness of the cow". More on this later.

I think this statement might be indicative of a misunderstanding between us: "What do you think happens when I see an actual cow? You see, there needn't be phenomenological "qualia" then, since the cow is really there in front of me." This isn't really the case. Qualia are not something like little subatomic particles of awareness. Qualia are simply those sensual percepts that you are aware of-- redness, wetness, a flowery smell, etc. As such, the definition of qualia is quite independent of any underlying cause of the appropriate brain activity, in the sense that I can experience the set of qualia that I call a cow whether a cow is really there or not (provided I have a vivid enough imagination, or am having a dream, etc).

True enough, except that I (and, apparently, we) do not accept that we can experience such qualia without the same (or very similar) physical stimulus every time. You see, when I saw the cow the first time, certain parts of my brain became active. Note: They did not become active so that I could become conscious of the cow (that is an Idealistic assumption), but they became active because that is what they do (that is what the human brain has evolved to be good at). So, since they became active then, I processed "cow". Now, I am capable of re-stimulating those same parts of my brain so that they again process "cow".

So we can approach sensory awareness from two angles: 1) from the point of view of appropriate brain activity and 2) from the point of view of the corresponding sensory awareness (qualia).

But this distinction is really the biggest problem, and I don't think such a distinction should exist. As I see it (currently), the brain's processing (the processing of information about texture, color, etc...) is the sensory awareness. More on this later.

We directly perceive the latter, and go on to observe that it is highly correlated with the former. Now the question is, how do we get from the former to the latter, ie, how do we make qualia intelligible in terms of brain activity? How can electrons swirling around in our brains account for the visual awareness of the color red? Another way to put this is that if we did understand how qualia are intelligible in terms of physical processes, we should theoretically be able to predict whether or not a given system (such as a computer) has some sort of awareness corresponding to its dynamic physical properties.

Jackpot! You have hit one of the most important points, and I will return to it also later.

Agreed, but the question is exactly how that processing can account for the corresponding visual awareness, whether it is 'real' or imagined.

The visual awareness is not accounted for by the processing, it is the processing, nothing more.

We need not make a distinction between subjective awareness and physical existence. When I say "subjective" in this thread I only mean that which is consciously experienced-- eg qualia.

Sure, but I intend to show that "qualia" are brain activity (which I think is the same thing you intend to show, and which I will return to later in the post).

This again explains the underlying mechanisms for consciousness without really making the phenomenon itself intelligible. It might be a good place to start, but as it stands it still doesn't address the problem. In particular, why should having multiple drafts of information in the brain suddenly 'ignite' into some sort of conscious awareness? I can see how the multiple drafts theory explains information processing, but not how that information processing somehow culminates into a conscious experience.

And that is the big point, it doesn't culminate into a conscious experience. It (the processing on all different parts of the brain, no "center") is the conscious experience. Just wait, I'm almost there...

Put another way: as it stands I can imagine a 'multiple drafts' kind of processing occurring unconsciously in someone's brain without that person having any corresponding subjective awareness.

And now I can finally explain all of the things that I wanted to explain before. The point of Dennett's theory is that anything that accomplishes the processing of Multiple Drafts, as we do, is conscious. That's the real counter-intuitive part. In Dennett's framework, there is no room for "Zombies", because he takes the "intentional stance" that anything with the appropriate physical characteristics is "experiencing" subjectively, since the subjective experience itself is the physical interactions.

Now, first I said I'd get back to the fact that it is a mistake to say "the physical interactions elicited the subjective experience of the cow". It is a mistake because any computer (organic or otherwise) that processes in multiple drafts, as we do, is conscious because the physical interactions are the consciousness. IOW, the physical interactions do not elicit subjective experience, they are subjective experience, and if anything else produced the same physical interactions, it too would be conscious.

I said you hit the jackpot, because you hit on the fact that we should be able to determine whether a specific computing system is or is not conscious. Well, if Dennett's theory is correct, we can - because we take the intentional stance, and assume that anything undergoing these physical interactions is conscious. Really, this should not be so shocking to so many so-called materialist philosophers (though many have just ignored the theory off-hand), because they themselves have been saying "there is nothing more to subjective experience than the physical interactions", but they didn't realize the greatest implication (which logically follows) of such a stance: anything that undergoes these physical interactions is conscious, since there is nothing more to consciousness than those physical interactions. I cannot stress enough the importance of that point.

What part of the theory can explain to me why my imagined hypothetical situation can be false, ie, how can multiple draft theory explain to me how and why it intelligibly accounts for subjective awareness of qualia in terms of information processing in the brain?

By showing you that the physical interactions are the subjective qualia. The physical interactions are the consciousness of the "cow" in the first place (when physical stimulus enters our retina that has been affected by the presence of a cow), and they are the consciousness of the "cow" in the second instance (when the physical stimulus comes, from memory, to the same visual processing centers).

I'm sure that I have not answered all of your questions satisfactorily. Just keep asking them, because I may be missing the point myself. I don't claim to have the answer, I just like this possibility :wink:.
 
  • #46
OK, I think I see where our mutual misunderstanding lies now. This is a microcosm of it:

The visual awareness is not accounted for by the processing, it is the processing, nothing more.

Let me be more precise on what I mean by "accounting for" and "eliciting." The latter may be a little misleading, but the former I think is a more accurate description of what I'm getting at. By "accounted for," I basically mean "made intelligible."

Let's make an analogy. Now, I think we both would agree that visual awareness is an emergent property of the brain-- individual neurons or sufficiently small groups of neurons are not necessarily visually aware, but their mass behavior in the occipital lobe certainly is. Similarly, the fluidity of water is an emergent property-- individual molecules of water or sufficiently small groups of molecules are not fluid, but the mass behavior of a glass of water certainly is.

So, we can say the fluidity of the water simply is the mass behavior of all the molecules of water. Assuming no prior scientific understanding of the phenomenon, we can come to this conclusion simply by analyzing the microscopic motion of water molecules and noting that it is highly correlated with the macroscopic fluid motion of the water. But we still have not accounted for exactly how and why this correspondence exists, beyond the fact that we know the water itself is composed of these water molecules. Thus, we undergo some more scientific inquiry and eventually construct a bridge principle explaining how and why the correspondence exists-- we note that water molecules are polar and undergo various electrostatic interactions and show how these microscopic interactions can account for the fluid dynamics of the entire system. Thus, we see how the microscopic behavior "accounts for" the macroscopic behavior.

(As a side note, you could also substitute "elicits" or even "causes" for "accounts for" in the above sentence. In a way it doesn't make sense to say that the microscopic motion of water molecules causes the macroscopic fluidity, since the latter really is the former. But it is also easy to see, if we are a little more liberal with our definitions, exactly how "causation" is a relevant and valid concept in this context. I believe this is what is meant by saying that brain activity "causes" consciousness, or at least this is the sense in which I understand it-- but for the sake of this discussion I will continue to abstain from using this terminology.)

Analogously, we can say that consciousness simply is the mass behavior of neurons in the brain. We come to this conclusion by documenting the numerous correlations between conscious awareness and brain activity. But we still have not accounted for how and why this correlation exists-- we have not yet made it intelligible in terms of a bridge principle.

For instance, while we know that it doesn't make sense to call a single water molecule or a sufficiently small group of water molecules fluid, if we are told the number of water molecules there are in a given system and are also given the other relevant properties of the system-- volume of the container, temperature and pressure, strength of gravity, etc., we can predict whether or not this collection of water molecules will display macroscopic fluidity. This predictive power comes as a consequence of our understanding of the bridge principle linking the microscopic behavior of the system to the macroscopic. However, we can't do the analogous operation for consciousness. We know that a single neuron or a sufficiently small group of neurons interacting will not be conscious, true enough. But if we are given a certain system of neurons undergoing a set of dynamic interactions, we will not be able to predict if these neurons are collectively conscious. This comes as a consequence of our lack of a coherent bridge principle. We know there is a correlation between the microscopic and macroscopic properties of the system, but we don't understand the conceptual details and nuances of that correlation.

Note that it is not enough even to say that we know such and such interactions in so and so part of the brain will produce subjective awareness of qualia X, Y, and Z. This constitutes a good understanding of brain functionality, but not necessarily a good understanding of the general physical principles linking elementary physical behavior of a system to consciousness.

Another thing I want to say on this subject:

And that is the big point, it doesn't culminate into a conscious experience. It (the processing on all different parts of the brain, no "center") is the conscious experience.

Let me better explain what I meant in the part of my post that you replied to in the clip quoted above. We know from the example of blindsight that not all brain processes are correlated with conscious awareness. So, very roughly speaking, we can speak of two dynamic systems of patterns of activity in the brain C and U, defined such that C is correlated with consciousness and U is not. Now, what are the properties that differentiate C from U such that C is correlated with conscious awareness but U is not? This is precisely the sort of question we can't answer without a good bridge principle.
 
  • #47
Originally posted by hypnagogue
Let me be more precise on what I mean by "accounting for" and "eliciting." The latter may be a little misleading, but the former I think is a more accurate description of what I'm getting at. By "accounted for," I basically mean "made intelligible."

Let's make an analogy. Now, I think we both would agree that visual awareness is an emergent property of the brain-- individual neurons or sufficiently small groups of neurons are not necessarily visually aware, but their mass behavior in the occipital lobe certainly is. Similarly, the fluidity of water is an emergent property-- individual molecules of water or sufficiently small groups of molecules are not fluid, but the mass behavior of a glass of water certainly is.

Alright, true enough, though I think the "fluidity" of water is actually just a human categorization, since the liquid is the same as the solid, just arranged a little differently. IOW, there is nothing "liquidy" about liquids; they are just freer. "Fluidity", as a term in itself, seems to imply that there is a new (emergent) property of the material, but this is not the case.

So, we can say the fluidity of the water simply is the mass behavior of all the molecules of water. Assuming no prior scientific understanding of the phenomenon, we can come to this conclusion simply by analyzing the microscopic motion of water molecules and noting that it is highly correlated with the macroscopic fluid motion of the water. But we still have not accounted for exactly how and why this correspondence exists, beyond the fact that we know the water itself is composed of these water molecules. Thus, we undergo some more scientific inquiry and eventually construct a bridge principle explaining how and why the correspondence exists-- we note that water molecules are polar and undergo various electrostatic interactions and show how these microscopic interactions can account for the fluid dynamics of the entire system. Thus, we see how the microscopic behavior "accounts for" the macroscopic behavior.

Fine, but this is an attempt at explaining a behavior of a physical substance. The attempt that is often being made in philosophies of the mind is to create a bridge principle between that which does exist (the physical interactions of the mind) and that which does not (emergent properties of the mind). Thus, while I think a lot more research/discussion can be done on the matter of exactly which processes are occurring at what times in order to process (for example) the Mona Lisa, there is no need to (as you seem to imply, though I'm not sure it's what you mean) explain how these processes "translate" into conscious experience, since they themselves are conscious experience.

(As a side note, you could also substitute "elicits" or even "causes" for "accounts for" in the above sentence. In a way it doesn't make sense to say that the microscopic motion of water molecules causes the macroscopic fluidity, since the latter really is the former...

Good point.

But it is also easy to see, if we are a little more liberal with our definitions, exactly how "causation" is a relevant and valid concept in this context. I believe this is what is meant by saying that brain activity "causes" consciousness, or at least this is the sense in which I understand it-- but for the sake of this discussion I will continue to abstain from using this terminology.)

Thank you. I don't like to be so picky about semantics, and am not usually in general or scientific discussions, but, in Philosophy, it can really destroy the intended meaning if the words are wrong (or even if they are right, but they "sound wrong" to the other person).

Analogously, we can say that consciousness simply is the mass behavior of neurons in the brain. We come to this conclusion by documenting the numerous correlations between conscious awareness and brain activity. But we still have not accounted for how and why this correlation exists-- we have not yet made it intelligible in terms of a bridge principle.

Well, I think that has much to do with the evolution of consciousness, don't you? IOW, isn't the question really, "How did the simple (mostly reactive) behavior of our "ancestors" develop into self-consciousness and proactive calculation?"

For that, there is an entire (rather large) chapter in Consciousness Explained that gives hypothetical scenarios along with known scientific principles to explain everything, from the evolution of reactive movements and memories to the propagation of memes and cultural evolution.

For instance, while we know that it doesn't make sense to call a single water molecule or a sufficiently small group of water molecules fluid, if we are told the number of water molecules there are in a given system and are also given the other relevant properties of the system-- volume of the container, temperature and pressure, strength of gravity, etc., we can predict whether or not this collection of water molecules will display macroscopic fluidity. This predictive power comes as a consequence of our understanding of the bridge principle linking the microscopic behavior of the system to the macroscopic. However, we can't do the analogous operation for consciousness. We know that a single neuron or a sufficiently small group of neurons interacting will not be conscious, true enough. But if we are given a certain system of neurons undergoing a set of dynamic interactions, we will not be able to predict if these neurons are collectively conscious.

Not necessarily so. Remember what I was saying in the previous post: Anything that performs the functions of the Multiple Draft processing (as humans do) is conscious (at least, in Dennett's theory, this is taken for granted). Thus, we could easily show whether something really was or was not conscious, by looking for the kind of processing that occurs in the human brain.

This comes as a consequence of our lack of a coherent bridge principle. We know there is a correlation between the microscopic and macroscopic properties of the system, but we don't understand the conceptual details and nuances of that correlation.

Well, there are a lot of details that are still unknown, sure, but there are a lot of things about a PC that are unknown to me, and I still take for granted that it is processing in terms of binary code, and that its multiple parts work together not only in processing new information (in terms of that binary code) but also in displaying (for the benefit of the viewer) the text or picture (or whatever else) that is "asked for".

Note that it is not enough even to say that we know such and such interactions in so and so part of the brain will produce subjective awareness of qualia X, Y, and Z. This constitutes a good understanding of brain functionality, but not necessarily a good understanding of the general physical principles linking elementary physical behavior of a system to consciousness.

Do you mean that it's not enough to say that these brain functions produce conscious awareness, but we must also understand how they do so, and what it means to "produce conscious awareness" ITFP?

Let me better explain what I meant in the part of my post that you replied to in the clip qouted above. We know from the example of blindsight that not all brain processes are correlated with conscious awareness.

Sure, much like not all bodily processes have something to do with consciousness, that's just not their "area" :wink:...

So, very roughly speaking, we can speak of two dynamic systems of patterns of activity in the brain C and U, defined such that C is correlated with consciousness and U is not. Now, what are the properties that differentiate C from U such that C is correlated with conscious awareness but U is not? This is precisely the sort of question we can't answer without a good bridge principle.

What properties differentiate them? Well, I'd say it's mostly just their ability (or lack thereof) to take in input and to take part in the "question/answer game" of the brain. IOW, they are never "asked" about anything that is going on and they never "ask" for new information. (Please remember that "asking" and "answering" are purely physical phenomena, I'm just using terms easy to identify with. I think my explanation of the "asking/answering game" is in "Why the bias against materialism?".)
 
  • #48
Any response, good buddy? :smile:
 
  • #49
Whoops, sorry for the late response. I went away for the weekend and forgot to check back in on the archives. It kind of worries me that we're toiling away here in the relative obscurity of the archival section in the first place instead of one of the more viable and visible new philosophy forums, but oh well. Onward!

Originally posted by Mentat
Fine, but this is an attempt at explaining a behavior of a physical substance. The attempt that is often being made in philosophies of the mind is to create a bridge principle between that which does exist (the physical interactions of the mind) and that which does not (emergent properties of the mind). Thus, while I think a lot more research/discussion can be done on the matter of exactly which processes are occurring at what times in order to process (for example) the Mona Lisa, there is no need to (as you seem to imply, though I'm not sure it's what you mean) explain how these processes "translate" into conscious experience, since they themselves are conscious experience.

Although I pretty much agree with this, it's important to remember that it is an assumption that we're making here. In any case, there is a need to explain, since we have an explanatory gap: why are these neural processes conscious in the first place? Saying "they just are" is not much of an explanation.

I think your point about the link between consciousness and evolution also overlooks this question-- you talk about how the brain processes information, but not how certain information-crunching processes somehow become consciousness.

Not necessarily so. Remember what I was saying in the previous post: Anything that performs the functions of the Multiple Draft processing (as humans do) is conscious (at least, in Dennett's theory, this is taken for granted). Thus, we could easily show whether something really was or was not conscious, by looking for the kind of processing that occurs in the human brain.

To do this we'd need to have a much more sophisticated understanding of how the brain processes information. For instance, even if we accept that Multiple Draft processing really is a factor in (to use as neutral a word as I can think of) establishing consciousness, we still have not addressed the question of the importance of MD in the context of the functioning of the entire brain. IOW, is MD processing sufficient for consciousness? Is it even necessary?

We also have to take into account the role that physical properties play in the phenomenon of consciousness if we are to have a complete understanding of it in a materialistic paradigm. If we explain consciousness entirely by recourse to abstract information processing (the theory of functionalism), then we are essentially operating in an idealistic paradigm where an abacus or a pile of rocks can be conscious, so long as they perform the right 'computations' over a long enough period of time.

Well, there are a lot of details that are still unknown, sure, but there are a lot of things about a PC that are unknown to me, and I still take for granted that it is processing in terms of binary code, and that its multiple parts work together not only in processing new information (in terms of that binary code) but also in displaying (for the benefit of the viewer) the text or picture (or whatever else) that is "asked for".

A subjective mental image is not explicable or intelligible in terms of an objective computer 'image,' or to use less obfuscating language, a computer's set of photon outputs. An image on a monitor, in itself, is only a conglomeration of information. In the case of the brain, some conglomerations of information have the added property of being conscious. I know you are leery of subjective/objective distinctions, but they are essential to acknowledge this point. In the spirit of our conversation, we can think of subjective phenomena as being subsets of objective phenomena-- what matters is that we can say for some physical systems that there exists a subjective element (eg human brains) and for others there does not (eg computer monitors).

Do you mean that it's not enough to say that these brain functions produce conscious awareness, but we must also understand how they do so, and what it means to "produce conscious awareness" ITFP?

Basically, although I would rephrase it as such: it's not enough to say so-and-so brain functions are correlated with such-and-such conscious awareness, to any level of detail. We must also understand how and why this correlation exists in the first place.

Maybe an even better way to say it would be: to truly understand consciousness, we must construct a theoretical mapping not just from human brain states to human conscious experiences, but from any arbitrary physical system's states to any arbitrary conscious experience.

Sorry if my little slip-up with the word "produces" drew your suspicions. :wink:

What properties differentiate them? Well, I'd say it's mostly just their ability (or lack thereof) to take in input and to take part in the "question/answer game" of the brain. IOW, they are never "asked" about anything that is going on and they never "ask" for new information. (Please remember that "asking" and "answering" are purely physical phenomena, I'm just using terms easy to identify with. I think my explanation of the "asking/answering game" is in "Why the bias against materialism?".)

Well, maybe I am misunderstanding you, but it seems that with blindsight we clearly have a contradiction to this Q/A theory. Neural systems involved with visual processing are engaged in a "question/answer" game insofar as they collect input from the external world, meaningfully process it, and share this processed information meaningfully with further neural systems which go on to guide behavior back out in the external world (eg information is meaningfully shared with the motor cortex to guide a hand to reach out and pick up a cup, or with Broca's area to answer a question about the environment). In spite of all this, the Q/A activity of these visual neural systems with other neural systems does not have any attendant conscious visual awareness for those 'blind' portions of blindsight.
 
  • #50
Originally posted by hypnagogue
Whoops, sorry for the late response. I went away for the weekend and forgot to check back in on the archives. It kind of worries me that we're toiling away here in the relative obscurity of the archival section in the first place instead of one of the more viable and visible new philosophy forums, but oh well. Onward!

I was wondering if you'd abandoned the discussion altogether. Glad to see you back :smile:.

Although I pretty much agree with this, it's important to remember that it is an assumption that we're making here. In any case, there is a need to explain, since we have an explanatory gap: why are these neural processes conscious in the first place? Saying "they just are" is not much of an explanation.

I think your point about the link between consciousness and evolution also overlooks this question-- you talk about how the brain processes information, but not how certain information-crunching processes somehow become consciousness.

I see. Well, that's actually why I brought up evolution. After all, there are obviously certain reactive processes in "lesser" animals that resemble our consciousness. For example, there's the tendency to dodge "incoming objects". This has, apparently, evolved into the ability to become proactive, so that we can be "on guard" against any potential dangers of that kind. This proaction could then have evolved into an advanced perception of depth, which allows for 3-D imaging, and so on...

So, I didn't mean to dodge with the mention of evolution, but rather to refer you to the particular chapter in Consciousness Explained that deals with that, since I find it pertinent to the discussion.

Also, as to what processes "are conscious" and "aren't conscious", I think that that can easily be determined by examining which ones are necessary to just continue living (provided no danger or new circumstance presents itself), and which ones are just for greater understanding of the surrounding world (which, of course, leads to greater chances of survival).

For example: The parts which process visual stimulus are a part of consciousness, and so are the parts which contain memory.

To do this we'd need to have a much more sophisticated understanding of how the brain processes information. For instance, even if we accept that Multiple Draft processing really is a factor in (to use as neutral a word as I can think of) establishing consciousness, we still have not addressed the question of the importance of MD in the context of the functioning of the entire brain. IOW, is MD processing sufficient for consciousness? Is it even necessary?

I'd considered this, but, in Dennett's exposition of this theory, he didn't just present what it was, but rather presented (many) instances where Cartesian approaches fail miserably, and then replaced them with the MD approach, which (obviously) succeeds. The fact that the MD approach implies the question/answer processes within the brain that I described before (and illustrated with the "party game") is what makes it (in Dennett's opinion) rather necessary for an explanation of consciousness.

We also have to take into account the role that physical properties play in the phenomenon of consciousness if we are to have a complete understanding of it in a materialistic paradigm. If we explain consciousness entirely by recourse to abstract information processing (the theory of functionalism), then we are essentially operating in an idealistic paradigm where an abacus or a pile of rocks can be conscious, so long as they perform the right 'computations' over a long enough period of time.

But that is not Idealistic in any way. On the contrary, it is utterly Materialistic, since it completely eliminates "non-physical" parts of consciousness.

By this theory, any PC (for example) that performed the right physical functions would be conscious.

A subjective mental image is not explicable or intelligible in terms of an objective computer 'image,' or to use less obfuscating language, a computer's set of photon outputs. An image on a monitor, in itself, is only a conglomeration of information. In the case of the brain, some conglomerations of information have the added property of being conscious. I know you are leery of subjective/objective distinctions, but they are essential to acknowledge this point. In the spirit of our conversation, we can think of subjective phenomena as being subsets of objective phenomena-- what matters is that we can say for some physical systems that there exists a subjective element (eg human brains) and for others there does not (eg computer monitors).

So you are referring to the processing centers as "subjective elements"?

btw, I want to avoid falling into the homunculus trap, so let's clarify further the distinction between the monitor and subjective experience. The monitor is an output system that exists for the benefit of a "viewer". However, no such "viewer" or output system can exist in the human brain (otherwise one gets infinite regress...the homunculus problem). Thus, saying "I have a picture in my mind" or speaking of a continual "narrative" or "display" of consciousness is illogical, since it implies some internal "viewer".

Instead of this - in the MD Theory - there is stimulation, by the memory, of the same areas that must be stimulated for conscious experience of external phenomena.

Basically, although I would rephrase it as such: it's not enough to say so-and-so brain functions are correlated with such-and-such conscious awareness, to any level of detail. We must also understand how and why this correlation exists in the first place.

Maybe an even better way to say it would be: to truly understand consciousness, we must construct a theoretical mapping not just from human brain states to human conscious experiences, but from any arbitrary physical system's states to any arbitrary conscious experience.

Sorry if my little slip-up with the word "produces" drew your suspicions. :wink:

On the last sentence of the second paragraph (quoted) did you mean arbitrary unconscious experience?

You know, I think that this problem - which you correctly see as vital - may be resolved by determining which parts of the brain participate in the question/answer cycles. After all, there are many parts that just don't do this, and the fact that they don't participate is what makes them "unconscious" processes.

Let me clarify on that: When I say "participates in the question/answer cycle" I mean the question/answer cycle that begins with external stimulus.

Well, maybe I am misunderstanding you, but it seems that with blindsight we clearly have a contradiction to this Q/A theory. Neural systems involved with visual processing are engaged in a "question/answer" game insofar as they collect input from the external world, meaningfully process it, and share this processed information meaningfully with further neural systems which go on to guide behavior back out in the external world (eg information is meaningfully shared with the motor cortex to guide a hand to reach out and pick up a cup, or with Broca's area to answer a question about the environment). In spite of all this, the Q/A activity of these visual neural systems with other neural systems does not have any attendant conscious visual awareness for those 'blind' portions of blindsight.

But it did participate with memory, and that's what's really important. The fact that these people are impaired in their visual centers' abilities to "question" the memory, in certain instances, should be what accounts for "blindsight".

g2g, sorry. Time's up, but I think there is more to be said.
 
  • #51
Originally posted by Mentat
But that is not Idealistic in any way. On the contrary, it is utterly Materialistic, since it completely eliminates "non-physical" parts of consciousness.

The functionalist position is not materialistic in any meaningful sense, insofar as it denies the importance of material properties in producing consciousness. According to the functionalist view, any physical process embodying the right 'calculations' will be conscious. This holds for a human brain, or a super-fast computer simulating a human brain, or a pile of rocks jumbled around over eons whose 'calculations' also simulate a human brain. There is something quite physically arbitrary in the notion of 'calculation.' Functionalism states that an abstract relationship among physical things is responsible for consciousness; to formulate a materialist understanding of consciousness, we must show consciousness to be grounded in at least some actual physical properties, and not entirely on abstract and essentially arbitrary relationships.

But it did participate with memory, and that's what's really important. The fact that these people are impaired in their visual centers' abilities to "question" the memory, in certain instances, should be what accounts for "blindsight".

Not so. Please check out the 'consciousness' thread in the Biology forum.

As to the rest of your points, I still don't think they are striking at the heart of the matter; they attempt to give descriptions of consciousness but nonetheless speak only of material concepts, and not how those material concepts are to be conceptually linked with qualitative mental phenomena in any meaningful sense beyond saying "these processes just are consciousness." This is really the central point where you and I have not been able to see eye to eye.

In reading "What is it like to be a bat?" by Thomas Nagel for the aforementioned consciousness thread, I see that Nagel discusses exactly the sort of conceptual difficulty that I have been talking about. Please read over this essay, and maybe you will better understand it via Nagel's language.

http://members.aol.com/NeoNoetics/Nagel_Bat.html
 
  • #52
Originally posted by hypnagogue
The functionalist position is not materialistic in any meaningful sense, insofar as it denies the importance of material properties in producing consciousness. According to the functionalist view, any physical process embodying the right 'calculations' will be conscious. This holds for a human brain, or a super-fast computer simulating a human brain, or a pile of rocks jumbled around over eons whose 'calculations' also simulate a human brain. There is something quite physically arbitrary in the notion of 'calculation.' Functionalism states that an abstract relationship among physical things is responsible for consciousness; to formulate a materialist understanding of consciousness, we must show consciousness to be grounded in at least some actual physical properties, and not entirely on abstract and essentially arbitrary relationships.

Well, I agree that it's supposed to be grounded in some physical properties, but functionalism is not wrong in saying that anything that processes like the human brain will be conscious. However, "processing" and "calculating" are hard concepts to pin down (without the Multiple Drafts (and question/answer) model, or some other alternate model, to explain how we calculate).

Not so. Please check out the 'consciousness' thread in the Biology forum.

As to the rest of your points, I still don't think they are striking at the heart of the matter; they attempt to give descriptions of consciousness but nonetheless speak only of material concepts, and not how those material concepts are to be conceptually linked with qualitative mental phenomena in any meaningful sense beyond saying "these processes just are consciousness." This is really the central point where you and I have not been able to see eye to eye.

In reading "What is it like to be a bat?" by Thomas Nagel for the aforementioned consciousness thread, I see that Nagel discusses exactly the sort of conceptual difficulty that I have been talking about. Please read over this essay, and maybe you will better understand it via Nagel's language.

http://members.aol.com/NeoNoetics/Nagel_Bat.html

Will do. Until then, I want to remind you that the intentional stance (which Dennett advocates) requires that one accept that such physical processes are consciousness, as opposed to "producing" consciousness, or "being linked to" consciousness. If there is nothing else to consciousness (and I see no reason, yet, why there should be) then it no longer needs to be a "mystery".
 