Is Consciousness Dependent on Time or Complexity?

In summary: even if consciousness is the result of complex physical processes and could thus be simulated by a hugely complex computer program, it remains controversial whether such a simulation would produce conscious experiences.
  • #1
pe3
IF consciousness is a result of complex physical processes, and could thus be simulated by a hugely complex computer program (which should not be extremely controversial) -

WHAT IF the finite execution of the computer program were presented not in temporal steps (using the time dimension) but in two or three spatial dimensions, for example by drawing it graphically onto a really large (but not infinite) sheet of paper?

Would there be consciousness in 2D?

I am interested in whether consciousness is inseparably bound to time, or if it can exist wherever there is causality and enough complexity.

I guess the 2D "mind" would "feel" that it is flowing through "time", when it would actually be flowing through branches of the graph and thus accumulating a "memory" of past "events" and "decisions".
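
As a toy illustration of what I mean, here is a minimal sketch (my own, in Python, with an elementary cellular automaton standing in for the "hugely complex program"): the whole finite execution is stored as one static 2D grid, each row being one "time step", so the complete run could in principle be drawn onto the sheet of paper in one go.

Code:
# Unroll the execution of a 1D cellular automaton (Rule 110, a stand-in for
# any finite program) into a static 2D grid: each row is one "time step",
# so the whole run exists at once as a spatial pattern rather than a process.

RULE = 110
WIDTH = 64
STEPS = 32

def rule_lookup(left, centre, right, rule=RULE):
    """Return the next cell value from a 3-cell neighbourhood."""
    index = (left << 2) | (centre << 1) | right
    return (rule >> index) & 1

def unroll(initial):
    """Return the whole execution as a list of rows (a 2D 'sheet')."""
    sheet = [initial]
    for _ in range(STEPS):
        row = sheet[-1]
        nxt = [rule_lookup(row[(i - 1) % WIDTH], row[i], row[(i + 1) % WIDTH])
               for i in range(WIDTH)]
        sheet.append(nxt)
    return sheet

if __name__ == "__main__":
    start = [0] * WIDTH
    start[WIDTH // 2] = 1            # a single live cell in the middle
    for row in unroll(start):        # print the static 2D "drawing"
        print("".join("#" if c else "." for c in row))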
 
  • #2
I could write a huge series of algorithms on a piece of paper describing the functionality of a computer spreadsheet program. But I couldn't use it to crunch numbers. Until you implement it on something that can change and store states, it's just a plan.
 
  • #3
A computer or Turing machine is a combination of tape and gate. So you have the data but also the processor. Time is a required part of the story.
 
  • #4
Thanks for your comments!

apeiron said:
A computer or Turing machine is a combination of tape and gate. So you have the data but also the processor. Time is a required part of the story.

I understand your claim sounds natural, but I'm still not convinced.

Wikipedia (the most reliable source of all) says that "A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine".
I can't find any reason why you couldn't simulate the Turing machine with a 2D or 3D representation (for example, the one-dimensional tape of the Turing machine being stretched out into a sheet). The "processor" would thus be the rules for how you set up the symbols on the sheet.
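
To make this concrete, here is a minimal sketch of what I have in mind (Python, with a tiny made-up machine, a unary incrementer, chosen purely for illustration): the tape contents after every step are stacked into a static 2D "sheet", and the transition table plays the role of the "processor", i.e. the rules for how the symbols get set up on the sheet.

Code:
# (state, symbol) -> (write, move, next_state); move is +1 (right) or -1 (left)
RULES = {
    ("scan", "1"): ("1", +1, "scan"),   # skip over the existing 1s
    ("scan", "_"): ("1", +1, "halt"),   # append one more 1, then halt
}

def run_as_sheet(tape, state="scan", head=0, max_steps=20):
    """Run the machine, returning every tape snapshot as one row of a 'sheet'."""
    tape = list(tape)
    sheet = ["".join(tape)]
    for _ in range(max_steps):
        if state == "halt":
            break
        if head >= len(tape):           # extend the tape with blanks as needed
            tape.append("_")
        symbol = tape[head]
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head = max(head + move, 0)
        sheet.append("".join(tape))
    return sheet

if __name__ == "__main__":
    for row in run_as_sheet("111_"):    # the unary number 3 plus a blank
        print(row)                      # last row reads "1111"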

"Math Is Hard": I also understand your reasoning, but I was talking about laying out the execution of the program in some suitable 2D or 3D format, not the program source code itself.

Anyway, if Max Tegmark is right that all mathematical structures exist, I guess my original question would be obsolete, since all the possible consciousnesses would exist whether we produce them with a processor or not.
 
  • #5
"IF consciousness is a result of complex physical processes, and could thus be simulated by a hugely complex computer program (which should not be extremely controversial)"

It is controversial, or at least contested. Roger Penrose wrote a book (maybe more than one) on why consciousness cannot be simulated by a computer. The idea that consciousness is reducible to algorithms is known as "strong AI".
 
  • #6
madness said:
"IF consciousness is a result of complex physical processes, and could thus be simulated by a hugely complex computer program (which should not be extremely controversial)"

It is controversial, or at least contested. Roger Penrose wrote a book (maybe more than one) on why consciousness cannot be simulated by a computer. The idea that consciousness is reducible to algorithms is known as "strong AI".

Yes, I admit that; that's why I wrote "extremely controversial". Indeed, if the simulation is impossible, that would nicely dissolve my questions (as obsolete). If you can simulate consciousness with a computer, you get many weird possibilities.
 
  • #7
Hi pe3. Your suggestion that a piece of paper with a given structure may be conscious is similar to other arguments in the literature. From my perspective, the argument falls into the category of how one defines a computer. Others have challenged computationalism on this basis, most notably Putnam and Searle. Putnam has retired at this point, but Mark Bishop has taken up Putnam's line of attack and tried to extend it, as have others.

In the opposite corner stand the defenders of computationalism, including Chalmers and Chrisley, and others such as Copeland and Endicott, who have tried to define what a computer is and how it is realized. For example, one of Chalmers' papers (http://consc.net/papers/rock.html) is available on the net. Chalmers tries to attack the work of Putnam, stating for example:
The argument that the system [such as a rock] implements the FSA is straightforward. We simply define physical state a to be the disjunction s_1 v s_3 v s_5 v s_7, and state b to be s_2 v s_4 v s_6, and we define the mapping f so that f(a)=A and f(b)=B.
...
The problem, I think, is that Putnam's system does not satisfy the right kind of state-transition conditionals. The conditionals involved in the definition of implementation are not ordinary material conditionals, saying that on all those occasions in which the system happens to be in state p in the given time period, state q follows. Rather, these conditionals have modal force, and in particular are required to support counterfactuals: if the system were to be in state p, then it would transit into state q. This expresses the requirement that the connection between connected states must be reliable or lawful, and not simply a matter of happenstance. It is required that however the system comes to be in state p, it transits into state q (perhaps with some restriction ruling out extraordinary environmental circumstances; perhaps not). We can call this sort of conditional a strong conditional.

It is not quite clear whether Putnam intends his conditionals to have this sort of modal force. He requires that the relation be a causal relation, and goes to some lengths to argue that his construction indeed satisfies the causal relation by arguing that its being in state a fully determines its transition into state B, so he may have some such requirement in mind. In any case, I will argue that his system does not satisfy the strong conditionals in the way that implementation of an automaton requires.

In short, there are arguments in the literature that uphold both sides, and no clear winner has emerged. The possibility that an object which doesn't undergo some kind of active causation (as described by Math Is Hard above) might be conscious is not meant to be taken seriously in the sense that Putnam, Searle and others are actually suggesting a rock is conscious. What they are suggesting is that we don't have a clear philosophical understanding of how to define a computer, and without that, computationalism remains as problematic as ever. They are also saying that we CAN'T define a computer sufficiently, so computationalism is FALSE.

If you're interested in the topic, I can suggest plenty of philosophical papers that have been published on this issue.
 
  • #8
Thank you, Q_Goest! The information was very interesting.

I don't really believe a piece of paper to be conscious, but I couldn't myself find any arguments against it (other than practical ones). Computer-generated consciousness (whether possible or not) is still interesting.

If you would be so kind as to give some pointers to related papers, I would be grateful.
 
  • #9
pe3 said:
Wikipedia (the most reliable source of all) says that "A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine".
I can't find any reason why you couldn't simulate the Turing machine with a 2D or 3D representation (for example, the one-dimensional tape of the Turing machine being stretched out into a sheet). The "processor" would thus be the rules for how you set up the symbols on the sheet.

Yours is a two-step argument.

1) the proposition that consciousness is a computational process that can be simulated by a Turing machine.

2) a Turing machine does not need to actually process data to turn input to output. Just having the tape - all input and output states represented as data - gives you what is essential to the idea of a "computational process".

So forget the first proposition for the moment and just focus on the validity of the second.

You need actual change to do computation. The gate has to be able to print and erase marks. It has to be able to move a step left or right. Can you imagine a way these extra actions, happening in time, could be collapsed to a static representation on the tape?

Another way of looking at it: say I have just created this static, tape-based input-output mapping. It is called 5x5=32. You protest that I cannot be right. Well, I reply: what makes it wrong, unless there is some act that selects the number 25 instead of all the other possible numbers that could be written down as output?
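
To put the same point in code, here is a minimal sketch (my own toy contrast): a static table of marks can "claim" that 5x5=32, and nothing about the object itself selects 25 over 32; an actual act of computation, by contrast, is forced to its answer by the steps it carries out.

Code:
static_tape = {"5x5": "32"}        # just marks on a tape; no act selects 25

def multiply_by_repeated_addition(a, b):
    """Actually perform the computation, one state change at a time."""
    total = 0
    for _ in range(b):             # each pass is a real change of state
        total += a
    return total

print(static_tape["5x5"])                      # "32" -- the unchallenged marks
print(multiply_by_repeated_addition(5, 5))     # 25 -- selected by the process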

I agree there is something seductive about the Tegmarkian idea that if something is possible, it exists. The same with the block universe story peddled by Barbour, the modal realism of Lewis, the many QM worlds of Everett, and the One of Parmenides (to get back to the origin of these timeless, changeless speculations).

But it simply represents the philosophical urge to collapse all reality to a single principle or essence. To treat existence or location as the monadic fundamental by making time, change, process, development, somehow an emergent illusion.
 
  • #10
Hi pe3,
I just want to clarify that the argument put forth by Putnam does not require that... "the finite execution of the computer program were presented not in temporal steps (using the time dimension) but in two or three spatial dimensions, for example by drawing it graphically onto a really large (but not infinite) sheet of paper..." Putnam tries to make his argument as general as possible, so making those kinds of limiting claims (i.e. that the actual program execution must be symbolized on the paper) isn't necessary and actually detracts from the generality. Putnam also recognizes that the time evolution of a system is computational in the sense that it follows a mathematical description, just as Tegmark does. That part isn't very controversial as near as I can tell, albeit there are limitations on our ability to describe nature using mathematics (radioactive decay, for example).

Putnam is widely quoted as saying*, "every ordinary open system is a realization of every abstract finite automaton" meaning that (per Mark Bishop) "The computational states of a system are always relative to the observed function and the underlying physics of the system. ie Unlike say mass or form, computational states are not intrinsic to physical states of matter but always require a mapping from physical state to logical state."
See Bishop: http://www.doc.gold.ac.uk/~mas02mb/Selected Papers/2004 BICS.pdf

I'd suggest reading through the paper from Chalmers and the one by Bishop to start. The papers are pretty painful reading if you ask me, not direct and to the point unless perhaps you're a philosopher, which I'm not. If you're looking for more support of the 2D consciousness claims you make in the OP, I really can't help you. All I'm offering is a broader understanding of similar arguments that are found in the literature.

*See "Representations and Reality" by Putnam, in appendix.
 
  • #11
pe3, I wanted to add one more observation. In your OP, you want to add a symbolic representation of an algorithm to a piece of paper. I think you've simply added another symbolic layer between the computation and the meaning. So you really haven't added anything interesting between the two. It's nothing more than another symbolic interpretation which you don't need. As I'd pointed out earlier, Putnam's argument is very general, so there's no need to limit the argument by adding the symbolic representation of the computation that's allegedly the source of consciousness.

I don't necessarily agree with Putnam's argument but in the end, I think he's correct.
 
  • #12
Q_Goest said:
As I'd pointed out earlier, Putnam's argument is very general, so there's no need to limit the argument by adding the symbolic representation of the computation that's allegedly the source of consciousness.

The extreme position that Putnam was positing was that the molecules of the paper would pass through a succession of computational states. At any moment there would be, somewhere within an infinity of configurations, some configuration that could directly stand for the state of some computational process. So the paper would be conscious because at any moment there would be some stage of calculation represented, and then an instant later, somewhere else among the molecules, the next required state. With the further unsupported assumption that consciousness is a sequence of state mappings.

So pretty crazy and open to attack from so many directions that it is hardly worth the bother. Except that it is the kind of thought experiment that can bring people's unwitting assumptions about the brain~mind to the surface for discussion.

However, in relation to the OP, it can be seen that time is being invoked. There is no program making things happen. But time is needed to get from one state to the next as the molecules jitter about to throw up new candidate configurations for someone's phantom (non-)choice.

The OP was about a frozen timeless world of symbols and their (non-)manipulation. So a fundamentally different argument I would have thought. In kind if not spirit.
 
  • #13
Hi apeiron,
apeiron said:
The extreme position that Putnam was positing was that the molecules of the paper would pass through a succession of computational states.
Actually, wasn't it Searle that said:

…you can assign a computational interpretation to anything.
...there is some pattern in the molecule movements which is isomorphic with the formal structure of Wordstar
See on the web at SEP: http://plato.stanford.edu/entries/chinese-room/#5.1

Putnam's argument is more general than that, but certainly is along the same lines.
apeiron said:
With the further unsupported assumption that consciousness is a sequence of state mappings.
What is sometimes referred to as "Putnam mapping" isn't the suggestion that consciousness is a sequence of state mappings. Putnam's mapping is simply the recognition that computationalism posits that consciousness comes about due to physical changes in state, and that these states can be arbitrarily compared to other physical states and thus claimed to be "equal". They are equal because the computational interpretation of a physical state is imposed on it, not the other way around. There is nothing intrinsic about the computational state (this would make an excellent topic for discussion). With that in mind, Putnam is saying that one can arbitrarily call one physical state equal to another. In other words, a computer goes through states A, B, C... and a rock goes through states 1, 2, 3... so one can 'map' state A = 1 and B = 2 and C = 3, etc... Note that this contention isn't rejected by other philosophers, and isn't really that controversial, believe it or not. I have a hard time accepting it myself, but I think Putnam (and Searle) are on to something; they just haven't worked through the logic sufficiently to convince everyone.
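
A minimal sketch of that mapping (toy state labels only, nothing from Putnam's own text): the pairing can always be written down after the fact, which is why the computational interpretation is observer-imposed rather than intrinsic to the physics.

Code:
computer_run = ["A", "B", "C", "D"]   # computational states of some machine
rock_run = [1, 2, 3, 4]               # successive physical states of a rock

# Build the "Putnam mapping" simply by pairing the two sequences off.
putnam_map = dict(zip(rock_run, computer_run))
print(putnam_map)                     # {1: 'A', 2: 'B', 3: 'C', 4: 'D'}

# Under this observer-imposed mapping the rock's history "implements" the
# computation -- which is why critics ask what, beyond the mapping itself,
# makes a physical system a genuine implementation.
relabelled = [putnam_map[s] for s in rock_run]
print(relabelled == computer_run)     # True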
 
  • #14
Q_Goest said:
Actually, wasn't it Searle that said:

Searle certainly was the one to frame the ideas in their most vivid form.

Q_Goest said:
There is nothing intrinsic about the computational state (this would make an excellent topic for discussion). With that in mind, Putnam is saying that one can arbitrarily call one physical state equal to another. In other words, a computer goes through states A, B, C... and a rock goes through states 1, 2, 3... so one can 'map' state A = 1 and B = 2 and C = 3, etc... Note that this contention isn't rejected by other philosophers, and isn't really that controversial, believe it or not.

That camp of philosophers - even Searle and Dennett - never came close to really getting down to brass tacks IMHO. Yet in their era, there were theoretical biologists who really understood semiotics and symbol grounding - from a solid physical perspective.

Howard Pattee (Binghamton) would be the sharpest I came across, building on von Neumann's self-replicating automata.

You are knowledgeable about the issue so might like to check this introductory paper here:
http://informatics.indiana.edu/rocha/pattee/umerez.pdf

And I have a few e-copies of Pattee's papers I could send, which are now hard to get hold of.
 
  • #15
Thanks apeiron, I'll read the paper.

In my OP I posed questions more than claims, and I got interesting replies. I'm pretty confident I can now safely throw away my old Game of Life source printouts :)

Yes I admit I'm a bit seduced by Tegmark, block universe, many QM worlds etc.

I understand now that it is difficult, if not impossible, to draw a line for what is a computer. Makes me wonder if it is possible at all to simulate consciousness with a computer. But then again, what is it that the brain is doing differently? I will study more before further questions.
 
  • #16
pe3 said:
I understand now that it is difficult, if not impossible, to draw a line for what is a computer. Makes me wonder if it is possible at all to simulate consciousness with a computer. But then again, what is it that the brain is doing differently? I will study more before further questions.

I studied this question fairly intensely, and I guess the outcome was that it seems possible to be sure the brain is not doing any of the simple kinds of computation we talk about - from Turing machines, to cellular automata, to non-linear, to neural nets, to massively parallel, to whatever. But also that it does share some broad connection to some of these things - generative neural nets or forward models in particular.

The reason I switched away from the computation/neuroscience/psychophysics/philosophy orbit of mind studies was a dissatisfaction - the sense that where things were clearly modelled (as in Turing machines), they then abstracted away stuff that was actually essential to brains.

So I now operate in a different orbit of theoretical biology, hierarchy theory, semiotics, systems science, thermodynamics, dissipative structures. Which still has had surprisingly little impact on mind science. These biology guys are addressing the issue of "what is life?". And what they conclude also must be the basis for our answers about "what is mind".

Another sub-orbit in all this would be chaos and complexity theory. But actually I found the "Santa Fe" brand approach to be rather mired in the computationalist mindset.

So there is this divide in academia. And it seems to be between atomists and systems thinkers, between those who believe in information-theoretic approaches to modelling, and those who believe that semiotic approaches - ones which are about meaning rather than information - are what need to be developed.

In earlier times, the divide might be described as that between reductionists and holists.

Anyway, that has been my experience. Mind science is dominated by informational atomism (and direct reactions to it, like QM consciousness and panpsychism). Biology hosts a reasonably respectable camp of holists, systems thinkers and semioticians who operate from a fundamentally different mindset.

If you read only the standard mind science literature, you will never see that this alternative camp even exists.
 
  • #17
Everything works on computational intelligence. E.g. a bouncing ball is just following its program and is a simple form of intelligence.
If the intelligence is better, such as in a brain or computer, then it is easy to 'know that it knows' - there is nothing supernatural in that. It's clever, sure, but it does not need special unknown powers. It's all about a logical rationality asking questions and looking for answers. It was rational thought that discovered atoms, not seeing them.

It does not need anything special - just a good 'program'. Why add secret mysteries? Not needed.

There is a dispute about what consciousness means exactly (i.e. whether it is a 'thing' that floats around for all eternity).
 
  • #18
debra said:
Everything works on computational intelligence. E.g. a bouncing ball is just following its program and is a simple form of intelligence.

Wonderful. We used to have anthropomorphism, imputing animate qualities to inanimate objects. Now we have computomorphism where it is just so obvious that everything is a program.
 
  • #19
Hi apeiron,
I read through (skimmed through) the article you posted. Sorry, but I'm not convinced that Pattee has an angle on cognitive science that I'm interested in... yet. The article wasn't written by Pattee though, and I'm still at a loss as to what his contributions were/are. I'd be curious to know what his angle is.

There are those, for example, who have suggested that strongly emergent phenomena occur at a level between the classical and the quantum. See for example Robert Laughlin (http://www.complexityforum.com/members/Laughlin%20etal%202000%20The%20Middle%20Way) and Paul Davies (essay in the book The Re-Emergence of Emergence). However, there also seems to be an interest throughout the scientific community in extending that to all "levels" of science, and I suspect these two authors would also like to extend it. What I'd be interested in is Pattee's perspective on this. Would he suggest that there are levels in nature that allow for additional emergent phenomena? Or would Pattee suggest that only at this level between classical and quantum mechanics, a level where biological molecules and similar interactions take place, is there a place for strongly emergent phenomena?

Please note that I'm referring to strongly emergent phenomena including downward causation, not just emergent phenomena or weakly / nominally emergent phenomena as described by Mark Bedau for example. Chalmers of course also differentiates between strong and other forms of emergence including downward causation.

One reason I'm curious about Pattee is that I see he's referenced by Boyle, who defines computation in some interesting terms. I've included Boyle's paper as a reference in one I'm writing myself, so I'm very curious to see whether Pattee might have something worthwhile that I'd quote.
 
  • #20
Q_Goest said:
Hi apeiron,
I read through (skimmed through) the article you posted. Sorry, but I'm not convinced that Pattee has an angle on cognitive science that I'm interested in... yet.

Hi Q_Goest - you'd have to PM me an email contact if you want copies of his papers. They used to be on the net, but he, or someone, took them down at some stage.

I think there may be some confusion, as you seem to be talking about ontology while Pattee's points are epistemological.

So he is talking about the relationship between minds and realities (hence the epistemic cut between observers and observed).

Where Pattee is ontically relevant is the symbol grounding issue in computationalism. So there is a general class of systems with something extra in the form of symbolic codes - genes, words, membranes, neurons.

You want to talk about emergence or self-organisation in the naked physical world - what in Pattee's jargon is the rate-dependent, rather than rate-independent, part of a complex system.

On Laughlin, to me what he is talking about is the edge of chaos or criticality. And this is very much what I am talking about too.

So at the limits of classical substances like water, you start to get an interesting realm where the physics looks different. It is a transition zone where upwards and downwards causation are in dynamic tension.
 

