- #36
Ferris_bg
Q_Goest said: (...) Once we dispense with counterfactuals (because they're simply wrong), computationalism predicts panpsychism, and panpsychism is unacceptable. (...) Putnam is retired now, so Bishop has taken up his flag, so to speak, and continues to work on advancing Putnam's argument.
Here is the response from David Chalmers to Putnam: http://consc.net/papers/rock.html
Does a Rock Implement Every Finite-State Automaton? said:If Putnam's result is correct, then, we must either embrace an extreme form of panpsychism or reject the principle on which the hopes of artificial intelligence rest.
Here is a response from Mark Bishop to Chalmers: http://docs.google.com/viewer?a=v&q...OWBL2&sig=AHIEtbTl0sSFE9SFNzqkg0u6CSWaJ3523Q
Bishop asks what happens if a robot R1 that is claimed to be conscious is transformed, step by step, into a robot Rn by deleting its counterfactual states at each stage (R1, R2, R3 ... Rn-1, Rn): how does phenomenal experience change across the stages? His only argument, before concluding that the counterfactual hypothesis is wrong, is that "it is clear that this scenario is implausible".
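To make the R1 -> Rn construction concrete, here is a minimal sketch (my own illustration, not from Bishop's paper): a toy finite-state automaton is driven by one fixed input sequence, and at each stage one transition that the actual run never exercises (a "counterfactual" state transition) is deleted. The names (run, R1, stages) are hypothetical; the point is only that the actual run is identical at every stage.

```python
def run(transitions, start, inputs):
    """Return the sequence of states the automaton actually visits."""
    state, trace = start, [start]
    for symbol in inputs:
        state = transitions[(state, symbol)]
        trace.append(state)
    return trace

# R1: a toy automaton with three states and two input symbols.
R1 = {
    ("A", 0): "B", ("A", 1): "C",
    ("B", 0): "A", ("B", 1): "C",
    ("C", 0): "A", ("C", 1): "B",
}
inputs = [0, 0, 0]                      # the single run that actually occurs
baseline = run(R1, "A", inputs)

# Build R2, R3, ..., Rn by deleting one unused (counterfactual) transition
# per stage, checking that the actual run never changes.
used = set(zip(baseline, inputs))       # transitions the run actually exercises
stages = [R1]
for key in [k for k in R1 if k not in used]:
    pruned = dict(stages[-1])
    del pruned[key]                     # remove one counterfactual transition
    stages.append(pruned)
    assert run(pruned, "A", inputs) == baseline   # behaviour unchanged

print(f"{len(stages)} stages, identical actual run at each:", baseline)
```

On this toy picture, the question Bishop presses is exactly which of the behaviourally identical stages R1 ... Rn, if any, remain conscious.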
Basically, from such an example, which parallels Chalmers' fading-qualia argument (http://consc.net/papers/qualia.html; it has weak points too, but that is another discussion), the following results are possible:
1) Every robot has the same degree of mentality (M):
-1.1) M == 0 -> Functionalism is wrong; it reduces to behaviorism.
-1.2) M != 0 -> Functionalism could only survive as a panpsychist theory.
2) Each robot has a different degree of mentality -> Counterfactual states play no causal role by themselves, yet somehow removing them changes the degree of consciousness -> see 1.2.
3) R1 is conscious while the others are not -> the non-triviality condition holds.
I think Chalmers and Bishop are looking at the same hypothetical from different angles, each arguing from within his own view. Only time will tell who is right.