The Mathematics of Self-Awareness

  • Thread starter phoenixthoth
  • Start date
  • Tags
    Mathematics
In summary, the conversation revolved around the development of a mathematical theory of awareness, particularly in relation to sets. Various ideas were proposed, including the use of functions and fixed points to measure and quantify awareness between sets. The concept of self-awareness was also discussed, with suggestions that it may be a non-linear phenomenon. However, there was disagreement on how to define and model awareness mathematically.
  • #36
You weren't supposed to understand them; hell, I don't really understand them. And these are one small part of one small part of... mathematics.
 
  • #37
I recall that Paul Cohen proved that the continuum hypothesis and the axiom of choice are independent of the other axioms of set theory and of each other. Taking the maximal multiverse view, there would exist four types of rational universes: Zornian Cantorian, Zornian non-Cantorian, non-Zornian Cantorian, and non-Zornian non-Cantorian. The last one is the one constructivists prefer.

Stable homotopy categories and other categorical structures may exist in our universe, in the sense of being important defining properties of a TOE! See the new research by John Baez and Urs Schreiber.
 
  • #38
Hmm, the categories are models of these phenomena, and they've been known for a while; for instance, cobordisms are the morphisms in a triangulated category. It isn't even clear that arbitrary homotopy colimits exist in triangulated categories. In fact, we know that they don't, as there are obstructions called Toda classes.

Would you say compact special unitary groups exist since that's all elementary particles seem to be... (joke)
 
  • #39
Fact:
|A(x,y)/~|=|P(x&y)\{Ø}|
 
  • #40
"continuum hypothesis will have any use in a model" I think I might have a use for it. Give me two months and I think I can connect it to Awareness.

Someone I came across told me that the awareness X has of Y is at least superficially covered in information theory in the context of "mutual information".

If I can just get a definition for the entropy of a set, H(X), and some kind of joint entropy, H(X,Y), then
A(X,Y), the awareness X has of Y, could be modeled by
H(X) + H(Y) - H(X,Y).

http://en.wikipedia.org/wiki/Information_theory
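For what it's worth, here is a minimal sketch (my own construction, not anything from the thread) of one way that formula could be made computable: identify a set X inside a finite universe U with the indicator random variable "is a uniformly chosen element of U in X?", then apply the usual Shannon formulas. The function names and the example universe are mine.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a finite probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def awareness(universe, X, Y):
    """A(X, Y) = H(X) + H(Y) - H(X, Y): the mutual information of the
    membership indicators of X and Y over a finite universe."""
    n = len(universe)
    p_x = len(X & universe) / n
    p_y = len(Y & universe) / n
    H_X = entropy([p_x, 1 - p_x])
    H_Y = entropy([p_y, 1 - p_y])
    # joint distribution over the four membership cases (in X?, in Y?)
    joint = [len(X & Y & universe) / n,
             len((X - Y) & universe) / n,
             len((Y - X) & universe) / n,
             len(universe - X - Y) / n]
    return H_X + H_Y - entropy(joint)

U = set(range(10))
print(awareness(U, {0, 1, 2, 3}, {2, 3, 4, 5}))  # small positive value: each set carries a little information about the other
```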

I will now google for definitions of entropy of a set, if possible.
 
  • #41
Seems as though entropy is only defined for random variables. Any clue how to define entropy for a set?
 
  • #42
a new idea in SASs (self-aware structures)

From this site: http://star.tau.ac.il/~inon/wisdom1/node8.html
We get this quote of interest to this subject:
To refer to the genome as being self-aware is a very strong statement with far-reaching implications. The issue will be presented in a forthcoming publication [60]. I briefly describe here the main points needed for this presentation.

Our logic and mathematics are based on the notion of a set composed of elements. Implicitly, the set is closed and static, the elements have a fixed identity (it does not change due to the fact that they are part of the set) and they either do not have internal structure or, if they do, it is not relevant to the definition of the set. The set is defined by an external observer, i.e., it is not a result of self-assembly of the elements under a common goal. The elements, being passive and of no structure, do not have any information about the set.

The definition of sets leads to logical paradoxes (Russell-type, like the famous barber paradox) when we try to include a notion of self-reference. Russell and others have devoted much effort to construct formal axiomatic systems free of inherent logical paradoxes. Gödel's theorem [62,63] proved that they all have to be "incomplete", including the Principia Mathematica of Russell and Whitehead. It is important to emphasize that Gödel's theorem applies to closed systems which are also fixed in time.

I propose that one has to take an entirely different approach and not start with the notion of sets of elements. I believe that here is exactly where the reductionist approach fails. We cannot reach self-awareness starting from passive elements, no matter how intricate their assembly. I propose to replace elements by agents, which possess internal structure, purpose and some level of self-interest, and whose identity is not fixed. The notion of a set is replaced by a cell, which refers to a collection of agents with a common goal and mutual dependence. It also implies that the system of agents is open, i.e., it exchanges energy and information with the environment. I argue that, in order for a cell of agents to be self-aware, it must have an advanced language, i.e., a language which permits self-reference to sentences and to its grammar. The language also enables the individual agents to have information about the entire system.

So if I understand it right, a SAS is a structure equipped to make self-referential statements.

Do self-referential statements even exist in mathematics, I wondered. Then I cracked open "Mathematical Logic" (which is really a math theory of language, in my opinion). In chapter 10, section 7, we have "Self-Referential Statements and Gödel's Incompleteness Theorem." I tried reading this and it will take some time to remember what all the notation means. There's a "fixed point theorem" which says something I don't yet understand. But just below the theorem, it says,
Intuitively, phi says "I have the property psi."

That means that phi is self-aware!

phi and psi are wffs.
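For reference, the fixed point theorem being alluded to is usually called the diagonal lemma; a standard statement (my paraphrase, so the book's wording may differ) is:

```latex
\textbf{Diagonal (fixed point) lemma.} For every formula $\psi(x)$ with one free
variable there is a sentence $\varphi$ such that
\[
  T \vdash \varphi \leftrightarrow \psi(\ulcorner \varphi \urcorner),
\]
where $\ulcorner \varphi \urcorner$ is the G\"odel number (code) of $\varphi$ and
$T$ is any theory that interprets enough arithmetic, e.g.\ Robinson's $\mathsf{Q}$.
```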

To go further: we have wffs that are "self-aware," and these are precisely the self-referential wffs. We could then define a set to be self-aware if it is of the form {x | phi(x)}, where phi is self-aware.

Thoughts? Comments?
 
  • #43
You missed the word "intuitively," by which the author is signalling to you that he's going to lie for pedagogical purposes. Just like ascribing will-power to genes when talking about adaptation. It ain't so, McGee.

The statements are self-referent; their inner nature is determined by a reference to themselves. They don't have awareness, but a great interrelated congeries of them might generate a kind of awareness as an "emergent" property.
 
  • #44
selfAdjoint said:
You missed the word "intuitively," by which the author is signalling to you that he's going to lie for pedagogical purposes. Just like ascribing will-power to genes when talking about adaptation. It ain't so, McGee.

The statements are self-referent; their inner nature is determined by a reference to themselves. They don't have awareness, but a great interrelated congeries of them might generate a kind of awareness as an "emergent" property.

We can define self-awareness to mean, in relation to wffs, self-referentiality. But then, of course, we could define self-awareness to mean, in relation to functions, differentiability. What the who, right? The difference is that defining self-awareness to mean self-referentiality has roots in intuition, as indicated in the quote.

What makes you say that statements don't possess awareness, yet a great interrelated congeries of them might generate a kind of awareness as an emergent property? You then have to deal with the issue of when it becomes self-aware. How many interrelated self-referential statements are required to produce a self-aware structure, and what makes it self-aware and not the statements themselves?

In the plan offered, where we define self-awareness to mean self-referentiality, you don't need to worry about emergent properties or how to interrelate the statements, etc. To top it off, it has a basis in intuition, in the sense that the fixed point theorem says that there are wffs (statements) that, when translated to English, say, "I have property psi."
 
  • #45
phoenixthoth said:
If a law is untenable then it won't last long, no matter who wants it to. If it works well as a part of physical reality, it could stick around for a while to come.

But I have an awareness that isn't just self-reference in the wff sense. Where did that come from? Not from the wffs! Apparently it's emergent; and yes, as with all emergent properties, since they cannot be traced back to the elements, there arise difficulties of bounding. How many grains of sand does it take to make a pile, if a pile can slump but a grain can't?
 
  • #46
selfAdjoint said:
But I have an awareness that isn't just self-reference in the wff sense. Where did that come from? Not from the wffs! Apparently it's emergent; and yes, as with all emergent properties, since they cannot be traced back to the elements, there arise difficulties of bounding. How many grains of sand does it take to make a pile, if a pile can slump but a grain can't?

I never said that all SASs have to be self-referential wffs; at least, I would withdraw that if I did. I said that self-referential wffs are self-aware. I (no longer) want to imply that human self-awareness stems from wffs directly.

However, you say that your awareness is not from wffs. How do you know this? I would say that our psyche may be reducible to many, many wffs such as:
if I am pricked then I will say "ouch"
I am made of mostly water (this would come from a self-referential wff)
if the blood is low in oxygen, then increase breathing rate
etc. Obviously, proving this should not entail actually reducing the psyche to many wffs by explicitly spelling out what the wffs are. If it's true that our psyche is a collection of wffs (or just one wff that is the conjunction of component wffs), then this will be proved some other way.

Well, actually, the heap-of-sand dilemma can be solved, not necessarily to your satisfaction, by fuzzy logic. Same with self-awareness. I didn't expect to find anything remotely obviously self-aware, and I once expected that self-awareness should be measurable and would vary between 0 and 1 (for example). But now, considering self-referential statements, I would say that if a statement can refer to itself, it has self-awareness. Likewise, if a wff can refer to another formula, then that wff has awareness of the other formula.
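As a toy illustration of the fuzzy-logic angle (my own sketch, not anything proposed in the thread; the thresholds are arbitrary), a graded "heapness" value in [0, 1] in place of a yes/no answer looks like this:

```python
# Toy fuzzy "heap" membership: below `lo` grains it is definitely not a heap,
# above `hi` it definitely is, and in between the degree ramps up linearly.
def heapness(grains, lo=10, hi=10_000):
    if grains <= lo:
        return 0.0
    if grains >= hi:
        return 1.0
    return (grains - lo) / (hi - lo)

for n in (5, 100, 5_000, 20_000):
    print(n, round(heapness(n), 3))
```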

Obviously, or at least apparently, human awareness is not like this. But we wouldn't really expect it to be, because our awareness changes from moment to moment, while a wff's self-awareness is not subject to the passage of time.
 
  • #47
Do you mean real fuzzy logic or just the buzz word that gets tossed around? If the sandpile problem, which is well known, has been solved by any method, I have not heard of it. Have you any citations?
 
  • #48
If you feel inclined to pay:
click here.
I'd like to know what those "natural assumptions" are.

I think some of this article is summarized here.

The key seems to be this so called "almost true" unary relation.

But I'd rather not talk about Sorites Paradox.

What's your opinion: does the following statement demonstrate, on some level, self-awareness?
This statement consists of letters and spaces and punctuation.

When I show some people that statement, their opinion is that it is self-aware. What is awareness, anyway? I read somewhere that awareness is being "poised for appropriate interaction with the immediate environment." Here, appropriate interaction with the immediate environment means appropriate interaction with itself. The appropriate interaction would then be to say something, which it does: about itself. And if you prefer, substitute the word "mean" for "say."
 
  • #49
phoenixthoth said:
What's your opinion: does the following statement demonstrate, on some level, self-awareness?
This statement consists of letters and spaces and punctuation.

I do not find the statement to be self-aware. In this case I think it is because the self-reference is STATIC. The statement does not transform in any way because of the reference. Whereas the kind of self-reference generated by Goedel numbering, where an arithmetic statement, in expressing its arithmetic value, turns out also to be making a statement about its own provability, that is much closer to awareness, but I don't think Goedel would have thought so. Certainly Wittgenstein, who disdained all work with wffs as cheap tricks, wouldn't have.

I guess my minimum and not claimed to be sufficient for awareness would be something you could call background-free dynamic recursion. Whatever that means! :rolleyes:
 
  • #50
selfAdjoint said:
I do not find the statement to be self-aware. In this case I think it is because the self-reference is STATIC. The statement does not transform in any way because of the reference. Whereas the kind of self-reference generated by Goedel numbering, where an arithmetic statement, in expressing its arithmetic value, turns out also to be making a statement about its own provability, that is much closer to awareness, but I don't think Goedel would have thought so. Certainly Wittgenstein, who disdained all work with wffs as cheap tricks, wouldn't have.

I guess my minimum and not claimed to be sufficient for awareness would be something you could call background-free dynamic recursion. Whatever that means! :rolleyes:

When I say "I am a man," I suppose that does change me, in that up to ten minutes ago my memory did not contain my writing that I am a man, and now it does. My conclusion isn't that a self-referential wff (or perhaps even a statement in natural language) is not self-aware; it is that the two awarenesses are different.

Your example of a self-referential wff of the kind you listed being "closer to awareness" is precisely the type of wff mentioned in the very first post when I resurrected this thread: the one in my logic book quoted as intuitively meaning, "I have the property psi." For example, we can show that a phi exists that says, "This sentence has at most 1000 symbols." And the theorem I'm referring to is Tarski's self-reference lemma, which was not named as such in my book, where it was called a fixed point theorem.
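As a crude natural-language analogue of that kind of fixed point (my own toy, not anything from the lemma itself), one can search in code for a sentence whose claim about its own length comes out true:

```python
# Search for a number n such that the sentence claiming it has exactly n
# characters really does have n characters (for this wording, n = 40 works).
def self_counting_sentence():
    n = 1
    while True:
        s = f"This sentence has exactly {n} characters."
        if len(s) == n:
            return s
        n += 1

print(self_counting_sentence())
```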

Funny you should mention background free dynamic recursion. I wonder if that's like the ideas of Chaotic Logic:
http://www.goertzel.org/books/logic/chapter_seven.htm
 
  • #51
Phoenixthoth, the Goertzel link was fascinating, if a little shallow and handwaving. Consider this passage:

Let us go back to the LEGO metaphor. It would be easy to build a computable LEGO universe following Kampis's instructions. For the set of all LEGO structures is countable, and may therefore be mapped into the set of binary sequences, in a one-to-one manner. And each binary sequence may be represented as a Turing machine program, i.e. as a map from binary sequences to binary sequences. Therefore, using Turing machines, each LEGO structure could be interpreted as a function acting on other LEGO structures. The only problem with this arrangement is that it does not satisfy clause (c) of the definition of component-system. Not every LEGO structure is realizable by our dynamics. Only some computable subset of LEGO structures is realizable.

But now -- and here is where my thinking differs from Kampis's -- suppose one adds a random element to one's Turing machine. Suppose each component of the Turing machine is susceptible to errors! Then, in fact, every possible LEGO structure becomes realizable! Structures may have negligibly small probability, but never zero probability! This is an example of a component-system which is computable by a stochastic Turing machine.

I don't see that he shows that a Turing machine with a stochastic component can generate all possible Lego configurations. Surely it can generate solutions that are not in the computable universe (at least, modulo some theorem that says a stochastically modified Turing machine can't be modeled on a regular Turing machine), but he hasn't shown that the universe of Lego constructions is contained in the universe of stochastic-Turing solutions, or even that they intersect. The stochastic solutions might all be malformed in Lego terms.

Kampis's Lego constructions remind me strongly of cellular automata, and I wonder if they might be modeled as such. There is apparently a large literature on what cellular automata can achieve, and I can't resist referencing a new result: http://www.cscs.umich.edu/~crshalizi/weblog/375.html

Anyway, very interesting, thanks for the link. BTW, I found your other link on the sorites problem interesting too, although I don't know that the solution of the mathematical vague-boundary problem solves the physical one.
 
  • #52
I came across "agents" or "introspective agents" in my surfing.

Here is a site that fascinated me and maybe you'll enjoy:
http://cs.wwc.edu/~aabyan/Colloquia/Aware/aware2.html

So if a type 2 agent is allowed to be called self-aware by the community, then I'd just have to prove or disprove that a self-referential wff is a type 2 agent. And... I'm a little confused by the beginning of that article. Is <SB, v> the agent? What is the agent? Does that mean it is a model for the set of beliefs?

OK, so now I wonder what Type an agent would be if the set of beliefs is a single self-referential wff. It seems like that would depend on whether it is a tautology, and the self-referential wffs I've come across are tautologies.
 
  • #53
Phoen, I tried to link to it and got this
The XML page cannot be displayed
Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.


--------------------------------------------------------------------------------

Parameter entity must be defined before it is used. Error processing resource 'http://www.w3.org/TR/MathML2/dtd/xhtml-math1...

%xhtml-prefw-redecl.mod;
-^

Never saw that one before.
 
  • #54
selfAdjoint said:
Phoen, I tried to link to it and got this


Never saw that one before.
That's too bad.

I just now clicked on the link I gave and bam! there it was.
 
  • #55
I just wanted to mention... it almost sounds as if the author thinks that Turing machines cannot be self-referential, but the recursion theorem says, in effect, that any program for a Turing machine can obtain a string containing itself.

More precisely:

Let T be a Turing machine that takes two things as input: a string description of a Turing machine, and an arbitrary string.

Then there exists a Turing machine M that, when given input S, behaves exactly as if you ran T on the input <M>, S.

We interpret this as saying that a Turing machine can "obtain its own description".
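For a concrete illustration in code rather than in Turing machines (my own example, nothing from the thread), here is the classic quine trick: two lines whose output is an exact copy of themselves, i.e. a program that "obtains its own description."

```python
# The two lines below print an exact copy of themselves (a classic Python quine):
# s is a template for those two lines, and s % s fills the template with its own text.
s = 's = %r\nprint(s %% s)'
print(s % s)
```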
 
