Is Time Merely Constant Change?

  • Thread starter Outlandish_Existence
  • Tags
    Time
In summary, the thread starter's confidence in time as a fundamental concept is eroding. They believe that time is just a measurement of movement, not a fundamental aspect of the universe. They also question the appeal of discussing whether time is an illusion and suggest examining bolder questions about the nature of time.
  • #456
Rade said:
As I see it, "time" is defined by "moments", time is not composed of moments, thus "moments" are outside of time but are the bounds of time,
Are 'moments' composed of time? If not, what are they composed of?
 
  • #457
Siah said:
Are 'moments' composed of time? If not, what are they composed of?
NO, moments are not composed of time, moments are an "attribute" of time. An attribute is something that is not the entity itself, yet the entity and attribute are not two different things. A "moment" as an attribute of "time" is what can be separated only mentally from time--as opposed to a "part" which can be materially separated from the whole. It is not possible to have a concept of "moment" without a concept of "time", nor a concept of "time" without a concept of "moment". Moments are like electrons, they are "composed" of themselves. Moments, like all attributes of entities, are indivisible. Moments are the "now", the "present". Moments are the limit of the "past" and "future"--the "before" and "after". Moments are infinite in number.
 
  • #458
Rade said:
NO, moments are not composed of time, moments are an "attribute" of time. An attribute is something that is not the entity itself, yet the entity and attribute are not two different things. A "moment" as an attribute of "time" is what can be separated only mentally from time--as opposed to a "part" which can be materially separated from the whole. It is not possible to have a concept of "moment" without a concept of "time", nor a concept of "time" without a concept of "moment". Moments are like electrons, they are "composed" of themselves. Moments, like all attributes of entities, are indivisible. Moments are the "now", the "present". Moments are the limit of the "past" and "future"--the "before" and "after". Moments are infinite in number.

Isn't time just a rudimentary form of calculus or the calculus of variations? Time, in this sense, would then be the result of early human studies of the rate of change. How far off am I? It's been my explanation for time all along, so I'm biased.:rolleyes:
 
  • #459
Doctordick said:
If by, “how we have chosen to describe reality thus far”, you mean your world view, then you understand exactly what I meant.

Yup.

There are a few other minor details which will have to be cleared up sooner or later but for the moment, I would like to get over to that symmetry issue as I think you understand enough of my attack to understand it. At the moment, I have defined the knowledge on which any explanation must depend as equivalent to a set of points in an (x, tau, t) space: i.e., a collection of numbers associated with each t index which I have referred to as B(t). Any explanation can be seen as a function of those indices (the explanation yielding a specific expectation for that set of indices at time t). The output of that function is a probability and may be written

[tex]P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t)[/tex]

Now, the thoughts we need to go through here are subtle and easy to confuse, but I think you have the comprehension to follow them. Suppose someone discovers a flaw-free solution to the problem represented by some given collection of ontological elements. That means that his solution assigns meanings to those indices used in P. But, if we want to understand his solution, we need enough information to deduce the meanings he has attached to those indices. It is our problem to uncover his solution from what we come to know of the patterns in his assignment of indices. The point being that the solution (which has to contain the definitions of the underlying ontological elements) arises from patterns in the assigned indices. And the end result is to yield a function of those indices which is the exact probability assigned to that particular collection implied by that explanation.

But the indices are mere labels for those ontological elements. If we were to create a new problem by merely adding a number a to every index, the problem is not really changed in any way. Exactly the same explanation can be deduced from that second set of indices and it follows directly that

[tex]P(x_1+a,\tau_1+a,x_2+a,\tau_2+a,x_3+a,\tau_3+a,\cdots,x_n+a,\tau_n+a,t)[/tex]

must yield exactly the same probability. That leads to a very interesting equation.

[tex]P(x_1+a,\tau_1+a,x_2+a,\tau_2+a,\cdots,x_n+a,\tau_n+a,t)-P(x_1+b,\tau_1+b,x_2+b,\tau_2+b,\cdots,x_n+b,\tau_n+b,t)=0[/tex]

Simple division by (a-b) and taking the limit as that difference goes to zero makes that equation identical to the definition of a derivative. It follows that all flaw-free explanations must obey the equation.

[tex]\frac{d}{da}P(x_1+a,\tau_1+a,x_2+a,\tau_2+a,x_3+a,\tau_3+a,\cdots,x_n+a,\tau_n+a,t)=0[/tex]

Let me know if you have any problems with that.
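As a concrete sanity check of that shift-symmetry claim, here is a minimal numerical sketch; the toy probability function below (built only from differences between the labels, so it is shift-invariant by construction) is an illustrative assumption standing in for any flaw-free P, not part of the deduction:

[code]
import numpy as np

# Toy probability function that depends only on differences between
# labels, so a uniform shift of every label cannot change its value.
def P(labels):
    diffs = np.diff(labels)
    return float(np.exp(-np.sum(diffs**2)))  # a value in (0, 1]

x = np.array([0.3, 1.7, 2.2, 5.0])  # arbitrary numerical labels
a, b = 0.9, 0.4

# P with every label shifted by a equals P with every label shifted by b:
print(P(x + a) - P(x + b))  # 0.0

# Hence the difference quotient, and in the limit dP/da, vanishes too:
h = 1e-6
print((P(x + a + h) - P(x + a)) / h)  # ~0.0
[/code]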

It took me a while to figure out the mathematical expressions, but thank god for Wikipedia :) I studied derivatives and differentiation, and with that limited understanding, I cannot see a fault in the above. But what does it mean? Does something being symmetric in our models imply there is an invalid ontological element in use? Hmmm, I think I can see some kind of relationship between this and the artificial concepts in our worldviews (mental models of reality).

Well, how would you put it, what does this say about "symmetry"?

-Anssi
 
  • #460
Rade said:
NO, moments are not composed of time, moments are an "attribute" of time. An attribute is something that is not the entity itself, yet the entity and attribute are not two different things. A "moment" as an attribute of "time" is what can be separated only mentally from time--as opposed to a "part" which can be materially separated from the whole. It is not possible to have a concept of "moment" without a concept of "time", nor a concept of "time" without a concept of "moment". Moments are like electrons, they are "composed" of themselves. Moments, like all attributes of entities, are indivisible. Moments are the "now", the "present". Moments are the limit of the "past" and "future"--the "before" and "after". Moments are infinite in number.

I am trying to clarify this earlier statement:
"Time is that which is intermediate between moments"
You say 'moments are an "attribute" of time'. As I understand it, you are saying that moments have a time-span. Is this correct?
 
  • #461
Siah said:
I am trying to clarify this earlier statement:
"Time is that which is intermediate between moments"
You say 'moments are an "attribute" of time'. As I understand it, you are saying that moments have a time-span. Is this correct?
No, this is not how I see it. Moments do not have a "time-span"--moments are not divisible, thus no span concept exists for moments. To be "between" logically requires a concept of three. Suppose two moments (A) and (D) at the present, the now. "Time" (B ---> C) is that which is intermediate between the moments; time is neither within A nor D as the present--A and D are the limits of time (B ---> C). So you see the concept of three--this is what I mean when I say "time is intermediate between moments": (A) | (B ---> C) | (D).
 
  • #462
AnssiH said:
Well, how would you put it, what does this say about "symmetry"?
The equation is a direct consequence of “symmetry”. The addition of a to every term in a collection of reference numbers is essentially what is normally referred to as a “shift symmetry”. With regard to symmetry, I think I already gave you a link to a post I made to “saviormachine” a couple of years ago (post number 696 in the “Can everything be reduced to physics” thread). That post, selfAdjoint’s response to it (immediately below that one), and my response to selfAdjoint’s (post number 703) should be read very carefully before googling around. I will paste one quote which I think is the central issue here.
Doctordick said:
My interest concerns an aspect of symmetry very seldom brought to light. For the benefit of others, I will comment that the consequences of symmetry are fundamental to any study of mathematical physics. The relationship between symmetries and conserved quantities was laid out in detail through a theorem proved by http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Noether_Emmy.html sometime around 1915. The essence of the proof can be found on [URL='https://www.physicsforums.com/insights/author/john-baez/']John Baez's web site[/URL]. This is fundamental physics accepted by everyone. The problem is that very few students think about the underpinnings of the circumstance but rather just learn to use it. :frown:
What I feel everyone seems to miss is the fact that there exists no proof which yields any information which is not embedded in the axioms on which the proof is based. In fact, that comment expresses the fundamental nature of a proof! In my opinion, the fundamental underpinning of Noether’s proof is the simple fact that any symmetry can be seen as equivalent to the definition of a specific differential: i.e., in a very real sense, Noether’s theorem is true by definition as are all proofs.

I was somewhat sloppy when I wrote my last post because the issue was to get you to think about the impact of shift symmetry in ontological labels. It is very interesting to note that x, tau and t are all totally independent collections of indices (the fact that we have laid them out as positions in a three-dimensional Euclidean space says that shift symmetry is applicable to each dimension independently). In other words, that equation can actually be divided into three independent equations.

[tex]\frac{d}{da}P(x_1+a,\tau_1,x_2+a,\tau_2,x_3+a,\tau_3,\cdots,x_n+a,\tau_n,t)=0[/tex]

[tex]\frac{d}{da}P(x_1,\tau_1+a,x_2,\tau_2+a,x_3,\tau_3+a,\cdots,x_n,\tau_n+a,t)=0[/tex]

[tex]\frac{d}{da}P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t+a)=0[/tex]

I think you should find that quite satisfactory. If not, let me know what confusion it engenders.

The next step involves what is called “partial” differentiation. A partial differential is defined on functions of more than one variable (note that above we are looking at the probability as a function of one variable: i.e., only a is being presumed to change; all other variables being seen as a simple set of constants). When one has multiple variables, one can define a thing called the “partial” derivative. A partial derivative is the derivative with respect to one of those variables under the constraint that none of the other variables change (all other variables are presumed to be unchanging). Essentially, the equations above can be seen as partials with respect to a except for one fact: the probability P is not being expressed as a function of “a”. That is to say, “a” is not technically an argument of P.

On the other hand, the equation does say something about how the other arguments must change with respect to one another. In order to deduce the correct implied relationship, one needs to understand one simple property of partial derivatives. The property that I am referring to is often called “the chain rule of partial differentiation”. I googled “the definition of the chain rule of partial differentiation” and got a bunch of hits on “by use of the definition of the chain rule of partial differentiation …” which seems pretty worthless with regard to exactly what it is. If you know what it is, thank the lord. If you don’t, do you know anyone with enough math background to explain it to you? It is a lot easier to explain in person with a blackboard; but, if necessary, I will compose a document I think you can understand.

If anyone out there feels they can do the deed in a quick and dirty fashion I will accept the assistance. Or, if anyone can give Anssi a link to a good presentation of the definition, I would certainly appreciate it. Meanwhile, I will await your response.

Have fun -- Dick

PS I’m having a ball. Our first grandchild (we thought we would never get one) will be one year old Sunday and she can sure wear out an old man. She’s not quite walking yet (not by herself anyway) and wants to walk everywhere holding on to your finger (which requires me to walk bent over).
 
  • #464
Thank you Rade; those are all excellent links to good information on the chain rule and how it applies to functions of many variables. With regard to my presentation, the link to “case 1” of http://tutorial.math.lamar.edu/AllBrowsers/2415/ChainRule.asp (your second reference) is the most directly applicable to my next step. Paul gives case 1 as the problem of computing dz/dt when z is given as a function of x = g(t) and y = h(t) or, to put it exactly as he states it, Case 1: z=f(x,y), x=g(t), y=h(t) and compute dz/dt.

What we want to do is compute dP/da, which we know must vanish but which is expressed in terms of the reference labels of our valid ontological elements. We have established that the probability of a specific set of labels is given by an expression of the form,

[tex]Probability= P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t)[/tex]

or, just as reasonably

[tex]Probability= P(z_1,\tau_1,z_2,\tau_2,z_3,\tau_3,\cdots,z_n,\tau_n,t)[/tex]

where our shift symmetry has resulted in the fact that those arguments, when expressed as functions of x and a are given by

[tex]z_1=x_1+a, z_2=x_2+a, z_3=x_3+a,\cdots, z_n=x_n+a.[/tex]

With regard to our representation that dP/da vanishes, we can apply the example given by Paul,

[tex]\frac{dz}{dt}=\frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}[/tex]

as, in our case, equivalent to

[tex]\frac{dP}{da}=\sum_{i=1}^{i=n}\frac{\partial P}{\partial z_i}\frac{dz_i}{da};[/tex]

however, in our case,

[tex]\frac{dz_1}{da}=\frac{dz_2}{da}=\frac{dz_3}{da}=\cdots=\frac{dz_n}{da}=1[/tex].

which yields the final result that

[tex]\frac{dP}{da}=\sum_{i=1}^{i=n}\frac{\partial}{\partial z_i}P = 0[/tex]

when the x arguments of P are symbolized by z. But z is just a letter used to represent those arguments; one can not change the truth of the equation by changing the name of the variable. This same argument can be applied to the other independent arguments of P, yielding, in place of the differential expressions in post 462, the following three differential constraints.

[tex]\sum_{i=1}^{i=n}\frac{\partial}{\partial x_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/tex]

[tex]\sum_{i=1}^{i=n}\frac{\partial}{\partial \tau_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/tex]

and

[tex]\frac{\partial}{\partial t}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/tex]

which has utterly no mention of the shift parameter a.
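A short symbolic check of this whole chain of reasoning, for a two-argument toy case (the specific shift-invariant P below is an illustrative assumption):

[code]
import sympy as sp

x1, x2, a = sp.symbols('x1 x2 a')
P = sp.exp(-(x1 - x2)**2)  # toy P, shift-invariant in (x1, x2)

# The total derivative of the uniformly shifted P with respect to a ...
shifted = P.subs({x1: x1 + a, x2: x2 + a}, simultaneous=True)
print(sp.simplify(sp.diff(shifted, a)))  # 0

# ... equals, by the chain rule with dz_i/da = 1, the sum of partials,
# which must therefore vanish identically:
print(sp.simplify(sp.diff(P, x1) + sp.diff(P, x2)))  # 0
[/code]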

If I get confirmation that the above is understood and accepted as a rational expectation from any mathematical expression of a flaw-free explanation of the information represented by those ontological elements underlying that explanation, I will continue by showing you how all of the relationships so far developed can be seen as a single mathematical expression which must be obeyed by each and every flaw-free explanation which can be constructed.

I am very much looking forward to your response -- Dick
 
  • #465
Sorry for being so slow to reply again. I am having a summer vacation and was away for a couple of days, and on top of that it takes me a while to figure out all the math concepts, since I need to study them before I understand what is being said :)

Doctordick said:
Our first grandchild (we thought we would never get one) will be one year old Sunday and she can sure wear out an old man. She’s not quite walking yet (not by herself anyway) and wants to walk everywhere holding on to your finger (which requires me to walk bent over).

Heh, don't break your back :) I also became an uncle a couple of months back, plus my two other sisters are just about to multiply as well :)

Doctordick said:
What I feel everyone seems to miss is the fact that there exists no proof which yields any information which is not embedded in the axioms on which the proof is based. In fact, that comment expresses the fundamental nature of a proof! In my opinion, the fundamental underpinning of Noether’s proof is the simple fact that any symmetry can be seen as equivalent to the definition of a specific differential

Yeah that makes sense.

I was somewhat sloppy when I wrote my last post because the issue was to get you to think about the impact of shift symmetry in ontological labels. It is very interesting to note that x, tau and t are all totally independent collections of indices (the fact that we have laid them out as positions in a three-dimensional Euclidean space says that shift symmetry is applicable to each dimension independently). In other words, that equation can actually be divided into three independent equations.

[tex]\frac{d}{da}P(x_1+a,\tau_1,x_2+a,\tau_2,x_3+a,\tau_3,\cdots,x_n+a,\tau_n,t)=0[/tex]

[tex]\frac{d}{da}P(x_1,\tau_1+a,x_2,\tau_2+a,x_3,\tau_3+a,\cdots,x_n,\tau_n+a,t)=0[/tex]

[tex]\frac{d}{da}P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t+a)=0[/tex]

I think you should find that quite satisfactory.

Yeah, can't see any fault with that.

The next step involves what is called “partial” differentiation. A partial differential is defined on functions of more than one variable (note that above we are looking at the probability as a function of one variable: i.e., only a is being presumed to change; all other variables being seen as a simple set of constants). When one has multiple variables, one can define a thing called the “partial” derivative. A partial derivative is the derivative with respect to one of those variables under the constraint that none of the other variables change (all other variables are presumed to be unchanging). Essentially, the equations above can be seen as partials with respect to a except for one fact: the probability P is not being expressed as a function of “a”. That is to say, “a” is not technically an argument of P.

On the other hand, the equation does say something about how the other arguments must change with respect to one another. In order to deduce the correct implied relationship, one needs to understand one simple property of partial derivatives. The property that I am referring to is often called “the chain rule of partial differentiation”. I googled “the definition of the chain rule of partial differentiation” and got a bunch of hits on “by use of the definition of the chain rule of partial differentiation …” which seems pretty worthless with regard to exactly what it is. If you know what it is, thank the lord.

I didn't, but now I have some idea about it with the links Rade posted (thanks).

Doctordick said:
Paul gives case 1 as the problem of computing dz/dt when z is given as a function of x = g(t) and y = h(t) or, to put it exactly as he states it, Case 1: z=f(x,y), x=g(t), y=h(t) and compute dz/dt.

What we want to do is compute dP/da, which we know must vanish but which is expressed in terms of the reference labels of our valid ontological elements. We have established that the probability of a specific set of labels is given by an expression of the form,

[tex]Probability= P(x_1,\tau_1,x_2,\tau_2,x_3,\tau_3,\cdots,x_n,\tau_n,t)[/tex]

or, just as reasonably

[tex]Probability= P(z_1,\tau_1,z_2,\tau_2,z_3,\tau_3,\cdots,z_n,\tau_n,t)[/tex]

where our shift symmetry has resulted in the fact that those arguments, when expressed as functions of x and a are given by

[tex]z_1=x_1+a, z_2=x_2+a, z_3=x_3+a,\cdots, z_n=x_n+a.[/tex]

With regard to our representation that dP/da vanishes, we can apply the example given by Paul,

[tex]\frac{dz}{dt}=\frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}[/tex]

as, in our case, equivalent to

[tex]\frac{dP}{da}=\sum_{i=1}^{i=n}\frac{\partial P}{\partial z_i}\frac{dz_i}{da};[/tex]

however, in our case,

[tex]\frac{dz_1}{da}=\frac{dz_2}{da}=\frac{dz_3}{da}=\cdots=\frac{dz_n}{da}=1[/tex].

Here I'm starting to have some trouble understanding what is being said. What is meant by [tex]\sum_{i=1}^{i=n}[/tex]? Something about this applying to every entry in the table?

I understood we are using [tex]z_i[/tex] to express [tex]x_i+a[/tex], but I don't understand how [tex]\frac{dz_1}{da}=1[/tex]

which yields the final result that

[tex]\frac{dP}{da}=\sum_{i=1}^{i=n}\frac{\partial}{\partial z_i}P = 0[/tex]

Hmmm, that final result [tex]\frac{dP}{da}= 0[/tex]
Isn't it the same as was established earlier already? I.e. changing "a" will not change the probability P?

when the x arguments of P are symbolized by z. But z is just a letter used to represent those arguments; one can not change the truth of the equation by changing the name of the variable. This same argument can be applied to the other independent arguments of P, yielding, in place of the differential expressions in post 462, the following three differential constraints.

[tex]\sum_{i=1}^{i=n}\frac{\partial}{\partial x_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/tex]

[tex]\sum_{i=1}^{i=n}\frac{\partial}{\partial \tau_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/tex]

and

[tex]\frac{\partial}{\partial t}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/tex]

which has utterly no mention of the shift parameter a.

Hmm, how should I read these expressions...? That the probability doesn't change when we change... what? I hope you (or anyone) can clear up the things I am not getting :)

-Anssi
 
  • #466
AnssiH said:
Sorry for being so slow to reply again.
Don’t worry about it.

Regarding “they are % and % is not a thing”.
Anssi said:
Heh, isn't it interesting to try to force yourself through this barrier?
:) At least it gives us a better understanding about how there really is a barrier there, doesn't it?
It is interesting to note that this exchange concerns exactly what I am talking about: i.e., getting on the other side of that barrier. Tarika is using “%” for exactly the reason I am using numerical labels. The only reason I am using “numerical labels” is that there are a lot more of them than there are things like %, #, @, &, etc. Plus, I have the advantage that there exists a world of internally self-consistent defined operations on those numerical labels. That is, I don’t have to explain each and every manipulation I want to perform on the labels. (See Russell’s works on the definition of mathematics.) You can google the phrase and get enough stuff to keep anyone busy for years. The only reason I bring it up is that he was very much interested in defining mathematics from “ground zero”. That is exactly the problem which constitutes the essential nature of the barrier being referred to above.
AnssiH said:
I am having a summer vacation and was away for a couple of days, and on top of that it takes me a while to figure out all the math concepts, since I need to study them before I understand what is being said :)
Yeah, I knew that was going to be a problem; but I think we are beginning to clear up the true depth of the difficulty. I think we can handle it.
AnssiH said:
Here I'm starting to have some trouble understanding what is being said. What is meant by [tex]\sum_{i=1}^{i=n}[/tex]? Something about this applying to every entry in the table?
The capital sigma is used as a shorthand notation to represent a sum. The definitions of i given above and below the sigma tell you the starting value of i and the ending value of i. The term to be summed has an i reference in it which tells you how to construct the ith term in that sum. If you look at Paul’s example (for Case 1) you will see that the original function was a function of two variables and that his “total derivative”, dz/dt, is given by a sum of two terms: a partial with respect to each of those two variables times the “total derivative” of each variable with respect to t. (“Total derivative” is the term used for what was originally defined to be “a derivative” so as to contrast it with the idea of a “partial derivative”.) In our case, we have n arguments subject to our shift parameter "a", so our total derivative consists of a sum of n terms, one partial for each term in the function (times the respective total derivative).

This defined operation (the partial derivative with respect to the given argument multiplied by the common derivative of the same argument with respect to a) is to be performed for every numerical label in the collection of labels which constitute the arguments of that probability function (the mathematical function which is to yield the probability that the specific set of labels will be in the table). The n different results obtained by performing that specific mathematical operation (which, if we happen to know what the function looks like, will yield a new function for each chosen i) are to be added together.
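In code, that sigma is nothing more than a loop (a trivial sketch; the summand f and the bound n are placeholders):

[code]
# Sigma notation as a loop: sum f(i) for i running from 1 to n.
def sigma(f, n):
    total = 0
    for i in range(1, n + 1):
        total += f(i)
    return total

print(sigma(lambda i: i**2, 4))  # 1 + 4 + 9 + 16 = 30
[/code]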

The requirement that the shift of "a" cannot yield any change in that resultant expression yields a rule which the probability function can not violate. Putting it simply, if we did indeed know exactly the correct function for n-1 of those arguments, we could use that differential relationship to tell us exactly the appropriate relationship for the missing argument. This is a simple consequence of “self consistency” of the explanation.
AnssiH said:
I understood we are using [tex]z_i[/tex] to express [tex]x_i+a[/tex], but I don't understand how [tex]\frac{dz_1}{da}=1[/tex]
Our shift symmetry can be seen as a simple change in variables where each x has been replaced by a related z where each z has been defined by adding a to the respective x.

[tex]z_1=x_1+a, z_2=x_2+a, z_3=x_3+a,\cdots, z_n=x_n+a.[/tex]

In order to evaluate the sum expressing the total derivative of P with respect to a (the derivative which we deduced earlier must vanish), we need the total derivative of each z with respect to a. But each z is obtained by adding a to the appropriate x. This constraint (as a function of a) presumes there is no change in the base x (as it is a shift on all x’s). From this perspective, each z can be seen as a constant x plus a; it follows that dx/da vanishes (x is not a function of a) and da/da is identically one by definition.
AnssiH said:
Hmmm, that final result [tex]\frac{dP}{da}= 0[/tex]
Isn't it the same as was established earlier already? I.e. changing "a" will not change the probability P?
Exactly right except for one thing. We haven’t proven dP/da = zero here; what we have done is shown how that result (as you say “established earlier”) is totally equivalent to the assertion that the sum over all partials with respect to each argument must vanish.

We first proved that we could see any specific explanation of our “what is”, is “what is” table as a mathematical function which would yield the probability of seeing a specific entry in that table. Then we argued that shift symmetry required the total derivative with respect to that shift to vanish. Now I have shown that that requirement is totally equivalent to requiring a specifically defined sum of partial derivatives of that probability function, with respect to those numerical labels (numerical labels which are defined by that explanation), to vanish.
AnssiH said:
Hmm, how should I read these expressions...? That the probability doesn't change when we change... what? I hope you (or anyone) can clear up the things I am not getting :)
This says that every ontological element (valid or invalid) associated with “that explanation” has another thing associated with it (a consequence of symbolic shift symmetry). If we have the function for the probability relationships and the numerical labels, we can deduce a proper label (numerical label) to be assigned to that ontological element. What is interesting is the fact that the sum over all those “deduced proper labels” must be zero. What we are talking about here is a conserved quantity; the sum over all of them is unchanging even though the individual quantities associated with each ontological element might very well change.
AnssiH said:
Heh, don't break your back :) I also became an uncle a couple of months back, plus my two other sisters are just about to multiply as well :)
Don’t worry, we’ve survived it. We will be heading home this weekend. That’s the great thing about being grandparents; you can always go home when the strain begins to show (and believe me, it's beginning to show; I am looking forward to our own schedule and our own home). You can’t do that with your own kids.

Have fun -- Dick
 
  • #467
Doctordick said:
Regarding “they are % and % is not a thing”.
It is interesting to note that this exchange concerns exactly what I am talking about: i.e., getting on the other side of that barrier. Tarika is using “%” for exactly the reason I am using numerical labels.

Although he isn't thinking about finding any requirements or constraints for our ontological assumptions. He just said that because he actually stopped and thought about my assertion, and tried to see if it was water-proof. Anyhow, it seems like people get some bad vibes from the word "barrier" in this context, for no good reason at all... (Makes us feel a little bit retarded, I guess? :)

Doctordick said:
Yeah, I knew that was going to be a problem; but I think we are beginning to clear up the true depth of the difficulty. I think we can handle it.
The capital sigma is used as a shorthand notation to represent a sum. The definitions of i given above and below the sigma tell you the starting value of i and the ending value of i. The term to be summed has an i reference in it which tells you how to construct the ith term in that sum. If you look at Paul’s example (for Case 1) you will see that the original function was a function of two variables and that his “total derivative”, dz/dt, is given by a sum of two terms: a partial with respect to each of those two variables times the “total derivative” of each variable with respect to t. (“Total derivative” is the term used for what was originally defined to be “a derivative” so as to contrast it with the idea of a “partial derivative”.) In our case, we have n arguments subject to our shift parameter "a", so our total derivative consists of a sum of n terms, one partial for each term in the function (times the respective total derivative).

This defined operation (the partial derivative with respect to the given argument multiplied by the common derivative of the same argument with respect to a) is to be performed for every numerical label in the collection of labels which constitute the arguments of that probability function (the mathematical function which is to yield the probability that the specific set of labels will be in the table). The n different results obtained by performing that specific mathematical operation (which, if we happen to know what the function looks like, will yield a new function for each chosen i) are to be added together.

Okay I see.

The requirement that the shift of "a" cannot yield any change in that resultant expression yields a rule which the probability function can not violate. Putting it simply, if we did indeed know exactly the correct function for n-1 of those arguments, we could use that differential relationship to tell us exactly the appropriate relationship for the missing argument. This is a simple consequence of “self consistency” of the explanation.

That makes sense.

Our shift symmetry can be seen as a simple change in variables where each x has been replaced by a related z where each z has been defined by adding a to the respective x.

[tex]z_1=x_1+a, z_2=x_2+a, z_3=x_3+a,\cdots, z_n=x_n+a.[/tex]

In order to evaluate the sum expressing the total derivative of P with respect to a (the derivative which we deduced earlier must vanish), we need the total derivative of each z with respect to a. But each z is obtained by adding a to the appropriate x. This constraint (as a function of a) presumes there is no change in the base x (as it is a shift on all x’s). From this perspective, each z can be seen as a constant x plus a; it follows that dx/da vanishes (x is not a function of a) and da/da is identically one by definition.

Doh! Of course!

Exactly right except for one thing. We haven’t proven dP/da = zero here; what we have done is shown how that result (as you say “established earlier”) is totally equivalent to the assertion that the sum over all partials with respect to each argument must vanish.

We first proved that we could see any specific explanation of our “what is”, is “what is” table as a mathematical function which would yield the probability of seeing a specific entry in that table. Then we argued that shift symmetry required the total derivative with respect to that shift to vanish. Now I have shown that that requirement is totally equivalent to requiring a specifically defined sum of partial derivatives of that probability function, with respect to those numerical labels (numerical labels which are defined by that explanation), to vanish.

This says that every ontological element (valid or invalid) associated with “that explanation” has another thing associated with it (a consequence of symbolic shift symmetry). If we have the function for the probability relationships and the numerical labels, we can deduce a proper label (numerical label) to be assigned to that ontological element. What is interesting is the fact that the sum over all those “deduced proper labels” must be zero. What we are talking about here is a conserved quantity; the sum over all of them is unchanging even though the individual quantities associated with each ontological element might very well change.

Right, okay. I can now understand what you are saying with the math above, albeit somewhat superficially, but nevertheless...

-Anssi
 
  • #468
Thank you Anssi. This is the first time I have ever gotten anyone (other than Paul Martin, who is a personal friend) this far along in my arguments. Everyone else drops out long before we get to this point. We have only a small number of steps left to complete my deduction. Remember post number 426 on this thread? It was there that I pointed out that there had to exist a set of invalid ontological elements which would guarantee that a function existed whose roots would yield exactly that "what is", is "what is" table.
Doctordick said:
This means that the missing index can be seen as a function of the other indices. Again, we may not know what that function is, but we do know that the function must agree with our table. What this says is that there exists a mathematical function which will yield

[tex](x,\tau)_n(t) = f((x,\tau)_1, (x,\tau)_2, \cdots, (x,\tau)_{n-1},t)[/tex]

It follows that the function F defined by

[tex]F((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_n) = (x(t),\tau(t))_n - f((x,\tau)_1, (x,\tau)_2, \cdots, (x,\tau)_{n-1},t) = 0 [/tex]

is a statement of the general constraint which guarantees that the entries conform to the given table. That is to say, this procedure yields a result which guarantees that there exists a mathematical function, the roots of which are exactly the entries to our "what is", is "what is" table. Clearly, it would be nice to know the structure of that function.
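To make the "roots reproduce the table" idea concrete, here is a toy sketch; the three-row table and the lookup-based f are purely illustrative assumptions:

[code]
# Toy version of the construction: given a finite table whose rows are
# (x1, x2) -> xn, choose any f that agrees with the table, and define
# F = xn - f(x1, x2). Then F = 0 exactly on the table's rows.
table = {(1.0, 2.0): 3.5, (2.0, 1.0): 0.5, (0.0, 4.0): 2.0}

def f(x1, x2):
    # Any function agreeing with the table on its rows will do; here we
    # simply look the row up.
    return table[(x1, x2)]

def F(x1, x2, xn):
    return xn - f(x1, x2)

print(all(F(x1, x2, xn) == 0 for (x1, x2), xn in table.items()))  # True
[/code]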
What is somewhat more important is the fact that I have proved that such a function exists and that one achieves that function through the addition of “invalid ontological elements”. What you need to remember is that these “invalid ontological elements” are invalid, not because they yield incorrect answers regarding the information to be explained, but rather because they are not actually among the ontological elements which constitute the information our explanation is to explain. They are instead total figments of our imagination. That is to say that they are inventions; inventions created to provide us with the ability to say what can and cannot be under the presumed rule our explanation implements (i.e., the rule being that F=0): i.e., they are ontological elements our explanation presumes exist. If our explanation is indeed flaw-free, it will be totally consistent with the existence of these invalid ontological elements.

What is really profound about this realization is the fact that it implies there exists a fundamental duality: the rule and what is presumed to exist are exchangeable concepts. That is to say, what the rule has to be is a function of what is presumed to exist: it is possible to exchange one for the other so long as one maintains some complex internal relationships. It turns out this is exactly the freedom which allows us to construct a world view consistent with what we know; without this freedom the problem of “explaining the universe” could not be accomplished. Another way to state the circumstance is to point out that the “explanation of reality” is actually a rather complex data compression mechanism. One's best bet for the future is very simply: one's best expectations are given by how much the surrounding circumstances resemble something already experienced.

But let's get back to this F=0 rule. There exists a rather simple function which can totally fulfill the need required here. That function is the Dirac delta function (google “Dirac delta function” for a good run down on its properties). The Dirac delta function is usually written as [itex]\delta(x)[/itex] and is defined to be exactly zero so long as x is not equal to zero; however, it also satisfies the relationship:

[tex]\int_{-\infty}^{+\infty}\delta(x)dx= 1. [/tex]

Clearly, since it is exactly zero everywhere except when x=0, it must be positive infinity at x=0. It is that property which makes it so valuable as a universal F=0 function. First, it is a very simple function and is quite well defined and well understood. Second, as it is only positive, the sum indicated below will be infinite if any two labels are identical (have exactly the same x, tau numerical label).

[tex]\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j) = 0, [/tex]

It is thus a fact that the equation will constrain all labels to be different and any specific collection of labels can be reproduced by the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated. Now that sounds like an insane suggestion; however, it's really not as insane as it sounds and it ends up yielding an extremely valuable representation which I will show to you in my next post (after I have read your response to this post).
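For a concrete picture of the delta function, here is a minimal sketch approximating it by ever-narrower normalized Gaussians (a standard construction; the particular widths and labels are illustrative):

[code]
import numpy as np

# A narrow normalized Gaussian approaches the Dirac delta as eps -> 0.
def delta_approx(x, eps):
    return np.exp(-x**2 / eps**2) / (eps * np.sqrt(np.pi))

x = np.linspace(-5, 5, 1_000_001)
dx = x[1] - x[0]
for eps in (1.0, 0.1, 0.01):
    print(delta_approx(x, eps).sum() * dx)  # -> 1.0 in every case

# delta(x_i - x_j) is zero for every pair of distinct labels, so the
# pairwise sum above can vanish only if no two labels coincide.
labels = [0.3, 1.7, 2.2]
print(len(set(labels)) == len(labels))  # True: the constraint is met
[/code]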

Sorry I was so slow to respond, but I needed time to decide exactly how I was going to present this last step, as it clearly seems like a rather extreme move to make even if it is true.

Have fun -- Dick
 
  • #469
Doctordick said:
Thank you Anssi. This is the first time I have ever gotten anyone (other than Paul Martin, who is a personal friend) this far along in my arguments. Everyone else drops out long before we get to this point. We have only a small number of steps left to complete my deduction. Remember post number 426 on this thread? It was there that I pointed out that there had to exist a set of invalid ontological elements which would guarantee that a function existed whose roots would yield exactly that "what is", is "what is" table.
What is somewhat more important is the fact that I have proved that such a function exists and that one achieves that function through the addition of “invalid ontological elements”. What you need to remember is that these “invalid ontological elements” are invalid, not because they yield incorrect answers regarding the information to be explained, but rather because they are not actually among the ontological elements which constitute the information our explanation is to explain. They are instead total figments of our imagination. That is to say that they are inventions; inventions created to provide us with the ability to say what can and cannot be under the presumed rule our explanation implements (i.e., the rule being that F=0): i.e., they are ontological elements our explanation presumes exist. If our explanation is indeed flaw-free, it will be totally consistent with the existence of these invalid ontological elements.

What is really profound about this realization is the fact that it implies there exists a fundamental duality: the rule and what is presumed to exist are exchangeable concepts. That is to say, what the rule has to be is a function of what is presumed to exist: it is possible to exchange one for the other so long as one maintains some complex internal relationships. It turns out this is exactly the freedom which allows us to construct a world view consistent with what we know; without this freedom the problem of “explaining the universe” could not be accomplished.

Yeah, this makes perfect sense to me. It sounds like it's essentially the same issue as what I called the "fallacy of identity". I guess it's interesting that I approached this issue by thinking about how we go about understanding anything about reality. We need to classify reality into things and assign properties to them, in order to understand "this is a tennis ball and this is how it behaves". And indeed it appears we do that just for the purpose of being able to predict the future, and it does not entail a fundamental identity to the tennis ball; what we tack identity onto and what properties those things ought to have are intimately married, and either can always be changed if the other is also changed accordingly.

This certainly becomes especially important when we start discussing "fundamental particles", which don't appear so fundamental after all.

Another way to state the circumstance is to point out that the “explanation of reality” is actually a rather complex data compression mechanism. One's best bet for the future is very simply: one's best expectations are given by how much the surrounding circumstances resemble something already experienced.

Yeah, we have to discuss your ideas about practical AI at some point.

But let's get back to this F=0 rule. There exists a rather simple function which can totally fulfill the need required here. That function is the Dirac delta function (google “Dirac delta function” for a good run down on its properties). The Dirac delta function is usually written as [itex]\delta(x)[/itex] and is defined to be exactly zero so long as x is not equal to zero; however, it also satisfies the relationship:

[tex]\int_{-\infty}^{+\infty}\delta(x)dx= 1. [/tex]

Clearly, since it is exactly zero everywhere except when x=0, it must be positive infinity at x=0. It is that property which makes it so valuable as a universal F=0 function. First, it is a very simple function and is quite well defined and well understood. Second, as it is only positive, the sum indicated below will be infinite if any two labels are identical (have exactly the same x, tau numerical label).

[tex]\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j) = 0, [/tex]

It is thus a fact that the equation will constrain all labels to be different and any specific collection of labels can be reproduced by the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated. Now that sounds like an insane suggestion; however, it's really not as insane as it sounds and it ends up yielding an extremely valuable representation which I will show to you in my next post (after I have read your response to this post).

That sounds insane alright! Let's see what you have in mind...

Sorry I was so slow to respond, but I needed time to decide exactly how I was going to present this last step, as it clearly seems like a rather extreme move to make even if it is true.

Good thing I'm not the only slow one here :)

-Anssi
 
  • #470
Ya, really, time isn't a thing to argue about, since we just invented it to keep track of things. I mean, time isn't anything but a measurement. Like saying, are centimeters real... no, what kind of question is that, they are just a handy tool.
 
  • #471
AnssiH said:
Yeah, we have to discuss your ideas about practical AI at some point.
Well, since it is pretty well based on what I am showing you right now, I think it will have to be put off until you understand the essence of this presentation.
AnssiH said:
That sounds insane alright!
As I said, it's really not as insane as it sounds. Stop and think about vacuum polarization: i.e., the problems with conceiving of the vacuum as an “absolutely empty” thing, impossible to interact with. The existence of a “pure” vacuum in the sense originally put forth by scientists seems very much to be in conflict with modern physics; if there is no such thing as an “empty spot”, doesn't that imply every location is full of something? I only make that comment to point out that one cannot count the idea as insane if one has any faith in modern science. However, note that I use it as a collection of “invalid ontological elements” because of its ability to yield all possible observed results, not because modern science has come to the conclusion that it is correct (I like deduction, not induction). (By the way, that “observed result” would be any possible collection of ontological elements we need to explain: i.e., it's a very powerful tool.)

Well Anssi, you've gotten a long way since we started. At this point, I think we have enough to lay out what I call my “fundamental equation”. The central issue is that all explanations can be seen as mathematical functions of arbitrary labels assigned to those “noumenons” which stand behind those explanations. What I am going to show is that all the constraints I have deduced to be necessary can be expressed in a single equation and that all flaw-free explanations must satisfy that equation.

Let me first review exactly what we now have to work with at this point. First, we have the fact that all explanations of anything can be seen as a mathematical function: the probability of a particular set of ontological elements (which is a number bounded by zero and one) is a function of the set of ontological elements being referred to and the time (as defined earlier) which can be represented by a set of numerical labels.

[tex]Probability = P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)[/tex]

We now understand that the ignorance with regard to what is a correct zero reference for that display of numerical labels (shift symmetry) requires the following equations to be valid.
Doctordick said:
This same argument can be applied to the other independent arguments of P, yielding, in place of the differential expressions in post 462, the following three differential constraints.

[tex]\sum_{i=1}^{i=n}\frac{\partial}{\partial x_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/tex]

[tex]\sum_{i=1}^{i=n}\frac{\partial}{\partial \tau_i}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/tex]

and

[tex]\frac{\partial}{\partial t}P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/tex]

which has utterly no mention of the shift parameter a.
I further showed how viewing that probability as the square of some function (the vector dot product) provided a valuable consequence: i.e., I introduced a mechanism for guaranteeing that the constraints embodied in the concept of probability need no longer be extraneous constraints. Under my representation, they are instead embodied in the representation without constraining the remaining possibilities in any way! This is the central issue behind the representation

[tex]P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=\vec{\Psi}^{\dagger}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)\cdot\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)dV[/tex]

Note that the “[itex]\dagger[/itex]” is there solely to bring the representation closer to the common Schrödinger representation of quantum mechanics: i.e., allowing the components of that indicated vector to be “complex” essentially adds nothing which could not just as easily be represented by twice as many “real” components in the vector nature of [itex]\vec{\Psi}[/itex]. The fact that the number of components must be even is of no account at all when seen from the perspective of the availability of invalid ontological elements (if that really needs clarification, I will clarify it). It turns out to be no more than a convenience which brings mathematical relationships already worked out in detail to bear directly on the problem we need to solve. At issue is expressing the constraints on the mathematical function [itex]\vec{\Psi}[/itex] instead of dealing with the additional constraints were we to work with the probability function itself. It is straightforward calculus to show that the constraint,

[tex]\sum_i\frac{\partial}{\partial x_i}P \equiv \sum_i\frac{\partial}{\partial x_i}\vec{\Psi}^\dagger \cdot \vec{\Psi}=0,[/tex]

is exactly equivalent to the constraint,

[tex]\sum_i\frac{\partial}{\partial x_i}\vec{\Psi}=i \kappa \vec{\Psi}.[/tex]
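To spell that equivalence out (a one-line application of the product rule; the only extra assumption is that [itex]\kappa[/itex] is real, so taking the dagger of the second constraint flips the sign of [itex]i\kappa[/itex]):

[tex]\sum_i\frac{\partial}{\partial x_i}\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=\left(\sum_i\frac{\partial}{\partial x_i}\vec{\Psi}^\dagger\right)\cdot \vec{\Psi}+\vec{\Psi}^\dagger \cdot \left(\sum_i\frac{\partial}{\partial x_i}\vec{\Psi}\right)=\left(-i\kappa+i\kappa\right)\vec{\Psi}^\dagger \cdot \vec{\Psi}=0.[/tex]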

By adding a very simple relationship to the above constraints (and adding some trivial notation), it turns out that we can write a single equation which expresses exactly the constraints so far discussed. The simple relationship involves defining a set of anti-commuting entities quite analogous to Pauli spinors (google Pauli if you want to know through what path these entities came to be conceived). What is important is the issue of anti-commutation; the possibility of a consistent definition of such a thing yields some powerful mathematical operations. The commutation rule of ordinary mathematics says that, under multiplication, ab = ba. In a discussion of “anti-commutation”, one generally defines the following notation: [a,b] stands for the operation (ab + ba). Using that notation, we can define the following anti-commuting entities:

[tex][\alpha_{ix},\alpha_{jx}]\equiv \alpha_{ix}\alpha_{jx} + \alpha_{jx}\alpha_{ix} = \delta_{ij}[/tex]

[tex][\alpha_{i\tau},\alpha_{j\tau}]\equiv \alpha_{i\tau}\alpha_{j\tau} + \alpha_{j\tau}\alpha_{i\tau} = \delta_{ij}[/tex]

[tex][\beta_{ij},\beta_{kl}]\equiv \beta_{ij}\beta_{kl} + \beta_{kl}\beta_{ij} = \delta_{ik}\delta_{jl}[/tex]

[tex][\alpha_{ix},\beta_{kl}] = [\alpha_{i\tau},\beta_{kl}] = 0[/tex]

where [itex]\delta_{ij}[/itex] is zero if i is different from j and one if i=j.
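A concrete finite-dimensional realization can be checked directly: the Pauli matrices satisfy [itex]\sigma_i\sigma_j+\sigma_j\sigma_i=2\delta_{ij}I[/itex], so dividing them by [itex]\sqrt{2}[/itex] reproduces the normalization above (an illustrative sketch only; nothing about the alphas and betas requires these particular matrices):

[code]
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def anticomm(a, b):
    return a @ b + b @ a

# Rescale so that [alpha_i, alpha_j] = delta_ij rather than 2*delta_ij*I.
ax, ay = sx / np.sqrt(2), sy / np.sqrt(2)

print(anticomm(ax, ay))  # zero matrix: i != j gives delta_ij = 0
print(anticomm(ax, ax))  # identity matrix: i = j gives delta_ij = 1
[/code]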

Finally, introducing the common vector notation that

[tex] \vec{\alpha}_i = \alpha_{ix}\hat{x}+\alpha_{i \tau}\hat{\tau}[/tex]

and

[tex]\vec{\nabla}_i = \frac{\partial}{\partial x_i}\hat{x} + \frac{\partial}{\partial \tau_i}\hat{\tau} [/tex]

one may write all the constraints we have discussed in a very simple form.

If one sets the additional constraint on the universe (i.e., if the solution covers the entire universe),

[tex]\sum_i \vec{\alpha}_i \equiv \sum_{ij}\beta_{ij} \equiv 0[/tex]

then all solutions to the following equation will exactly satisfy the differential constraints we have deduced to be necessary to our mathematical representation of any explanation and, secondly, every mathematical function which satisfies the constraints we have deduced can be mapped directly into a solution to that equation. Thus it is that the following equation embodies the most fundamental constraints on any mathematical expression of any explanation of anything. That is, we may state unequivocally that it is absolutely necessary that any algorithm which is capable of yielding the correct probability for observing any given pattern of data in any conceivable universe must obey the following relation:

[tex]\left\{\sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j} \beta_{ij} \delta(\vec{x}_i - \vec{x}_j) \right\}\vec{\Psi}= K \frac{\partial}{\partial t}\vec{\Psi} = iKm \vec{\Psi}[/tex]

where the vector [itex]\vec{x}_i[/itex] specifies the (x, tau) label of the ith ontological element the explanation presumes to exist at the time t.

Absolutely do not worry about solving that equation; it is not a trivial endeavor. I had deduced that that equation had to be valid when I was a mere graduate student. At the time, I felt very strongly that a solution would be a valuable thing to find but, for something like ten years, I had not managed to drag out a single solution. In the late seventies, I saw a viable attack and solutions have been rolling out ever since. I can now show that ninety percent of modern physics is no more than an approximation to solutions to that equation, and I suspect it is not one hundred percent merely because modern physics contains some subtle errors not yet recognized by the authorities.
AnssiH said:
Good thing I'm not the only slow one here :)
Everybody is slow when they are not sure what should be done.

At this point, there are three paths open to us. One, we could spend some time discussing anything underlying my deduction which seems shaky to you; two, I could show the details of those solutions I spoke of; or three, we could talk about the philosophical implications of my discovery. Personally, I would like the third; however, that path would require a certain acceptance of my assertions that the second is an accurate representation of the facts. The problem with actually pursuing the second is that it is not at all trivial and requires a good understanding of mathematics (it could take a good length of time, particularly for someone unfamiliar with partial differential equations of many variables). It requires a kind of comprehension seldom found in professionals trained and indoctrinated in the common plug-and-play physics typical of the field. I leave the decision up to you, but I think it should be on a new thread. If you would start such a thread, I would be happy to post to it. Hopefully there are others who are following us, though don't be surprised if there aren't.

It's been a lot of fun and I think my presentations are much improved over what I did years ago. Thank you for your attention.

Have fun -- Dick

PS Thank you to whoever fixed the LaTeX implementation. Being able to edit the LaTeX in the preview saves a lot of time.
 
  • #472
Dr. Dick,
I have a question. Looking at this part of your final equation:
[tex]K \frac{\partial}{\partial t}\vec{\Psi} = iKm \vec{\Psi}[/tex]

Do you see any application to the thinking of David Bohm--that is, a type of fundamental duality to reality where:
[tex]K \frac{\partial}{\partial t}\vec{\Psi}[/tex]
represents the "explicate order" of Bohm (e.g., the universe as we see it)
while:
[tex]iKm \vec{\Psi}[/tex]
represents the "implicate order" of Bohm, (e.g., the veiled underlying order that governs the universe) ?
 
  • #473
Rade said:
...represents the "implicate order" of Bohm, (e.g., the veiled underlying order that governs the universe) ?
Again you make it quite clear that you did not follow my presentation. My equation says absolutely nothing about reality. It speaks entirely to the problem of interpreting reality. My source data is taken to be explicitly uncorrelated in any manner (the “what is”, is “what is” information table). What I show is that absolutely any flaw-free explanation of anything can, through the presumption of implied ontological elements (and there are presumptions made unconsciously in any attempt to understand anything), always be interpreted in a manner such that it will obey my fundamental equation.

It follows that “obeying that equation” is a consequence of internal consistency of that explanation and absolutely nothing else. It is, by construction, a tautology, and the fact that all modern physics appears to be no more than a collection of solutions to that equation implies that modern physics is itself a very complex tautology, in exactly the same sense that the old religious explanations of reality (the gods did it) were tautological explanations of reality.

Prior to Newton, everyone worked with those “celestial spheres” which controlled the motions of heavenly bodies. After all, if they didn't exist, the moon would just fall to the ground (something has to be keeping it up there). Newton was the first man to examine exactly what it would look like if there were nothing holding the moon up there--lo and behold--he discovered that it would look just like it does: “the moon is just continually falling around the earth”. What I have done is shown something quite analogous to his discovery of gravity, only what I have done is applicable to the whole of scientific investigation.

By the way, I think it would be quite worthwhile to show students how Newton's examination of a falling moon walks one right into his theory of gravity. If anyone expresses an interest, I will lay it out for them.

Have fun -- Dick
 
  • #474
Doctordick said:
My equation says absolutely nothing about reality. It speaks entirely to the problem of interpreting reality

Good gravy--do you not see the contradiction in your words? You cannot on the one hand say that your equation "says nothing about reality" (you even say "absolutely"), and then on the other hand claim "it speaks to interpreting reality". Well, good Dr., when you say you "interpret reality" you most clearly do say "something" about reality.

I am very sorry I attempted civil communication with you; it is clear you have absolutely no idea what I was asking in my question about Bohm.
 
  • #475
Rade said:
Good gravy--do you not see the contradiction in your words? You cannot on the one hand say that your equation "says nothing about reality" (you even say "absolutely"), and then on the other hand claim "it speaks to interpreting reality". Well, good Dr., when you say you "interpret reality" you most clearly do say "something" about reality.

I am very sorry I attempted civil communication with you; it is clear you have absolutely no idea what I was asking in my question about Bohm.
I am sorry I have upset you; that was not my intention. You simply have no idea of the difference between an explanation and the constraints on such; they are actually rather different concepts.

Have fun -- Dick
 
  • #476
Hello, I've finally had time to concentrate on your post properly. I actually started yesterday, but I've just been going back to the older posts to get a better grasp of this.

Doctordick said:
At this point, there are three paths open to us. One, we could spend some time discussing anything underlying my deduction which seems shaky to you; two, I could show the details of those solutions I spoke of; or three, we could talk about the philosophical implications of my discovery.

We need to stick with option #1 for a while. Although it could be beneficial to hear about your philosophical interpretation, because that ought to be closer to my mode of thinking, and so it could help me in grasping some of the mathematical details.

Anyway, reading the old posts carefully again, I found answers to many things I had been wondering about, but there were still a few things that I couldn't figure out for sure.

Actually, let me get back to that older quote about recovering missing indices. I don't know if the answers are supposed to be obvious to me but they are not :) Hopefully you can pick up what I am missing.

Doctordick said:
This means that the missing index can be seen as a function of the other indices. Again, we may not know what that function is but we do know that the function must agree with our table. What this says is that there exists a mathematical function which will yield

[tex](x,\tau)_n(t) = f((x,\tau)_1, (x,\tau)_2, \cdots, (x,\tau)_{n-1},t)[/tex]

I.e. when we are missing just one entry from some specific B, there is a function that will tell us what that missing entry is.

A partially filled "what is, is what is"-table must be part of that function, right? Just one B alone cannot be enough data to tell us what some missing index is supposed to be?

Is this valid only when there is only 1 missing index, or is it valid for a larger number of missing indices?

It follows that the function F defined by

[tex]F((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_n) = (x(t),\tau(t))_n - f((x,\tau)_1, (x,\tau)_2, \cdots, (x,\tau)_{n-1},t) = 0 [/tex]

is a statement of the general constraint which guarantees that the entries conform to the given table. That is to say, this procedure yields a result which guarantees that there exists a mathematical function, the roots of which are exactly the entries to our "what is", is "what is" table. Clearly, it would be nice to know the structure of that function.

I took it on faith that the above expression "guarantees that there exists a mathematical function, the roots of which are exactly the entries...", but I don't fully grasp what that expression says. There is a function F, whose input is some set of x & tau indices. Of a specific B? I don't understand why it is equal to [tex](x(t),\tau(t))_n - f((x,\tau)_1, (x,\tau)_2, \cdots, (x,\tau)_{n-1},t)[/tex]

The part that I thought I understood is that it would be possible to recover one missing index from a specific B, if we had a function that gave "0" with the input of the correct (full) set of indices of that B. So we could just test which index gave a 0. That was the idea with this?

About the use of the Dirac delta function here:
Clearly, since it is exactly zero everywhere except when x=0, it must be positive infinity at x=0. It is that property which makes it so valuable as a universal F=0 function. First, it is a very simple function and is quite well defined and well understood. Second, as it is only positive, the sum indicated below will be infinite if any two labels are identical (have exactly the same x, tau numerical label).

[tex]\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j) = 0, [/tex]​

It is thus a fact that the equation will constrain all labels to be different and any specific collection of labels can be reproduced by the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated.

I suppose the expression essentially means we take a specific B, and its every X is compared with every other X and every tau is compared with every other tau. So that we'll see if any of them are the same. Or in other words, we are simply labeling every entry as unique? What I am missing is why we need a Dirac delta function to make every single entry unique. Does this point rather have something to do with the general role of something like the Dirac delta function?

Hmm, I definitely think your philosophical interpretation would make it easier for me to see what is truly essential about the mathematical expressions. For example...

As I said, it's really not as insane as it sounds. Stop and think about vacuum polarization: i.e., the problems with conceiving of the vacuum as “absolutely empty” thing, impossible to interact with. The existence of a “pure” vacuum in the sense originally put forth by scientists seems very much to be in conflict with modern physics; if there is no such thing as an “empty spot” doesn't that imply every location is full of something?

...that makes perfect sense to me.

Well, it's getting late again and I need to go over the rest of the post (that "fundamental equation") more sometime soon. But in the meantime:

I further showed how viewing that probability as a square of some function (the vector dot product) provided a valuable consequence: i.e., I introduced a mechanism for guaranteeing that the constraints embodied in the concept of probability need no longer be extraneous constraints. Under my representation, they are instead embodied in the representation without constraining the remaining possibilities in any way! This is the central issue behind the representation

[tex]P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=\vec{\Psi}^{\dagger}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)\cdot\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)dV[/tex]

Note that the "[itex]\dagger[/itex]” is there solely to bring the representation closer to the common Schrödinger representation of quantum mechanics: i.e., allowing the components of that indicated vector to be “complex” is essentially adding nothing which could not just as easily be represented by twice as many “real” components in the vector nature of [itex]\vec{\Psi}[/itex]. The fact that the number of components must be even is of no account at all when seen from the perspective of the availability of invalid ontological elements (if that really needs clarification, I will clarify it).

Yeah I think some things need clarification at least. I don't know what the [itex]\dagger[/itex] means. I am not familiar with the Schrödinger representation (as I am not familiar with the mathematical representation of much of anything :)

Does the [itex]\Psi[/itex] symbol simply mean any function (whose results we will take as the components of a vector)?

I may have forgotten something, but why does the number of components have to be even?

I couldn't figure out what dV means either.

I should be faster to reply for a while, although a couple of weeks from now I'll be away for a week again, visiting San Diego. Thank you for your patience :)

-Anssi
 
  • #477
Hi Anssi, it's nice to have you back. I sure have missed your posts. Your knowledge of mathematics may be limited but that can change; your mind is like a breath of fresh air. I used to put down a signature quote: “Knowledge is Power; but all power can be abused. The most popular abuse of the power of knowledge is to use it to hide stupidity.” It does not apply to you. Education can be a stupefying experience and it is for many. Young minds are so often overwhelmed by their own ignorance that they begin to “believe” their professors and education turns into faith. One must have faith in their own ability to think and always maintain a doubt of authority. I think you have kept that doubt.

I got your note Monday but was waiting for this post in order to get a grasp of what you were misunderstanding. As you have already noticed, I posted on that other forum you mentioned, pretty well for naught. People just don't seem to think; I was hoping for a little more than what I got. You are a very rare person in that you have a strong tendency to actually think things out for yourself. I think all you really lack is a good understanding of mathematics but we can cover that (though it may not be a quick thing).

Meanwhile, let's get to your questions in this post: i.e., stick with option #1 until everything is clear (I don't intend for you to take anything on faith as it is all actually quite simple once you actually see what I am doing). We can worry about philosophical interpretation after you understand what I am saying.
AnssiH said:
Actually, let me get back to that older quote about recovering missing indices. I don't know if the answers are supposed to be obvious to me but they are not :) Hopefully you can pick up what I am missing.
I think we need to go back to that post where I first began adding “invalid ontological elements”. The fact that we can add these invalid ontological elements gives us the power to organize or represent that ”what is”, is “what is” table in a form which allows for easy deduction. In that post, I said I wanted to add three different kinds of “invalid ontological elements”, each to serve a particular purpose. You need to understand exactly why those elements are being added and how the addition achieves the result desired.

The first addition is quite simple. As I said, that "what is", is "what is" table can be seen as a list of numbers for each present which specify (or refer to) exactly what "valid ontological elements" went to make up our past at each defined time (what we know being "the past"). The output of our probability function (which defines what we think we know) is either zero or one depending upon whether a specific number is in that list or not. Viewed as a mathematical function, it is a rather strange function in that the number of arguments (the number of valid ontological elements associated with a given t) can vary all over the place (there is no fundamental constraint on our change in knowledge: i.e., the amount of information in a given "present"). That is somewhat inconvenient (at least from the perspective of the "language" of mathematics) so we add "invalid ontological elements" sufficient to make the number of arguments the same in each and every case defined by a specific t.
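If it helps, here is a toy sketch of that first addition in Python (the table and all the reference numbers are entirely made up for illustration): we simply pad every present with dummy reference numbers until all presents carry the same count of arguments.

[code]
# Toy illustration (made-up reference numbers): pad each present B(t) with
# "invalid ontological elements" so every present has the same number of entries.
table = {0: [3, 7], 1: [3, 7, 12], 2: [5]}    # t -> list of reference numbers

n_max = max(len(refs) for refs in table.values())

padded = {}
next_dummy = 1000   # reference numbers chosen to be unused by the valid elements
for t, refs in table.items():
    extra = list(range(next_dummy, next_dummy + n_max - len(refs)))
    next_dummy += len(extra)
    padded[t] = refs + extra

# Every present now supplies the same number of arguments to the function P.
assert all(len(refs) == n_max for refs in padded.values())
[/code]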

What you need to do is comprehend that we are dealing with two rather different issues here. First there is that collection of “valid ontological elements” underlying our world view (you can think of this as a basic, undefined, ”what is”, is “what is” table in your left hand) and, secondly, there is that epistemological solution which is our world-view itself. That world view (and that would be any explicitly defined explanation) includes the assumption of certain “invalid ontological elements” necessary to that epistemological solution. Thus that “defined” representation must include those “invalid” elements (you can think of this as a second, explicitly defined, ”what is”, is “what is” table in your right hand). What I am going to do is add some rather arbitrary “invalid ontological elements” to that second table. You should certainly ask, how do I justify these specific additions?

Certainly someone might come up with an explanation which didn't require these, right? The answer is, of course, yes! However, when he (or she) goes to explain their explanation, it is my problem to understand that explanation. As they proceed with communicating their explanation I would certainly make some assumptions about what they were trying to tell me. These assumptions are not necessarily true nor need they be part of his actual communications: i.e., they amount to presumed invalid ontological elements on my part. What I am laying out is, I think, some very useful analytic assumptions: i.e., "invalid ontological elements" which make that communication understandable to me. I have to build a world-view in my own head and that world view has to be logically coherent; I cannot do that without making assumptions.

Just as an aside, from a philosophical perspective, that first addition (making the number of ontological elements the same for all B(t)) is essentially presuming these valid ontological elements exist even when we are not directly dealing with them. That is to say, the ordinary concept of “ontological elements” behind that epistemological construct is that they exist in the past, the present and the future. No one presumes they come and go (actually, there is a subtle point there which comes up in the solution possibilities with regard to explicitly invalid ontological elements, but that will come up later). Basically, I presume you understand the advantage of this first addition.

The second addition of invalid ontological elements was to make sure that “t” (the “time” index) could be extracted from the ”what is”, is “what is” table so that it could be a viable parameter usable in an explanation. That was done in the following manner. Anytime there existed two or more identical presents (in that specifically defined ”what is”, is “what is” table in your right hand), invalid ontological elements were added and given references sufficiently different to make those presents different. At the time you expressed understanding of that procedure.

This step can be justified from a philosophical perspective. How could one present a world view where temporal behavior of entities was explained without being able to define clocks or calendars? That is, those clocks and calendars need to be part of that underlying ontology.

What that second step also provided was a method of defining a specific index via addition of invalid ontological elements. What was important was that the augmented "what is", is "what is" table, in yielding a different present for every t, allowed us to recover t if we were given a specific present (i.e., the specific entries going to make up that B(t)). Given that set of ontological elements, how do we recover t? Very simply; we look at the augmented "what is", is "what is" table and find the specific entry. There can only be one such entry and that entry will include the t index we wish to know. Thus it is that we can say that t is a function of the elements going to make up B(t).

That brings us to the third addition of "invalid ontological elements". The mechanism just described for establishing a unique t index can just as easily be used to establish a specific reference index within that B(t). All one need do is remove (or ignore) a specific elemental index in that "what is", is "what is" table and jot down all the remaining elements. Now examine the entire "what is", is "what is" table and determine if the set which was jotted down appears anywhere else in the table: i.e., exists in any other present when a single element is removed. In any case where these references appear a second time, one can add invalid ontological elements with different reference indices such that the augmented table will not contain that duplication.

Just as occurred with the t index, if I am given all but one of the reference indexes in a present, I can recover the correct index for the missing element. Again, the process is very simple: we look at the newly augmented table and find the specific entry which has that collection of elements and read off the missing element. The augmentation process can be continued until any index can be so recovered if the entire collection of remaining indices are known. This is exactly the same mechanism which made the t index recoverable.
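Since the recovery of t and the recovery of a missing element use exactly the same mechanism, a single toy lookup may make it concrete (the table and numbers below are entirely made up for illustration):

[code]
# Toy sketch of recovery by lookup in the augmented table (made-up numbers).
# After augmentation, no present's remaining n-1 entries occur anywhere else.
augmented = {
    0: [3, 7, 1000],
    1: [3, 7, 1001],
    2: [5, 1002, 1003],
}   # t -> the reference numbers making up B(t)

def recover(known):
    """Given all but one reference of some present, return (t, missing index)."""
    known = set(known)
    for t, refs in augmented.items():
        missing = [r for r in refs if r not in known]
        if len(missing) == 1 and known.issubset(refs):
            return t, missing[0]

print(recover([3, 1001]))   # -> (1, 7): both t and the missing index recovered
[/code]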

From that rather extensive augmented "what is", is "what is" table, I can always recover any missing index. Since "a function" is a method of obtaining a result from specific information, this proves that "a function" exists. (In actual fact, since in the final analysis this amounts to a fitting problem on a finite set of points, there exists an infinite number of mathematical functions which will serve the purpose of recovery.) What I have just proved is that it is always possible to conceive of "invalid ontological elements" such that the function "f" exists where

[tex]\vec{(x,\tau)}_n= x_n\hat{x}+\tau_n\hat{\tau} = \vec{f}((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_{n-1})[/tex]

Notice that, this time, I have shown f as a vector function (its result is a vector pointing to a point in the x, tau space which constitutes the missing point, [itex](x,\tau)_n[/itex]).

You should understand that, if two things are equal, their difference is zero. Certainly, if that is the case, then one can define the function “F” to be exactly the difference between the point representing the missing index and the result of the vector function which yields that point,

[tex]F((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_n)= \vec{(x,\tau)}_n - \vec{f}((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_{n-1})\equiv 0.[/tex]

where the x, tau arguments are the relevant numerical references in the ”what is”, is “what is” table. (Sorry about being sloppy with my notation earlier regarding the vector picture.)

Notice that I have removed the "t" which was in the earlier representation. (It really shouldn't have been there.) If you examine the argument above carefully, it should be evident that there need not be any dependence on t: i.e., it is possible to add enough invalid ontological reference indices such that no repeat exists anywhere in the table.
AnssiH said:
Is this valid only when there is only 1 missing index, or is it valid for a larger number of missing indices?
One could continue the process of adding “invalid ontological elements” in order to define a function which would yield two missing indices but I see no purpose to such an extension. My purpose was to prove that one could always achieve a circumstance (by adding invalid ontological elements) such that the rule which determined what reference numbers existed in the ”what is”, is “what is” table consisted of “those entries are the roots of the function F”: i.e., the rule can be written as

[tex]F((x,\tau)_1,(x,\tau)_2, \cdots, (x,\tau)_n)= 0,[/tex]

a rather simple expression as rules go! Note that the rule is not a function of t; a seriously important fact. (I apologize again for my earlier oversight.) Philosophically speaking, this is nice as it means that the rule does not change from day to day.
AnssiH said:
I took it on faith that the above expression "guarantees that there exists a mathematical function, the roots of which are exactly the entries...", but I don't fully grasp what that expression says.
It says that the only acceptable reference numbers for the "what is", is "what is" table are roots of some function "F". Or rather, that there always exists a collection of "invalid ontological elements" such that the rule as to what reference numbers can be seen in that table is given by the solutions to some equation expressed in the form F=0.
AnssiH said:
The part that I thought I understood is that it would be possible to recover one missing index from a specific B, if we had a function that gave "0" with the input of the correct (full) set of indices of that B. So we could just test which index gave a 0. That was the idea with this?
In a sense you are right; but the issue is not really to test the function F, as we do not have it. Before you can actually have that function, you have to have the solution to the problem. That is, F cannot be defined until the epistemological construct which explains that "what is", is "what is" table is known (it is that explanation which specifies those numerical references). What is important here is that, if I am given a set of "valid ontological elements", there always exists a set of "invalid ontological elements" which, together with a rule F=0, will yield exactly those "valid ontological elements" (along with those presumed "invalid ontological elements"). That is, it is always possible to construct a flaw-free epistemological construct where the only rule is "F=0" and the entire problem is reduced to "what exists". This is a much simpler problem than being confronted with two apparently different issues to solve: "What exists?" and "What are the rules?".
AnssiH said:
I suppose the expression essentially means we take a specific B, and its every X is compared with every other X and every tau is compared with every other tau. So that we'll see if any of them are the same. Or in other words, we are simply labeling every entry as unique?
You appear to understand what I am saying; however, it is possible that you are stepping off trying to construct an epistemological solution which conforms to the circumstance I have laid out. That, you shouldn't be trying to do. Remember, what I have laid out must be capable of representing all possible epistemological constructs. That is a pretty extensive field and it would be a mistake to presume that simple answers exist. I have proved that the procedure I described could be accomplished in principle, since the number of elements being referred to is finite; however, their number could easily exceed any mechanical equipment we might envisage to carry out such a procedure. I certainly have not proved any such thing could actually be done in one's lifetime, even with the simplest problem. All I have shown is that the process can be done "in principle".
AnssiH said:
What I am missing is why we need a Dirac delta function to make every single entry unique.
First of all, the Dirac delta function does not make every single entry unique; all it does is yield an infinite result when any two are the same. It should be clear that, if there exists a finite set of "invalid ontological elements" which will make the rule "F=0" yield both the "valid ontological elements" and those we added (providing us with that flaw-free epistemological solution), we can certainly add a bunch more without bothering that solution. All we need do is recognize them as "presumed" and not necessarily part of that valid "what is", is "what is" table.
Doctordick said:
It is thus a fact that the equation will constrain all labels to be different and any specific collection of labels can be reproduced by the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated.
That seems to me to be a pretty straightforward issue. The only real problem is that the number of references has now gone to infinity and we can no longer argue things from a "finite" perspective. That introduces some subtle problems which require additional mathematics to handle. Other than that, I think my statement is rather incontrovertible.

Apparently I have exceeded the allowed size of a post and the system will not accept it. I will continue with a second post.

Sorry about that -- Dick
 
  • #478
Part II, answer to Anssi.

Back again! This is a continuation of the post above.
AnssiH said:
Yeah I think some things need clarification at least. I don't know what the [itex]\dagger[/itex] means. I am not familiar with the Schrödinger representation (as I am not familiar with the mathematical representation of much of anything :)
Let me start with the relationship between Psi and our probability. The issue is the fact that probability is defined to be bounded by zero and one. As a function, that makes P a rather special function. Note that, in my presentation, I don't want to make any limitations on the possibilities at all. It follows that I need to work with a totally unconstrained function: i.e., the solution to our problem must be left to be ANYTHING. Now, "any mathematical function" is a pretty obvious entity: its arguments are a collection of numbers and its output is a collection of numbers. A "mathematical function" is a method of getting from the first to the second, "PERIOD", no other constraints! If we are to include all possibilities, that is about all we can say about the solution to our problem, the possible epistemological construct.

What I am pointing out with my definition of P here is that absolutely any function can be converted into a form which can be seen as a probability. It can be converted into a positive definite number by squaring all those output values and adding them up. It can then be made to be bounded by zero and one by dividing it by a number equal to the sum of those squared results over all possible arguments. (There are some subtleties here related to problems with infinity which I will discuss if you wish; however, for the moment, let's just say that the required division is always possible if it is needed.) The standard mathematical notation for the act of squaring those output values and adding them up is to represent the output of the function as an n-dimensional vector. In that case, performing a dot product of that vector with itself constitutes exactly the process of squaring all the components (the output values) and adding them up: i.e., [itex]\vec{\Psi}\cdot\vec{\Psi}[/itex].
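A numerical caricature of that conversion may help (the particular function and the finite argument set below are arbitrary choices of mine, just so the normalizing sum exists):

[code]
# Caricature: turn an arbitrary vector-valued function into a probability by
# squaring its components, summing them, and normalizing over all arguments.
def psi(args):
    # absolutely any function at all; this particular choice is arbitrary
    return [sum(args), args[0] - args[-1]]    # two "components"

inputs = [(1, 2), (0, 3), (2, 2)]             # a finite set of possibilities

raw = {x: sum(c * c for c in psi(x)) for x in inputs}   # the dot product
total = sum(raw.values())                     # the number we divide by

P = {x: v / total for x, v in raw.items()}
assert abs(sum(P.values()) - 1.0) < 1e-12     # bounded by zero and one
[/code]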

The “dagger” has to do with a thing called the “complex conjugate”. Apparently, from the posts I have seen and the comments I have gotten from modern physicists, no one uses Erwin Schrödinger's original notation any more. (In 1926, Dirac showed that Heisenberg's “matrix mechanics” and Schrödinger's “wave mechanics” were mathematically equivalent and introduced a new “bra-ket” notation which seems to be the standard now.) I prefer Schrödinger's original notation as it can be directly derived from my attack. (The issue of notation is little more than a mathematical formality though different notation does bring different issues to the forefront.) What you should take note of is the fact that modern quantum mechanics, as seen by the academy (the religious authority of modern physics) is not derived from fundamental concepts; but is rather put forth in axiomatic form and that derivation of the relationships from more fundamental analysis is really of no interest to them.

In Schrödinger's equation, Psi is taken to be a vector with complex components (if “i” is the square root of minus one then an arbitrary complex number can be written as a+bi). If the components of the vector Psi are complex, then the simple squaring does not yield a positive definite number: (a+bi)(a+bi)= a(a+bi) +bi(a+bi) = aa+abi+bia+bibi = aa-bb+2abi which just isn't positive definite. Instead, it is necessary to define what used to be called the complex conjugate: [itex](a+bi)^\dagger = (a-bi)[/itex]. Then [itex](a+bi)^\dagger[/itex] (a+bi) = (a-bi)(a+bi) = aa +bb. So all the dagger means is that the result is to be transformed to its complex conjugate; each and every result of applying the function Psi to its arguments (every component of that abstract vector) is changed to its complex conjugate. This is simply a method of guaranteeing that the probability calculation represented by

[tex]P(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=\vec{\Psi}^{\dagger}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)\cdot\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)dV[/tex]

is a positive definite quantity.
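In code the point is a one-liner, since a complex number times its own conjugate is always real and non-negative:

[code]
z = 3 + 4j                 # an arbitrary complex component a+bi
print(z * z)               # (-7+24j): plain squaring is not positive definite
print(z.conjugate() * z)   # (25+0j):  aa + bb, positive definite
[/code]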

I laid all that out because I want to work with the Psi function. If the probability function exists (and it certainly does if our epistemological construct will yield expectations for those collections of ontological elements B(t)) then so does Psi (worst case scenario, Psi is just the square root of P). What I want to do is examine the possibilities for Psi. With the “invalid ontological elements” I introduced to make that sum over Dirac's delta function become the F function I needed, I know that, whenever I have the correct set of numerical references to my ontological elements,

[tex]\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j) = 0, [/tex]

If I don't, then that sum is infinite! Against this, I also know that, if I have an incorrect set,

[tex]\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t)=0[/tex]

as the probability of seeing that particular set of references must be zero and the probability is the sum of the positive definite squares of the components of Psi (that means that every one of those components must be zero and Psi must totally vanish). This means that no matter what arguments are inserted as numerical references to that collection of ontological elements, the product of those two above must be zero (if one isn't zero, the other is). It follows that

[tex]\sum_{i \neq j}\delta(x_i -x_j)\delta(\tau_i -\tau_j)\vec{\Psi}(x_1,\tau_1,x_2,\tau_2,\cdots,x_n,\tau_n,t) = 0, [/tex]

without exception.
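If a finite caricature helps: with discrete labels the Dirac delta plays the role of a Kronecker delta, and the statement is simply that one of the two factors always vanishes (this is a sketch of mine with arbitrary values, not part of the deduction):

[code]
# Finite caricature of the constraint: whenever two (x, tau) labels coincide,
# the delta-sum "blows up", so Psi must vanish there; otherwise the delta-sum
# is zero. Either way, one factor of the product always vanishes.
def labels_collide(labels):
    return len(set(labels)) < len(labels)

def psi(labels):
    return [0.0] if labels_collide(labels) else [1.0]   # arbitrary nonzero value

for labels in [((1, 0), (2, 5)), ((1, 0), (1, 0))]:
    delta_sum = float('inf') if labels_collide(labels) else 0.0
    print(delta_sum, psi(labels))   # one of the two is always zero
[/code]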
AnssiH said:
I may have forgotten something, but why does the number of components have to be even?
If we go to representing the components of the vector function Psi as complex numbers, it is completely equivalent to using two components for each normally real component so, in a sense, we are limiting our consideration to functions with an even number of components. This isn't really troublesome as, if the correct answer turns out to be a function with an odd number of components, it can just as well be seen as a function of an even number where one of the components is always zero. All this move really does is make the notation appear to be similar to Schrödinger's.

As far as the dV is concerned, when I introduced the idea of adding an infinite number of ontological elements, I brought the total of all possibilities to an infinite number of combinations. That pretty well assures us that the probability for any single collection will be zero. Essentially that tells us that we are dealing with probability density here and not directly with probability itself. Another way to look at it is to understand that the sums over all possibilities (in order to determine the factor we need to divide by) have now transformed into an integral over a continuous variable. The probability then depends upon how large a region of that continuous variable we are considering.

[tex]dV = dx_1d\tau_1dx_2d\tau_2\cdots dx_nd\tau_n \cdots[/tex]

Sometimes the notation gets complex; we are talking about a lot of variables here; remember, this solution represented by Psi explains everything about the entire universe.
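To see what the dV is doing in the simplest possible case, consider a universe of one single (x, tau) pair, reading the notation as ordinary probability density: the probability of finding that reference inside a small region is just the density integrated over the region,

[tex]P(x_0<x<x_0+\Delta x,\;\tau_0<\tau<\tau_0+\Delta\tau)=\int_{x_0}^{x_0+\Delta x}\int_{\tau_0}^{\tau_0+\Delta\tau}\vec{\Psi}^{\dagger}\cdot\vec{\Psi}\,dx\,d\tau[/tex]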

Back to the issues of philosophy here. Several come to mind. First I would like to go back to your comment about recovering a specific index from that ”what is”, is “what is” table given that you know all the indices except one. In the set up I have described, “the rule” allows one to recover that index if all the other indices are known. What does that amount to? Given a world view consistent with the flaw-free epistemological solution we are looking at, it says that, if we know the entire rest of the universe in detail for all times under consideration, the rule will tell us exactly what that reference number must be for the missing index as a function of time.

This is surprisingly similar to a common presumption of modern science. Take a careful look at exactly what modern science says about the outcome of an experiment. In essence they hold that, if we know the entire description of an experiment (all significant details: i.e., ignoring what is insignificant) and the rules governing the universe, we know what the result of the experiment will be. They make the assumption that this is a fact whereas I have constructed my representation (via additions of invalid ontological elements) so that the same issue is a fact and not an assumption. The same conclusion is reached but the defense is subtly different.

A second issue also arises here. In establishing my Dirac delta function rule, I pointed out that it was absolutely correct via the simple act of adding “invalid ontological elements” until all the wrong answers are eliminated. There is a very important additional consequence of that procedure. Suppose we don't know that a specific answer is wrong (one of those numbers might actually represent a valid ontological element). All that need happen is that we do not add an “invalid ontological element” to cover that case. The consequence of not adding that element is that it leaves openings in that universal cover. What that does is allow the reference to a valid ontological element to have more than one value. Essentially, it introduces uncertainty to the resultant world view.

Now this uncertainty is accepted as a definite component of modern physics, but my approach is concerned with "valid ontological elements", something philosophers do not consider to be "variables" subject to uncertainty. Can you really say that their position is defensible? You should note that they give their arguments in an inexact language under the presumption that the words they use have a definite meaning. I personally think that is a rather poor assumption. Words change meanings all the time and, from a historical perspective, they are almost as dynamic as are the molecules that make up our physical environment. Why else would ancient languages differ so much from our own?

What I am saying is that what I am doing has applications far beyond what is currently regarded as physics. It adds a whole multiplicity of dynamic relationships to the study of analytical science. Remember that, prior to the work of Franklin, Ampère, Oersted, Volta, Coulomb, Faraday and others, electricity and magnetism were simply not considered to be analytically accessible phenomena.

I know your mathematics training is limited but you should consider what Feynman once said, “mathematics is the distilled essence of logic”. The problem with conventional logic is that it can only span a few million steps at best and can only be extended to the range needed to understand the universe through abstract mechanisms with powers well beyond what we can hold in our heads; mathematics is absolutely essential to understanding the universe.

Have fun -- Dick
 
  • #479
Dr. Dick,

In response to a comment you made in post #478 above, I started a thread in the quantum theory section of the forum, and I see that you will have to provide clarification of your thoughts. I think this is a good opportunity for you to interact with professional physicists about your philosophy here presented--see here if you have an interest:

https://www.physicsforums.com/showthread.php?t=178555
 
  • #480
Rade said:
In response to a comment you made in post #478 above, I started a thread in the quantum theory section of the forum, and I see that you will have to provide clarification of your thoughts. I think this is a good opportunity for you to interact with professional physicists about your philosophy here presented--see here if you have an interest:
I have read the thread and their comments are pretty typical of physicists I have run across in the past. As far as interacting with professional physicists is concerned, I have done plenty of that in my lifetime. I have earned a Ph.D. in theoretical physics from a reputable university and had plenty of interactions with the academy during that period. At that time (the early sixties) the position of theoretical physicists was that the big problem was not understanding the universe (they already understood it all); the big problem was how to calculate solutions to their equations. As I have said somewhere else, Richard Feynman got a Nobel Prize for developing a notation for keeping track of terms in an expansion of an infinite series (which everyone believed to be correct). To quote Caltech themselves, http://pr.caltech.edu/events/caltech_nobel/ And I do not intend any insult to Richard in any way. In fact, I talked to him in '86 and he said he would like to follow my thoughts as soon as he finished with that NASA accident (he was on the investigating committee). Next thing I heard, he had died of cancer (I finally get an intelligent educated person to talk to me and he ups and dies; just my luck).

At any rate, I was not interested in "crunching numbers" (the standard career of a theoretical physicist, at least back then); I was interested in the underlying basis of physics itself. So, I did not publish (I spent my time thinking instead). I had sufficient evidence of the academy's lack of interest in such things long before I got my Ph.D.
jostpuur said:
I'll put it this way: "Physicists are usually not interested in philosophy, they are interested in calculating." That is something that many will probably agree with, and if Doctordick is criticizing it, it is understandable, although I'm not convinced that he himself would be improving anything.
At least he finds my rebellion “understandable” though he clearly does not think my thoughts are worth thinking about.
country boy said:
But every physicist I know is interested in the possibility that QM and other aspects of modern physics might be derivable from more fundamental, as yet unrecognized, principles.
Yeah, sure they are interested; as long as it comes from a recognized authority and not a rebellious skeptic of their great accomplishments.
Hurkyl said:
... you run the risk of losing some of your audience if they have to do a lot of theoretical work before they can actually compute anything.
Yeah, there is a lot of truth to that all right. When it comes to serious thought, most people have an attention span of about two minutes. They want "simple minded" answers to their questions, not simple answers. One should recognize that Newton's theories are quite simple but they are not at all "simple minded". There is a great difference between "simple" and "simple minded".
Llewlyn said:
Please note that all physics is put in axiomatic form.
That is a succinct statement of the academy's position on the issue. As I have said many times, physicists say what I am doing is philosophy and they have no interest in it; philosophers say what I am doing is mathematics and they have no interest in it; and mathematicians say what I am doing is physics and they have no interest in it. All I am looking for is people who are interested in thinking; a very rare breed indeed.

You comment that I need to provide clarification of my thoughts. I think what you really mean is that I need a simple minded overview. Explaining the entire universe is not a simple minded thing. I have already provided much clarification to Anssi. Tell your friends to start with post #211 on this thread (my first response to Anssi) and then follow the conversation between Anssi and myself. I think they would find my thoughts quite clarified. But I doubt any of them would take the trouble.

Have fun -- Dick
 
  • #481
I didn't follow this incredibly long thread; I just jumped in now. Just reading Doctordick's last post, I can relate to what he says, but I still don't know what the discussion is about.

Before I even try to read all the posts: is the discussion here about the definition or interpretation of time, as the title suggests?

Any suggestions as to which post in this thread I should start reading to get an idea of Doctordick's idea? I ask because, as often happens, threads start out as one thing and end up as something completely different.

/Fredrik
 
  • #483
Whoa.. a lot of reading. Some comments along the way...

From http://home.jam.rr.com/dicksfiles/Explain/Explain.htm

Without going through all details I can directly relate to this

What I am saying is that understanding implies it is possible to predict expectations for information not known; the explanation constitutes a method which provides one with those rational expectations for unknown information consistent with what is known

This sounds very close to the general induction principles of optimal inference. If so, that is very much in line with my own thinking. When I want to understand reality, it basically means that I want to see how my view of things, and my generator of educated guesses, are induced from my current knowledge and experience, under the condition that I do not know everything, and I can't know everything. The reason I can't know everything at once is because my memory is too small, and the reason I can't compute everything instantly is because my computing power is too poor. Here comes a relation to time. This is my own thinking... and if Doctordick's ideas are anything close to this, I think I'll find it interesting.

How does that relation sound to you DD?

I'll read on when I get more time

/Fredrik
 
  • #484
I also make an association here to Bayesian thinking, but instead of Bayesian probability, I'd like to call it Bayesian expectation, for the very reason that the true probabilities themselves can only be estimated.

/Fredrik
 
  • #485
OK, I am only on the first page yet! But a question to Doctordick: did you read the ideas of Ariel Caticha, based on optimal inference and entropy methods?

For example arXiv.org/abs/physics/0311093
more at http://www.albany.edu/physics/ariel_caticha.htm

A quote from his paper
The procedure we follow differs in one remarkable way from the manner that has in the past been followed in setting up physical theories. Normally one starts by establishing a mathematical formalism, setting up a set of equations, and then one tries to append an interpretation to it. This is a very difficult problem; historically it has affected not only statistics and statistical physics – what is the meaning of probabilities and of entropy – but also quantum theory – what is the meaning of wave functions and amplitudes. The issue of whether the proposed interpretation is unique, or even whether it is allowed, always remains a legitimate objection and a point of controversy.

Here we proceed in the opposite order, we first decide what we are talking about and what we want to accomplish, and only afterwards we design the appropriate mathematical formalism. The advantage is that the issue of meaning never arises.

/Fredrik
 
  • #486
I found this page http://home.jam.rr.com/dicksfiles/reality/Contents.htm which I suspect is easier to read than this thread, as it looks more structured.

It seems the author tries to rethink from scratch, which is good. I take it the suggestions must be read in the context of his rethinking. I'll start and see if I understand you... some questions along the way on things that I "suspect" are key points for understanding the rest...

The Foundations of Physical Reality said:
The issue of truth by definition rests on two very straightforward points:
(1.) we either agree on our definitions or communication is impossible and
(2.) no acceptable definition can contain internal contradictions.

What about the possibility that some definitions, along with other concepts, are formed in the communication/interaction itself? And that mutual equilibration is evolving _due to_ communication?

For example, you and I start to speak; by starting out with a small common relation, we can build a larger common relation and set of "definitions"... but isn't that a process?

I'm not sure if I read you wrong here.
Comments?

/Fredrik
 
  • #487
The Foundations of Physical Reality said:
Thus, the problem becomes one of constructing a rational model of a totally unknown universe given nothing but a totally undefined stream of data which has been transcribed by a totally undefined process.

I like your bold stance so far, though sometimes the tone is a bit aggressive towards the supposedly "simple minded", but maybe there is a reason for that :)

I have an image in my head of what you set out to do: to somehow try to find a foolproof starting point and work from there. You also note that

The Foundations of Physical Reality said:
As it is my intention to make no assumptions whatsoever, even the smallest assumption becomes a hole which could possibly sink the whole structure. As I do not claim perfection, errors certainly exist within this treatise. None the less, I claim the attack will be shown to be extremely powerful.

I think this is a key point, one that I suspect I'll relate back to later on. In my thinking, stability and flexibility are what I consider to be factors of survival. A strategy that is basically "if I am right, I'll rule the world, and if I'm wrong I'll die" sounds like a high-risk strategy. It will be interesting to see how risk assessment is further handled.

In my thinking, the key goal is not some ultimate perfection, but optimal improvement/progression, which by construction is always changing and "in motion", and improvement of something presumes also its survival. I see it a bit like a game.

/Fredrik
 
  • #488
To return to the purpose of your tool...

The Foundations of Physical Reality said:
Thus, the problem becomes one of constructing a rational model of a totally unknown universe given nothing but a totally undefined stream of data which has been transcribed by a totally undefined process.

How do you picture an observer beeing exposed to this datastream? What happens when the observers memory is full, and runs out of memory for you constructions?

/Fredrik
 
  • #489
Please read a little of my conversation with Anssi!

Fra said:
How do you picture an observer being exposed to this data stream? What happens when the observer's memory is full, and runs out of memory for your constructions?
(Excuse me for correcting your spelling; it's sort of a compulsion ingrained by my father years ago.) You are clearly misinterpreting what I am doing. I made no claim to understanding how human beings unconsciously solve the problem; all I said is that they obviously solve it on a regular basis, which implies it is a solvable problem. Thus the fact that I have solved the problem bears little impact on how the average person does so. In fact, there are a lot of points to persuade one to accept the fact that they certainly do not use my method. In particular, we have the fact that no one (to my knowledge) uses that equation I derived and, secondly, their solutions are often ripe with errors. But they certainly are "solutions", and damned good ones at that (almost everyone agrees with "what is real").

My only point in bringing up the fact that “every living human being” has essentially “solved the problem of constructing a rational model of a totally unknown universe given nothing but a totally undefined stream of data which has been transcribed by a totally undefined process” was to convince the reader that the problem was solvable. Most serious scientists would hold that the problem is insoluble on the face of it. Why do you think they refuse to even consider the issue?

I still suggest it would be to your benefit to glance over my conversation with Anssi. As far as the question of what happens when the observer's memory is full and runs out of memory for my construction, the issue is quite simple. First, I am not claiming he is using my construction and, second, with regard to my construction, anything which is truly forgotten cannot possibly influence one's world view. My construct is based entirely on that data which is available and depends not at all on anything which has been forgotten.

Philosophically speaking, a common human's construct is based on the assumption that their current world view is valid and that anything they have forgotten was consistent with that world view. That itself could be a great explanation for the errors in their world view. The central point here is that a flaw-free explanation of anything must satisfy my equation.

Have fun -- Dick
 
  • #490
(I don't mind if you want to spellcheck -- go ahead.)

I am well aware that I may misinterpret your intentions, but that's what the questions are for.

You somewhere (I forgot where) defined an "explanation" as a method for obtaining an expectation? This sounds interesting, but I am still not sure if you mean what I think you mean.

Question on the definition of expectation: Do you mean by expectation something like a probability in the frequentist interpretation, defined on the currently known facts, i.e., history or past, or whatever is part of your known facts?

Or does expectation refer to the unknown, i.e., does what you know induce an expectation about the unknown, i.e., the future?

If you _define_ a probability pretty much as a relative frequency on a given, fixed set of facts, then the "expectation" applied to that set is of course exact by definition? Is this what you mean?

Or do you suggest that the expectation provides us with educated guesses in cases where we lack information?

You said somewhere, I think, that you make no predictions? But isn't an expectation a kind of prediction? I mean, the expectation is not exact, it doesn't tell us what will happen, but it gives us a basis for bet placing - thus there are good and bad expectations. Do you somehow claim that your expectation is the optimal one?

Let me ask this: What benefit would someone have in adopting your model over someone who uses the standard model? Would they somehow be more "fit" (thinking of the analogy of natural selection here)?

/Fredrik
 
