What is the Cosmological Natural Selection theory?

In summary, Cosmological Natural Selection is a testable idea for explaining why the constants in physics and cosmology----the parameters of the standard models used in the two fields----have the numerical values they do.
  • #36
marcus said:
To recapitulate what I said, for a universe to make a lot of black holes it has, for example, TO LIVE A LONG TIME and not collapse in a big crunch only a billion years or so into its existence. Our universe's long lifetime is already over 13 billion years with no sign of ending. A long lifetime can also be taken advantage of by chemical molecules to start their biochemistry and their biological evolution, which needs time for random accidents to get it started. So that long lifetime of the universe is also, as a byproduct, a life-enabling feature.
Which is more of a concentration of information, black holes, or biological life? Is that the point of a universe that allows black holes?
 
  • #37
Mike2 said:
Which is more of a concentration of information, black holes, or biological life?

Mike, maybe I don't understand your question. Don't have anything expert to say about this, but will just do the best I can.
I believe it is not reliably known what happens to the information about stuff that falls into a BH. It might be irretrievably lost. Or it might be recoverable if one waits for billions of years and watches the hole evaporate.

if the information is lost, then we can't deduce any details about what fell in and all we can see is the hole's mass, its spin (if it is rotating) and its charge (if it is electrically charged)

that seems to me like very little information at all!

maybe from the standpoint of a different observer it would look different, but for someone outside the hole it seems very iffy that he sees any great concentration of information.
 
  • #38
marcus said:
for a universe to make a lot of black holes it has for example TO LIVE A LONG TIME and not collapse in a big crunch only a billion years or so into its existence.
Two factors could cause the universe to collapse in only a billion years or even much less: the average density could be much higher (Omega_matter >> 1) or the gravitational 'constant' could be much larger (G_new >> G_Newton).

Would not either of these changes in cosmological attributes also tend to hasten and increase the formation of black holes?

Garth
 
  • #39
Garth said:
Two factors could cause the universe to collapse in only a billion years or even much less: the average density could be much higher (Omega_matter >> 1) or the gravitational 'constant' could be much larger (G_new >> G_Newton).

Would not either of these changes in cosmological attributes also tend to hasten and increase the formation of black holes?

Garth

Hi Garth, see page 16 of the usual paper (gr-qc/9404011), section 6.1, "Increasing the gravitational constant"

the Omegamatter issue is complicated by the fact that individual stars and subsequent black holes take time to form

To try to put it intuitively, if the density of matter near the start of expansion is very high, the universe expands a little and then quickly collapses in a big crunch.

because stars don't have time to form. If the density of matter is too high you get, in effect, only one black hole instead of the millions upon millions of individual black holes you get in a long-lived universe.
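
to put a rough number on that intuition, here is a quick back-of-envelope sketch (my own, not from the paper) using the standard closed, matter-only FRW result that the bang-to-crunch time is t = pi*Omega / (H0*(Omega - 1)^(3/2)); the Hubble constant value is a fiducial assumption:

Code:
# Bang-to-crunch lifetime of a closed, matter-only FRW universe:
# t_crunch = pi * Omega / (H0 * (Omega - 1)**1.5)   (standard textbook result)
import math

H0 = 70e3 / 3.086e22      # assumed Hubble constant, ~70 km/s/Mpc, in 1/s
GYR = 3.156e16            # seconds per billion years

for omega in (2, 10, 100, 1000):
    t_crunch = math.pi * omega / (H0 * (omega - 1) ** 1.5)
    print(f"Omega = {omega:5}: crunch after ~{t_crunch / GYR:.1f} Gyr")

a universe with, say, a thousand times the critical density recollapses after only a billion years or so, before much star formation can happen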
 
  • #40
100 black holes formed per second

Interesting side aspect of this, Garth: on page 2, first paragraph (of the usual paper, gr-qc/9404011), and in the footnote at the bottom of the page, you see the estimate that the present rate of formation of black holes in our universe is "likely to be as high as one hundred per second".

the footnote explains how this order-of-magnitude estimate is arrived at; you might like to check it for yourself

contrast this with what happens if the initial matter density is too high, leading to an early collapse where individual stars don't have time to form: essentially one black hole (the crunch) in the whole life of the universe

whereas with a thinner, less dense universe we are talking quantities of BHs like 100 per second for billions of years.
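
for scale, here is the one-line arithmetic behind that comparison (13.7 billion years is my assumed round number for the age):

Code:
# Total black holes formed at Smolin's estimated rate over our universe's age.
RATE = 100                   # black holes per second (order-of-magnitude estimate)
AGE = 13.7e9 * 3.156e7       # age of the universe in seconds, ~4.3e17 s

print(f"~{RATE * AGE:.1e} black holes")   # ~4.3e+19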

your idea of making it more dense does not automatically get you more black holes; it might get you fewer, even a lot fewer.
 
  • #41
marcus said:
Mike, maybe I don't understand your question. Don't have anything expert to say about this, but will just do the best I can.
I believe it is not reliably known what happens to the information about stuff that falls into a BH. It might be irretrievably lost. Or it might be recoverable if one waits for billions of years and watches the hole evaporate.

if the information is lost, then we can't deduce any details about what fell in and all we can see is the hole's mass, its spin (if it is rotating) and its charge (if it is electrically charged)

that seems to me like very little information at all!

maybe from the standpoint of a different observer it would look different, but for someone outside the hole it seems very iffy that he sees any great concentration of information.

It sounds like we are talking about at least a temporary loss of information in any event as things fall into BH's. I'm not sure if that matters at all. Information is the opposite of entropy, so there's no news that entropy increased as information is lost, right? Entropy always increases, right? So information always decreases, right? So what's the problem?

Tell me again what happens when high-entropy things fall into BH's. If there is no entropy associated with the horizon, then there is a decrease in entropy in the universe as they disappear behind the event horizon, right? But if there is a compensating increase in the surface area of a BH for things falling into it, then how would the rest of the universe know that? How can the rest of the universe obtain any information about the entropy state of the BH when no information can escape a BH? It's all still a bit confusing to me. Would appreciate a little clarification, thanks.
 
  • #42
Mike2 said:
...
Tell me again what happens when high-entropy things fall into BH's. If there is no entropy associated with the horizon, then there is a decrease in entropy in the universe as they disappear behind the event horizon, right? But if there is a compensating increase in the surface area of a BH for things falling into it, then how would the rest of the universe know that? How can the rest of the universe obtain any information about the entropy state of the BH when no information can escape a BH? ...

Mike, thanks for contributing such interesting material to this thread! I will pass on these questions in the hope that someone else will step in and contribute a BH (Bekenstein-Hawking or black hole, your choice :smile:) entropy talk. It is not directly on topic here, so it might be nice to make a separate BH entropy thread to discuss the questions you raise.

I believe that "If there is no entropy associated with the horizon, then..." is a vacuous assumption, a bit like saying "If 2 equals 3, then...". I also do not think you can assume that the rest of the universe is unable to observe the size, or mass, of a BH. Astronomers for example have recently been observing the BH at the center of the Milky Way and have estimated its mass and therefore its size. In that sense information (namely about its mass) CAN "escape a BH." So when one says that no info can get out of a BH one has to be real clear about what the words mean in context.

It would be nice if someone wants to volunteer a brief clarification of the issues you raise here Mike, though I suppose if you want to have a broader discussion about "what is information" and "what is entropy" and "in what sense is information not able to get from A to B and in what sense is it able to get from A' to B' and why do words mean different things in different contexts" then you should start a different thread.
 
  • #43
marcus said:
Interesting side aspect of this, Garth: on page 2, first paragraph (of the usual paper, gr-qc/9404011), and in the footnote at the bottom of the page, you see the estimate that the present rate of formation of black holes in our universe is "likely to be as high as one hundred per second".

the footnote explains how this order-of-magnitude estimate is arrived at; you might like to check it for yourself

contrast this with what happens if the initial matter density is too high, leading to an early collapse where individual stars don't have time to form: essentially one black hole (the crunch) in the whole life of the universe

whereas with a thinner, less dense universe we are talking quantities of BHs like 100 per second for billions of years.

your idea of making it more dense does not automatically get you more black holes; it might get you fewer, even a lot fewer.
1. Nevertheless, if the gravitational constant were ~10^40 times bigger, i.e. the gravitational force remained the same order of strength as the other three forces, then the universe might recycle in ~10^-22 secs (cosmological acceleration notwithstanding), so even if there were no other black holes, a constantly recycling whole universe would produce considerably more iterations than the present universe with 10^2 black holes per sec.

2. There may well be other ways of producing black holes than the final stage of the evolution of large stars. Primordial black holes might well be more numerous in universes with less initial homogeneity, i.e. larger fluctuations.

Although these possibilities are highly speculative, so is the rest of CNS; we don't know what goes on inside a black hole!

Does Smolin address these issues?

Garth
 
  • #44
Garth said:
Does Smolin address these issues?

Garth

bet you a dime he does :smile:
read the paper and see how

if you have trouble understanding the paper, ask
and maybe someone can help
but the general argument is certainly not hard to follow!
 
  • #45
marcus said:
bet you a dime he does :smile:
read the paper and see how

if you have trouble understanding the paper, ask
and maybe someone can help
but the general argument is certainly not hard to follow!
Yes, I have read and understood Smolin's paper you referred to, but I do not find an adequate answer to my questions; does he address them elsewhere?
From that paper:
In particular, as the small size of the primordial density fluctuations observed by COBE [15], as well as direct observational limits, seems to rule out the presence of primordial black holes in our universe, the dominant mode of black hole production in our universe is by the collapse of massive stars.


It is not sufficient to consider our universe only; the question is what might happen in other universes, with different physical 'parameters'. Although primordial black holes seem to be ruled out by observation in our smooth (1 part in ~10^5) universe, there is no reason why this should be so in others, perhaps those universes that did not suffer inflation (if indeed ours did!).

Secondly Smolin's assumption
* Almost every small change in the parameters of the standard models of particle physics and cosmology will either result in a universe that has less black holes than our present universe, or leaves that number unchanged.
does not take into account that a universe itself can constitute a significant class of black holes in the ensemble if it recycles quickly enough.

One of the anthropic coincidences arises from the widespread expectation that in a unification of all four forces, gravitation will be of the same strength as the other three. How come, then, that whereas the other three remained within two or three orders of magnitude of each other in strength, gravitation rapidly diminished yet stopped at OOM 10^-40 of the others, within the fairly narrow (2 OOM) band that allowed stars to form and yet a universe old enough for life?

If we say, as Smolin does, that that value of G actually maximises the number of black holes, which only incidentally is also propitious for life, then we also have to consider the possibility that gravitation might not diminish at all in other universes, but remain roughly the same strength as the other forces. If so, the lifetime of such a universe might be OOM 10^-20 sec, and be a far more efficient way of producing black holes than our universe's rate of 100/sec. Following on from Smolin's statement
Thus, we conclude that a typical universe in the ensemble (for N > N0) has parameters p close to a local maximum of B(p).
we might conclude instead that the local maximum will be with such rapidly recycling universes. Again, does Smolin address this elsewhere?

Garth
 
  • #46
Hi Garth, thanks for reading the paper and quoting from it, which, among other things, lets us have some common material to be looking at

Among the several points you raise, we could start here

Secondly Smolin's assumption

Almost every small change in the parameters of the standard models of particle physics and cosmology will either result in a universe that has less black holes than our present universe, or leaves that number unchanged.

does not take into account that a universe itself can constitute a significant class of black holes in the ensemble if it recycles quickly enough.

First, it is important to realize that what you call an assumption is NOT AN ASSUMPTION, but something he explicitly says is a PREDICTION of the model. That is, it is something that can be used to test the model in the hope of falsifying it.

I hope you understand that a conjecture or theory like this is not something that Smolin believes or disbelieves, or that I or anybody else does, or that anyone is trying to persuade you of. It is offered to fellow scientists for testing.

If you, Garth, can provide a solid objection to this prediction that actually shows it wrong then you will have had the honor of being the one to shoot down the CNS conjecture! I, for one, would congratulate you because I think that CNS is a very interesting idea and deserves to be carefully tested and shot down if its prediction (stated here) proves false.

Now you have offered an idea for falsifying this prediction, and thereby shooting CNS down. You have said that the prediction does not square with the possibility of a universe RECYCLING VERY FAST.

Smolin estimates that our universe creates roughly 100 black holes per second. So I would guess that you are imagining that a small (say one percent) change in the parameters of our observed universe might result in a universe that would experience a big bang-crunch-bounce-bang-crunch-bounce-bang...cycle at something faster than 100 times a second.

1. What one percent, or even 50 percent, change in any of the parameters do you think would produce this very rapidly cycling universe?

the idea in mathematics of a "small" change is left intentionally vague and depends on people being reasonable. In standard problems of optimization, of finding a local extremum (maximum or minimum), there is an idea of a small change being within an epsilon-neighborhood, where the difference over such a change in the argument can be approximated by the derivative. It doesn't span two separate peaks in the payoff function. this should not be worrisome. I am quite comfortable with any reasonable idea of "small"----if you want, think of a small change in the parameter as anything up to a 95 percent change!

Anyway, to make your objection relevant to the prediction, you have to come up with a change in the parameters of our universe that you can reasonably call a SMALL change that would lead to a very rapidly cycling universe. If you can please say what it is.

2. I don't think you can come up with a proposed change like that, but even if you could it would still not validate your objection!
the reason is that the prediction is about the number of black holes produced by one iteration, one universe, in one 'generation'.
Smolin is predicting that no one can exhibit a small change in the parameters of THIS universe which would result in THIS universe having more black holes in IT.

I take it, because you want to compete by rapid cycling, your picture of a competing universe only has ONE gravitational collapse in it, the terminal one. Let us for the sake of argument call that a black hole. So it has one black hole. You think your universe can win by having that happen very quickly---no time for anything else, just expand a little and recollapse. But that does not offer any competition to our universe with its parameters.

the number that Smolin's prediction counts, and what you are competing with, is roughly 100 times the age of the universe expressed in seconds (or rather its estimated reproductive lifetime).
That is on the order of 10^19 or more---tens of quintillions.

If you read Smolin's prediction carefully as you quoted it, you will see that you are offering ONE as competition for TENS OF QUINTILLIONS, so you see it does not work.

Too bad, please try again. It would be wonderful if one of us could shoot down this prediction made by CNS, effectively disposing of this interesting theory and clearing the way for the next falsifiable theory explaining the physical constants to be proposed and tested.
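
to make the bookkeeping explicit, here is a minimal sketch of the comparison (the lifetime and the 100-per-second rate are assumed round numbers; the counting convention, daughters per universe per generation, is the one Smolin's prediction uses):

Code:
# Daughters (black holes) per universe per GENERATION, which is the number
# Smolin's prediction counts -- not universes per second of wall-clock time.
AGE = 13.7e9 * 3.156e7       # assumed reproductive lifetime in seconds
LONG_LIVED = 100 * AGE       # ~4e19 daughters at ~100 black holes per second
FAST_RECYCLER = 1            # one terminal crunch = one daughter

print(f"long-lived universe: ~{LONG_LIVED:.0e} daughters per generation")
print(f"fast recycler:        {FAST_RECYCLER} daughter per generation")
# a lineage that multiplies by ~4e19 each generation swamps one that merely
# replaces itself, no matter how short the recycler's cycle time is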
 
  • #47
Hi marcus, I am not necessarily trying to shoot CNS down, just trying to understand it by cross-examination!

I stand corrected over my use of the word "assumption", Smolin uses that word later on
Let us make the more specific assumption that all the dimensionless parameters of the standard models of particle physics and cosmology change by small random increments at such events
and I misapplied it.

We do seem to disagree though on how the theory works.

My understanding of CNS is that you ‘start’ with a random ensemble of universes with a completely diverse range of physical parameters. Black holes, produced either as an end result of stellar evolution, primordially from dense fluctuations coming out of the Planck era, or by a final 'big crunch', then produce other universes, which are slightly different from their individual progenitors. Those that produce black holes more efficiently and more quickly give rise to other efficient BH producers and soon overwhelm the ensemble. After a sufficient time the most common type of universes are those that maximise BH production. It is therefore not surprising that we find ourselves in such a universe.

One question I have, though, is that if we take all the universes that have ever existed, would we not find the most common type of universe to be one that recycled in, say, 10^-20 seconds? It would seem that all that would be required to make such a universe would be to have all four forces be roughly of equal strength.

Garth
 
  • #48
Marcus, I will be getting back to the rest of the paper in a little while, but I am interested in knowing what you think of the conservation of matter-energy issues I've raised already, and what definition of black hole makes sense. I share Garth's confusion about why a "big crunch" scenario for a universe distinguishes itself from a non-big crunch black hole scenario.
 
  • #49
Garth said:
Hi marcus, I am not necessarily trying to shoot CNS down, just trying to understand it by cross-examination!

You are patient and congenial. I am very glad of someone else being interested in this conjecture or theory. It is good to cross examine it and if possible find flaws. what I like about your approach is you do not reject it out of hand.

there are articles you would be able to find in the literature which critique CNS constructively, in this way. however, perhaps the best approach is to make one's own critique independently and then see what one or more others have said

I think it is important to acknowledge that a theory like this does not have to give the whole picture, it can have limitations to what it covers and still make useful predictions.

CNS HAS BLANK AREAS. For instance, it does not envision what may have been at the beginning of this evolutionary process.

by rough analogy: it does not envision or make claims about the roots of the tree, or what the trunk looks like near the ground, or even where the trunk might be. It is completely vague about the total history of the ensemble of all the universes. In effect we simply find ourselves in the branches. the theory is only about a section of the branches.

at least locally we could, in principle, define the idea of a generation or cross-section of the branches. If we could crawl around in the tree we could identify who our brothers and sisters are, who our nephews and nieces are (the next gen), and who our aunts and uncles are (the previous gen), and we could even label who is our great-uncle and our great-great-uncle, by going back a certain number of forkings.

Mathematically we could color the tree: all our gen red, the next gen blue, the previous gen green, etc., by a tree search algorithm.
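
if you like, here is a toy sketch of that coloring (the branching pattern and the labels are made up, purely illustrative; CNS itself says nothing about the actual shape of the tree):

Code:
# Label generations in a hypothetical branching family tree of universes
# with a breadth-first search, relative to a chosen reference universe.
from collections import deque

children = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}   # made-up tree

def label_generations(root):
    gen = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in children.get(node, []):
            gen[child] = gen[node] + 1   # each daughter is one generation later
            queue.append(child)
    return gen

print(label_generations("A"))   # {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 2, 'F': 2}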

BUT STILL WE KNOW NOTHING OF THE BEGINNING. just like it could be "turtles all the way down" it could also be branches all the way down, no trunk! no roots! CNS does not envision or talk about these things.

all CNS pictures is that locally we are in a branching pattern, and we can derive from this as its first consequence that the vast majority of each generation will have parameters that promote forking
because if your branch forks a lot you will be well represented in the next generation

now from this first consequence we have to derive some likely OBSERVABLE consequence to serve as a testable prediction. So we can play by the rules of empiricism and agree to give up our CNS picture if its testable prediction is falsified. (this is the honor system of doing science in good faith and it is how the community of scientists can gradually get better ideas by ruling out the ones proven wrong or at least very unlikely)

this first consequence is not observable because we cannot climb around like kids in a real tree and count the branches and the forkings and the numbers in each "generation" of branches. We are essentially blind to other branches besides our own.

So we must derive a second consequence which is observable. this is more iffy and it goes just as you quoted

In each generation we expect the vast majority to be BH-prolific and to have evolved their constants to some local maximum. (evolution does not find global maxima; it can only feel its way up to the top of whatever hill it is on, and only rarely can it jump over to a higher hill, so do not expect ideal perfection, only LOCAL superiority compared to what could be obtained by small changes).

Because the vast majority will have parameters optimized for BH-abundance it is at least PLAUSIBLE that our universe might be typical and have parameters which are optimized, so a small change wouldn't improve them.

So this is the second consequence and it is about OUR universe, so it is OBSERVABLE and testable. It says that a small change of the parameters would not make our universe more BH-prolific (because our parameters, if they are typical, are already optimized to be better than parameters which are nearby in parameter-space).

Now, Garth, I want to quote what your picture of CNS is, from your last post, to see how we differ, or if we differ at all, in how we see it:
My understanding of CNS is that you ‘start’ with a random ensemble of universes with a completely diverse range of physical parameters. Black holes, produced either as an end result of stellar evolution, primordially from dense fluctuations coming out of the Planck era, or by a final 'big crunch', then produce other universes, which are slightly different from their individual progenitors. Those that produce black holes more efficiently and more quickly give rise to other efficient BH producers and soon overwhelm the ensemble. After a sufficient time the most common type of universes are those that maximise BH production. It is therefore not surprising that we find ourselves in such a universe.

Great! I think we agree exactly, except for your first sentence, where you sketch out a possible "start" with a random collection. I do not think Smolin has gone so far as to conjecture something about the start like that; maybe he has and I just missed it, but I think not. I feel we know nothing about the "start" and it is useless to conjecture, at least at this point.

But the rest of what you say sounds just like what I wanted to say except I shifted down to low gear and said it very slowly with laborious detail and you gave the "executive summary" short statement.

Vale,
 
  • #50
ohwilleke said:
Marcus, I will be getting back to the rest of the paper in a little while, but I am interested in knowing what you think of the conservation of matter-energy issues I've raised already, and what definition of black hole makes sense. I share Garth's confusion about why a "big crunch" scenario for a universe distinguishes itself from a non-big crunch black hole scenario.

Ohwilleke, I am glad you will be reading more in http://arxiv.org/abs/gr-qc/9404011

most of the matter energy in our universe was generated during inflation
and was not here prior to inflation (I believe Alan Guth has called this "the ultimate free lunch")

at present in any given comoving volume, as we watch say a Hubble volume of space expand, according to the standard model of cosmology a huge amount of dark energy is being created because the dark energy density is assumed constant---so expanding the volume adds energy.

the amount of matter in the volume remains essentially constant but the amount of energy increases, according to standard LambdaCDM consensus.
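
here is a minimal sketch of that bookkeeping (H0 ~ 70 km/s/Mpc and a dark energy fraction of ~0.7 are fiducial values I am assuming, not numbers from the thread):

Code:
# Dark energy contained in a comoving region, assuming constant dark energy
# density: take the region that fills one Hubble volume today and let it expand.
import math

H0 = 2.27e-18                 # assumed Hubble constant, 1/s (~70 km/s/Mpc)
G = 6.674e-11                 # m^3 kg^-1 s^-2
C = 2.998e8                   # m/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)      # critical density, kg/m^3
u_lambda = 0.7 * rho_crit * C**2              # dark energy density, J/m^3
V_now = 4 / 3 * math.pi * (C / H0) ** 3       # one Hubble volume today, m^3

for a in (1, 2, 4):                           # scale factor relative to today
    E = u_lambda * V_now * a**3               # constant density times a^3 volume
    print(f"a = {a}: dark energy in the region ~ {E:.1e} J")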

this is the consensus view of the present situation, but a similar remark applies to the presumed brief inflation period near the onset of expansion.
the scalar field, or "inflaton", eventually decayed to more ordinary types of energy and was the origin of most of the matter.

A black hole would appear not to need a lot of mass initially in order for its bounce to generate a universe like ours.

If you have an alternative view please present it!

If you have some problem with CNS connected to energy please restate it. I want to make sure I understand and not have to hunt for what I guess you must be referring to.

About the confusion you say you share with Garth, did my post to Garth just now help at all to clear that up?
 
  • #51
OK - so now we are talking about: Inflation, dark energy, other universes, other universes being spawned by Black Holes, these new universes being very similar but only slightly different from their individual progenitors.

Is there any empirical evidence for any of this? A Higgs boson? Even a consensus on what dark energy is, let alone identifying it in a laboratory? Proof of the existence of other universes? The BH spawning process? The mechanism in CNS that takes the place of DNA in biological natural selection?

Being also a theologian I might in addition ask: "How many of these conjectures can dance on a pinhead?"!

Just a friendly comment.

Garth
 
  • #52
marcus said:
most of the matter energy in our universe was generated during inflation and was not here prior to inflation (I believe Alan Guth has called this "the ultimate free lunch")

at present in any given comoving volume, as we watch say a Hubble volume of space expand, according to the standard model of cosmology a huge amount of dark energy is being created because the dark energy density is assumed constant---so expanding the volume adds energy.

the amount of matter in the volume remains essentially constant but the amount of energy increases, according to standard LambdaCDM consensus.

this is the consensus view of the present situation, but a similar remark applies to the presumed brief inflation period near the onset of expansion.
the scalar field, or "inflaton", eventually decayed to more ordinary types of energy and was the origin of most of the matter.

You learn something new every day. While I had heard that inflation called for apparently superluminal expansion speeds, I had never realized that it called for violation of matter-energy conservation. I had always figured that the initial energy was just so great that it was sufficient to do the job. I may have to go back and look at inflation again and see if that really makes any sense.

Edited to add Guth reference: http://www.paulagordon.com/shows/guth/

I'm not sure that inflation really does call for violation of matter-energy conservation, however. Certainly, it requires violation of matter conservation, but it isn't clear to me that this isn't just a case of a very high energy situation (a radiation-dominated era) "condensing" energy into matter, rather than creating matter out of energy that didn't exist before. If this is the case, my objections still hold. You can't get more matter-energy out of a black hole than you put into it.

As spelled out in this take on Guth's work, there isn't a matter-energy conservation violation: http://www.users.globalnet.co.uk/~slocks/links/Guth's%20Grand%20Guess.htm

And what about the conservation of energy? According to Einstein's theory of relativity, the energy of a gravitational field is negative. The energy of matter, however, is positive. So the entire universe-creation scenario could unfold without breaking conservation-of-energy laws. The positive energy of all matter in the universe could be precisely counterbalanced by the negative energy of all the gravity in the universe.
 
  • #53
ohwilleke said:
As spelled out in this take on Guth's work, there isn't a matter-energy conservation violation: http://www.users.globalnet.co.uk/~slocks/links/Guth's%20Grand%20Guess.htm

And what about the conservation of energy? According to Einstein's theory of relativity, the energy of a gravitational field is negative. The energy of matter, however, is positive. So the entire universe-creation scenario could unfold without breaking conservation-of-energy laws. The positive energy of all matter in the universe could be precisely counterbalanced by the negative energy of all the gravity in the universe.

Would you please explain the quote from Guth a bit? I have read that same passage, or ones like it, several times in wide-audience stuff by Guth. It often comes near where he says something like "the universe could be the ultimate free lunch".

Anyway it seems to confirm the famous "something for nothing" aspect of inflation. The big bang can start with very little energy (tho at high density) and as inflation creates positive energy it is counterbalanced by "negative gravitational energy".

this appears to satisfy Guth's intended audience because it doesn't confront the conservation of energy law that they expect. but in more technical treatments I have been unable to find it.

I think this relates to the well-known fact that in General Relativity one does not have a global energy conservation law. So a sophisticated or technical audience would not expect the books to balance throughout an episode of inflation. there are other examples in cosmology where energy is not conserved (the CMB redshift has eliminated some 99.9 percent of an original CMB photon's energy, and contemporary dark energy is constant density, so energy is being created by expansion, which is what "negative pressure" means).
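
the CMB number is a one-liner (z ~ 1090 for last scattering is the standard value):

Code:
# Fraction of a CMB photon's original energy remaining today: E ~ 1/(1+z).
Z_DEC = 1090                              # redshift of last scattering

remaining = 1 / (1 + Z_DEC)
print(f"remaining: {remaining:.2%}")      # ~0.09%
print(f"lost:      {1 - remaining:.2%}")  # ~99.91%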

In a technical article about inflation I have not seen anyone showing that energy is conserved, or ever mentioning this "negative gravitational energy". They may discuss it! I just never saw it. but in science journalism and in Alan Guth's popular lectures to liberal-arts audiences I have seen it.

So, what is one to think? Can you explain the "negative gravitational energy" created during inflation, as space expands enormously fast? Or shall we speculate about what it could mean? I have some ideas but would like to hear yours.

In any case it is certainly very convenient for the CNS hypothesis, isn't it? :smile: It takes care of how, from the pit of a black hole with only a star's worth of energy, one could expand out an entire universe!
 
  • #54
A fine and spirited discussion. But I think the question is moot without a quantum theory of gravity. It is, however, heartening to see Dr. Smolin is dedicating his efforts toward developing a viable QG model. And that's where the effort belongs, IMHO. Perhaps, as recently indicated, some of the better minds on the string side of the fence are tired of AP and ready to join the mix in solving the more immediate problem of quantizing GR. After all, the TOE bone is attached to the foot bone.
 
  • #55
This line of thinking appeals to me, however I just want to get this straight. So since in this scenario the vast majority of universes are going to be designed to maximize BH production, it is possible that our own universe is among the majority... however it is also possible that our universe could be among the minority, being only modestly efficient at BH production, and this theory could still be correct.

This is a thought I'm sure no one can comment on. I wonder whether the size of the BH being created is important or not. If it is not it seems like a universe that produced many small, maybe even subatomic BH's would be a more efficient reproducer than a universe that created bigger, but fewer, BH's.

Staying within the framework of this theory, maybe the universe is also subsequently tweaked for life production. Perhaps life can reach a level of technological sophistication where it could somehow participate in the production of BH's... Maybe one day we will be the universe's little farmers. We could create factories across the cosmos that mass-produced BH's. How could a universe without BH farmers even compete with this? If the size of the BH's is of no consequence maybe we could get started pretty soon with a new atom smasher. I started out sort of half joking... but I'm beginning to rather like this idea... someone stop me
 
  • #56
marcus said:
all CNS pictures is that locally we are in a branching pattern, and we can derive from this as its first consequence that the vast majority of each generation will have parameters that promote forking because if your branch forks a lot you will be well represented in the next generation
Yes, this is because CNS assumes a (high) degree of correlation between the physical laws of “parent” and “daughter” universes, and humans just “happen” to find themselves in a universe which is conducive to human lifeforms.

But it could equally well be the case that there is no significant correlation between the physical laws of “parent” and “daughter” universes, and humans still just “happen” to find themselves in a universe which is conducive to human lifeforms. Not intellectually very satisfying, but it may be true (see the quote from Lindley below). This IMHO is the usual interpretation of the AP.

CNS is IMHO simply a special version of AP which makes the additional assumption that there is a (high) degree of correlation between the physical laws of “parent” and “daughter” universes. There is no a priori reason why we should make such an additional assumption, hence in absence of any further evidence Ockham’s Razor would favour the straightforward and simpler AP. When the “testable predictions” of CNS have been tested, we may be in a position to make a better judgement.

MF
:smile:

It’s up to us to make sense of Nature; it’s not Nature’s obligation to behave as we would like.
David Lindley, in “Where Does the Weirdness Go?”
 
  • #57
moving finger said:
But it could equally well be the case that there is no significant correlation between the physical laws of “parent” and “daughter” universes,

hi MF, is this your own conjecture? Like CNS it postulates a BH-BB connection, I gather, so that there is a parent-daughter relation between branches. But unlike CNS it says the parameters change randomly from one generation to the next, "no signif. correlation."

this is a possible conjecture, certainly, but it does not seem to be testable.

CNS, by assuming only small changes in the "constants" from one gen to the next, is able to make a prediction that can be tested, so that one might possibly falsify CNS

a priori i see no reason to prefer one conjecture over the other as far as BELIEVING, but as I have indicated I don't think the role of these things is to be believed or disbelieved. what one wants is something that makes falsifiable predictions so it can be tested, of course it also has to be at least somewhat interesting to make it worthwhile but testability is paramount

thanks for your comment, always livens things up
:smile:
 
  • #58
marcus said:
hi MF, is this your own conjecture?
Does it matter where it originates?

marcus said:
Like CNS it postulates a BH-BB connection, I gather, so that there is a parent-daughter relation between branches. But unlike CNS it says the parameters change randomly from one generation to the next, "no signif. correlation."
IMHO this is a more “basic” conjecture – that one world gives rise to another world via some process which may include black holes (just like CNS), but there is not necessarily any correlation between the physical laws of parent & daughter worlds.

marcus said:
this is a possible conjecture, certainly, but it does not seem to be testable.
With respect, what does truth have to do with testability? As I said in my previous post, it may not seem intellectually satisfying to have a hypothesis which is not immediately testable, but this in itself does not invalidate the hypothesis.

marcus said:
CNS, by assuming only small changes in the "constants" from one gen to the next, is able to make a prediction that can be tested, so that one might possibly falsify CNS
How would you propose to test CNS?

marcus said:
a priori i see no reason to prefer one conjecture over the other as far as BELIEVING,
a priori that depends on whether one follows Ockham’s Razor or not

marcus said:
but as I have indicated I don't think the role of these things is to be believed or disbelieved. what one wants is something that makes falsifiable predictions so it can be tested, of course it also has to be at least somewhat interesting to make it worthwhile but testability is paramount
Again - How would you propose to test CNS?

marcus said:
thanks for your comment, always livens things up
you’re welcome

MF
:smile:
 
  • #59
moving finger said:
...
With respect, what does truth have to do with testability? As I said in my previous post, it may not seem intellectually satisfying to have a hypothesis which is not immediately testable, but this in itself does not invalidate the hypothesis.
...

I was not talking about truth, I was talking about testability.

MF, your question, in this context, suggests that there may be a fundamental incompatibility between our views of the scientific enterprise.

As a general rule I do not expect scientific theories to be true. I expect them to make predictions----and to make predictions by which they might possibly be proven false some day.

this is the first requirement and indeed a scientist has a kind of ethical responsibility (to the scientific community) to deal only in falsifiable theories and to be willing to relinquish a theory which has made predictions which have proven wrong, maybe after some reasonable efforts to patch it up have failed.

A theory (conjecture, hypothesis, "principle") which makes no UNpredictions is not predictive and is not scientifically meaningful in my view. That is, IMO, to be scientifically meaningful the theory must make some prediction about the outcome of some future experiment which might NOT turn out. If it is so mushy that it can accommodate any outcome of any experiment then it does not tell us anything.

MF MAY HAVE A DIFFERENT VIEW OF SCIENCE.

I see MF inserted some kind of "weasel?" word there and hedged by saying "immediately" testable.
this points to a gray area. there are some ideas which are simply not testable EVER in principle, or NOT IN A HUNDRED YEARS. One has to be reasonable. there are other ideas that are not testable right now but are expected to be testable in 2007 or 2008 if things go as planned, equipment is built etc.

the operative criterion is, for me, the health and unity of the scientific community. the community must be able to resolve differences of opinion empirically, by observation, before it gets divided into sects of adherents of this or that untestable idea. If people start working with untestable theories (and calling that science) then eventually the community may be split into two belief-groups, those who believe a certain theory and those who do not, and there will be no recourse to experiments to resolve the division.

So I have never said I thought everything should be IMMEDIATELY testable, this very minute, and do not wish to be pushed into an extreme position.
I can only talk with other people who have some common notion of what is reasonable to expect.

Certainly those who work on versions of the Std Mdl which will become testable by 2008 or 2009 have my full blessing :smile:

but those who have recourse to some principle that CANNOT EVEN IN PRINCIPLE be falsified are just as surely beyond the pale and should not be tolerated IMHO.
 
  • #60
so the first thing I look for in a scientific theory is testability.
Before plausibility or simplicity or beauty or anything else, I ask questions like: "What possible outcome of what conceivable experiment would justify abandoning the theory? What does the theory predict that might not turn out and would falsify the theory?"

I look for falsifiability before plausibility, or simplicity, or mathematical elegance, or logical neatness, or any other quality.

If the theory is testable (even in the fairly long term, I am not unreasonably demanding) then I will usually be willing to take a look-see, to judge if it is plausible or intuitively appealing, or whatever.

And I don't expect scientific theories to be TRUE

In physics no theory can ever be proven true--- it seems to be only a matter of time before any given theory will be found to have limited applicability and will be replaced by a better theory which predicts more accurately.

Any theory that is spelled out in exact detail is probably going to make a false or inaccurate prediction someday and be caught and replaced by a more accurate one. If you don't want it spelled out in mathematical detail but stated in very vague, general VERBAL terms then maybe you can state theories sloppily enough for them to be, in some common-language sense, "true". but if a theory is sharply defined and testable there is a very good chance of it being proven wrong. and it can never be proved right!

no matter how many times you test, there is still a chance that the next test will catch an error, a bad prediction.

so "true" is not something to expect of a theory. one expects it to be predictive, and testable, and one USES it to make predictions, with greater and greater confidence about where it applies and where it does not, and then someday (probably) it will be found to fail.

Since a scientific theory cannot be expected to be true, a theory is something to TEST and not something to BELIEVE IN. Since I do not have a pressing emotional need to believe unproven things, this does not bother me. I like to watch the drama of theories arising in human discussion and being tested and the more interesting they are, the better.

CNS I find extremely interesting because it makes predictions that are EVEN NOW AT THIS MOMENT BEING TESTED.

The "Anthropic Lack of Principles" as some call it, I do not find interesting because it makes no testable prediction. there is no conceivable outcome of any imaginable experiment which is incompatible with the existence of conscious life. so the "ALP" does not unpredict anything that might be observed! it is therefore scientifically empty. more like a religious article of faith of some kind. something to "believe in" for people who so desire.
 
  • #61
marcus said:
A theory (conjecture, hypothesis, "principle") which makes no UNpredictions is not predictive and is not scientifically meaningful in my view. That is, IMO, to be scientifically meaningful the theory must make some prediction about the outcome of some future experiment which might NOT turn out. If it is so mushy that it can accommodate any outcome of any experiment then it does not tell us anything.

MF MAY HAVE A DIFFERENT VIEW OF SCIENCE.

I don't think this is the view that I have of science. Take Newtonian Gravity. It didn't really predict a whole lot that we hadn't observed or couldn't observe at the time. Kepler had the planet thing pretty well figured out. DaVinci had figured out the bit about balls falling at equal rates regardless of composition. Interstellar gravitational experiments with any meaningful accuracy were hundreds of years into the future, and even solar system observations were far from being accurate enough to infer new planetary orbits. Universal gravitation didn't really predict new phenomena so much as it found an economical unifying interpretation (dare I say an elegant one) of what we already knew.

Likewise, Darwin didn't really predict a whole lot of new phenomena. Lamarck had already developed a classification system and had a proposed mechanism which explained much of what we saw, and which predicted, as Darwin's theory would as well, that species adapt to their environments. What Darwin added was primarily a more plausible mechanism to explain what had already been observed. Indeed, many key points of that mechanism were so obviously true that it was more of a meme than something that we had to go out and prove. Even Young Earth Creationists agree that natural selection happens, although they try to downplay its importance and the scale of events that it implies.
 
  • #62
ohwilleke said:
I don't think this is the view that I have of science. Take Newtonian Gravity. It didn't really predict a whole lot that we hadn't observed or couldn't observe at the time. Kepler had the planet thing pretty well figured out. DaVinci had figured out the bit about balls falling at equal rates regardless of composition...

disagree. Kepler's model doesn't include different masses of planets. It doesn't include, for instance, the orbits of Jovian moons. With Kepler one cannot estimate the relative masses of the Sun, Jupiter, the Moon, the Earth, etc.

Newton theory not mushy, could have been refuted if odd planetary or satellite behavior was observed which was inconsistent with it. also better fit to data than Kepler's model (where e.g. Jupiter has zero mass)

Newton model met my minimum requirement of falsifiability. It also did much more! falsifiability (non-mushiness) is not the only virtue! Newton's theory was also elegant, plausible, beautiful, simple, unifying etc. Over the long haul these may be more important virtues---I am not saying anything about ranking the importance of virtues----I am talking about a minimum requirement, the basic price of getting into the game.




Mazuz said:
This line of thinking appeals to me, however I just want to get this straight. So since in this scenario the vast majority of universes are going to be designed to maximize BH production, it is possible that our own universe is among the majority... however it is also possible that our universe could be among the minority, being only modestly efficient at BH production, and this theory could still be correct.

This is a thought I'm sure no one can comment on. I wonder whether the size of the BH being created is important or not. If it is not it seems like a universe that produced many small, maybe even subatomic BH's would be a more efficient reproducer than a universe that created bigger, but fewer, BH's.

Staying within the framework of this theory, maybe the universe is also subsequently tweaked for life production. Perhaps life can reach a level of technological sophistication where it could somehow participate in the production of BH's... Maybe one day we will be the universe's little farmers. We could create factories across the cosmos that mass-produced BH's. How could a universe without BH farmers even compete with this? If the size of the BH's is of no consequence maybe we could get started pretty soon with a new atom smasher. I started out sort of half joking... but I'm beginning to rather like this idea... someone stop me

I think you have it straight, and your reaction is on-target

I think you have found maybe the most telling criticism of CNS. It is not fatal, but it is a significant flaw that EVEN IF WE GET A NEGATIVE RESULT and find some parameter in the Std Mdl of physics that is NOT optimized and could be improved somewhat so as to make BHs more abundant, this would STILL NOT COMPLETELY DISPOSE of the theory.

that is a weakness in its falsifiability

As you point out, it still would not totally refute the picture of a branching system of universes because there might still be this system but OURS MIGHT BE IN A TINY MINORITY of improbable, unoptimized universes.

I think you have also put your finger on the most worrisome complication in the picture, the fact that in this branching system of universes it would be possible for conscious beings to play a role and arrange to artificially enhance BH production

this seems very unmotivated and I cannot imagine why they would do this, but if one accepts it as likely then it undermines the logic of the prediction.

One can still test to see if our parameters are optimal. If the params of the Std Mdl turn out to be a local optimum for BH production (so there is no small change that would increase it) that would still be extremely interesting and a sign that we are in that kind of branching system. But if they do NOT turn out optimal then there is the possibility that some diehard who likes the idea would argue that in our ancestry there are conscious agents (life) who COMPENSATE for some lack of optimality by artificially causing BH.

Well Mazuz, I tip my hat to you because these are two of the most cogent responses I have seen. but I like to remember Smolin's estimate that in our universe the BH production rate is on the order of 100 BH per second, and I think it is not too likely that conscious life could significantly increase that by interfering. And I cannot imagine why they would want to. So although it is a deep logical objection I do not worry about it.

I still want them to check and see if the parameters are already just naturally optimized for BH production---they seem to be, and that would be really exciting if confirmed!

About SIZE of BH it probably doesn't limit things much because inflation creates most of the matter----more than was there before inflation.

Also, recent LQG work by Bojowald, Maartens, Singh, Goswami indicates that there may be a lower bound on BH mass. For quantum reasons it may not be possible for gravitational collapse to produce a BH below a certain threshold mass.

(search Bojowald on the arxiv, or ask me for the URL)

and no one has ever SEEN one of these supposed small BHs; they just occur in various people's theories. So I don't worry too much about that either. Inflation will take care of supplying matter and the main paradigm is the stellar-mass (macro) BH. The rest is more like loose ends to tie up later.
 
  • #63
hi ohwilleke, you have your own view of how science progresses and what does and does not qualify as a scientific theory. that is fine with me. We all need to have whatever view of science suits our own intellectual bent.
I will reply to your view by stating mine: If it does not unpredict something that might be observed then it is not even a theory---"not even wrong" as per Wolfgang Pauli's famous quip.

to be unfalsifiable is kind of the nadir in my view: the worst thing a proposed theory can be, and a sign that it may be either pure mathematics, fantasy, or some faith-based pseudo-science.

Here is an earlier post to make this extra clear
marcus said:
so the first thing I look for in a scientific theory is testability.
Before plausibility or simplicity or beauty or anything else, I ask questions like: "What possible outcome of what conceivable experiment would justify abandoning the theory? What does the theory predict that might not turn out and would falsify the theory?"

I look for falsifiability before plausibility, or simplicity, or mathematical elegance, or logical neatness, or any other quality.

If the theory is testable (even in the fairly long term, I am not unreasonably demanding) then I will usually be willing to take a look-see, to judge if it is plausible or intuitively appealing, or whatever.

And I don't expect scientific theories to be TRUE

In physics no theory can ever be proven true--- it seems to be only a matter of time before any given theory will be found to have limited applicability and will be replaced by a better theory which predicts more accurately.

Any theory that is spelled out in exact detail is probably going to make a false or inaccurate prediction someday and be caught and replaced by a more accurate one. If you don't want it spelled out in mathematical detail but stated in very vague, general VERBAL terms then maybe you can state theories sloppily enough for them to be, in some common-language sense, "true". but if a theory is sharply defined and testable there is a very good chance of it being proven wrong. and it can never be proved right!

no matter how many times you test, there is still a chance that the next test will catch an error, a bad prediction.

so "true" is not something to expect of a theory. one expects it to be predictive, and testable, and one USES it to make predictions, with greater and greater confidence about where it applies and where it does not, and then someday (probably) it will be found to fail.

Since a scientific theory cannot be expected to be true, a theory is something to TEST and not something to BELIEVE IN. Since I do not have a pressing emotional need to believe unproven things, this does not bother me. I like to watch the drama of theories arising in human discussion and being tested and the more interesting they are, the better.

CNS I find extremely interesting because it makes predictions that are EVEN NOW AT THIS MOMENT BEING TESTED.

The "Anthropic Lack of Principles" as some call it, I do not find interesting because it makes no testable prediction. there is no conceivable outcome of any imaginable experiment which is incompatible with the existence of conscious life. so the "ALP" does not unpredict anything that might be observed! it is therefore scientifically empty. more like a religious article of faith of some kind. something to "believe in" for people who so desire.

as an historical example, ohwilleke, since you mentioned Kepler and Newton I will take the example of Einstein 1915 GR.
He published the theory in 1915 and it predicted a different bending angle for light (twice as big an angle as Newton theory could be interpreted to predict).
So GR was not mushy, it was not like the ALP, because there was a possible observation you could make that might turn out incompatible with it.
In 1919 they measured the angle of some light passing the sun and it
COULD HAVE TURNED OUT DIFFERENT AND FALSIFIED GR
but it didn't.
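
for the record, the two angles are easy to check (standard constants; the GR formula is theta = 4GM/(c^2 b), with the Newtonian value half that):

Code:
# Light deflection grazing the Sun: GR predicts twice the Newtonian angle.
import math

G, C = 6.674e-11, 2.998e8          # SI units
M_SUN, R_SUN = 1.989e30, 6.957e8   # kg, m (impact parameter = solar radius)
RAD_TO_ARCSEC = 180 / math.pi * 3600

theta_gr = 4 * G * M_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC
print(f"GR:        {theta_gr:.2f} arcsec")      # ~1.75
print(f"Newtonian: {theta_gr / 2:.2f} arcsec")  # ~0.87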

this ability to discriminate between future observations is not the be-all-and-end-all of scientific theories but it is, I take it, a minimal requirement. By this criterion the Anthropic Lack of Principles is indiscriminate and non-science because it is all-accepting: There is no possible observation from any conceivable future experiment that is inconsistent with the existence of conscious life.
A.L.P. does not make predictions because it cannot unpredict anything.
 
  • #64
string theory I do not find very interesting; it seems unable to generate testable predictions, and the string model of the black hole is complicated and peculiar----extremal, or near-extremal----very different from the black holes that astronomers observe and which are studied in LQG.

Having said that, however, I want to say something NICE about string. I think it is mildly interesting that there is this new string paper talking about "baby universes" forming in black holes:

http://arxiv.org/abs/hep-th/0504221

We already have this kind of thing in non-stringy types of quantum gravity.
I note that one of the authors of this paper, Cumrun Vafa, evidently watches LQG with more than usual alertness----he recently drew an analogy between a sector of LQG and a sector of "topological M-theory".
Vafa is more aware of potential contacts with LQG than some other string researchers, or at least seems less inclined to deny the possibility. So this partial parallelism around black holes is right in keeping (though it may mean nothing in the long run, of course).
 
  • #65
Does anybody get New Scientist magazine?

One way to get some perspective on CNS is to look at where the "competition" is. When String/M still seemed promising as a potential "Theory of Everything", what that meant in practical terms was, to a large extent, an explanation for the 26-odd parameters of the Standard Model (see the quote from Edward Witten further down).

To be more specific: if, back in say 1990, someone had offered a theory that explained the parameters (some 26 numbers) you need to plug into the StdMdl to make it work----that said why those numbers have to be what they are----and that also included gravity, it would have met what most people expected from a "Theory of Everything". So, curiously enough, the only candidate for a "ToE" currently offering a mechanism that explains the numbers and makes testable predictions is CNS!

String/M does not have a way of saying why the parameters of the Std Mdl are what they are, although that has been a Fabled Goal of stringy research for some decades.

Instead, a dark horse, Cosmic Natural Selection, has appeared in the race.

So the whole thing is intriguing to watch and there was just an article in the New Scientist (30 April 05 issue) relating to this. Has anyone seen it?

http://www.newscientist.com/channel/fundamentals/mg18624971.500

The theory of everything: Are we nearly there yet?
30 April 2005
Stephen Battersby

"The hunt for the theory of everything is turning into a road trip from hell - and don't even ask who's reading the map...

... Thirty years have passed since physicists established the 'standard model' of particle physics, a set of limited theories that cover the basics of how particles and forces interact. Since then, they have been trying to ..."

According to Peter Woit's blog post on the article, Witten is quoted as believing that M-theory may have a unique solution that fits our universe and explains the constants of the standard model.

This is a backward glance at the old hope. The phrase "explain the constants of the standard model" was string's El Dorado. And in line with Spain's fruitless search for the fabled golden city, Witten is referred to as a "string grandee".

He is quoted as opposed to the Multiverse or "Landscape" trend in string research but sounding a bit discouraged. In string theory, says Witten, "More work has always given more possibilities - far more than anyone wanted... I hope that current discussion of the string landscape isn't on the right track, but I have no convincing counter-arguments."

The article quotes Susskind and Weinberg as believing in the existence of a multiverse, even if this means that "all we can hope for from a final theory is a huge range of possibilities".

Can anyone who gets the New Scientist supply any more quotes?
 
  • #66
marcus, methinks you protest too much about the AP!

It is not falsifiable, as you rightly point out, not because it is no good at making predictions but because, given our existence, it is too good.

However, first, it is not meant to be a theory but only an observation about how the universe must be, given our existence: suitable for life somewhere and 'somewhen' within it.

Secondly, if you insist on strictly applying Popper's falsifiability criterion to scientific theories you won't have many left! This is particularly so in the 'softer sciences'. However, even in the 'harder' sciences such as physics, almost any theory that appears to be falsified may be rescued by the addition of another 'epicycle'---just take the standard cosmological LCDM theory for starters!

Nevertheless, the fact that CNS is falsifiable---albeit, like many theories, not indisputably so---does make it a good theory. I shall watch its progress with interest; thank you for clarifying it for me.

Garth
 
  • #67
Garth said:
... However, even in the 'harder' sciences such as physics, almost any theory that appears to be falsified may be rescued by the addition of another 'epicycle'---just take the standard cosmological LCDM theory for starters!

Nevertheless, the fact that CNS is falsifiable---albeit, like many theories, not indisputably so---does make it a good theory. I shall watch its progress with interest; thank you for clarifying it for me.

Thank you too, Garth. I am glad the discussion did serve that purpose and that CNS meets with your limited approval!

the "epicycles" point is well taken! the whole scientific enterprise depends on a certain reasonableness. the advocates of a theory discredited by experiment have to be willing to stop adding epicycles after a while. it is natural for them to want to patch their theory up but past a point this gets ridiculous. If everyone acts in sufficient good faith then, as with other social customs, we can blunder through.
 
  • #68
marcus said:
The whole scientific enterprise depends on a certain reasonableness. The advocates of a theory discredited by experiment have to be willing to stop adding epicycles after a while. It is natural for them to want to patch their theory up, but past a point this gets ridiculous. If everyone acts in sufficient good faith then, as with other social customs, we can blunder through.

Sometimes you see that not the advocates, but later workers looking for a solvable problem, will resuscitate a falsified theory, sometimes successfully, in a new context. Both Kaluza-Klein and Weyl's conformal theory are cases of this: falsified in their day, but later found to be useful.
 
  • #69
selfAdjoint said:
Sometimes you see that not the advocates, but later workers looking for a solvable problem, will resuscitate a falsified theory, sometimes successfully, in a new context. Both Kaluza-Klein and Weyl's conformal theory are cases of this: falsified in their day, but later found to be useful.

If I understand your point, it distinguishes between mathematics (which can be reused) and testable theories about nature (which can be proven wrong even if they use valid mathematics).

If a theory about nature makes false predictions it normally gets chucked out. But its mathematical machinery, if valid as mathematics, could be re-applied elsewhere and might indeed prove useful, even though the theory for which it was originally invented has been ruled out.

I think your point confirms what many of us suspect (or I do at least), which is that progress can be made by formulating and testing physical theories even when they fail. It is not a pure loss for a theory to be formulated, made to predict something, and then checked. Even if it doesn't check out in the end, something is learned, and (as your post reminds us) the mathematical machinery may be useful in other applications.
 
  • #70
At one point string/M inspired hope that it might eventually develop into a theory explaining the parameters of the Standard Model. Now many string researchers have given up on that hope and accepted the inevitability of something Lubos Motl calls the "Haystack" of possible versions, each with its own physics. There was a recent article about this---I don't have a subscription to the N.S. but maybe someone else does and has seen it:

marcus said:
... the whole thing is intriguing to watch and there was just an article in the New Scientist (30 April 05 issue) relating to this. Has anyone seen it?

http://www.newscientist.com/channel/fundamentals/mg18624971.500

The theory of everything: Are we nearly there yet?
30 April 2005
Stephen Battersby

"The hunt for the theory of everything is turning into a road trip from hell - and don't even ask who's reading the map...

Witten... is quoted as opposed to the Multiverse or "Landscape" trend in string research but sounding a bit discouraged. In string theory, says Witten, "More work has always given more possibilities - far more than anyone wanted... I hope that current discussion of the string landscape isn't on the right track, but I have no convincing counter-arguments."
...
Can anyone who gets the New Scientist supply any more quotes?

I think string leaders are gradually becoming more public and forceful in their opposition to the "Landscape" trend. It would have been nice if Witten had come out earlier and more forcefully, but it is worth charting.

Earlier this year there was an article by the San Francisco Chronicle's science writer, Keay Davidson. The title was:
"'Theory of everything' tying researchers up in knots"

http://sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2005/03/14/MNGRMBOURE1.DTL

This did quote major scientists outside string, including Nobel laureates. But there was no leader from within string expressing criticism of the way the field has been going. There certainly was no quote like this from Witten!

For example, David Gross has been an outspoken, even impassioned, opponent of Landscape/Anthropery, but only in-house. He has not come out in as public a medium as the New Scientist. And in the 14 March SF Chronicle article he breathed no word of string self-criticism.

So maybe the current (30 April) New Scientist article about the string theory "road trip from hell" does represent a bit of progress towards more forthcoming public recognition of the crisis by the string leadership.
 
