Will Galactus Become a Reality in the Age of Immortality?

  • #1
PhizzicsPhan
A disturbing thought occurred to me recently: given that we are on the cusp of personal immortality (Kurzweil's The Singularity is Near, etc.) and our collective entrance into the age of conscious information (for lack of a better term), it seems that there will eventually be a real resource shortage at the most fundamental level.

What I'm suggesting is that as individuals make the transition to electronic form in coming decades and centuries, whatever the actual substrate for each individual consciousness turns out to be, a steadily growing and eventually massive amount of matter will be required to accommodate each individual's memory and processing needs. Just as hard drives in today's PCs get bigger and bigger, the memory and processing requirements of immortal individuals will, over the course of millions and billions of years, become so large that each individual will eventually seek to claim entire stellar systems or galaxies to grow their "body" - just to act as the substrate for their ever-growing demand for physical medium for memory and computation.
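To give a feel for the scale, here's a rough back-of-envelope sketch (the starting mass, the doubling time and the milestone masses are all assumptions picked purely for illustration, nothing more):

[CODE]
# Back-of-envelope sketch with purely illustrative assumptions:
# - an uploaded individual starts with 1 kg of computing/storage substrate
# - its demand for substrate doubles every 10,000 years (assumed, not established)
# - milestone masses are rough order-of-magnitude values
import math

START_MASS_KG = 1.0
DOUBLING_TIME_YEARS = 10_000.0

MILESTONES = {
    "the Earth": 6.0e24,      # ~mass of the Earth, kg
    "the Sun": 2.0e30,        # ~mass of the Sun, kg
    "the Milky Way": 2.0e42,  # ~total mass of the galaxy, kg (very rough)
}

for name, mass_kg in MILESTONES.items():
    doublings = math.log2(mass_kg / START_MASS_KG)
    years = doublings * DOUBLING_TIME_YEARS
    print(f"Mass of {name}: ~{doublings:.0f} doublings, ~{years:,.0f} years")
[/CODE]

Even with an absurdly slow assumed doubling time, the compounding demand reaches galactic amounts of matter within a couple of million years; faster growth assumptions collapse the timescale to mere millennia.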

This suggests a grand conflict will emerge between ever larger individuals over the course of future history. And it also suggests disturbing limits on the possibility of continued existence for beings anything like us in our current form as our physical environments are subsumed within galactic-scale individuals (an interesting parallel to Galactus' planet-devouring habits in the Fantastic Four comic book series).

Any thoughts on these rampant speculations?
 
  • #2
PhizzicsPhan said:
A disturbing thought occurred to me recently: given that we are on the cusp of personal immortality (Kurzweil's The Singularity is Near, etc.) and our collective entrance into the age of conscious information (for lack of a better term), it seems that there will eventually be a real resource shortage at the most fundamental level.

You're tossing "The Singularity is Near" out there as if we have all read it. I don't know what you are talking about as far as "personal immortality" and "collective entrance into the age of conscious information". Please explain.
 
  • #3
Kurzweil makes a pretty convincing case in The Singularity is Near and other books that computing power, medical knowledge and other technologies will allow humans to choose to be immortal within a few decades - or at least extend our lives to the point where we can choose to be truly immortal. He also argues that all biological species that reach a certain level of technology will transcend their biological origins and shift their substrate from biological to electronic form over time. We'll find out if he's right in the coming decades and I suspect he is - if we don't kill ourselves first or face perpetual global recession/depression from peak oil or climate change.
 
  • #4
We don't have anywhere near the medical or technological abilities to do anything of the sort within the coming decades. I'd even say centuries is a bit optimistic.
 
  • #5
Jared, even if it takes centuries to get there, my original question remains.
 
  • #6
PhizzicsPhan said:
Jared, even if it takes centuries to get there, my original question remains.

I answered it. You asked for "thoughts on these rampant speculations", I provided them.

Outside of that, I don't see any formal question.
 
  • #7
Not so much, but it's all good.
 
  • #8
jarednjames said:
We don't have anywhere near the medical or technological abilities to do anything of the sort within the coming decades. I'd even say centuries is a bit optimistic.



Yes, it's very optimistic.
 
  • #9
To be clear: my question was not about the temporal accuracy of Kurzweil's speculations. What I was asking about, in the spirit of broad philosophizing, was whether others agree that once we do reach the biological-to-electronic transition there will eventually be a conflict between ever larger individuals over the stuff of the universe, which will be required for individual memory and processing power expansion. So the timing of the singularity and of personal immortality, whether it's in two decades or two centuries, is irrelevant to my broader speculation.
 
  • #10
PhizzicsPhan said:
timing of the singularity

I'm obviously missing something here, but what singularity?
 
  • #11
PhizzicsPhan said:
To be clear: my question was not about the temporal accuracy of Kurzweil's speculations. What I was asking about, in the spirit of broad philosophizing, was whether others agree that once we do reach the biological-to-electronic transition there will eventually be a conflict between ever larger individuals over the stuff of the universe, which will be required for individual memory and processing power expansion. So the timing of the singularity and of personal immortality, whether it's in two decades or two centuries, is irrelevant to my broader speculation.
I think it's a bit ridiculous to speculate. If we have reached a point in technology where a human can place themselves within a computer and remain cognizant, I don't think computer memory will be an issue. I don't think all people will choose to live in a computer; I wouldn't.

This thread is really too speculative for any meaningful discussion.
 
  • #12
Evo, philosophy is about speculation and I'm simply speculating about possible futures for humanity and the universe more generally.

I agree that many humans will choose to remain biological and choose to die a normal death. But others won't, when this becomes a possibility. Kurzweil speculates that this choice will become possible in about 25 years, but whether it's 25 years or 250 years isn't really that important in the grand scheme of things. When it does become a possibility it will be all but certain that biological humans and other life forms will become anomalies. The vast majority of living entities will become electronic and dominate the universe. My worry, in the OP, is that this process may inevitably lead to the conscription of all matter in the universe to feed ever-larger selves that need this matter for their vastly expanded "bodies."

Jared, the "singularity" is what Kurzweil calls the point at which computers become smart enough and powerful enough to improve themselves. This point is key to Kurzweil's vision of human immortality because all bets are off once computers are able to improve themselves in a runaway exponential fashion. Check out his book. It's very interesting reading.
 
  • #13
I don't think storage would be a big issue. We could probably fit the intellect of all three Kardashian sisters on a thumb drive, for example. :-p
 
  • #14
So your question is basically "what happens when we have AI?" and you're not far off reciting the plot of the Terminator films. Just replace "this process may inevitably lead to the conscription of all matter in the universe to feed ever-larger selves that need this matter for their vastly expanded bodies." with "want to kill everyone".
 
  • #15
PhizzicsPhan said:
Evo, philosophy is about speculation and I'm simply speculating about possible futures for humanity and the universe more generally.
Real philosophy isn't idle speculation. Asking what would happen if an amoeba became omnipotent is not philosophy. Asking what if we had warp drive is not philosophy. Asking what kind of memory storage issues there would be if it were ever possible to transfer ourselves into a computer is not philosophy. We don't know what "computer memory" would even mean at that point in time. We can't use today's technology to speculate; that's a huge mistake.
 
  • #16
Ray Kurzweil is a crank. He plots things like cave paintings, writing, semaphore and email on a chart that shows an upward line and points to that as evidence that before 2050 we're all going to live in upload heaven in the sky. If I were you I would read up on the subjects that Kurzweil preaches on, so that you can gain an appreciation of how woefully ill-informed he is on the subjects at hand.

However, in the spirit of speculative conversation: if we imagine an upload civilisation that dismantles most of the mass in the solar system to build a big computer for themselves to live in (or swarms of computers, like a Matrioshka brain), then whether or not they are a threat depends on how much they care about keeping the Earth solid. I'd advise you to read Accelerando by Charles Stross; it's a very good book about what life would be like if a technological singularity happened this century (though he himself ridicules the idea, he's pretty good at writing it).

The ebook is free

http://www.manybooks.net/titles/strosscother05accelerando-txt.html
 
  • #17
even if we did work out immortality, we'd also have to stop people from reproducing or it would be really uncomfortable.
 
  • #18
Pythagorean said:
even if we did work out immortality, we'd also have to stop people from reproducing or it would be really uncomfortable.

I don't think that would be a pressing issue for a while; it turns out that if you give women rights and education, they don't want to spend their lives spitting out babies until one kills them. On top of that, if healthy (i.e. pre-old-age) life spans were greatly extended, people would put off having children for longer. If it's going to take over one hundred years to go from your teens to old age (a 65+ body), why have children in the first half of your life?
 
  • #19
PhizzicsPhan said:
A disturbing thought occurred to me recently: given that we are on the cusp of personal immortality (Kurzweil's The Singularity is Near, etc.) and our collective entrance into the age of conscious information (for lack of a better term), it seems that there will eventually be a real resource shortage at the most fundamental level.

What I'm suggesting is that as individuals make the transition to electronic form in coming decades and centuries, whatever the actual substrate for each individual consciousness turns out to be, a steadily growing and eventually massive amount of matter will be required to accommodate each individual's memory and processing needs. Just as hard drives in today's PCs get bigger and bigger, the memory and processing requirements of immortal individuals will, over the course of millions and billions of years, become so large that each individual will eventually seek to claim entire stellar systems or galaxies to grow their "body" - just to act as the substrate for their ever-growing demand for physical medium for memory and computation.

This suggests a grand conflict will emerge between ever larger individuals over the course of future history. And it also suggests disturbing limits on the possibility of continued existence for beings anything like us in our current form as our physical environments are subsumed within galactic-scale individuals (an interesting parallel to Galactus' planet-devouring habits in the Fantastic Four comic book series).

Any thoughts on these rampant speculations?

given that we are on the cusp of personal immortality...

How do you figure that we are on the cusp of personal immortality?
 
  • #20
What I worry about is this: after we have solved these problems that preoccupy you and Kurzweil, what happens then? What about the next one - have you thought about that? What then, I ask you, what then?
 
Last edited by a moderator:
  • #21
alt said:
How do you figure that we are on the cusp of personal immortality?

Kurzweil and his ilk normally cite his "law of accelerating returns". He claims that all technologies undergo exponential development and infers from that that nanotechnology, AI and biotech will buddy up and provide cheap 'n' easy immortality.

One word: Bollocks

The man knows nothing about biology, nothing about nanotech (both my fields) and I'd hazard he has nothing more than a simplistic understanding of AI research. He is a perfect example of the Dunning-Kruger effect.

EDIT: The "both my fields" thing was meant to highlight my angst at his perversion of them, not an appeal to authority.
 
Last edited:
  • #22
ryan_m_b said:
He is a perfect example of the Dunning-Kruger effect.

I'll second that.

They learn a bunch of buzzwords and run with them to write a book - this is the result.
 
  • #23
In any case, fundamental to this question seems to be the notion of data-storage.

Will humans remain physically embodied, acting directly on the world, or will they inhabit some sort of computer mainframe through which they control robots?

Who will build the data storage centers for these proposed "massive individuals"? What is their motive? Can this computer-bound individual simply be killed by unplugging the system or wiping the hard drive, so to speak?

How will we begin to use entire stellar systems as data storage systems within the next 25 to 50 years? Is this grounded at all in any reasonable assumptions or is it baseless speculation?

One can easily write a compelling science fiction book around this possibility (and it's been done here: https://www.amazon.com/dp/076534825X/?tag=pfamazon01-20), but this does not make for feasible science.

It sounds like human ingenuity is the crux of his argument: (http://en.wikipedia.org/wiki/Moore's_law#Consequences_and_limitation)

My personal gripe with this argument is that we are in a period of unprecedented change. What does this mean? It means that there is no precedent. Attempting to apply past rates of growth/change (as he seems to do with this accelerating metric of technology) to our current situation doesn't make much sense. We don't know where technology will be in 6 months, let alone 5 years; it MIGHT continue to develop at an exponential rate, or it might settle down into a more linear rate of growth. The point is, we are not justified in making assumptions about such an unprecedented phenomenon.

We don't know yet how the future will develop. To assume that just because we see exponential growth now we will continue to see it forever seems foolish. At best it is a lapse into the assumption Hume warned us about - that tomorrow will continue the same as today; at worst, a gross misapplication of logic and baseless speculation.
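
As a toy numerical illustration of that last point (all of the numbers below are made up purely for the example): an exponential trend and a logistic, S-shaped trend can agree almost perfectly over the early data and still diverge wildly once extrapolated.

[CODE]
# Toy comparison (hypothetical numbers): early on, exponential and
# logistic (S-curve) growth are nearly indistinguishable, but their
# long-run extrapolations differ enormously.
import math

GROWTH_RATE = 0.5       # assumed per-step growth rate
CARRYING_CAP = 1000.0   # assumed ceiling for the logistic curve
X0 = 1.0                # shared starting value

def exponential(t):
    return X0 * math.exp(GROWTH_RATE * t)

def logistic(t):
    # same initial value and initial growth rate, but with a ceiling
    return CARRYING_CAP / (1.0 + (CARRYING_CAP / X0 - 1.0) * math.exp(-GROWTH_RATE * t))

print(f"{'t':>4} {'exponential':>14} {'logistic':>12}")
for t in (0, 2, 4, 6, 10, 20, 40):
    print(f"{t:>4} {exponential(t):>14.1f} {logistic(t):>12.1f}")
[/CODE]

With only the first handful of points there is simply no way to tell which regime you are in, which is exactly why fitting a curve to past growth and reading a date for "the singularity" off the extrapolation proves nothing.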
 
  • #24
I'd add that we are not in a period of exponential change at all. Computer transistor count is exponential, but to apply that to all areas of science is foolish beyond belief. Other than computer technology there is little we can point to as "exponential".

Fields progress rapidly at times and slowly at others, but unless a field has a metric to measure by (e.g. number of transistors) I don't see how we can say it's exponential. We discovered the role of siRNAs in regulating protein translation some years ago. Does that represent an exponential leap from previous understanding? Of course not!

By Kurzweil's logic, travel in the 20th century should have progressed:
1903 - Wright brothers' flight
1930s - jet engine
1961 - manned orbit
1969 - manned Moon landing
1980 - manned Jovian visit
1990 - interstellar travel
1999 - FTL time-travelling machine

It's ridiculous to look at a series of progressions, plot it on a graph and draw the line upwards to infinity.
 
  • #25
Thanks Ryan, I just bought Accelerando on your recommendation. Looks very interesting.

I'll speculate a little further here and then let this thread languish.

It does seem that, with the transition from biological to electronic form becoming possible in coming decades, more and more people will choose this option. Over the course of vast time, and with space travel made possible by vastly extended lifespans, competition will arise over matter as the substrate for computation (what Kurzweil calls "computronium" - matter organized for computation).

Here's the million dollar question: will that process eventually, over billions of years, lead to just one individual, "God," emerging victorious? And what will existence be like for this God who can contemplate nothing more than its components because it will consist of literally everything in the universe?

Speculating even further, is this process inevitable in any universe? And if so, can we arrive at a proof of God in this manner? Are we right now existing as a simulation within God as an entity that began much like us and eventually became the entire universe? Is this process duplicated in a nested manner such that each universe becomes a single being over time and then simulates/creates new universes for mental and emotional stimulation? ("Simulation" and "creation" are synonymous in this context because there is no distinction between simulation and creation - it's all just reality, whether we call it simulated or not).

Happy Monday!
 
  • #26
PhizzicsPhan said:
It does seem that, with the transition from biological to electronic form becoming possible in coming decades, more and more people will choose this option.

No, it really doesn't. We are nowhere close to this; it is not going to happen in the next few decades. Why do you keep insisting we are?
Over the course of vast time, and with space travel made possible by vastly extended lifespans, competition will arise over matter as the substrate for computation (what Kurzweil calls "computronium" - matter organized for computation).

If we're going to start making things up, this thread can be called BSium. You keep repeating this nonsense that doesn't conform to reality in any way whatsoever.
Here's the million dollar question: will that process eventually, over billions of years, lead to just one individual, "God," emerging victorious? And what will existence be like for this God who can contemplate nothing more than its components because it will consist of literally everything in the universe?

We're off the scale of speculation here.

1. We don't have the technology, so to speculate about its use/abilities in a supposedly 'meaningful' way is nonsense.
2. Given number 1, to speculate even further is just pure science fiction.
Speculating even further, is this process inevitable in any universe? And if so, can we arrive at a proof of God in this manner? Are we right now existing as a simulation within God as an entity that began much like us and eventually became the entire universe? Is this process duplicated in a nested manner such that each universe becomes a single being over time and then simulates/creates new universes for mental and emotional stimulation? ("Simulation" and "creation" are synonymous in this context because there is no distinction between simulation and creation - it's all just reality, whether we call it simulated or not).

Oh blimey, we really are pushing the limits now. You've taken a nonsense idea without any basis in reality, speculated on how such technology might be used, and then brought it full circle to "everything is a simulation".
 
  • #27
PhizzicsPhan said:
Thanks Ryan, I just bought Accelerando on your recommendation. Looks very interesting.

I'll speculate a little further here and then let this thread languish.

It does seem that, with the transition from biological to electronic form becoming possible in coming decades, more and more people will choose this option. Over the course of vast time, and with space travel made possible by vastly extended lifespans, competition will arise over matter as the substrate for computation (what Kurzweil calls "computronium" - matter organized for computation).

Here's the million dollar question: will that process eventually, over billions of years, lead to just one individual, "God," emerging victorious? And what will existence be like for this God who can contemplate nothing more than its components because it will consist of literally everything in the universe?

Speculating even further, is this process inevitable in any universe? And if so, can we arrive at a proof of God in this manner? Are we right now existing as a simulation within God as an entity that began much like us and eventually became the entire universe? Is this process duplicated in a nested manner such that each universe becomes a single being over time and then simulates/creates new universes for mental and emotional stimulation? ("Simulation" and "creation" are synonymous in this context because there is no distinction between simulation and creation - it's all just reality, whether we call it simulated or not).

Happy Monday!

Please, read Spin by Robert Charles Wilson. This is one of the ideas he propounds - but fortunately for us, he lets it reside firmly in the arena of science fiction. He does not attempt to sell the book as a futurist scenario that will occur in 40-50 years.
 
  • #28
EnergyStrom, I made it pretty clear in my posts that while Kurzweil and others are predicting the biological to electronic transition becoming possible in decades (by the end of the century, actually), the larger questions I raise will play out over "billions" of years. I mentioned this a few times.
 
  • #29
PhizzicsPhan said:
EnergyStrom, I made it pretty clear in my posts that while Kurzweil and others are predicting the biological to electronic transition becoming possible in decades (by the end of the century, actually),

Again, this is not based in reality so let's just drop it.
the larger questions I raise will play out over "billions" of years. I mentioned this a few times.

I don't see "larger questions". I see wild speculation based on no more than science fiction.

You need to realize that we don't know if it's possible to do what is being described. It's not a case of when, it's a case of if.

This whole premise is:
"If we develop a way to transfer consciousness to a computer, and then if we develop a way to allow us to live within a computer, and then if we develop the ability to traverse solar/star systems, and then if we get to the point where we need entire solar systems' worth of resources to sustain us, and then - the latest addition - if we 'merge' into one single consciousness which would resemble god, and then if we start simulating life."

Can you not see how it's a frivolous debate?

I'm not a fan of philosophy at the best of times, but if we're going to just ignore current possibilities and assume what we can do in the future based solely on history, it really is straying into the realms of fiction.
 
  • #30
I agree with jarednjames: this isn't philosophy, this is a discussion about science fiction. I am happy to have the discussion under that pretext, the same way I am happy to discuss the workings of the universe of a science fiction novel, but I am against any notion that this is based on anything other than that.

PhizzicsPhan, I hope you enjoy the book; it is a good read. On the subject of the singularity à la Kurzweil I have these thoughts: if we are going to assume strong AI from the start, then we already have a weakly superhuman intelligence (i.e. it is equal in intelligence but has the advantages of speed, self-editing [for optimum concentration and motivation etc.], eidetic memory, telepathy [with its peers] and the ability to copy and paste itself to create a larger workforce). The motivations of such an entity could be as varied as the motivations of humans, so there is no way of predicting "what if".

As for the God thing, I think most theists will disagree with you; the ability to create a simulation of a universe whilst living in a confined universe yourself would not fit the definitions. If it is possible to simulate a person, Matrix-style, then we could propose that we are more likely to be living in a simulation (as the number of simulated universes could outnumber the number of real universes), but that is another science fiction discussion.

I would repeat, though, alongside others on this thread, that we are not even sure if simulating a human mind independent of a human body is possible, nor if simulating a human body is possible, and on top of that we do not know if it is possible to map what is physical into this digital world. With respect, are you taking Kurzweil's word on this? You don't seem like an unintelligent guy, and I would advise you to read up on the science (or we could have the discussion here) before making statements along the lines of "we are on the cusp".
 
  • #31
A great way to describe Kurzweil's ideas about mind uploading, computers and his justification is to look at this analogy:

"Camera pixel count doubles every 2 years --> in X years cameras will have more pixels than the human eye --> at this time we will swap our eyes for cameras for super vision"

Of course even this analogy is easier than Kurzweil's argument, because it's relatively easy to count how many pixels the eye has (just count the rod and cone cells) compared to figuring out how to compare the brain to computer power. He equates the development of machines designed to mimic aspects of humans (vision, thought etc.) to replacement! Not to mention that computer power has little relation to how the brain works to produce a mind.
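
To put some rough numbers on the analogy (a sketch using assumed, order-of-magnitude figures: a ~10 megapixel sensor today, ~120 million photoreceptors in a human retina, and a doubling every two years):

[CODE]
# Rough arithmetic for the camera analogy (assumed, illustrative figures).
import math

SENSOR_PIXELS = 10e6           # assumed current sensor resolution (~10 MP)
RETINA_PHOTORECEPTORS = 120e6  # rough count of rods + cones in a human retina
DOUBLING_TIME_YEARS = 2.0      # the Kurzweil-style doubling assumption

doublings = math.log2(RETINA_PHOTORECEPTORS / SENSOR_PIXELS)
years = doublings * DOUBLING_TIME_YEARS
print(f"~{doublings:.1f} doublings, i.e. roughly {years:.0f} years to 'match' the eye")
[/CODE]

By that style of reasoning, super vision is less than a decade away - which is exactly the problem: matching one crude metric says nothing about interfacing with the optic nerve, visual processing, or anything else that actually matters.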
 
  • #32
Time magazine had an article about this (http://www.time.com/time/health/article/0,8599,2048138,00.html), covering Kurzweil's Singularity University at the NASA Ames campus.
 
Last edited by a moderator:
  • #33
ryan_m_b said:
A great way to describe Kurzweil's ideas about mind uploading, computers and his justification is to look at this analogy:

"Camera pixel count doubles every 2 years --> in X years cameras will have more pixels than the human eye --> at this time we will swap our eyes for cameras for super vision"

Of course even this analogy is easier than Kurzweil's argument, because it's relatively easy to count how many pixels the eye has (just count the rod and cone cells) compared to figuring out how to compare the brain to computer power. He equates the development of machines designed to mimic aspects of humans (vision, thought etc.) to replacement! Not to mention that computer power has little relation to how the brain works to produce a mind.

So what he's saying is "once we get technology to rival X human part, we can ignore the issue of actually connecting it to the body and just 'use it'". What a fruit loop.

He's ignoring 90% of the work required and just making wild guesses based on, well, nothing.

From Borg's article above - it says it all, really:
He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

I don't see how this discussion can be of any use, given what we are working with.
 
  • #34
Just to show in graph form what we are dealing with, these are Kurzweil's "calculations" that give him that 35-year number (apologies for the size of the image):

[Attached image: CountdowntoSingularityLog.jpg]
 
  • #35
This and many others of his graphs (http://singularity.com/charts/page17.html) just plot random things that he calls "events" to show that the world as we know it will end in 35 years in a nerdy robot rapture.

It makes no sense: he picks and chooses things like the time gaps between life and multicellular life, walking upright and speaking, and the telephone and computers, and concludes that because these gaps keep shrinking there must be a singularity in his lifetime. Specifically one with mind uploading, nanomagic control over matter and strong AI. I know it sounds stupid, but this truly is his reasoning and the defence he always gives when criticised. He calls it his "law of accelerating returns" and sticks to it like the Pope sticks to the gospels. It's pathetic that this man is seen as some sort of authority figure on modern science.

For a good criticism see Alfred Nordmann's article here http://spectrum.ieee.org/robotics/robotics-software/singular-simplicity/1
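
As a toy demonstration of why that kind of chart is empty (the "events" below are completely made up, chosen only so that the gaps between them shrink roughly geometrically): pretty much any hand-picked list with geometrically shrinking gaps lines up on a neat straight line when you plot "time before present" against "time to next event" on log-log axes.

[CODE]
# Toy demonstration (arbitrary, made-up "events"): any list of dates whose
# gaps shrink roughly geometrically produces a straight line on a
# "time before present" vs "time to next event" log-log plot.
import math

# Completely arbitrary event times (years before present); each gap is
# roughly a third of the previous one - no science involved.
events_years_ago = [4e9, 1.3e9, 4.5e8, 1.5e8, 5e7, 1.7e7, 5.5e6, 1.8e6, 6e5, 2e5]

log_x = []  # log10(years before present) of each event
log_y = []  # log10(gap to the next event)
for earlier, later in zip(events_years_ago, events_years_ago[1:]):
    log_x.append(math.log10(earlier))
    log_y.append(math.log10(earlier - later))

# Least-squares slope of the log-log relationship
n = len(log_x)
slope = (n * sum(x * y for x, y in zip(log_x, log_y)) - sum(log_x) * sum(log_y)) / (
    n * sum(x * x for x in log_x) - sum(log_x) ** 2
)
print(f"log-log slope: {slope:.2f} (a near-perfect straight line, by construction)")
[/CODE]

The straight line is a property of how the list was picked, not of the world - which is all Kurzweil's chart demonstrates.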
 
Last edited:
