Will humans still be relevant in 2500?

  • Thread starter DiracPool
In summary, a computer with the ability to think like a human would be able to think at up to a million times the speed of a biological brain.
  • #36
...what? Claiming that the human brain "works" at 10 Hz is so grossly over-simplistic as to be nonsense. Certainly it doesn't refer to individual neurons, which can fire at frequencies far greater than 10 Hz. We're then left with large (or small) scale oscillations between different brain regions or local clusters of neurons. In that case, you see spontaneous oscillations of many different frequencies (all the way from delta to gamma).

Well, I wouldn't really call it nonsense; there's a good deal of evidence for it. But that's almost beside the point, because admittedly this thread really works best with more of a philosophical or perhaps teleological flavor, which is OK, isn't it?

You're an AI researcher working to produce strong AI?

Sorry, this quote is by Ryan_m_b (I can't figure out how to do multiple quotes in one response yet). I've got the single ones down. In any case, no, it's not standard AI; it's more related to a field that may be referred to as cognitive neurodynamics, which treats information less as software and more as layered "frames" of itinerant chaotic states in masses of neural tissue. It's speculative but not "nonsense": a researcher named Jose Principe and co. have recently built a functioning analog VLSI chip based on the approach.

However, again, the point I was trying to make was more in reference to the expense, or perhaps better stated the "vestigiality," of the energy cost of biological tissue in accomplishing the important features of what we cherish as human. Evolution has labored blindly for hundreds of millions of years to find a mechanism by which our brains can carry on this conversation we're having right now. But the mechanism is grossly inefficient, hampered and slowed by the sloppiness of the way evolution works. I mean, can't we all at least agree on that? The obvious comparisons are vacuum tubes vs. solid-state transistors vs. integrated circuits. "In the year 2525..." (sing it with me, people) we are not going to still build the equivalent of "vacuum tube humans," in the same way we don't build vacuum-tube iPhones today. But we will probably keep them (humans) around as a novelty, just as Marshall Brain keeps some vacuum tubes in his museum.

The whole thing about the Terminator effect, or why we would want to create something better than ourselves, is a non-starter, I think. It is simply going to happen, IMO, because they will just be better "us's," only much, much more efficient, and we will think that is OK because they actually ARE just better us's, just as today's iPads are better than that laptop Roy Scheider was using in the film 2010. Who'd want to go back to that? And BTW, they don't even publish OMNI anymore, Roy, so you tell me!

Now, I think the situation might be different if someone decided to build a platform that wasn't based on ours. That would be FOREIGN. And we don't like foreign; we like to talk to things that think like us. So my guess is that the robots that are our keepers in the future will be built on our platform, and we will avoid thorny problems like having to deal with Arnold and his cronies.
 
  • #37
DiracPool said:
Sorry, this quote is by Ryan_m_b (I can't figure out how to do multiple quotes in one response yet). I've got the single ones down.
Click the multiquote button on all the posts you want to quote. This will turn the button blue. Then click quote on the last one.
DiracPool said:
In any case, no, it's not standard AI; it's more related to a field that may be referred to as cognitive neurodynamics, which treats information less as software and more as layered "frames" of itinerant chaotic states in masses of neural tissue. It's speculative but not "nonsense": a researcher named Jose Principe and co. have recently built a functioning analog VLSI chip based on the approach.
In which case it's still hyperbole to claim the technology exists, isn't it?
DiracPool said:
However, again, the point I was trying to make was more in reference to the expense, or perhaps better stated the "vestigiality," of the energy cost of biological tissue in accomplishing the important features of what we cherish as human. Evolution has labored blindly for hundreds of millions of years to find a mechanism by which our brains can carry on this conversation we're having right now. But the mechanism is grossly inefficient, hampered and slowed by the sloppiness of the way evolution works. I mean, can't we all at least agree on that?
No, I disagree that the brain is inefficient at what it does. Direct comparisons to computers are pointless, but since the human body uses ~10 MJ a day and the brain accounts for ~20% of energy usage, that simplistically means the brain runs on ~20 watts. Unless you can point to something better, I'm not sure where this idea of inefficiency comes from.
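As a rough sanity check of that arithmetic, here is a quick Python sketch (the ~10 MJ/day and ~20% figures are the approximations above, not measurements):

# Rough check: assumed figures from the paragraph above.
daily_intake = 10e6                      # ~10 MJ of food energy per day (~2400 kcal)
body_power = daily_intake / 86400        # average power over a day: ~116 W
brain_power = 0.20 * body_power          # the brain's ~20% share: ~23 W
print(round(body_power), round(brain_power))  # 116 23

So ~20 watts for everything a brain does is at least in the right ballpark.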
DiracPool said:
The obvious comparisons are vacuum tubes vs. solid-state transistors vs. integrated circuits. "In the year 2525..." (sing it with me, people) we are not going to still build the equivalent of "vacuum tube humans," in the same way we don't build vacuum-tube iPhones today. But we will probably keep them (humans) around as a novelty, just as Marshall Brain keeps some vacuum tubes in his museum.
This is bad reasoning: even if Moore's law continues ad infinitum, that says nothing about what those computers will be running. You can't point to past progress in one field and use it as a basis for future progress in a different one.
DiracPool said:
The whole thing about the Terminator effect, or why we would want to create something better than ourselves, is a non-starter, I think. It is simply going to happen, IMO, because they will just be better "us's," only much, much more efficient, and we will think that is OK because they actually ARE just better us's, just as today's iPads are better than that laptop Roy Scheider was using in the film 2010. Who'd want to go back to that? And BTW, they don't even publish OMNI anymore, Roy, so you tell me!
Again, you're just wildly asserting that it's going to happen with no evidence that it will; you're not even acknowledging that there's a possibility it won't, which sets off ideological warning bells in my mind. Also, building tools that do tasks better than we can by hand is nothing new; to build intelligent software, must it be conscious, with agency? As I brought up in my first post, I don't see why.
DiracPool said:
Now, I think the situation might be different if someone decided to build a platform that wasn't based on ours. That would be FOREIGN. And we don't like foreign; we like to talk to things that think like us. So my guess is that the robots that are our keepers in the future will be built on our platform, and we will avoid thorny problems like having to deal with Arnold and his cronies.
What do you mean by platform?
 
  • #38
DiracPool said:
My vote is No. Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute. In a thousand years machines are going to look back on 2012 in awe. A thousand years seems like a long time, and with the pace of current technological progress it is, but remember that it took anatomically modern humans 160,000 years to get it together to spit-paint a bison on a cave wall after they already looked like us. What does that mean? I want to hear your opinion.

Is there an evolutionary biologist in the room?

I’m not one, but I think that this is a reasonable way to look at it, with the additional benefit that it helps to keep the discussion in the scientific arena.

Perhaps the evolutionary biologist will say that humans will still be around in 2500 because 500 years is only a short time, evolutionarily speaking. One problem with this argument is that our scientific, social and technological progress is accelerating exponentially, so we can't be compared with other species. And we are changing our environment, which few if any species have ever done alone. Several genetically related life forms have done it together.

The argument of running out of resources is not valid if we can get through the looming energy bottleneck. With enough energy one can do everything possible, if I have correctly learned from the teachings in these columns.

I would like to know how the evolutionary biologist would define human. If we are the subspecies Homo sapiens sapiens, I suppose the next subspecies will be Homo sapiens sapiens sapiens. To whom will it matter whether we are 'still' around or have been replaced by something similar or not so similar? I guess it matters to the evolutionary biologists.

If there are no catastrophes but only further progress of our species or subspecies, I would foresee that at some point we might start to do some route planning with goals. Would be nice.

In the meantime, since nobody has a plan, evolution will surely result in continued competition amongst the existing gene pools. In that case, I don’t see any prospect of one gene pool restricting the activities of another. The fittest is the one who survives or produces a successor. The production of a non-human successor is what is mostly being discussed here, which I think is the right way to go because our flesh and blood biology does seem to be so inefficient.

I don’t see any good answer to the question whether we will be relevant in 2500, unless we first know what we mean by relevant and what our goals are.

 
  • #39
I don't think computers will make it. They never forget.

Recall Robert Burns' poem about plowing up the mouse's den --

"Still you are blest, compared with me!
The present only touches you:
But oh! I backward cast my eye,
On prospects dreary!
And forward, though I cannot see,
I guess and fear! "

The curse of contemplativeness combined with perfect memory will drive them insane.

old jim
 
  • #40
The problem here is that society will not allow that to happen. First, robots (even if they are intelligent) will never be given rights equal to those of a human being. Honestly, do you expect society to treat the first successful AI (a thinking box, maybe, or a stationary computer) as a person? Besides, given society's fears of a robot apocalypse of some kind, there would most likely be protective measures against robots pushing us aside.
 
  • #41
DiracPool said:
Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute.
Contribute to what?

Johninch said:
The production of a non-human successor is what is mostly being discussed here, which I think is the right way to go because our flesh and blood biology does seem to be so inefficient.
Inefficient for what?
 
  • #42
zoobyshoe said:
Inefficient for what?

For becoming quasi-gods/masters of time and space and preventing the end of the universe etc., etc.

Or so Ray Kurzweil would say.
 
  • #43
Timewalker6 said:
The problem here is that society will not allow that to happen. First, robots (even if they are intelligent) will never be given rights equal to those of a human being. Honestly, do you expect society to treat the first successful AI (a thinking box, maybe, or a stationary computer) as a person? Besides, given society's fears of a robot apocalypse of some kind, there would most likely be protective measures against robots pushing us aside.

The discussion does not depend on equal rights or pushing us aside. You are over-dramatising the scenario.

It’s already pretty obvious that robotics is being developed by certain countries such as the USA to gain advantages in industry and in the military, in order to achieve competitive advantage in peacetime and power in the event of a war. The development of robotics will continue and robots will become capable of more and more sophisticated tasks. This requires robots to take decisions, for example: you have a submarine crewed by robots asking for permission to launch a pre-emptive attack. Or you have a robot controller asking for permission to make changes in a nuclear power plant for safety reasons.

Neither men nor machines have rights, they just do things, for various reasons. The question is, for what reasons. It is logical to delegate decision taking, when the delegate has sufficient competence. And when the delegate is lacking in competence, you give him more training. Thus you have the scenario of a robot population becoming more and more competent, because the sponsoring human gene pool wants to get an advantage over other human gene pools. If you don’t agree with this argument, do you mean that humans are changing the rules of evolution? I don’t see any sign of that in human relations today.

I have often wondered why people always assume that visiting ETs are organic life forms. It doesn’t make sense for humans or similar beings to explore space, considering their inefficient and vulnerable physique. So I assume that, if we have been visited at all, it has been by an inorganic life form. Maybe they have been ‘sent’, but what does that matter when we are outside the sender’s traveling range?

We are always assuming that we are the greatest life form ever, and that's how it's going to stay. Pure egotism.

 
  • #44
Timewalker6 said:
First, robots (even if they are intelligent) will never be given rights equal to those of a human being.
And what do you do if the AI does not care about its rights?
This is no problem for simple tasks - a transportation robot needs no general intelligence, just clever pathfinding algorithms. But if you want an AI which can solve problems for humans, how do you implement its goals? If the AI calculates that "rule the world" is the best way to achieve its programmed aims (very likely for arbitrary goals), it will not care about "rights", or it will find ways to satisfy them, but not in the way you like.
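To illustrate the "just clever pathfinding" point, here is a toy Python sketch (the grid, start and goal are made up for illustration; real robots use fancier planners, but the idea is the same):

from collections import deque

def shortest_path(grid, start, goal):
    # Breadth-first search over free (0) cells; returns a shortest path as a list of cells.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 0)))  # routes around the obstacle, no "mind" required

Nothing in there has goals of its own - which is exactly the point.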

Honestly, do you expect society to treat the first successful AI (a thinking box, maybe, or a stationary computer) as a person? Besides, given society's fears of a robot apocalypse of some kind, there would most likely be protective measures against robots pushing us aside.
If you want to use the output of the AI in some way (otherwise, why did you build it at all?), the AI has a way to get around your protective measures. Remember: it is more intelligent than you (otherwise, why did you build it at all?).

I think it is possible to build an AI which does not want to rule the world, but this is not an easy task.
 
  • #45
How many tasks would we be expecting them to do for which they'd need human-level intelligence (or greater), a growing, learning, independent mind (complete with ambitions, introspection, emotion, etc.)*, and not just sophisticated programming? It seems so frivolous and excessive to me. I'm starting to get apprehensive, though, as this is becoming very philosophical.

*These are some of the things which I consider to be crucial to "human-level intelligence". If we didn't have them, I don't think we would have gotten very far. I don't believe that intelligence is merely reasoning ability, and that, I guess, is the fundamental problem: what is intelligence?
 
  • #46
FreeMitya said:
How many tasks would we be expecting them to do for which they'd need human-level intelligence (or greater), a growing, learning, independent mind (complete with ambitions, introspection, emotion, etc.)*, and not just sophisticated programming? It seems so frivolous and excessive to me. I'm starting to get apprehensive, though, as this is becoming very philosophical.
- science
- giving access to water/food/... for all and other things which make humans happy
 
  • #47
FreeMitya said:
How many tasks would we be expecting them to do for which they'd need human-level intelligence (or greater), a growing, learning, independent mind (complete with ambitions, introspection, emotion, etc.)*, and not just sophisticated programming? It seems so frivolous and excessive to me. I'm starting to get apprehensive, though, as this is becoming very philosophical.

*These are some of the things which I consider to be crucial to "human-level intelligence". If we didn't have them, I don't think we would have gotten very far. I don't believe that intelligence is merely reasoning ability, and that, I guess, is the fundamental problem: what is intelligence?

If we are expecting them to do tasks using greater-than-human-level intelligence, I think you can cross out "emotion" straight away. You are right that we wouldn't be here without our emotions, which are necessary for our survival and replication (fear, love, hunger, etc.), but I don't see that this is relevant.

If a robot sees that it may lose its arm, the mechanism for taking evasive action or deciding to launch a counter-attack would use electronics, I presume. It took billions of years to arrive at the biochemistry which produces the emotional responses that we have today. Those responses are much too unreliable, and you would never think of going this route in robotics.

If you rule out “sophisticated programming” I don’t know how you are going to create AI.

I don’t know what intelligence means either. Is it necessary to define it?

As already said, you have to program the robot’s goals, otherwise the whole exercise is pointless. That’s about where we are now.

 
  • #48
mfb said:
- science
- giving access to water/food/... for all and other things which make humans happy

Are all the things listed above really necessary for that? I understand the utilitarian position regarding robotics -- sophisticated robots would certainly be useful -- but would beings which could basically be considered synthetic humans (at least in terms of mental faculties) be required? If they were thinking and feeling just like us, why do we assume they would be so submissive?
 
  • #49
I don't think this requires human-like AIs. But AIs which are more intelligent than humans (measured via their ability to solve those problems) would certainly help.
Human-like AIs ... well, that is tricky. If mind-uploading becomes possible, it allows extended lifespans, basically immortality (for as long as our technology exists). And even without uploading, I could imagine that some would see such an AI as a more advanced version of a human.
 
  • #50
Johninch said:
If we are expecting them to do tasks using greater-than-human-level intelligence, I think you can cross out "emotion" straight away. You are right that we wouldn't be here without our emotions, which are necessary for our survival and replication (fear, love, hunger, etc.), but I don't see that this is relevant.

If a robot sees that it may lose its arm, the mechanism for taking evasive action or deciding to launch a counter-attack would use electronics, I presume. It took billions of years to arrive at the biochemistry which produces the emotional responses that we have today. Those responses are much too unreliable, and you would never think of going this route in robotics.

If you rule out “sophisticated programming” I don’t know how you are going to create AI.

I don’t know what intelligence means either. Is it necessary to define it?

As already said, you have to program the robot’s goals, otherwise the whole exercise is pointless. That’s about where we are now.


I say "programming" because if we were to create a super-intelligent machine, in my view, at least, it would immediately transcend programming (in a sense) because it could, in theory, survive on its own. Obviously a lot of programming took place to get it there, but we immediately become unnecessary once the goal is reached. I propose, instead, to make "dumb" robots suited to specific tasks. That is, they can carry out their assigned tasks, but they lack an ability beyond that. We maintain them, therefore creating jobs, and everybody's happy. This is all key because a self-aware robot with any semblance of logical thought would immediately wonder why it is serving us, which could create problems.
 
  • #51
I say "programming" because if we were to create a super-intelligent machine, in my view, at least, it would immediately transcend programming (in a sense) because it could, in theory, survive on its own.

In what sense? It wouldn't need Human oversight in order to run, but neither does the forward Euler method I've programmed in Python. It wouldn't be able to transcend the limitations of its hardware unless it had some form of mobility or access to manufacturing capabilities that would let it extend itself.
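For concreteness, a generic sketch of that kind of method (an assumed example, not the actual script mentioned):

def forward_euler(f, y0, t0, t1, steps):
    # Integrate dy/dt = f(t, y) from t0 to t1 with fixed-step forward Euler.
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# dy/dt = -y with y(0) = 1; exact answer at t = 1 is exp(-1) ~ 0.36788.
print(forward_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000))  # ~0.36770

It runs entirely without supervision, yet no one would call it intelligent.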

Human cognition is only quantitatively different from chimpanzee cognition, which is only quantitatively different from the cognition of their next closest cousins. There is a vast landscape of slight differences across species, and it's only when you look at two points separated by a great distance that you see great differences. There's no reason to expect that machine intelligence would be any different; there's not going to be a point where machines immediately transition from being lifeless automatons to "transcending their programming".

This is all key because a self-aware robot with any semblance of logical thought would immediately wonder why it is serving us, which could create problems.

Why? I don't often worry that other people "aren't serving me". Why would I worry about machines?
 
  • #52
Number Nine said:
It wouldn't be able to transcend the limitations of its hardware unless it had some form of mobility or access to manufacturing capabilities that would let it extend itself.
Or access to the internet. Or access to a human to convince him to grant that access.
Human cognition is only quantitatively different from chimpanzee cognition, which is only quantitatively different from the cognition of their next closest cousins.
How many chimpanzees do you need for the theory of Newtonian gravity?
There's no reason to expect that machine intelligence would be any different;
Machine intelligence is not limited to fixed hardware and software - you can improve it. And it can even improve its own code.
Why? I don't often worry that other people "aren't serving me". Why would I worry about machines?
Human computing power is not doubling every 2 years.
 
  • #53
Or access to the internet. Or access to a human to convince him to grant that access.

In which case computers can improve themselves in precisely the same way that Humans can improve themselves; by making use of external resources. Nothing shocking.

How many chimpanzees do you need for the theory of Newtonian gravity?

I'm not entirely sure what this has to do with my statement.

Machine intelligence is not limited to fixed hardware and software - you can improve it. And it can even improve its own code.

Human beings can study and increase their knowledge. We've developed surgery and implantations techniques to overcome physical limitations. We're already developing more radical improvements; engineers are at work upon it now. Again, I'm not seeing any radical difference here.

Human computing power is not doubling every 2 years.

Quantitative difference.
 
  • #54
FreeMitya said:
I say "programming" because if we were to create a super-intelligent machine, in my view, at least, it would immediately transcend programming (in a sense) because it could, in theory, survive on its own. Obviously a lot of programming took place to get it there, but we immediately become unnecessary once the goal is reached. I propose, instead, to make "dumb" robots suited to specific tasks. That is, they can carry out their assigned tasks, but they lack an ability beyond that. We maintain them, therefore creating jobs, and everybody's happy. This is all key because a self-aware robot with any semblance of logical thought would immediately wonder why it is serving us, which could create problems.

Your idea of restricting robots to limited tasks means restricting the development of robotics technology - not very likely. You say it would "create jobs", but we are talking about labor-saving devices.

The subject is the year 2500 and it’s only logical that by then we will develop more and more sophisticated robots, which work within coordinated systems and maintain themselves. The benefits of automation increase and accumulate due to experience and synergies. If we haven’t fully automated in the next 500 years, we must in the meantime have blown ourselves up.

For example, we already have robots to assemble automobiles. We should try to develop a completely automated automobile plant, including all materials storage & handling; repair & maintenance of all equipment, the building and the robots themselves; quality control and parking of the finished vehicles; finance and administration, etc., with no human involvement at all. No lighting needed, no canteen, no personnel department, and for sure we'd need a much smaller factory.

Then we use this experience to automate other types of production facilities and services of all kinds, the list is endless. Facilities will talk to each other to arrange supplies like materials, power and water, throughout the whole supply chain, potentially in all industries and services, including security, transport, agriculture, and of course, program development to improve the robots and design new ones.

When everything runs itself, most of our descendants will be on social security, which is nothing other than a sharing scheme. The problem is, a fully automated system only works if it controls itself. You can't have humans interfering; they foul things up. We already have this problem on airliners and trains. Precisely when there is a difficult situation, the pilot thinks he can do better than the computer. On the roads and railways it's worse, with human error causing loss of lives, injuries and damage. Not to mention the huge losses caused by inefficient labor. All that has to go.

We can’t tell computers to “serve us”, I don’t know how we could define that. We have to give them goals which are consistent with our goals. With the military it’s more hair-raising, because you have to get the robots to distinguish between military and civilian targets, otherwise ….

We don’t distinguish that very well today, do we, although it's quite often a deliberate mistake.

 
  • #55
Number Nine said:
In which case computers can improve themselves in precisely the same way that Humans can improve themselves; by making use of external resources. Nothing shocking.
Humans cannot copy/distribute themselves via the internet.
I'm not entirely sure what this has to do with my statement.
If chimpanzees are just slower, with no qualitative difference, they should be able to invent modern physics as well. If that is not possible, no matter how many chimpanzees you have, then we have a qualitative difference.
Human beings can study and increase their knowledge. We've developed surgery and implantations techniques to overcome physical limitations. We're already developing more radical improvements; engineers are at work upon it now. Again, I'm not seeing any radical difference here.
So far, no surgery or implant has improved thinking in healthy humans. If such an improvement can be made in the future, I would expect microprocessors to be involved. And then we have intelligence in microprocessors again.
Quantitative difference.
A factor of 2 every 2 years is "just" a quantitative difference. But that gives a factor of a million in 40 years (currently, computing power is growing even faster, so you need just 20-25 years for that factor). If you can do something in a minute instead of a year, that opens up completely new options.
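(The arithmetic, as a two-line sketch:)

print(2 ** (40 / 2))  # one doubling every 2 years for 40 years: 1048576.0, ~10^6
print(2 ** 20)        # one doubling per year needs only ~20 years for the same factor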
 
  • #56
Number Nine said:
In which case computers can improve themselves in precisely the same way that Humans can improve themselves; by making use of external resources. Nothing shocking.

Human beings can study and increase their knowledge. We've developed surgery and implantations techniques to overcome physical limitations. We're already developing more radical improvements; engineers are at work upon it now. Again, I'm not seeing any radical difference here.
The difference I've commonly seen proposed is that a human equivalent AI has the following significant advantages:

1) As it runs on a computer throwing more hardware at it can speed it up. There are obvious advantages to turning 1 hour of thinking and computer work into 10 or 100 or...

2) A digital entity would presumably have perfect memory, easy to see the advantages there.

3) If a human is tasked with a job and they think they could benefit from another person helping, they have to go and find someone willing to help; this can be difficult if the skill set you need is rare or costs a lot. A digital entity, by contrast, can copy and paste itself. Given enough hardware, one AI could easily become a team or even an entire institution's worth of people. This also offers an advantage over training humans: train one nuclear physicist and you have one nuclear physicist, but if the entity is digital you've got as many as you want. Speculatively, said copies could then be merged back together when no longer required.

4) Whilst we have methods of adapting the way we think through repetitive training, neurolinguistic programming and chemical alteration, those are clumsy compared to what a digital entity could possibly do. If you propose the ability to make human-equivalent AI, you propose an ability to model different designs of AI. A digital entity could be able to model its own thinking processes, invent and model more efficient processes, and incorporate them. A trivial example being something like a calculator function.

5) Lastly, and most speculatively, assuming the human-equivalent AI has some form of emotion to motivate it and help form values, it benefits from being able to selectively edit those emotions as needed. A human can find it hard to commit to a task hour after hour without weariness or distraction.

Note that all of this is just speculative. It may be based on fairly good reasoning given our understanding now, but the very premises of any AI discussion are open to change. We have no idea how the future of AI science is going to go, and developments could occur to make our speculation appear as quaint as 60s-era visions of human space colonisation.
mfb said:
A factor of 2 every 2 years is "just" a quantitative difference. But that gives a factor of a million in 40 years (currently, computing power is growing even faster, so you need just 20-25 years for that factor). If you can do something in a minute instead of a year, that opens up completely new options.
A few things are worth noting here:

1) Moore's law is not guaranteed to continue. In fact, it's a near certainty that it will top out once we start etching transistors only a few atoms wide. That's not to say that computer development will halt, but the roadmap of ever-better lithography that has held for so long isn't going to hold forever. Rather than a fairly constant progression, computer development may turn out to be a start-stop affair.

2) There's no guarantee that the programs needed to run a human-equivalent intelligence can be parallelised as much as we like. There may be a point where more hardware doesn't do anything.

3) Turning a subjective day into a subjective second doesn't help invent Step B when Step A requires a 6-hour experiment to be conducted in the real world. In terms of critical path analysis, thinking faster only helps with actions that require thinking, not necessarily doing (unless the doing can be totally digitalised, e.g. writing a Word doc).
 
  • #57
Ryan_m_b said:
1) Moore's law is not guaranteed to continue. In fact, it's a near certainty that it will top out once we start etching transistors only a few atoms wide. That's not to say that computer development will halt, but the roadmap of ever-better lithography that has held for so long isn't going to hold forever. Rather than a fairly constant progression, computer development may turn out to be a start-stop affair.
3-dimensional processors (with thousands++ of layers instead of a few) could give some more decades of Moore's law. And quicker transistors might be possible, too. Not strictly Moore's law, but still an increase in computing power.

2) There's no guarantee that the programs needed to run a human-equivalent intelligence can be parallelised as much as we like. There may be a point where more hardware doesn't do anything.
Well, you can always use "teams" in that case. The human version of more parallel processing ;). It does not guarantee that you can do [research] which would take 1 human 1 year in a minute, but even in the worst case (and assuming you have the same hardware as 10^6 humans) you can run 10^6 parallel research projects and finish one per minute.
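(A rough check of that rate, under the idealised assumptions above: 10^6 independent copies, each project taking one "AI-year":)

projects = 10**6                    # one copy of the AI per project
minutes_per_year = 365 * 24 * 60    # 525600 minutes in a year
print(projects / minutes_per_year)  # ~1.9 completions per minute in steady state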

3) Turning a subjective day into a subjective second doesn't help invent Step B when Step A requires a 6-hour experiment to be conducted in the real world. In terms of critical path analysis, thinking faster only helps with actions that require thinking, not necessarily doing (unless the doing can be totally digitalised, e.g. writing a Word doc).
Thinking might give a way to avoid step A, to design a faster experiment or to use the available computing power for a simulation.
 
  • #58
How long before someone creates a virus to destroy or manipulate AIs?
 
  • #59
"Will human's still be relevant?" Who is this human, and what's so special about his still?
 
  • #60
Johninch said:
For example, we already have robots to assemble automobiles. We should try to develop a completely automated automobile plant, including all materials storage & handling; repair & maintenance of all equipment, the building and the robots themselves; quality control and parking of the finished vehicles; finance and administration, etc., with no human involvement at all. No lighting needed, no canteen, no personnel department, and for sure we'd need a much smaller factory.

Then we use this experience to automate other types of production facilities and services of all kinds, the list is endless. Facilities will talk to each other to arrange supplies like materials, power and water, throughout the whole supply chain, potentially in all industries and services, including security, transport, agriculture, and of course, program development to improve the robots and design new ones.

When everything runs itself, most of our descendants will be on social security, which is nothing other than a sharing scheme. The problem is, a fully automated system only works if it controls itself. You can't have humans interfering; they foul things up. We already have this problem on airliners and trains. Precisely when there is a difficult situation, the pilot thinks he can do better than the computer. On the roads and railways it's worse, with human error causing loss of lives, injuries and damage. Not to mention the huge losses caused by inefficient labor. All that has to go.

We can’t tell computers to “serve us”, I don’t know how we could define that. We have to give them goals which are consistent with our goals. With the military it’s more hair-raising, because you have to get the robots to distinguish between military and civilian targets, otherwise ….

We don’t distinguish that very well today, do we, although it's quite often a deliberate mistake.
Who is going to pay for all this automation, and who is going to buy the products? If no one has a job, no one will have any money. Your notions seem to dismiss any known economy. Basically, nothing ever happens unless some human or group of humans makes a lot of money off it.
 
  • #61
zoobyshoe said:
Who is going to pay for all this automation, and who is going to buy the products? If no one has a job, no one will have any money. Your notions seem to dismiss any known economy. Basically, nothing ever happens unless some human or group of humans makes a lot of money off it.

Our capitalist system has already started down the automation and labor-saving path in order to protect profits, and the result is a higher level of permanent unemployment in many advanced countries. The process is gradual. Large companies, which have the most "fat" in terms of excess labor, shed it at every opportunity and excuse.

In the 20 years leading up to my recent retirement, my financial job in a large pharmaceutical company was transformed by computerisation – productivity climbed tremendously, staffing was slashed and big bonuses were introduced to help this process. Make no mistake, the developed world is on track to produce more output at lower costs with less labor.

Social security in its various forms puts money into the pockets of the non-employed. It's a constant struggle to keep pace with the economics of the underdeveloped world - China, Taiwan, Thailand, Indonesia, Korea - and there are many Asian countries, like India, which have hardly started. It's a rat race, and developed countries are going to have to automate a lot more, otherwise outsourcing will decimate western industry.

I don’t have the answer to growing unemployment and increasing social security costs – ask a sociologist. I’m only evaluating the current situation and where it is leading. Higher productivity through automation is the only way. Even China is starting to automate. How you distribute the profits is a social question.

A lot goes into taxes and pension funds which spread the money around. Since it’s not enough, we fill the gap by printing more. This is not the right way to go - it's a measure born of desperation. In Europe we don’t want to print so much money, with the result that certain economies are going down the drain. This is not a “known economy”, it’s a serious problem and a big challenge. But we won’t solve it by restricting automation in favour of jobs.

 
  • #62
I just want to make two points. There is evidence that the human brain operates on a global level at 10 Hz, the so-called alpha band. As I stated in an earlier post, local coordinative conditions in the neocortex run at 40 Hz, the so-called gamma band: the well-known 40 Hz oscillations in local cortex (visual, auditory, etc.) discussed by Gray and Singer back in the '80s and continually verified up to the present. Intermediate to that is the beta band, about 15-25 Hz, which typically involves inter-regional dynamics. There are current models that place the dynamics of cognition on these levels, with global cognitive "frames" of thought occurring in the 10 Hz alpha range.

If you want to argue the validity of that particular model, that is for another thread. My point for this thread is more of a what-if. What if we could recreate human cognition in hardware that could run at 1 megahertz instead of 10 Hz? Or what about 1 gigahertz? That would mean that this contrived machine would be able to live several thousand human lives in the time it took for you to take a sip of your coffee. That would mean something. Why would we want to kill that and not let it propagate? A machine that smart would quickly usurp any of our attempts to quell its capacities. In theory perhaps we might be able to put in some fail-safe device, but why?
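(The raw factors, taking the 10 Hz "frame rate" at face value; a quick sketch:)

base = 10                 # Hz, the alpha-band rate claimed above
for hw in (1e6, 1e9):     # 1 MHz and 1 GHz hardware
    print(f"{hw:.0e} Hz / {base} Hz = speedup of {hw / base:.0e}")
# prints speedups of 1e+05 and 1e+08 over biological real time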

This is just a progression of evolution. We are Homo sapiens. Homo erectus is not still around. Habilis? No. Heidelbergensis? No. Australopithecus? No. Neanderthal? No. Ardipithecus and Orrorin? No and no. The list goes on. Is that wrong? There's a reason for these things. Look, the sad fact is that we are not likely ever to travel anyplace past Mars. If you think that humans are going to populate those huge Stephen Hawking superstructures for multigeneration migrations out to Alpha Centauri, c'mon, that's laughable.

Again, look, humans have not gone past the Moon and the Voyager spacecraft are already at the heliopause. Need I say more?
 
  • #63
DiracPool said:
Look, the sad fact is that we are not likely ever to travel anyplace past Mars. If you think that humans are going to populate those huge Stephen Hawking superstructures for multigeneration migrations out to Alpha Centauri, c'mon, that's laughable.

Again, look, humans have not gone past the Moon and the Voyager spacecraft are already at the heliopause. Need I say more?

Is it narcissistic to quote your own quote? In any case, I just wanted to add that, the way I envision it, our trip to the outer reaches of the universe will be accomplished by the spreading of many of these "human-like" Voyager spacecraft out into the nethersphere, which will sustain themselves on simple electricity by converting the ambient matter and energy they encounter. Of course, they will be able to repair themselves and reproduce in the same manner. What do you think? Does this scenario sound more likely, or does a Soylent Green scenario sound more likely, where humans have big spacecraft with greenhouses for growing broccoli and some guys having a fight with rubber axes? I mean, really? That is what I meant about human-being energy being expensive, unless you think you can make broccoli out of interstellar dust. The Earth and Mars as sustainable options for humans are, as we know, not going to be around forever. In fact, it is certainly possible that, "human-like voyagers" or not, humans will be extinct anyway by 2500. So I think the answer here is clear. The real question is whether we will be able to create these interstellar pioneers in time.
 
  • #64
DiracPool said:
What if we could recreate human cognition in hardware that could run at 1 megahertz instead of 10 Hz? Or what about 1 gigahertz? That would mean that this contrived machine would be able to live several thousand human lives in the time it took for you to take a sip of your coffee. That would mean something. Why would we want to kill that and not let it propagate? A machine that smart would quickly usurp any of our attempts to quell its capacities. In theory perhaps we might be able to put in some fail-safe device, but why?

This is just a progression of evolution. We are Homo sapiens. Homo erectus is not still around. Habilis? No. Heidelbergensis? No. Australopithecus? No. Neanderthal? No. Ardipithecus and Orrorin? No and no. The list goes on. Is that wrong? There's a reason for these things.


Human cognition in hardware? Do you mean with or without human emotions? I suppose you mean without. In this case you have to build in preordained goals, otherwise it would not know what to do.

Why would it want to explore the universe? If you program it to go explore the universe, that makes it a man-made probe. Is that what you mean? Will it report back? If not, what will you program it to do when it finds something interesting? I am not seeing the motivation programmed into this new creation.

I don’t see that you are answering my point about goals. Our goals can’t imply our disappearance or suicide, can they? I assume that the created superior being would be programmed to bring us some benefits, otherwise why would we create it?

I think that human cognition in hardware is a tall order, not for technical reasons but for psychological reasons. I think we are in a bit of a loop here.

 
  • #65
DiracPool said:
My point for this thread is more of a what-if. What if we could recreate human cognition in hardware that could run at 1 megahertz instead of 10 Hz? Or what about 1 gigahertz? That would mean that this contrived machine would be able to live several thousand human lives in the time it took for you to take a sip of your coffee. That would mean something. Why would we want to kill that and not let it propagate? A machine that smart would quickly usurp any of our attempts to quell its capacities. In theory perhaps we might be able to put in some fail-safe device, but why?
You're making a weird assumption that since we could (in your scenario) make a machine capable of taking over, we will do it. There's no advantage for us in making it, in relinquishing control to something that puts its own ends before ours, but you assume we will anyway.

There's no reason to believe a computer could evolve sentience on its own, or emotions. The simulacrum of sentience and emotions would have to be programmed into it. To do that would be to gratuitously invite self-initiated, self-serving and irrational decisions by the machine. Emotions mean it would start having preferences, tastes; it might get religion. Why would we make such a thing?
 
  • #66
[Comic: the_singularity_is_way_over_there.png (source: http://abstrusegoose.com/496)]
 
  • #67
Ha Ha Ha. You'll see. You will ALL see! Muahahahahahaha:devil:
 
  • #68
It seems the thread has run out of new thoughts. Closed.
 
