lpetrich
In "Future? Tense!" in his essay collection From Earth to Heaven (1965), Isaac Asimov wrote about the nature of science fiction. He noted that science-fictioneers are stereotyped either as indulgers in weird fantasies or as farsighted predictors of the future. After discussing some SFers' successful, if limited, predictions, he notes:
"Do you see, then, that the important prediction is not the automobile, but the parking problem; not radio, but the soap-opera; not the income tax but the expense account; not the Bomb but the nuclear stalemate? Not the action, in short, but the reaction?"

Not the technological advance, but what would happen if it became common.
He himself had proposed that his robots would have "positronic brains", from having recently learned about the positron, a sort of mirror image of the electron. However, when a positron runs into an electron, the two particles disappear into two or three very energetic gamma rays, with a combined energy of about what an electron would get from a million-volt battery. A positronic brain would quickly fry itself.
But that was not the point -- the point was what would happen if artificially intelligent systems became common. IA had become annoyed at all the SF stories about robots destroying their creators, with the implication that we were not meant to create such entities. Many tools have safety mechanisms, he reasoned, so wouldn't AI systems also need them? Thus his Three Laws of Robotics, which I rephrase as follows:
1. An AI system may not injure a human being or, through inaction, allow a human being to come to harm.
2. An AI system must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. An AI system must protect its own existence as long as such protection does not conflict with the First or Second Law.
Likewise, if self-driving cars became feasible, what would become of manual driving? IA once wrote a story, "Sally", in which manual driving had been outlawed as needlessly dangerous.
IA imagined what SFers might have written about cars back in 1880.
"There could be the excitement of a last-minute failure in the framistan and the hero can be described as ingeniously designing a liebestraum out of an old baby carriage at the last minute and cleverly hooking it up to the bispallator in such a way as to mutonate the karrogel."

IA didn't name names of SF stories like that, but that reminds me of the "treknobabble" in some Star Trek episodes.
Or,
"The automobile came thundering down the stretch, its mighty tires pounding, and its tail assembly switching furiously from side to side, while its flaring foam-flecked air intake seemed rimmed with oil." Then, when the car has finally performed its task of rescuing the girl and confounding the bad guys, it sticks its fuel intake hose into a can of gasoline and quietly fuels itself.

Lots of visual-media SF is similarly absurd about its spaceships, making them seem too much like Earthbound vehicles.
He also considered what social changes cars would make possible if they could be mass-produced in the millions, at prices low enough for just about anyone to buy them. Wouldn't people move outward and create suburbs? Etc. H.G. Wells predicted several such things in his 1901 book Anticipations of the Reaction of Mechanical and Scientific Progress upon Human Life and Thought, and IA thought of something that even HGW didn't think of: when people commute to cities, they will have to have some place to leave their cars. He imagines:
"A delightful satire about our hero spending all day looking for a parking spot, and in the process, meeting traffic jams, taxi drivers, traffic cops, trucks, parking meters, filled garages, fire hydrants, etc., etc."

The title: "Crunch!"
That seems like painful reality today, but it would not have in 1880; IA thought that such a story could have alerted at least some policymakers to the problems of a superabundance of cars.
IA also noted a prediction of the Cold-War nuclear stalemate. Robert A. Heinlein wrote "Solution Unsatisfactory" under the name Anson MacDonald back in 1941. Although he imagined radioactive dust rather than nuclear bombs, the essential outcome was the same. He imagined his hero asking whether the US could keep its monopoly on radioactive-dust production, since someone elsewhere would sooner or later reinvent it. The result would be an all-offense-no-defense stalemate, with every dust-possessing nation dependent on the goodwill of every other one.