- #71
Filip Larsen
Gold Member
jack action said: Would you prefer not taking any chances and stop the research on how to produce that gas? Or would you do the research and see where it goes?
I don't see anyone here or elsewhere seriously arguing in favor of stopping research. Many of those who express concerns about AI are also involved in it. One of their worries is that blind faith in this technology by the manufacturers may lead to serious issues that in turn make consumers turn away from it at some point. It seems like a win-win situation for them to work towards ensuring both the bottom line and the well-being of the human race. To take your gas example, no company in its right mind would completely ignore risks while developing and deploying a new gas if it knows the gas has the potential to seriously hurt humans and thus the bottom line. Greenfield technology always has a larger set of unknown risks, and companies know this. Usually the higher risks come a bit down the road: you think you know all there is to know about the technology, you start to optimize and deploy it widely, and then you get hit badly by something you had missed or optimized away thinking it was unimportant. The recent case of exploding Samsung phones seems to be a textbook example of such a scenario.
To me, the discussion in this thread seems to revolve more around beliefs regarding how much risk people themselves (i.e. "consumers") are willing to accept when using a future technology we do not yet understand. It seems that even people who acknowledge how complicated control of future AI can be still believe that the net risk to them will be kept low, because they rightfully expect someone else to worry about and mitigate any risk along the way. That is a perfectly sensible belief, but for it to be well placed there really needs to be someone else who actually concerns themselves with identifying and mitigating those risks.
In a sense, the current discussion seems very similar to the public discussion on the dangers of gene editing. Even if everyone can rightfully expect those involved in gene editing to do it safely, the technology holds such potential that a few "rotten apples" may spoil it for everyone by doing something that is very difficult to undo and that ends up being harmful to a very large number of people.