In summary, PF Advisors were asked for their thoughts on the rise of AI and its impact on STEM across various areas. Some advisors were skeptical of its potential, while others saw room for both positive and negative outcomes. Some called for more clarity about what truly constitutes AI; others regarded it as a valuable tool that can be used for beneficial purposes or misused for harmful ones. Overall, opinions and levels of interest in the topic varied among the advisors.
  • #72
Pretty interesting read. I look forward to the next installment.
 
  • #73
fresh_42 said:
Yes.
Although I quoted your post, the last statement was not directed to you personally, but to the reader. I perhaps should have made that more clear.
 
  • #75
I am surprised that the majority of advisors seem to be rather sceptical in their opinions. Personally, I tend to agree with @bhobba; he mentioned some very interesting examples.
 
  • #76
I think that by now AI should be part of rudimentary education (not the technical details, but the gist and scope). Many people don't seem to realize the role it has in modern societies and economies. News, politics, the stock market, social media, advertising, the internet; by now it's all driven mostly by AI rather than people. It's the reason why companies are trying to collect as much information about you as possible. You could say that it offers the power to have a customized Sith Lord for every citizen, so to speak. That is, manipulation is customized, as AI can utilize each person's situation and psychological weaknesses to optimize the effect it is able to have on their behavior. That's the big business aspect. And this is the thing everybody should understand so that we can have a conversation about the ethics and impacts. It should be combined with education in critical thinking and ethics. Everyone should know what/who is trying to manipulate them and how.
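
To make the "customized manipulation" point concrete, here is a minimal, purely hypothetical sketch of the kind of feedback loop involved: an epsilon-greedy bandit that learns, per user, which message variant draws the strongest response. The variant names and statistics are invented for illustration; real recommendation and ad systems are far more elaborate, but the incentive structure is the same.

```python
import random
from collections import defaultdict

# Toy epsilon-greedy "bandit" that learns, per user, which message variant
# gets the strongest response. The variant names and numbers are invented;
# real targeting systems are far more elaborate, but the loop is the same.
MESSAGES = ["fear_of_missing_out", "social_proof", "scarcity", "flattery"]
EPSILON = 0.1  # fraction of the time a random variant is tried (exploration)

# stats[user][message] = [times_shown, times_clicked]
stats = defaultdict(lambda: {m: [0, 0] for m in MESSAGES})

def choose_message(user):
    """Pick the variant with the best observed click rate for this user,
    occasionally exploring a random one."""
    if random.random() < EPSILON:
        return random.choice(MESSAGES)
    rates = {m: (clicks / shows if shows else 0.0)
             for m, (shows, clicks) in stats[user].items()}
    return max(rates, key=rates.get)

def record_response(user, message, clicked):
    """Update the per-user statistics after showing a message."""
    stats[user][message][0] += 1
    stats[user][message][1] += int(clicked)
```

Run over many impressions, a loop like this converges on whichever appeal each individual happens to respond to most, which is exactly the customization described above.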

Besides the role of AI in manipulating human behavior, advancements in autonomous robotics are set to further transform a number of areas: mining, war, espionage, space, manufacturing, farming, etc. The specifics of exactly how and when are uncertain, but the overall picture is pretty clear. The main factors that would change things are human intervention to regulate how AI is used, and competition between groups of people for control and domination. Beyond that, if you look at the incentives and what is possible, you can get a good idea of what the future is likely to look like.

Personally, I think space is the big one. Modern AI is just about at the level where many of the key breakthroughs envisioned from the beginning by people such as von Neumann are feasible. This includes interstellar space missions, massive industries in outer space, terraforming, etc. How far out these things are is not clear; they currently still require extensive human input in design and engineering. But past some threshold of capability, these things could be scaled up enormously. We could, for example, launch a single automated mission that sends probes to millions of stars, then millions of probes from each of those million stars, and so on.
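
As a rough sketch of why a single automated mission could, in principle, cover so many stars, here is the back-of-the-envelope arithmetic of self-replicating ("von Neumann") probe growth. Both numbers below are assumptions chosen only for illustration, and travel time, failures, and overlapping targets are ignored entirely.

```python
import math

# Back-of-the-envelope growth of self-replicating ("von Neumann") probes.
# Both numbers below are assumptions chosen only for illustration; travel
# time, failures, and overlapping targets are all ignored.
copies_per_star = 10        # hypothetical replication factor per star system
stars_in_galaxy = 2e11      # rough star count of the Milky Way

# Generations of replication needed before the probe count exceeds the
# number of stars: smallest g with copies_per_star ** g >= stars_in_galaxy.
generations = math.ceil(math.log(stars_in_galaxy, copies_per_star))
print(generations)  # -> 12 with these assumptions
```

The point is only that the exponential replication, not the size of the initial launch, does the work.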

If you go further out, you can expect a time when science, mathematics, and engineering are also dominated by AI. In that case, it is relevant to wonder what role the human being has. The AI will develop insights, construct proofs, record observations, do analysis, pose new questions, maintain an awareness of the state of the art, etc. It will share this information in a distributed way in some non-human-readable form. People would, by default, have little idea what is going on, but will notice improvements in technologies. We will likely act as managers giving approval to high-level projects, while trying not to micromanage things we don't understand. Efforts will be made to improve communication between AI and people, so that we can understand as much as possible of what they are learning and doing, and participate as much as possible in decision making. Many proofs, analytic functions, and rationales will be too large and complex to fit in human memory and be understood.
 
  • #77
Jarvis323 said:
That is, manipulation is customized, as AI can utilize each person's situation and psychological weaknesses to optimize the effect it is able to have on their behavior. That's the big business aspect. And this is the thing everybody should understand so that we can have a conversation about the ethics and impacts. It should be combined with education in critical thinking and ethics. Everyone should know what/who is trying to manipulate them

Humans have been manipulating humans since time immemorial. Advertising of any type has an element of manipulation, from downright lies to psychology ("The Hidden Persuaders" by Vance Packard, 1957). Now, as you note, it can target individuals. As always, caveat emptor. It isn't AI that is manipulating us; it is humans.

Jarvis323 said:
The main factors that would change things are human intervention to regulate how AI is used, and competition between groups of people for control and domination.

We can regulate the overt use of AI but not the surreptitious use. How do you know an AI app is monitoring you? How can we determine what that use is?

What we do with AI is our decision at this time. It will change our behavior as we see benefits from its implementation. That change may expose us to, or create, problems heretofore unknown. Computers made the internet possible, allowing us to expose our entire lives to the world if we choose, making us vulnerable to crime and exploitation. But then, AI might help us. Perhaps we will develop a cyber Gort ("The Day the Earth Stood Still") to monitor the web and protect us from ourselves.
 
