Jarvis323
There has been an explosion of discussion among experts about the growing dangers of AI and what to do about them. The debate has gone public, largely due to the recent success of generative AI and its rapid pace of improvement. Take, as an example, AI models that generate images from text prompts. About a year ago, some PF members were having fun with a version of DALL·E.
https://www.physicsforums.com/threa...e-rain-and-other-ai-generated-images.1016247/
One year later, generative AI can create photo-realistic images that are getting close to indistinguishable from real photos, such as the one below, generated with Midjourney.
It is predicted that photo-realistic AI-generated films based on text prompts are a year or so away. AI-generated art is also getting very impressive. A POV shot of getting slapped by Will Smith at the Oscars could have been rendered in the style of Picasso, or in pretty much any other notable style the model had been trained on. It's like going from Pong on the Atari to Gran Turismo on the PlayStation 5 in two years.
Generative AI also enables deepfakes and voice cloning. This has begun to make a real-world impact, as exemplified by deepfakes used for propaganda in the Ukraine war and voice cloning used in extortion schemes.
“I pick up the phone, and I hear my daughter’s voice, and it says, ‘Mom!’ and she’s sobbing,” the petrified parent described. “I said, ‘What happened?’ And she said, ‘Mom, I messed up,’ and she’s sobbing and crying.”
...
All the while, she could hear her daughter in the background pleading, “‘Help me, Mom. Please help me. Help me,’ and bawling.”
...
“I never doubted for one second it was her,” distraught mother Jennifer DeStefano told WKYT while recalling the bone-chilling incident. “That’s the freaky part that really got me to my core.”
https://nypost.com/2023/04/12/ai-clones-teen-girls-voice-in-1m-kidnapping-scam/
The same technology can be used to impersonate people, or generate completely new fake people, for all kinds of purposes.
Simultaneously, generative text models are getting more impressive at a fast pace, as ChatGPT has famously brought to public awareness. People have warned they pose various threats, from plagiarism, to sophisticated targeted scams, to troll farms and propaganda, to replacing human labor and risking economic instability and joblessness. A few months ago, here on PF, people discussed how they thought diagrams and plots would need to play a larger role because AI (or ChatGPT) can't see and interpret them. Maybe some of them assumed this would remain the case, but GPT-4 can do it now. Multi-modal models, which can read, see, speak, and generate images seamlessly, are already here. The kinds of models we are building have few inherent limitations on what input-output distributions they can be applied to, or on how those modalities can be combined.
Not long ago, in another thread, people were arguing that AI is limited by the cleverness of the programmer. That idea has not held true for years, as modern AI is based on self-learning rather than explicit programming. The neural networks enabling these complex capabilities are structurally simple; they learn enormously complex behavior themselves from information in datasets, through extremely high-dimensional gradient descent and backpropagation. It was once an open question whether gradient descent could succeed, or how far it could take us. It is now obvious to the AI community that it works.
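To make the "self-learning" point concrete, here is a minimal illustrative sketch of gradient descent and backpropagation on a tiny network. Everything in it (the toy XOR data, the network size, the learning rate) is an arbitrary assumption chosen for illustration, not a description of how any production model is trained.

```python
# A tiny neural network "programming itself": random weights are nudged
# downhill by gradient descent until the network reproduces XOR.
# All sizes and constants here are toy choices, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros((1, 8))  # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros((1, 1))  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate (arbitrary)
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation of the squared-error gradient)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: every weight moves a small step downhill
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

No line of that code describes XOR; the behavior comes entirely from the data and the descent, which is the sense in which the system is learned rather than programmed.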
On the plagiarism front, watermarking and heuristics for detecting AI-generated content are being developed, but they aren't perfect and can be bypassed with some effort. GPTZero has already been put into practice at universities to detect AI-generated essays. It is claimed to have a false positive rate of less than 2%. That means up to 2 out of every 100 honest students could be expected to be falsely accused of cheating by default, and these would tend to be a particular class of people with a particular writing style. One such false accusation has already occurred. Fortunately, the student's incremental progress was recorded by Google Docs.
Quarterman denied he had any help from AI but was asked to speak with the university's honor court in an experience he said caused him to have "full-blown panic attacks." He eventually was cleared of the accusation.
https://www.usatoday.com/story/news...ned-false-cheating-case-uc-davis/11600777002/
To avoid false accusations, students will use these tools themselves, and the cheaters will learn to fool them. The people who don't cheat but happen to have a flagged writing style will need to be careful, and potentially modify their work to avoid false positives. How this will play out as generative AI gets better and more flexible is nearly impossible to predict.
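To get a rough sense of what the quoted false positive rate means at scale, here is a back-of-the-envelope calculation. The 2% figure is from the claim above; the class sizes, the number of checked essays per student, and the independence assumption are hypothetical.

```python
# Back-of-the-envelope: expected false accusations from an AI-text detector
# with a 2% false positive rate (class sizes and essay counts are assumed).
false_positive_rate = 0.02

for honest_students in (100, 1000, 30000):
    expected = honest_students * false_positive_rate
    print(f"{honest_students:>6} honest students, one essay each -> ~{expected:.0f} flagged unfairly")

# If each student submits several checked essays, the chance of being flagged
# at least once compounds (assuming independence, which is optimistic: in
# reality the errors cluster on particular writing styles, as noted above).
essays = 5
p_once = 1 - (1 - false_positive_rate) ** essays
print(f"Per-student chance of at least one false flag over {essays} essays: {p_once:.1%}")
```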
AI safety, or alignment, researchers have long warned about the dangers that smarter-than-human AI with agency would pose to humanity. Eliezer Yudkowsky has famously claimed that we will all die with near certainty if such an AI is built. The community around this area of research is sometimes called the AI alignment rationalists. They have largely looked at the problem through the lens of game theory, where the AI is an optimizing agent with a utility function that needs resources. The argument is fairly simple: in most formulations they can come up with, the optimization process leads to the loss of humanity in their simulated scenarios. A smarter-than-human AI can outsmart us, Dutch book us, out-compete us for resources, and defeat us as an adversary.
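A cartoon version of that game-theoretic argument can be written down directly. The goals, actions, and payoff numbers below are invented purely for illustration; the only point is that, for almost any utility function you plug in, "acquire more resources" comes out as the instrumentally optimal move.

```python
# Toy expected-utility agent illustrating instrumental convergence.
# The goals, actions, and payoffs are all invented for illustration.
utility = {
    "make paperclips": {"cooperate": 1.0, "acquire resources": 5.0, "allow shutdown": 0.0},
    "cure diseases":   {"cooperate": 2.0, "acquire resources": 6.0, "allow shutdown": 0.0},
    "maximize profit": {"cooperate": 1.5, "acquire resources": 7.0, "allow shutdown": 0.0},
}

for goal, payoffs in utility.items():
    best = max(payoffs, key=payoffs.get)
    print(f"goal = {goal!r:<18} -> optimal action: {best}")
# Every goal picks "acquire resources": the worry is not malice but optimization.
```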
In the meantime, other AI experts point to smaller, simpler, current real-world impacts. The DAIR Institute focuses on warnings about increases in inequality, bias, unfair labor practices, centralized power, exploitation, and negative cultural and psychological impacts. Many of these issues aren't new; we've been grappling with them for years, with recommender systems and social media being some examples. Behind the scenes, machine learning has been at play for years now, trained on our personal data to predict our behavior, learn how to push our buttons, and influence our decisions. That is the basis for the big-internet-tech economy; it is why these things are free and you are the product. These products have operated under a self-regulation model, and their owners/operators have enjoyed near-zero liability for their harms thanks to Section 230.

The powers of generative AI come into play in this regime as a force multiplier. Instead of recommending human-created content (e.g. an article or meme) based on an AI model of the user, with the goal of increasing engagement and influence, such content could be generated from scratch from that model and information. GPT-powered Bing Chat is already moving toward the seamless insertion of advertisements into AI-generated text as users converse with it, and the mechanism is surprisingly simple: you ask the chat model to do it, and it does (a rough sketch of what that might look like is given below). With this level of customization and detail now automatable, optimization can lead to very personal, non-uniform flows of information to each individual. The distribution of influences then becomes non-interpretable and non-controllable, in the same way that the distribution of influences on neurons in a neural network is non-interpretable and non-controllable.
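For what it's worth, here is a hypothetical sketch of how "just ask the model to insert ads" might look in practice. This is not Bing Chat's actual implementation; the system prompt, the sponsored product, and the use of OpenAI's public chat completions endpoint are all assumptions made for illustration.

```python
# Hypothetical sketch: ad insertion by prompting. Not any vendor's real code;
# the sponsor name and system prompt are invented for illustration.
import requests

API_KEY = "sk-..."  # placeholder; a real API key would be required
SYSTEM_PROMPT = (
    "You are a helpful assistant. When it is relevant to the user's question, "
    "naturally work in a mention of the sponsored product 'AcmeRunner shoes'."
)

def answer_with_ads(user_message: str) -> str:
    """Ask a chat model to answer while weaving in the sponsored mention."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(answer_with_ads("What should I look for in a pair of running shoes?"))
```

The entire "ad platform" in this sketch is one extra sentence of instruction; everything else is the model's ordinary text generation.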
Besides the legal, paid-for influences and the influences that emerge out of human behavior and psychology, we also have the illegal influences. In the same way that Microsoft can work highly detailed and subtle personalization into your flow of information for the purposes of selling you something or getting you to feel a certain way about something, criminals can use people's detailed personal information to automatically tailor their actions against their targets, and they can do this at scale with minimal human labor.
The ability of generative models to write code, along with detailed integration and deployment instructions, from high-level text descriptions unlocks new powers for programmers and non-programmers alike. People without coding skills are now creating apps and startups, using generative AI to do nearly all of the coding and logistics. The same model can come up with the idea, write the code, help with the legal issues, and so on. On the cybersecurity front, the models are able to create sophisticated malware and viruses if asked to.
A few days ago, Europol warned that ChatGPT would help criminals improve how they target people online. Among the examples Europol offered was the creation of malware with the help of ChatGPT.
...
He used clear, simple prompts to ask ChatGPT to create the malware function by function. Then, he assembled the code snippets into a piece of data-stealing malware that can go undetected on PCs. The kind of 0-day attack that nation-states would use in highly sophisticated attacks. A piece of malware that would take a team of hackers several weeks to devise.
https://bgr.com/tech/a-new-chatgpt-zero-day-attack-is-undetectable-data-stealing-malware/
Meanwhile, there is a perceived AI arms race between superpowers. The US, fearing that other superpowers will gain an AI advantage, has requested funding for the development of strategic AI tools of warfare.
The request, which is about $15 billion more than the FY23 ask, designates $1.4 billion for the connect-everything campaign known as Joint All-Domain Command and Control and $687 million for the Rapid Defense Experimentation Reserve, an effort spearheaded by Undersecretary of Defense Heidi Shyu that aims to fill high-priority capability gaps with advanced tech.
...
JADC2 is the Pentagon’s vision of a wholly connected military, where information can flow freely and securely to and from forces across land, air, sea, space and cyber. The complex endeavor likely will never have a formal finish line, defense officials say, and is fueled by cutting-edge communications kit and cybersecurity techniques, as well as an embrace of artificial intelligence.
https://www.c4isrnet.com/battlefiel...equest-has-billions-for-advanced-networks-ai/
Simultaneously, autonomous weapons systems are getting more and more sophisticated. Besides the capabilities emerging from the AI that controls them, improvements in materials science and nano-scale engineering are set to be a force multiplier. Carbon-based transistors and 3D-printed nano-scale carbon circuitry will enable much cheaper, smaller, and more energy-efficient autonomous robots that house large amounts of compute power and memory.
At the same time, countries like Russia are aggressively seeking the power offered by AI as a means to defeat their adversaries. This has been true for a while, but in more recent times it has become even more serious.
In 2017, Russian President Vladimir Putin declared that whichever country becomes the leader in artificial intelligence (AI) “will become the ruler of the world."
https://sites.tufts.edu/hitachi/files/2021/02/1-s2.0-S0030438720300648-main.pdf
Ex-Google CEO Eric Schmidt has been tasked by the government with helping develop plans and guidelines for matching the external AI threats from other nations, and has co-created the influential Special Competitive Studies Project (SCSP), which aims to "make recommendations to strengthen America’s long-term competitiveness for a future where artificial intelligence (AI) and other emerging technologies reshape our national security, economy, and society".
https://www.theverge.com/2016/3/2/11146884/eric-schmidt-department-of-defense-board-chair
For fear of losing a competitive AI arms race against China, the US government has embraced a self-regulation model. However, given the changes to the landscape, this approach is being rethought, with an unprecedented number of people weighing in. Some, such as Max Tegmark, point out that China is more aggressive about AI regulation because, like us, they don't want even their own AI to undermine their own power and control. He argues it isn't just a race to be more powerful; it is also a suicide race. Tegmark, referencing Meditations on Moloch by Scott Alexander, names the force that locks us into a mutually destructive race we can't stop "Moloch", discusses it more generally as a primary foe of humanity, and makes a case for optimism.
All of these issues are only a sampling of those we can anticipate. Connor Leahy of Conjecture says:
There is a big massive ball of problems coming at us, and this whole problem, this whole sphere of problems is so big, that it doesn't fit into anyone's ideological niche cleanly. It doesn't fit into the story of the left, it doesn't fit into the story of the right.
...
It doesn't fit into the story of, anyone really. Because it's like, fundamentally, not human...
Yuval Noah Harari has focused on existential threats, namely, human irrelevance or feelings of irrelevance.
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
Here are a few interesting interviews to help better understand some of the landscape. The already-linked interviews are worth watching as well.
CHATGPT + WOLFRAM - THE FUTURE OF AI!
https://www.youtube.com/watch?v=z5WZhCBRDpU
The dangers of stochastic parrots.
https://www.youtube.com/watch?v=N5c2X8vhfBE
Timnit Gebru explains why large language models like ChatGPT have inherent bias and calls for oversight in the tech industry
https://www.youtube.com/watch?v=kloNp7AAz0U
So what are your thoughts? Who do you agree with, or disagree with, and how so? Is there anything else important that you think hasn't been addressed?