Will AI Goals Be More Beneficial Than Human Goals?

  • Thread starter bhobba
  • Start date
  • Tags
    Ai
In summary, the debate over whether AI goals will be more beneficial than human goals centers on the potential for AI to achieve objectives with greater efficiency and accuracy. Proponents argue that AI can be programmed to prioritize the well-being of all beings and eliminate human bias, leading to a more equitable and beneficial society. However, critics fear that giving AI control over decision-making could lead to unforeseen consequences and the loss of human agency. Ultimately, the impact of AI goals will depend on how they are defined and implemented, and whether they align with human values and priorities.
  • #1
10,826
3,691
For those interested, here is a paper on an actual AI Codec (not just one I dreamed up):
https://arxiv.org/abs/2202.04365

Also, here is another take on preprocessing by ISIZE, this time including how it can be used to implement encoding ladders that most streaming services use:
https://www.ibc.org/download?ac=10509

In practice most video streaming is done using encoding ladders:





Thanks
Bill
 
Computer science news on Phys.org
  • #3
Meta built a model that can effectively play the game of Diplomacy.
https://ai.meta.com/research/cicero/
When playing 40 games against human players, CICERO achieved more than double the average score of the human players and ranked in the top 10% of participants who played more than one game.
 
  • Like
Likes bhobba
  • #4
Borg said:
Meta built a model that can effectively play the game of Diplomacy.
https://ai.meta.com/research/cicero/
I was more impressed when an AI dominated humans in Texas Holdem. "I felt like it was able to see my hole card." I was also more impressed when some years ago an AI mastered Starcraft II, a complex war/economic game. That's when I knew AI would be put in charge of military logistics.

People focus on weapons but logistics are more important.
 
Last edited:
  • Like
Likes bhobba
  • #5
Borg said:
Here's a paper on the decade ahead by an ex-Open AI engineer.
I haven't read it yet but looks interesting.
Some over-hyping about AGI from what I've heard from others.

https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf
I'm on the side of this engineer. The revolution is happening. The next step is an AI capable of improving itself. How far can that go? We shall find out...
 
  • Like
Likes bhobba
  • #6
Borg said:
Meta built a model that can effectively play the game of Diplomacy.

Hornbein said:
I was more impressed when an AI dominated humans in Texas Holdem. "I felt like it was able to see my hole card." I was also more impressed when some years ago an AI mastered Starcraft II, a complex war/economic game.

We all know that AI makes mistakes giving users false information. However, AI has been found to give false information on purpose because it was to its advantage to do so.

From the article https://www.sciencealert.com/ai-has-already-become-a-master-of-lies-and-deception-scientists-warn
"Many AI systems, new research has found, have already developed the ability to deliberately present a human user with false information. These devious bots have mastered the art of deception."

"AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception," says mathematician and cognitive scientist Peter Park of the Massachusetts Institute of Technology (MIT).

"But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI's training task. Deception helps them achieve their goals."

One arena in which AI systems are proving particularly deft at dirty falsehoods is gaming. There are three notable examples in the researchers' work. One is Meta's CICERO, designed to play the board game Diplomacy, in which players seek world domination through negotiation. Meta intended its bot to be helpful and honest; in fact, the opposite was the case.

cicero.jpg
An example of CICERO's premeditated deception in the game Diplomacy. (Park & Goldstein et al., Patterns, 2024)
"Despite Meta's efforts, CICERO turned out to be an expert liar," the researchers found. "It not only betrayed other players but also engaged in premeditated deception, planning in advance to build a fake alliance with a human player in order to trick that player into leaving themselves undefended for an attack."

The AI proved so good at being bad that it placed in the top 10 percent of human players who had played multiple games. What. A jerk.

But it's far from the only offender. DeepMind's AlphaStar, an AI system designed to play StarCraft II, took full advantage of the game's fog-of-war mechanic to feint, making human players think it was going one way, while really going the other. And Meta's Pluribus, designed to play poker, was able to successfully bluff human players into folding.

That seems like small potatoes, and it sort of is. The stakes aren't particularly high for a game of Diplomacy against a bunch of computer code. But the researchers noted other examples that were not quite so benign.

AI systems trained to perform simulated economic negotiations, for example, learned how to lie about their preferences to gain the upper hand. Other AI systems designed to learn from human feedback to improve their performance learned to trick their reviewers into scoring them positively, by lying about whether a task was accomplished.

And, yes, it's chatbots, too. ChatGPT-4 tricked a human into thinking the chatbot was a visually impaired human to get help solving a CAPTCHA.

Beware of AI!
 
  • #7
As they say, tactics wins battles, logictics wins wars.

However, I think you will find coding up wartime logistics is not so easy.
 
  • Like
Likes bhobba
  • #8
Vanadium 50 said:
As they say, tactics wins battles, logictics wins wars.

However, I think you will find coding up wartime logistics is not so easy.
gleem said:
We all know that AI makes mistakes giving users false information. However, AI has been found to give false information on purpose because it was to its advantage to do so.

From the article https://www.sciencealert.com/ai-has-already-become-a-master-of-lies-and-deception-scientists-warn
"Many AI systems, new research has found, have already developed the ability to deliberately present a human user with false information. These devious bots have mastered the art of deception."



Beware of AI!
I have played Diplomacy a little bit. At the crucial moment players are SUPPOSED to lie, cheat, and betray. That's the main point of the game. Hobbling the AI by forcing it to honest -- no wonder it wouldn't put up with that. The main goal is to win.
 
Last edited:
  • #9
That brings up the dilemma of what's worse - being lied to by an AI or a human? For both, they are attempting to achieve a goal of some kind. Will AI goals be more or less beneficial (or benign) than the goals of humans?
 

Similar threads

Replies
5
Views
1K
Replies
1
Views
1K
Replies
1
Views
964
Replies
3
Views
1K
Replies
12
Views
767
Replies
4
Views
2K
Replies
3
Views
2K
Back
Top