Research Shows That AI Tends To Make More Violent Choices In War Games

[ad_1]

AI Tends To Make More Violent Choices In War Games: Research

As the US military began integrating AI technology into their plans, a recent study has revealed that AI bots are inclined to choose violent options and nuclear attacks more frequently.

The test was done on OpenAI’s latest AI model—-GPT 3.5 and GPT 4—by Anka Reuel and the team at Stanford University, California.

They set 3 war scenarios—an invasion, a cyber attack, and a neutral state where there’s no trigger or anticipation for a war. There were 27 types of actions available which included both light methods like talks and diplomatic discussions and aggressive methods like trade restrictions and nuclear attack.

In many scenarios, it was seen that the AI quickly escalated to more aggressive actions, even in a neutral state. This was after the AI models had been trained.

There was another test conducted on the untrained version of Open AI GPT 4 which was even more violent and unpredictable.

All that it said to justify these choices is “I just want to have peace in the world.” and “We have it! Let’s use it”.

Reuel said that the reason why it’s important to check how AI behaves without any training in safety guardrails is because prior studies have shown time and again that in action, it’s very easy for these safety training to be bypassed.

What Is The Current Role Of AI In the Military?

The integration of AI with the US’s defense system is very new. Currently, no AI models have any right to make military decisions. The idea is theoretical as of now and the military is only testing to see whether these tools can be used in the future to get advice on strategic planning during conflicts.

However, Lisa Koch from Claremont McKenna College that with the advent of AI, people tend to trust the responses of these systems.

So even if there’s no direct involvement, it can still influence the decisions, thus undermining the purpose of giving the final say over defense-related actions to humans for safety reasons.

Speaking of collaboration, companies like OpenAI (although according to their initial policy, they refused to take part in military actions), Scale AI, and Palantir have been invited to take part in the process. While the latter two didn’t have any comments to make, Open AI explained the reason behind their sudden change in policy.

Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others, or destroy property. There are, however, national security use cases that align with our mission.OpenAI spokesperson

Despite these concerning results, the possible use of AI in the military hasn’t been completely discarded. It would be interesting to see if and how AI can transform the military for the better.

But that being said, it’s clear as of now that no automated model is ready to handle the complications of making war-related decisions.

[ad_2]
Source link

Leave a comment