Microsoft, OpenAI say U.S. rivals use artificial intelligence in hacking

Russia, China and other U.S. adversaries are using the newest wave of artificial intelligence tools to improve their hacking abilities and find new targets for online espionage, according to a report Wednesday from Microsoft and its close business partner OpenAI.

While computer users of all stripes have been experimenting with large language models to help with programming tasks, translate phishing emails and assemble attack plans, the new report is the first to associate top-tier government hacking teams with specific uses of LLMs. It is also the first report on countermeasures, and it comes amid a continuing debate about the risks of the rapidly developing technology and efforts by many countries to put some limits on its use.

The document attributes various uses of AI to two Chinese government-affiliated hacking groups and to one group from each of Russia, Iran and North Korea, comprising the four countries of foremost concern to Western cyber defenders.

“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft wrote in a summary of its findings.

Microsoft said it had cut off the groups’ access to tools based on OpenAI’s ChatGPT. It said it would notify the makers of other tools it saw being used and continue to share which groups were using which techniques.

The company said it had not found any major AI-powered attacks, but had seen earlier-stage research on specific security flaws, defenses and potential targets.

Sherrod DeGrippo, Microsoft’s director of threat intelligence strategy, acknowledged that the company would not necessarily see everything that followed from that research and that cutting off some accounts would not dissuade attackers from creating new ones.

“Microsoft does not want to facilitate threat actors perpetrating campaigns against anyone,” she said. “That’s our role, to hit them as they evolve.”

Among the state-sponsored hacking groups identified in the report:

  • A top Russian team associated with the military intelligence agency GRU used AI to research satellite and radar technologies that might be relevant to conventional warfare in Ukraine.
  • North Korean hackers used AI to research experts on the country’s military capabilities and to learn more about publicly reported vulnerabilities, including one from 2022 in Microsoft’s own support tools.
  • An Islamic Revolutionary Guard Corps team in Iran sought AI help to find new ways to deceive people electronically and to develop ways to avoid detection.
  • One Chinese government group explored using AI to help create programs and content, while another Chinese group “is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs,” Microsoft wrote.