The report attributes various uses of AI to two Chinese government-affiliated hacking groups and to one group each from Russia, Iran and North Korea, the four countries of foremost concern to Western cyber defenders.
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft wrote in a summary of its findings.
Microsoft said it had cut off the groups’ access to tools based on OpenAI’s ChatGPT. It said it would notify the makers of other tools it saw being used and continue to share which groups were using which techniques.
The company said it had not found any major AI-powered attacks, but had seen earlier-stage research on specific security flaws, defenses and potential targets.
Sherrod DeGrippo, Microsoft’s director of threat intelligence strategy, acknowledged that the company would not necessarily see everything that followed from that research and that cutting off some accounts would not dissuade attackers from creating new ones.
“Microsoft does not want to facilitate threat actors perpetrating campaigns against anyone,” she said. “That’s our role, to hit them as they evolve.”
Among the state-sponsored hacking groups identified in the report:
- A top Russian team associated with the military intelligence agency GRU used AI to research satellite and radar technologies that might be relevant to conventional warfare in Ukraine.
- North Korean hackers used AI to research experts on the country’s military capabilities and to learn more about publicly reported vulnerabilities, including one from 2022 in Microsoft’s own support tools.
- An Islamic Revolutionary Guard Corps team in Iran sought AI help to find new ways to deceive people electronically and to develop ways to avoid detection.
- One Chinese government group explored using AI to help create programs and content, while another Chinese group “is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs,” Microsoft wrote.