Microsoft has raised alarms over the growing use of generative AI by US adversaries, particularly Iran and North Korea and, to a lesser extent, Russia and China, which are employing the technology for offensive cyber operations. Microsoft has been working with OpenAI to detect and disrupt these activities.
While these AI techniques are still nascent and not groundbreaking, Microsoft stresses the importance of bringing them to light. The adoption of large language models by US rivals enhances their capacity to breach networks and conduct influence campaigns, posing a significant cybersecurity threat.
Traditionally, cybersecurity firms have used machine learning for defense, chiefly to spot anomalous behavior in network traffic. Now criminals and offensive hackers are leveraging the same technology, and the rise of large language models, exemplified by OpenAI’s ChatGPT, has intensified that technological race.
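To make that defensive baseline concrete, here is a minimal sketch of the kind of unsupervised anomaly detection described above, using scikit-learn's IsolationForest over hypothetical network-flow features (byte count, connection duration, port entropy). The feature set, thresholds, and numbers are illustrative assumptions, not any vendor's actual detection pipeline.

```python
# Minimal sketch: flagging anomalous network flows with an unsupervised model.
# The feature columns (bytes sent, duration in seconds, port entropy) are
# hypothetical illustrations, not a real vendor's detection pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: modest transfers, short-lived connections.
normal = rng.normal(loc=[500.0, 2.0, 1.0], scale=[100.0, 0.5, 0.2], size=(1000, 3))
# A handful of simulated exfiltration-like flows: large, long-lived transfers.
suspicious = rng.normal(loc=[50000.0, 120.0, 4.0], scale=[5000.0, 10.0, 0.5], size=(5, 3))
flows = np.vstack([normal, suspicious])

# Fit on the mixed traffic; the forest isolates rare points in few splits.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(flows)  # -1 marks anomalies, 1 marks inliers

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(flows)} flows for analyst review")
```

The sketch illustrates the asymmetry the article describes: statistical detection of this kind has long favored defenders, while generative models now give attackers new leverage on the offensive side.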
Microsoft, a major investor in OpenAI, made the announcement alongside a report warning that generative AI could enhance malicious social engineering, yielding more sophisticated deepfakes and voice cloning. That poses a significant threat to democracy, especially in a year when more than 50 countries are set to hold elections that disinformation campaigns could target.
Microsoft has cited examples of how US adversaries have employed generative AI:
- North Korea’s Kimsuky group has used these models to research foreign think tanks and create content for spear-phishing campaigns.
- Iran’s Revolutionary Guard has employed large language models for social engineering and for troubleshooting software errors.
- Russia’s GRU military intelligence unit, Fancy Bear, has used these models to research satellite and radar technologies related to the war in Ukraine.
- China’s cyber-espionage groups, Aquatic Panda and Maverick Panda, have explored ways to enhance their technical operations using large language models.
OpenAI has said that its current GPT-4 model offers only limited capability for malicious cybersecurity tasks beyond what is already achievable with non-AI tools. Cybersecurity researchers, however, anticipate that this could change.
Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, has emphasized the importance of addressing the security implications of artificial intelligence, particularly in the context of China’s influence.
Critics have raised concerns that large language models like ChatGPT were released rapidly without adequate consideration for security. They argue that more effort should go into making the models secure by design rather than into building defensive tools that address vulnerabilities after the fact.