In recent months, a troubling pattern has emerged in how North Korean hackers are using advanced AI tools to carry out cybercrime and espionage. According to a report by Google, North Korean government-backed cybercriminals have been using Google's Gemini AI system to enhance their hacking operations. The tool has helped them research sensitive topics, develop malicious software, and even trick people into giving away their personal information.
Researching Targets with AI
One of the primary ways North Korean hackers have used Google Gemini is to find and research targets for their cyberattacks. The hackers have focused on organizations of strategic importance to North Korea, including military organizations in South Korea and the United States. They have also targeted companies across 11 sectors in 13 countries, as well as defense groups in Germany.
The AI has helped these hackers research sensitive topics related to North Korea's interests, such as cryptocurrency and free hosting services. They have also used it to gather information about nuclear power plants in South Korea, including their locations and security measures. It is worth noting that Gemini only returns publicly available information, but even open-source data can be put to harmful use in the wrong hands.
Creating Fake Job Applications
Another alarming use of Google Gemini by North Korean hackers is in creating fake job applications and researching job opportunities. North Korean IT workers have been using the AI to apply for remote jobs at companies around the world under fake identities that conceal their true nationality. By doing so, they are able to secure freelance and full-time positions at companies, which they can then use as cover for stealing money or information.
The hackers have used Gemini to help them write cover letters, research salary information for specific positions, and even seek advice about finding jobs on LinkedIn. These activities show just how far they are willing to go to infiltrate companies in the West, using AI both to disguise their true intentions and to gather information about possible job opportunities.
Improving Hacking Techniques
In addition to using Google Gemini for research and job applications, North Korean hackers have been using the AI to improve their hacking techniques. One example is their research into phishing attacks, in which hackers trick people into giving away personal information by pretending to be someone they trust. Google's report revealed that they have been looking for ways to break into Gmail accounts, bypass security measures in web browsers, and even develop malware that can evade detection by security systems.
One group of hackers attempted to use Gemini to create code that could secretly record people's webcams, while another tried to develop a tool to bypass security protections and execute harmful code without being detected. The hackers have also asked the AI about ways to use tools like Mimikatz, a well-known program used to extract passwords and other credentials from Windows computers.
Beyond Gemini, the hackers have been using other AI platforms and tools to support their criminal activities. They have used assistants like Monica and Ahrefs, which can help create convincing fake emails or scripts. In some cases, they have even used AI tools to manipulate profile pictures and build fake identities that help them win victims' trust.
The Growing Threat of AI in Cybercrime
This revelation about North Korean hackers' use of Google's Gemini AI shows the growing threat posed by artificial intelligence in the world of cybercrime. With these tools, cybercriminals are becoming more sophisticated and effective in carrying out their attacks. AI is making it easier for hackers to research targets, develop more dangerous hacking tools, and hide their true identities while committing cybercrime.
While Google's report sheds light on this troubling development, it also serves as a warning about the dangers of AI in the wrong hands. As these hackers continue to find new ways to misuse AI, it is essential that companies and individuals remain vigilant and take steps to protect themselves from increasingly advanced cyber threats.