A suspected North Korean state-sponsored hacking group used ChatGPT to create a deepfake of a military ID document to attack a target in South Korea, according to cybersecurity researchers.
The trend shows that attackers can leverage emerging AI throughout the hacking process, including attack-scenario planning, malware development, tool building, and impersonating job recruiters, said Mun Chong-hyun, director at Genians.
Phishing targets in this latest cybercrime spree included South Korean journalists, researchers, and human rights activists focused on North Korea. The phishing email was sent from an address ending in .mil.kr, impersonating a South Korean military domain.
Exactly how many victims were breached wasn’t immediately clear.
Genians researchers experimented with ChatGPT while investigating the fake identification document. Because reproducing government IDs is illegal in South Korea, ChatGPT initially refused a request to create an ID, but altering the prompt allowed the researchers to bypass the restriction.