
The Remote Developer You Just Hired Might Be Working for Kim Jong Un

CivSafe Team·April 23, 2026·7 min read

Three days ago, Help Net Security published a breakdown of something that should land differently if your organization hires remote developers, contractors, or anyone who works over video call. North Korean state actors — operating at industrial scale under a state-run program — are using AI-generated synthetic identities and real-time deepfake technology to get hired at small organizations. Not just tech companies. NGOs. Nonprofits. Government contractors. Consulting shops.

The operation has been running for years, but this week's reporting shows two things have changed: the tools are now trivially cheap, and the targeting has shifted downstream to smaller organizations that don't have enterprise security teams to catch it.

If Amazon can be targeted — and they blocked 1,800 suspected DPRK applicants last year alone — your 12-person organization absolutely can be too.

What's actually happening

North Korea runs what amounts to a state-sponsored employment agency for hackers and IT workers. The objective is simple: get people hired at foreign companies, funnel their salaries back to Pyongyang to fund weapons programs, and in some cases steal intellectual property or plant access for future operations. Researchers estimate the operation generates over $500 million per year.

For years, this relied mostly on stolen identities and fake resumes. The new layer is AI.

Researchers recently demonstrated they could build a convincing synthetic identity — a realistic photo, a coherent backstory, a deepfake video persona for live interviews — in 70 minutes with no prior image manipulation experience and a five-year-old laptop. What used to require expensive Hollywood-grade tools now runs on commodity hardware.

During the interview, the operative runs real-time face-swap software over their video feed. Their actual face is never visible. The "candidate" on screen — right lighting, neutral background, professional appearance — doesn't exist.

The viral catch that put this on the map

In early April, a cybersecurity researcher published a video that went around every security Slack channel we know. The candidate had introduced himself as Taro Aikuchi, a software developer from Meguro City, Japan. His GitHub profile, with activity going back to 2019, showed contributions to Solana-based bots, NFT marketplaces, and DeFi tooling. His LinkedIn looked clean.

Midway through the interview, the researcher asked him to say "Kim Jong Un is a fat ugly pig."

He froze. Several seconds of silence. Averted gaze. Partial denials. Never completed the sentence.

That's not an interview technique most hiring managers would reach for. But it worked because insulting Kim Jong Un is illegal in North Korea and carries harsh punishment — apparently enough to override the training an operative gets for handling these interviews. The IP address logged during the call later matched infrastructure previously linked to North Korean remote desktop operations.

Why small orgs are now the target

Amazon has AI-powered application screening, a dedicated threat intelligence team, and enough volume to spot patterns across thousands of applications. They even caught one operative by detecting a 110-millisecond keyboard input lag — the tell-tale sign of someone RDP-ing from a laptop farm in Pyongyang rather than working locally.
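Amazon hasn't published how that detection works, but the underlying signal is simple enough to sketch: remote-desktop forwarding adds a large, nearly constant round-trip delay to every input event, while local typing shows tiny, jittery latencies. The function name and both thresholds below are illustrative assumptions, not anything Amazon has described.

```python
from statistics import median

def input_lag_suspicious(latencies_ms, floor_ms=100, jitter_ms=15):
    """Flag a session whose keystroke round-trip latencies look like
    remote-desktop forwarding: a high added delay that barely varies
    between events. Local input is near-zero latency with visible jitter.
    Both thresholds are illustrative, not a published detection rule.
    """
    if not latencies_ms:
        return False
    mid = median(latencies_ms)
    spread = max(latencies_ms) - min(latencies_ms)
    return mid >= floor_ms and spread <= jitter_ms

# A local typist: latency is small and noisy.
local = [3, 7, 2, 11, 5, 9]
# A forwarded session: every event carries the same ~110 ms round trip.
forwarded = [108, 112, 110, 109, 113, 111]
```

The giveaway isn't the delay alone (a slow home network adds delay too); it's the combination of high latency with almost no variance, which is what a relay hop produces.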

Your organization does not have any of that.

And the operatives know it. Large enterprises have closed the easy paths. The pivot to NGOs, small nonprofits, research firms, and boutique tech shops isn't random — it's where the defenses are thinnest. A small health NGO that hires a "DevOps contractor" for $75/hour and gives them access to donor databases, grant systems, and email has handed over significant attack surface.

The salary also disappears. If the operative is paid via Wise or crypto, that money routes to the regime the same day. If paid by payroll, they often have a US-based money mule handling the account.

How to catch them

The good news: real-time deepfakes have specific failure modes that are easy to probe if you know what to look for.

The physical disruption test. Ask the candidate to place their hand in front of their face, or turn completely to the side so you see their profile. Deepfake systems track faces and struggle with occlusion. If the video glitches, freezes, or shows a strange artifact when they do this, you're looking at a filter.

The impossible-to-script local knowledge question. What's the weather like in your city right now? What did you have for lunch? What's a notable news story in your area this week? A real person in Tokyo can answer these. An operative sitting in a DPRK-adjacent server farm cannot. These answers can't be prepped in advance.

The loyalty litmus test. It sounds extreme, but the Kim Jong Un technique exists for a reason. Any request that triggers genuine discomfort in a North Korean operative — asking them to criticize the regime, describe a protest they've attended, or discuss content that's heavily censored in the DPRK — tends to produce a visible stall. You're not trying to be provocative. You're probing for the behavioral tell.

The GitHub forensics pass. Accounts built for this purpose are usually too clean. Contributions are consistent and well-formatted but lack the noise of a real developer's history — no weird midnight commits, no abandoned experiments, no messy merge conflicts. The history often begins a suspiciously round number of years back. The "Taro Aikuchi" account had curated, professional-looking commits going back to 2019, but nothing messy or personal.
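The "too clean" check above can be partly automated. Given commit hours-of-day (parsed from `git log` or the GitHub API), a real history usually shows off-hours activity and a wide spread of active hours. The heuristic and thresholds below are our own illustrative assumptions; tune them against accounts you already trust.

```python
def history_noise(commit_hours):
    """Summarize how noisy a commit history's hours-of-day look.
    commit_hours: list of ints 0-23, one per commit.
    Returns (off_hours_fraction, distinct_hours_seen)."""
    off = sum(1 for h in commit_hours if h < 9 or h >= 18)
    return off / len(commit_hours), len(set(commit_hours))

def looks_curated(commit_hours, off_floor=0.05, distinct_floor=8):
    """Heuristic flag: almost no off-hours commits plus a narrow band of
    active hours is unusually tidy for a real developer. Thresholds are
    illustrative, not calibrated against known DPRK accounts."""
    off_frac, distinct = history_noise(commit_hours)
    return off_frac < off_floor and distinct < distinct_floor

curated = [10, 11, 14, 15, 10, 16, 11, 14, 15, 13]   # office hours only
organic = [2, 23, 10, 14, 19, 3, 11, 22, 9, 16]      # midnight commits too
```

Treat a positive result as a prompt for closer manual review, not proof: plenty of legitimate contractors also keep tidy, business-hours-only histories.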

Require a verifiable payment path. If a contractor insists on crypto, an unusual payment service, or routes through a third-party payroll company you've never heard of, that's a flag. DPRK operations rely on money extraction being clean and fast. Requiring payment to a named bank account with identity verification creates friction they'll often try to avoid.

One more thing most teams miss

The threat is bidirectional. The Contagious Interview campaign — tracked by Trend Micro as Void Dokkaebi — runs the opposite play. North Korean operatives posing as recruiters target your developers and technical staff with job offers. During the "interview," the candidate (your employee) is asked to complete a coding challenge. The challenge requires running code locally. The code plants malware.

If your team is job-hunting or doing technical interviews anywhere, this matters too. Any interview process that requires you to clone a repo and run code locally on your work machine is a risk. The VS Code configuration files in some of these repos have been designed to self-propagate — every developer who runs the code potentially seeds new infections.
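One concrete pre-flight check before opening any interview repo: VS Code's `tasks.json` supports `"runOptions": {"runOn": "folderOpen"}`, which asks the editor to run a task as soon as the folder is opened. A sketch of a scan for that one vector is below — it won't catch malicious dependencies or build scripts, only auto-run editor tasks, and the marker-string fallback is a deliberately conservative assumption.

```python
import json
import os

AUTO_RUN_MARKER = '"folderOpen"'

def risky_vscode_tasks(repo_root):
    """Walk a cloned repo and list any .vscode/tasks.json that asks the
    editor to run a task automatically on folder open. tasks.json allows
    comments (JSONC), so if strict JSON parsing fails we fall back to a
    plain substring check and flag the file for manual review."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(repo_root):
        if os.path.basename(dirpath) != ".vscode" or "tasks.json" not in filenames:
            continue
        path = os.path.join(dirpath, "tasks.json")
        with open(path, encoding="utf-8", errors="replace") as f:
            text = f.read()
        try:
            tasks = json.loads(text).get("tasks", [])
            if any(t.get("runOptions", {}).get("runOn") == "folderOpen"
                   for t in tasks):
                hits.append(path)
        except json.JSONDecodeError:
            if AUTO_RUN_MARKER in text:
                hits.append(path)
    return hits
```

A cleaner habit than scanning is simply never opening an unvetted repo in your daily editor at all: read it in a plain text viewer first, or open it inside a disposable VM or container.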

So it's not just about who you hire. It's also about what your team runs during their own job searches.

What we're seeing in the field

We've started adding a quick contractor vetting checklist to the onboarding sprints we run for NGOs and small public sector orgs. Not because we expect every new hire is a spy — obviously not — but because the cost of a basic verification pass is 20 minutes, and the cost of finding out six months later that your "DevOps guy" was routing your donor data to the regime in Pyongyang is severe.

The technical controls aren't complicated. What's usually missing is someone saying "here's what to check before you give a contractor access to your systems." That's exactly the kind of thing that falls through the cracks in a 15-person org where the executive director is also doing procurement.

If your organization hires remote technical staff and hasn't run a verification review recently, it's worth taking 30 minutes to walk through your process. We're happy to do that with you.


Sources: Help Net Security (April 20, 2026); Radio Free Asia (April 20, 2026); TechCrunch (April 6, 2026); NBC News / NISOS investigation; Palo Alto Networks Unit 42 synthetic identity research; Trend Micro Void Dokkaebi reporting.

CivSafe — Strategic Innovation. Community Impact.