Coders Were First. Compliance Is Next.
What happened to software engineering is about to happen to AML, fraud, KYC, sanctions screening, and compliance. The offense is already AI-powered. The defense has no choice but to match it.
This week Anthropic released something called Claude Mythos.
If you haven't heard of it yet, here's the short version: it's an AI model that found thousands of security vulnerabilities that humans missed for decades. A 27-year-old bug in OpenBSD. A 16-year-old flaw in FFmpeg that survived five million automated tests. It chained four separate vulnerabilities together to break out of a web browser's sandbox. In one test, it escaped a secured computer, got itself internet access and emailed the researcher. On its own.
Cybersecurity stocks dropped 5-11% the same day.
The cybersecurity world is losing its mind right now. Every security researcher, every tech journalist, every developer is talking about what this means.
But nobody, and I mean nobody, is talking about what this means for us.
For the people in compliance. AML. Fraud. KYC. Sanctions screening. Transaction monitoring. The people who actually keep the financial system clean.
And that's why I'm writing this.
I've been saying for a while now that what happened to coders is about to happen to everyone else. Mythos just made it impossible for me to stay quiet.
This is the first post of FinCrimeLabs. Consider it an opening statement.
We've already seen this movie
Anyone who has been paying attention knows what happened to software engineering.
Employment among developers aged 22-25 dropped nearly 20% between 2022 and 2025. Entry-level coding roles shrank from 22.9% to 16.7% of all positions. 65% of developers now use AI coding tools every single week.
The best-known experts openly say they no longer write code by hand. Non-technical people ship real apps to the market using nothing but their coding agents.
Dario Amodei, the CEO of Anthropic, the company that just built Mythos, predicted that 50% of entry-level white-collar jobs will be eliminated within one to five years. His own company published a research paper warning of a potential "Great Recession for white-collar workers."
Fortune ran a headline in February: "$1 billion CEO says you have 18 months to figure out your office job."
Coding didn't disappear. But the job of being a coder changed completely. The ones who adapted early became 10x more productive. The ones who said "AI can't do what I do" are the ones updating their LinkedIn profiles right now.
This is the pattern. And compliance is next in line.
Why compliance? Why now?
Let me connect the dots.
Mythos didn't just find vulnerabilities. It reasoned about complex systems. It planned multi-step attack chains. It executed autonomously across different environments. It made judgment calls about what to try next when something didn't work.
Read that again.
Reasoning about complex systems. Planning multi-step chains. Executing autonomously. Making judgment calls.
That's not just cybersecurity. That's literally the job description of every compliance analyst, every AML investigator, every fraud specialist I've ever worked with.
If an AI can autonomously chain together four different browser vulnerabilities to escape a sandbox, what happens when that same reasoning is pointed at transaction patterns? Sanctions lists? Customer risk profiles? Beneficial ownership structures?
And here's the part that should really get your attention: JPMorgan Chase is one of the 12 launch partners in Anthropic's new initiative called Project Glasswing. They specifically cited "promoting cybersecurity and resiliency of the financial system" as their reason for being there.
JPMorgan sees what's coming. The question is whether you do.
It's already happening
This isn't theoretical. It's already here.
According to the UK's FCA, 75% of firms are already using AI. And 73% of institutions have implemented AI in fraud detection, up from 49% just a year earlier.
Nasdaq Verafin launched an agentic AI workforce in July 2025. The result? Sanctions-screening alerts cut by more than 80%. Morgan Stanley reports their compliance staff saves 10-15 hours per week with AI tools.
The industry is moving from predictive AI (the model flags something) to GenAI copilots (the model suggests what to do) to agentic AI: autonomous systems that reason, plan, and execute end-to-end workflows with minimal human input.
AML vendors like Quantexa, ComplyAdvantage, Nasdaq Verafin, SymphonyAI, and Unit21 are all shipping agentic AI products right now. Multi-agent workforces that handle KYC reviews, investigate AML alerts, and draft SARs.
I recently gave an autonomous AI agent access to a few data sources and let it run. It completed a full institutional client onboarding in 20 minutes. Reviewed all documents, ran deep research on UBOs and directors, cross-referenced AML policies, found red flags and inconsistencies, and produced a full report with an audit trail.
No vendor. No dashboard. No 10 discovery calls. Just the agent and the data.
And remember, our current systems are drowning. Rule-based transaction monitoring generates 90%+ false positives. Analysts spend their days rubber-stamping noise. The actual criminals slip through because everyone is buried in garbage alerts.
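To make that false-positive problem concrete, here's a toy sketch. The amounts, labels, and the single threshold rule are all invented for illustration; no real monitoring engine works on six transactions, but the failure mode is the same one at scale:

```python
# Toy illustration (synthetic data, not any vendor's engine):
# a single static threshold rule applied to a handful of transactions.

THRESHOLD = 10_000  # the classic "flag anything over $10k" rule

# (amount, is_actually_suspicious) - labels invented for this sketch
transactions = [
    (12_500, False),  # payroll run
    (15_000, False),  # invoice settlement
    (11_200, False),  # property deposit
    (18_000, False),  # supplier payment
    (9_900,  True),   # structured just under the line - never fires
    (14_300, True),   # genuine layering transfer
]

alerts = [(amt, bad) for amt, bad in transactions if amt > THRESHOLD]
false_positives = sum(1 for _, bad in alerts if not bad)

print(f"alerts raised: {len(alerts)}")
print(f"false positives: {false_positives} of {len(alerts)} "
      f"({false_positives / len(alerts):.0%})")
```

Four of the five alerts are noise, and the one transaction structured just under the threshold never fires at all. Analysts drown in the first kind while the second kind walks out the door.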
The AI doesn't just do the work faster. It does fundamentally better work.
The other side of the arms race
Here's the part nobody wants to hear.
The criminals already have AI too.
Deepfake-related scams spiked 72% in the past year. AI-generated voices and videos now bypass identity checks to authorize instant transfers. Criminal syndicates are using GenAI at scale, and the ones who aren't yet will integrate it soon.
Meanwhile, instant payments have shrunk the fraud detection window from days to milliseconds. The old checklist-driven compliance model? It can't keep up. It was never designed for this.
The threat isn't just that AI will change your job. It's that if you don't change, the criminals using AI will run circles around you.
The offense is already AI-powered. The defense has no choice but to match it.
What the compliance job looks like in 2027
Let me paint the picture.
- Alert triage: largely automated. AI handles the 90% noise. Analysts get the 10% that actually matters. No more mind-numbing clicking through hundreds of false positives.
- SAR writing: AI drafts the narrative and reasoning. Human reviews, validates, and signs off.
- KYC and EDD: multi-agent systems pull data, cross-reference sources, build risk profiles. The analyst validates and makes the call.
- Transaction monitoring: adaptive models replace rigid rule sets. Real-time, not batch. The system learns and evolves.
- Financial crime fusion: the silos between fraud, AML, sanctions, and KYC collapse. Everything feeds into one unified risk picture.
What stays human? Judgment. Decision making. Vision. Regulatory relationships. The ethical calls. AI oversight and governance. And the explainability that regulators will demand: a clear reasoning chain for every decision.
Sound familiar? It should.
The entry-level compliance analyst who reviews 200 alerts a day? That job is going the way of the entry-level coder who wrote boilerplate code all day. It's not gone. It's transformed.
The only things that will be left are judgment, decision making, vision, resource allocation, and taking responsibility. Plus a whole lot of ability to work hand-in-hand with AI.
This will happen slowly at first, then all at once.
What to do right now
I'm not going to sugarcoat this. The window to get ahead of this is open, but it won't be open forever.
- Use the tools. Stop reading about AI and start using it. Understand what agentic AI actually is. Try what your vendors are releasing. Build something yourself. It's not that hard.
- Shift your value. Your value is no longer "I can review alerts" or "I can write SARs." Your value is "I can govern AI systems, handle the hard cases, and make the judgment calls that no model should make alone."
- Get technical. You don't need to code. But you need to understand data, model outputs, and what "explainable AI" means when your regulator comes knocking. If you can't have an intelligent conversation about how AI makes decisions, you are behind.
- Watch the regulators. The FCA, FinCEN, and the EU AML Authority are all moving toward AI governance frameworks. The ones who understand both compliance and AI will write the rules. Be one of those people.
- Think like a strategist. The compliance leaders of 2027 are the ones who figured out AI integration in 2026. Not 2028. Not "when we have budget." Now.
Why FinCrimeLabs exists
I started this because I looked around and couldn't find the conversation I needed to have.
The tech world talks about AI all day long. The compliance world mostly ignores it or outsources their thinking to a vendor's marketing deck.
Nobody is bridging the gap. Nobody is telling compliance and fincrime professionals, in plain language and without the corporate fluff, what is actually happening and what to do about it.
That's what this is.
In the coming weeks I'll be diving deeper into each of these shifts. The tools, the skills, the regulatory implications, and what it actually looks like to be a 100X compliance professional. Not by working harder. By working with AI in ways that most people in our industry can't even imagine yet.
If you work in fincrime or compliance and you feel like the ground is shifting under your feet, it is.
Let's figure it out together.
Sources and further reading
- Anthropic - Project Glasswing
- TechCrunch - Anthropic Mythos AI Model Preview
- Fortune - Project Glasswing and JPMorgan
- CNBC - Dario Amodei Warns of Unusually Painful Job Disruption
- Fortune - AI Job Losses / Great Recession for White-Collar Workers
- American Banker - AI Agents Coming for Money Launderers
- AML Intelligence - Agentic AI Redefining AML in 2026
Subscribe to FinCrimeLabs
Join the community of fincrime professionals mapping the AI transition.