What the EU AI Act, FinCEN, FATF, and FCA Say About AI in AML

Using AI in anti-money laundering (AML) work is not a new idea. Regulators already treat it as a real compliance tool: one that can help, but that can also create new governance problems if companies aren't careful. And the message is starting to sound familiar across jurisdictions, something along the lines of "We welcome innovation, but only up to a point." Companies still need to explain what the system does, show that people remain responsible, and make sure the data, controls, and oversight behind the model are strong enough to trust.

That pattern shows up in many places at once. The Financial Action Task Force (FATF) addressed it in its 2021 report on new technologies. The Financial Crimes Enforcement Network (FinCEN) struck a similar tone in its 2024 Anti-Money Laundering/Countering the Financing of Terrorism (AML/CFT) program proposal. The Financial Conduct Authority (FCA) has been pushing for responsible experimentation rather than blind adoption. The EU AI Act comes from a different legal tradition but heads in the same general direction. Different words and rules, but the same basic instinct.

That changes the conversation for compliance teams. The question is no longer whether AI can be used in AML. The more important question is what regulators expect when companies actually start using it. The answer is not "build a perfect model." It is more practical than that: handle the technology carefully, make it explainable, document the reasoning, test the controls, and don't let automation blur who is responsible for what.

The sections below walk through what the main authorities are actually saying about AI in AML, and what that means for compliance teams in practice.

  • FATF: Encouraging AI, but Not Black Box AML
  • FinCEN: Technology Neutral, Innovation Friendly, and Still Human Accountable
  • The EU AI Act: Important for AML Teams, but More Complicated Than "AML AI Is High Risk"
  • FCA: Pro Innovation, but With Guardrails
  • NYDFS: The Bias Warning Is Not AML Specific, but It Still Matters
  • MAS: Supportive on AI, but Very Big on Governance
  • AUSTRAC: Open to AI, but Still Focused on Human Control
  • A Quick Comparison: What the Main Regulators Are Actually Saying
  • What This Means in Practice for AML Teams

FATF: Encouraging AI, but Not Black Box AML

The key document here is FATF's 2021 report, "Opportunities and Challenges of New Technologies for AML/CFT." FATF's starting point is that new technologies can make AML and counter-terrorist financing (CFT) measures faster, cheaper, and more effective. The report covers AI, natural language processing (NLP), and other tools that can help identify risk, monitor suspicious activity, and improve supervision. It also names specific use cases: transaction monitoring, name screening, risk assessment, network analysis, and screening of Politically Exposed Persons (PEPs).

But FATF isn't saying "use AI and don't worry." The report is careful, and in places arguably overcautious. It describes some AI/ML technologies as "black box" systems because they don't reveal enough about how they reach a result, and it warns that a lack of explainability and transparency can make it harder to verify whether an AI solution is actually correct at detecting suspicious behavior. FATF also flags the risk of poor data quality: if bad or biased data is fed into a machine learning system, the model can learn and reproduce those flaws.

So the FATF message is more nuanced than "AI is good." The real message is that AI helps when it improves auditability, accountability, and risk based decision making, but it still needs human checking, review, and governance. FATF even notes that most available tools still require human operation and review, and that they are enhancements to existing systems rather than replacements for them. That is probably the most important takeaway for AML teams. The global standard setter isn't telling companies to stay away from AI. It's telling them not to use it as a shield.

FinCEN: Technology Neutral, Innovation Friendly, and Still Human Accountable

FinCEN's point of view is more practical than philosophical. FinCEN has no separate rulebook for "AI in AML," but it has consistently encouraged innovation. In a 2018 joint statement with the federal banking agencies, FinCEN and the other agencies said they would not advocate a particular method or technology for Bank Secrecy Act (BSA)/AML compliance, while encouraging banks to consider and responsibly implement innovative approaches. The same statement explicitly names digital identity and artificial intelligence technologies as tools that can strengthen BSA/AML compliance and improve transaction monitoring.

That same stance is still present in FinCEN's current materials. According to FinCEN's innovation page, the agency wants to encourage responsible financial services innovation that supports the goals of the BSA, as amended by the Anti-Money Laundering Act of 2020. It says that new technologies and private sector innovation can help institutions improve their AML compliance programs and their recordkeeping and reporting. That is support, but support with conditions. The institution always remains responsible for keeping the program effective.

The clearest recent signal is FinCEN's June 2024 proposed rule to modernize AML/CFT programs. The proposal would require effective, risk based, and reasonably designed AML/CFT programs that include a mandatory risk assessment process. According to FinCEN's fact sheet, the proposed rule is intended to encourage innovation in the fight against financial crime. It notes that one purpose of the AML Act is to promote the use of new technologies to better combat money laundering and terrorist financing, and that an institution's internal controls may include considering, evaluating, and implementing innovative approaches as appropriate for its risk profile.

FinCEN has not issued official AML guidance on AI generated Suspicious Activity Reports (SARs) or on agentic AI as a category, at least not in anything publicly available as of April 2026. Its public innovation materials and the AML/CFT program proposal stay at the level of responsible innovation and flexible, risk based design; they don't endorse or prescribe specific AI architectures. The U.S. message is supportive but still general: use innovation if it helps, don't expect a special AI safe harbor, and don't assume responsibility shifts from the institution to the model.

The EU AI Act: Important for AML Teams, but More Complicated Than “AML AI Is High Risk”

The EU AI Act matters for AML teams, but it does not boil down to "AML AI = high risk." People say all the time that AI used in AML is automatically classified as high risk under the EU AI Act. It's a catchphrase at this point, and it's not entirely true.

The real situation is more complicated. The EU AI Act entered into force on August 1, 2024. Most of it applies from August 2, 2026, while certain obligations for high risk systems linked to regulated products don't apply until August 2, 2027. So yes, the law matters now. But the compliance question is not just when it applies. The harder question is what exactly in your AML stack falls into which category.

And that's where the simplifications start to fall apart.

Annex III of the Act contains no simple bucket called "AML systems," whatever people often say. In the financial services entry, it explicitly lists AI used to evaluate a person's creditworthiness or establish their credit score as high risk, but it carves out AI used to detect financial fraud. That matters. A lot. You can't just point at an AML tool, say "financial crime," and treat the classification as settled. What the system actually does is what matters.

There's another detail people miss. The Act also says that Financial Intelligence Units (FIUs) carrying out administrative tasks under Union anti money laundering law should not be treated as law enforcement authorities using AI systems for criminal enforcement purposes. The point is that the law draws its lines more carefully than most people do. And that is the main lesson: the AI Act classifies systems by how they are used, not by slogan.

So, what should compliance teams learn from this?

First, the EU AI Act matters enormously for AML functions. No doubt about it. Even where an AML related system isn't automatically high risk, companies need to take governance, documentation, model logic, and accountability far more seriously than before. And if a system is high risk, the obligations are heavy: the law requires a risk management system, data governance and quality controls, logging, human oversight, and standards for accuracy, robustness, and cybersecurity. That is not checkbox compliance. That is compliance at the level of architecture.
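To make that list concrete, here is a minimal sketch of what treating those obligations as an engineering checklist might look like. The obligation names paraphrase the Act's high risk requirements, and the evidence strings are invented placeholders, not real artefacts:

```python
# Rough gap check over the AI Act's high risk obligations.
# Obligation names paraphrase the Act; evidence values are hypothetical.
obligations = {
    "risk management system": "risk register v4, reviewed quarterly",
    "data governance and quality controls": "training data lineage report",
    "logging": None,  # gap: no automatic event logging yet
    "human oversight": "analyst sign-off required before any action",
    "accuracy, robustness, cybersecurity": None,  # gap: no adversarial testing
}

gaps = [name for name, evidence in obligations.items() if evidence is None]
for name, evidence in obligations.items():
    print(f"{name}: {evidence or 'MISSING EVIDENCE'}")
print(f"\n{len(gaps)} gap(s) to close before deployment: {', '.join(gaps)}")
```

Nothing about that is sophisticated, and that is the point: the high risk obligations are concrete enough to track as evidence rather than sentiment.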

Second, this forces companies and vendors into a stricter internal process. You don't start with the label. You start with the function. What does the tool actually do? Is it ranking alerts? Assisting investigators? Generating summaries? Making decisions about individuals in a way that falls under Annex III? Those questions come before the legal conclusion, not after it.

The AI Act matters a great deal for AML teams, but not because it magically classifies all AML AI as high risk. It matters because it forces companies to stop being vague: to classify carefully, to document properly, and to be ready to explain exactly why a system sits where it does if anyone asks.
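One way to be ready is to keep a written classification record per system. The sketch below is illustrative only: the `AmlSystemRecord` shape, the function labels, and the one line rule of thumb are all assumptions for the sketch, and real classification needs legal analysis rather than a lookup table:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical function labels an AML tool might perform; illustrative
# categories, not terms taken from the AI Act.
FUNCTIONS = {"alert_ranking", "investigator_assist", "summarisation",
             "creditworthiness_scoring", "fraud_detection"}

@dataclass
class AmlSystemRecord:
    """One documented classification decision for a single system."""
    system_name: str
    function: str               # what the tool actually does
    rationale: str              # why the classification was reached
    provisional_high_risk: bool
    reviewed_on: date
    reviewer: str

def classify(system_name: str, function: str, rationale: str,
             reviewer: str) -> AmlSystemRecord:
    if function not in FUNCTIONS:
        raise ValueError(f"unknown function: {function}")
    # Crude rule of thumb for this sketch only: Annex III lists
    # creditworthiness scoring, and carves out fraud detection.
    high_risk = function == "creditworthiness_scoring"
    return AmlSystemRecord(system_name, function, rationale,
                           high_risk, date.today(), reviewer)

record = classify(
    system_name="screening-engine-v3",
    function="alert_ranking",
    rationale="Ranks existing alerts for analyst triage; no automated "
              "decision about individuals within the meaning of Annex III.",
    reviewer="Head of Financial Crime Compliance",
)
print(record)
```

The detail worth copying is not the code but the ordering: the function and rationale are recorded before the classification, which is exactly the sequence the Act rewards.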

FCA: Pro Innovation, but With Guardrails

The UK Financial Conduct Authority (FCA) is taking a familiar UK stance: promote innovation, rely on existing rules, and don't rush into AI only rulemaking. The FCA's AI materials state that it wants AI to be adopted safely and responsibly in UK financial markets, and its AI Update aligns with the UK government's cross sector principles on fairness, transparency and explainability, and accountability and governance.

What sets the FCA apart is the amount of testing infrastructure it is building around that message. The FCA now runs an AI Lab, a Supercharged Sandbox, and AI Live Testing. In public statements it says those tools exist to let companies experiment safely, with regulatory support and oversight, and to help the FCA understand how AI is actually being used in the market. The FCA has also made clear that firms can build AI services without waiting for a new FCA rulebook. For compliance teams, that signals a UK regulator more interested in outcomes, controls, and governance than in prescribing methods.

For AML teams, that translates into a familiar set of expectations. Can you explain how the system works? Can you monitor its outcomes? Is it clear who owns it and who is accountable? Is the model fair? Are human reviewers still genuinely in control where they need to be? The FCA is not exempting AI in AML from normal systems and controls requirements. It is treating AI as fully subject to those standards.
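As a sketch of what monitoring the outcomes might look like in practice, the snippet below assumes a hypothetical alert log in which each model generated alert carries a human reviewer's final disposition, and tracks monthly alert precision so drift surfaces early. The field names and the 50% review trigger are invented for illustration:

```python
from collections import defaultdict

# Hypothetical alert log: (month, model_score, human_disposition).
# "confirmed" means the reviewer agreed the alert was suspicious.
alerts = [
    ("2025-01", 0.91, "confirmed"),
    ("2025-01", 0.85, "dismissed"),
    ("2025-02", 0.88, "confirmed"),
    ("2025-02", 0.79, "dismissed"),
    ("2025-02", 0.95, "dismissed"),
]

by_month = defaultdict(lambda: {"confirmed": 0, "total": 0})
for month, _score, disposition in alerts:
    by_month[month]["total"] += 1
    if disposition == "confirmed":
        by_month[month]["confirmed"] += 1

for month in sorted(by_month):
    stats = by_month[month]
    precision = stats["confirmed"] / stats["total"]
    print(f"{month}: alert precision {precision:.0%} "
          f"({stats['confirmed']}/{stats['total']})")
    # A sustained drop is a governance signal, not just a model metric.
    if precision < 0.5:
        print(f"  -> review trigger: precision below 50% in {month}")
```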

NYDFS: The Bias Warning Is Not AML Specific, but It Still Matters

The New York Department of Financial Services (NYDFS) has not issued an AML specific AI rulebook. That's the first thing to keep in mind. The 2024 circular letter that keeps coming up in conversation is about AI and external consumer data in insurance. Not AML. Not directly. But it still matters, and it often gets read as broader than it is.

Because the scope is narrower than the warning.

NYDFS's message is clear. You can't blame the model. You can't blame the model's vendor. You can't blame the dataset. Even when an AI system produces a bad outcome, the company still owns it. The organization stays in charge, and its responsibility and accountability cannot be transferred to the AI.

Think about screening. Or transaction monitoring. Or customer risk scoring. These systems can drift. They can over flag. They can skew toward certain names, places, or proxies that look neutral on paper but aren't in practice. Sometimes the problem is bad training data. Sometimes it's poorly designed features. Sometimes the business simply didn't review things closely enough. Whatever the cause, the output starts to lean in ways the institution may not be able to explain.

That's the point. NYDFS is not handing AML teams a dedicated AI guide. It's doing something else: reminding regulated businesses that bias isn't just a technical issue. It's a compliance issue. A governance issue. Potentially a supervisory issue. Once the model starts influencing decisions, fairness stops being abstract. It becomes operational.
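One way fairness becomes operational is as a scheduled control. The sketch below compares screening flag rates across customer cohorts against the lowest observed rate; the cohort labels, counts, and the 1.5x disparity tolerance are all illustrative assumptions, not figures taken from NYDFS:

```python
# Hypothetical screening output per cohort: (flagged, total screened).
flag_counts = {
    "cohort_a": (42, 1_000),
    "cohort_b": (118, 1_000),
    "cohort_c": (39, 1_000),
}

rates = {c: flagged / total for c, (flagged, total) in flag_counts.items()}
baseline = min(rates.values())

DISPARITY_THRESHOLD = 1.5  # illustrative tolerance, not a regulatory figure

for cohort, rate in sorted(rates.items()):
    ratio = rate / baseline
    status = "REVIEW" if ratio > DISPARITY_THRESHOLD else "ok"
    print(f"{cohort}: flag rate {rate:.1%}, {ratio:.2f}x baseline [{status}]")

# Any cohort marked REVIEW needs a documented explanation: either a
# legitimate risk driver, or a proxy the institution has to fix.
```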

AML teams shouldn't file this circular under someone else's problem just because it came out of the insurance context. If you use AI to score alerts, screen names, segment customers, or support investigations, be ready for questions. Hard ones. About fairness. About testing. About who reviewed what, and when. NYDFS may not be writing AML AI rules here. But the signal is strong enough. The model can help. The company still owns the outcome.

MAS: Supportive on AI, but Very Big on Governance

The Monetary Authority of Singapore (MAS) has not put out a neat little "AI for AML" guide that tells compliance teams exactly what to do. Instead, MAS has built a broader framework for safe AI use in financial services and left companies to apply it to real situations like screening, monitoring, and risk scoring. The Fairness, Ethics, Accountability, and Transparency (FEAT) principles, first introduced in 2018, remain the starting point and still sit at the heart of MAS's thinking about AI. In effect, MAS says: use it, but make sure it is fair, explainable, and well governed.

That sounds vague until you see what MAS did next. It didn't stop at principles. MAS worked with the industry through the Veritas initiative to turn those principles into something companies could actually test against. Veritas was designed to help banks and other financial institutions evaluate AI and data analytics tools against the FEAT principles, not just endorse them at the policy level. Later MAS releases went further, noting that the Veritas toolkit could help companies and fintechs better assess their AI use cases. The message from MAS is more than "be responsible." It's closer to "show your work."

From an AML point of view, that matters more than it may first seem. Monitoring models, screening engines, customer risk tools, and alert triage systems all raise the same uncomfortable questions. Can you explain why the system flagged this customer? Can you show who approved the design? Can you show that someone is accountable when the output is wrong? MAS doesn't need an AML specific AI rule to make those questions matter. Its recent work on AI risk management shows the regulator still heading in the same direction: innovation is fine, but only if companies can govern it and defend it afterwards.
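Those three questions map naturally onto a per alert decision record. The JSON shape below is a hypothetical illustration, not a MAS prescribed schema; the point is only that "why was it flagged," "who approved the design," and "who owns the outcome" each get a field:

```python
import json
from datetime import datetime, timezone

def decision_record(customer_id: str, model_version: str,
                    top_reasons: list[str], design_approver: str,
                    accountable_owner: str) -> str:
    """Serialize one alert decision so it can be audited later.

    All field names are illustrative, not a prescribed schema.
    """
    record = {
        "customer_id": customer_id,
        "model_version": model_version,
        # Human readable reasons, e.g. derived from feature attributions.
        "top_reasons": top_reasons,
        "design_approved_by": design_approver,
        "accountable_owner": accountable_owner,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

print(decision_record(
    customer_id="C-10482",
    model_version="risk-model-2.3.1",
    top_reasons=["rapid movement of funds after account opening",
                 "counterparty in a higher-risk corridor"],
    design_approver="Model Risk Committee, June 2025 minutes",
    accountable_owner="Head of AML Operations",
))
```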

Seen from a compliance point of view, the takeaway is clear. MAS is not against AI. It is very clearly against "just trust the model." And honestly, that's probably the right call.

AUSTRAC: Open to AI, but Still Focused on Human Control

The Australian Transaction Reports and Analysis Centre (AUSTRAC) comes at AI from an unusual position: it is both a regulator and a financial intelligence agency. That changes its tone slightly. It doesn't just tell businesses what good governance should look like; it also explains how it intends to use AI itself. AUSTRAC's AI Transparency Statement commits the agency to being clear about how it uses AI now and plans to use it in the future, says AUSTRAC follows whole of government requirements on security, privacy, and accountability, and stresses that humans remain central to decision making. That last part is important.

The tone is supportive, but not loose. The statement says responsible AI use can help detect money laundering, terrorism financing, and other financial crimes. That's a meaningful endorsement; it signals to the market that AI is not a side project. But the statement also makes clear that governance and transparency are part of the deal. AUSTRAC isn't saying "use AI because it is powerful." It's saying "use it wisely and be ready to explain how."

For compliance teams, that lands in familiar territory. AI can be useful. It can improve detection. It can help with large review projects. But none of that lifts the legal burden off the institution. AUSTRAC's broader AML/CTF Rules and guidance still expect reporting entities to maintain their AML/CTF programs and meet all reporting and control obligations. Even as the tools get smarter, the responsibility doesn't move. The entity remains responsible for the decision, the control environment, and what happens when things go wrong.

That is probably the best way to read AUSTRAC right now. It is open to AI and thinks it's worth using, but it doesn't treat AI as a replacement for human judgment or institutional responsibility. More like an amplifier: useful, maybe very useful, but it still has to sit inside a system people can oversee.
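That idea, a system people can oversee, can be made literal in the workflow. The sketch below, with invented names throughout, gates model output behind an analyst decision so nothing moves toward reporting on model say-so alone:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    """A model suggested matter awaiting human review (hypothetical shape)."""
    matter_id: str
    model_score: float
    analyst_decision: Optional[str] = None  # "escalate", "dismiss", or None

def ready_to_report(candidate: Candidate) -> bool:
    # The model only nominates; a human decision is required to act.
    return candidate.analyst_decision == "escalate"

queue = [
    Candidate("M-001", 0.97),
    Candidate("M-002", 0.88, analyst_decision="dismiss"),
    Candidate("M-003", 0.91, analyst_decision="escalate"),
]

for c in queue:
    if ready_to_report(c):
        print(f"{c.matter_id}: escalated by analyst -> reporting workflow")
    elif c.analyst_decision is None:
        print(f"{c.matter_id}: awaiting human review (score {c.model_score})")
    else:
        print(f"{c.matter_id}: dismissed by analyst, decision logged")
```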

A Quick Comparison: What the Main Regulators Are Actually Saying

You can see the pattern best side by side. The table below is a short comparison of the main points made in official sources. In short: FATF is supportive but focused on explainability; FinCEN is technology neutral and open to innovation; the EU AI Act imposes formal obligations on systems classified as high risk; the FCA is pro innovation with guardrails; NYDFS is focused on bias and governance; MAS promotes responsible AI through its FEAT framework; and AUSTRAC has published its own AI transparency statement with a strong emphasis on human oversight, explainability, fairness, and documentation.

| Regulator | Position on AI in AML / financial crime | Key document | What matters most for compliance teams | Timeline / enforcement style |
|---|---|---|---|---|
| FATF | Encourages responsible use of new technologies, but warns against black box models | 2021 Opportunities and Challenges of New Technologies for AML/CFT | Explainability, transparency, human review, data quality, governance | Global standard setting, not direct enforcement |
| FinCEN | Technology neutral; encourages innovation and modernized AML/CFT programs | AML Act of 2020 framework, 2024 AML/CFT Program NPRM, FinCEN innovation page | Risk based design, responsible innovation, documented controls; no AI specific AML rulebook yet | U.S. supervisory/enforcement model through BSA/AML program expectations |
| EU AI Act | Important for AML teams, but AML AI is not automatically high risk | Regulation (EU) 2024/1689 | Classification analysis first; if high risk, expect risk management, data governance, logging, human oversight, robustness | Main date 2 Aug 2026; some high risk regulated product obligations from 2 Aug 2027 |
| FCA | Supports safe and responsible AI adoption; relies on existing frameworks and testing environments | FCA AI Update, AI Lab, Sandbox, AI Live Testing materials | Fairness, explainability, accountability, outcome monitoring, controlled experimentation | Outcome based and innovation friendly, with live testing support rather than AI only rules |
| NYDFS | Strong concern about bias, governance, and accountability in AI enabled financial decisions | 2024 AI / external data circular letter for insurance | Bias testing, governance, documentation, accountability | Sector specific, but influential beyond insurance |
| MAS | Promotes responsible AI through FEAT principles and related industry tools | FEAT principles, Veritas initiative | Fairness, Ethics, Accountability and Transparency in financial services AI | Principles and governance led, not AML specific rules |
| AUSTRAC | Publicly committed to responsible AI use, with human oversight and strong governance | AUSTRAC AI Transparency Statement | Human oversight, explainability, fairness, documentation, accountable official | AI governance statement and public transparency approach, not a dedicated AML AI rulebook for industry |

What This Means in Practice for AML Teams

It's easy to assume some regulator will eventually publish the single document that tells everyone what to do. That document doesn't exist yet. What exists instead is growing agreement on a few points.

One, don't let AI become a black box. FATF says it outright. The EU AI Act turns it into formal obligations where high risk systems are involved. The FCA and NYDFS frame it through transparency, fairness, explainability, and governance.

Two, keep humans accountable. Not token humans. Real people with enough authority, information, and oversight to challenge the outputs. FATF, the EU AI Act, the FCA's materials, and AUSTRAC's AI Transparency Statement all point the same way.

Three, treat data and model governance as compliance issues, not just engineering issues. Data quality, bias, logging, and auditability are now part of the regulatory conversation in almost every jurisdiction.

And four, don't wait for AI specific AML rules before building controls. FinCEN isn't waiting. The FCA isn't waiting. FATF isn't waiting. The expectation is already here: if you use AI in AML, you own the result.