Can AI Write Your Suspicious Activity Reports? AI-Generated SAR Narratives

Writing Suspicious Activity Report (SAR) narratives is one of the most time-consuming parts of Anti-Money Laundering (AML) work, but Artificial Intelligence is starting to change that. The appeal is easy to understand. Writing SARs takes time, usually falls to experienced analysts, and often slows down the final stage of a case. But the question isn't just whether AI can write well. The question is whether AI can help produce narratives that are still accurate, complete, regulator-ready, and safe to file.

This article talks about where AI can really help, where the risks start, and what compliance teams should keep a close eye on.

  • The SAR Bottleneck: Why Narrative Writing Is So Expensive
  • How AI Can Help: Pre-Filling, Drafting, and Quality Assurance
  • The Risks: Hallucination, Accuracy, and Regulatory Acceptance
  • What FinCEN Hasn’t Said (Yet)
  • Best Practices for AI-Assisted SAR Writing
  • So, Can AI Write Your SARs?

The SAR Bottleneck: Why Narrative Writing Is So Expensive

Everyone in AML knows that the deadline for filing is important. Everyone knows that the hard part is usually not filling out the form.

It is writing the story.

That's when the case has to turn into a story. One that is useful. A story that law enforcement can read and use. FinCEN has been very clear about this for years: The narrative is the only free-text part of the Suspicious Activity Report. It is where the filer has to explain the suspicious behavior clearly enough that investigators can understand what happened, who was involved, when it happened, where it happened, and why the institution found it suspicious. FinCEN's guidance is that a good SAR narrative should cover those "five essential elements" and make the activity clear without forcing the reader to guess at the logic.

It sounds easy until you actually do it.

Because the narrative isn't just a summary. It's usually doing several things at once. It has to make the transaction history readable. It needs to explain why the pattern is suspicious, not just mention it. It needs to document the investigative steps taken. It usually has to include the facts that made the institution suspicious in the first place. And then it has to line up with the structured fields in the SAR so that the whole filing makes sense. FinCEN's instructions for filing SARs electronically make clear that filings must be completed with as much accurate information as possible. The narrative guidance says the same thing in simpler terms: Weak narratives make for weaker investigations.

That's why narrative writing takes a lot of time for analysts.

Not because typing takes a long time. Because judgment takes a long time. A good narrative doesn't just list events. It picks out what matters. It puts things in order. It decides which facts belong in the report and which don't. It turns raw internal investigation work into a format that is clear enough for law enforcement and accurate enough for a regulator. That is one reason this work often goes to more experienced investigators or senior analysts. Even when junior teams handle the first-level review well, the final narrative usually needs someone who can see the whole case, not just the alert. That's not exactly a regulatory rule. It's more how things really work.

And that way of doing things costs a lot.

The 2025 paper Co-Investigator AI: The Rise of Agentic AI for Smarter, Trustworthy AML Compliance Narratives calls SAR generation a high-cost, low-scalability bottleneck. It also reports that the last steps of SAR preparation can take 25 to 315 minutes, depending on the case. That is a wide range, but it shows something important: The writing is not easy, and it gets harder quickly as the case gets more complicated. In a simple case, the narrative might just be a timeline with a pattern that looks suspicious. But in a messy case, the writer is pulling information from several sources, including multiple counterparties, outside intelligence, internal notes, and sometimes linked customer behavior over time. The "last step" of the case suddenly becomes a lot of work.

A lot of teams feel the pressure here. The queue may already be full. The investigators may already have too much to do. And the SAR narrative still needs to be written well. FinCEN's guidance makes clear that a timely filing matters, but so does a complete and adequate one. Those two things are connected. Speed without clarity isn't really a win. Clarity without timeliness isn't much better. So the problem sits right there, in that tension.

There is also a quieter problem. Narrative quality tends to vary a lot from team to team. One investigator writes clear, straightforward, evidence-based prose. Another writes something technically correct but hard to follow. A third piles in irrelevant detail that buries the suspicious pattern. The filing still goes in, but law enforcement gets different amounts of usable information depending on who wrote it. The paper notes exactly this: Differences in investigator experience and workload affect output quality. That matters because writing SARs isn't just a formality. It's the part of the case someone else has to read later, usually without background and under pressure.

So, the bottleneck is real. It's not only about the time the task takes. It's about whose time it takes: a skilled professional's.

How AI Can Help: Pre-Filling, Drafting, and Quality Assurance

"Let the model do the filing" is not the best way to think about AI in this case. It's more specific than that. More detailed. Honestly, more real.

AI can help in three ways that make operational sense.

The first is pre-filling structured fields. This is the least controversial use case and probably the easiest to defend. Case systems already hold a lot of the information that ends up in the SAR, such as dates, account numbers, transaction amounts, third-party/counterparty details, and some customer profile elements. Transferring those into the form automatically cuts down on manual copying and pasting, which is where mistakes often happen. FinCEN says that filers should include as much information as possible and that all required fields must be completed fully and accurately. If automation can move validated case data into the right fields with less friction and fewer errors, that is a straightforward win. It is basic operational hygiene.
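To make that concrete, here is a minimal Python sketch of pre-filling. The case record, field names, and validation rules are all illustrative assumptions, not FinCEN's actual schema; the point is that automation copies validated data and refuses to proceed when something is missing.

```python
from datetime import date

# Hypothetical case record from a case-management system.
# Field names are illustrative, not FinCEN's real schema.
case = {
    "subject_name": "Jane Doe",
    "account_number": "0012345678",
    "activity_start": date(2025, 1, 3),
    "activity_end": date(2025, 2, 14),
    "total_amount": 48200.00,
}

def prefill_sar_fields(case: dict) -> dict:
    """Copy validated case data into SAR form fields,
    failing loudly instead of pre-filling an incomplete form."""
    required = ["subject_name", "account_number",
                "activity_start", "activity_end", "total_amount"]
    missing = [f for f in required if case.get(f) in (None, "")]
    if missing:
        raise ValueError(f"Cannot pre-fill SAR; missing: {missing}")
    return {
        "Part I - Subject": case["subject_name"],
        "Account number": case["account_number"],
        "Date range of suspicious activity":
            f'{case["activity_start"]:%m/%d/%Y} - {case["activity_end"]:%m/%d/%Y}',
        "Total dollar amount": f'{case["total_amount"]:,.2f}',
    }

fields = prefill_sar_fields(case)  # "Total dollar amount" -> "48,200.00"
```

The design choice worth keeping even in a real system: validation happens before transfer, so a gap in the case record surfaces as an error, not as a blank field in a filed report.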

The second use case is writing the story itself. This is where Generative AI starts to look good.

The use case is easy to understand in general. There are already some facts from the investigation: Transactions, dates, account relationships, investigator notes, typology clues, maybe outside intelligence, maybe bad press, and maybe inconsistencies in customer profiles. A model can use that set of evidence to make a first draft. Not a final filing, but a first draft that can be used. That is the temptation, and in some cases it makes sense.

The 2025 Co-Investigator AI paper is useful because it doesn't treat this as one big prompt to a language model. It describes a more controlled setup. Specialized agents handle different parts of the process: planning, detecting the crime typology, gathering information from outside sources, checking compliance, and checking quality. That matters. A lot. Because writing a SAR is not one job. It is many tasks. One is getting the facts right. Another is deciding which suspicious pattern matters most. Another is checking that everything is there. Another is turning all of that into readable prose. Breaking those up is a much better idea than pretending one model should improvise the whole thing at once.
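That decomposition can be pictured as a small pipeline. The sketch below is an illustrative skeleton under assumed names (`plan`, `detect_typology`, and so on), not the paper's implementation; in a real system each step would wrap model calls, retrieval, and logging.

```python
# Illustrative skeleton of an agentic SAR-drafting pipeline, loosely
# modeled on the task split described in the Co-Investigator AI paper.
# Each "agent" is just a plain function here.

def plan(case_file: dict) -> list[str]:
    """Decide which sections the narrative needs."""
    return ["background", "activity summary", "why suspicious", "actions taken"]

def detect_typology(case_file: dict) -> str:
    """Label the dominant suspicious pattern (toy logic)."""
    return "structuring" if case_file.get("many_small_deposits") else "unknown"

def draft(case_file: dict, sections: list[str], typology: str) -> str:
    """Produce prose only from facts already in the case file."""
    return f"Narrative ({typology}): " + "; ".join(sections)

def compliance_check(narrative: str, sections: list[str]) -> list[str]:
    """Flag any planned section the draft failed to cover."""
    return [s for s in sections if s not in narrative]

def run_pipeline(case_file: dict) -> tuple[str, list[str]]:
    sections = plan(case_file)
    typology = detect_typology(case_file)
    narrative = draft(case_file, sections, typology)
    issues = compliance_check(narrative, sections)
    return narrative, issues  # a human reviews both before filing
```

The structural point survives the toy logic: each step has one responsibility and a checkable output, so failures are localized instead of buried inside one monolithic generation.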

And that's what makes the idea believable.

Not the fantasy that "the machine writes the SAR now." The idea is much more limited: AI can draft from existing evidence and do so in a structured way. A model can take a transaction history and summarize it in order. It can help lay out the "who, what, when, where, and why." It can suggest a clear structure. It can even keep the tone consistent across investigators. Those are real gains, especially when the queue is long and senior analysts are the ones stuck writing narratives.

The third use case is quality assurance, and this one might be the most useful of all.

A narrative can be smooth and still be weak. It can be true and still not tell the whole story. It can name the actors without saying why they matter. It can describe the transactions without saying what made the pattern look suspicious. FinCEN's common-errors guidance reads like a checklist of what a quality-control pass should catch: Missing key details, unclear timelines, vague descriptions of the suspicious behavior, and narratives that don't give law enforcement enough to act on. AI can help with this kind of review, especially if the model is checking against a structured set of evidence instead of generating new ideas.

That matters because quality assurance isn't just about making things look good. It has to do with being complete and consistent. Did the story talk about the suspicious behavior that the structured fields put into groups? Did it figure out who the main people were? Did it make the institution's reasoning clear? Did it leave out an important date, amount, or account? Did the story go off track from the real case file? Those questions aren't very interesting. They are very real. It's also expensive for people to ask them over and over.
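A rough sketch of what that automated QA pass might look like, assuming hypothetical case-data fields (`key_parties`, `amounts`, `activity_type`) and crude substring matching; the shape of the check matters more than the matching logic.

```python
# Hedged sketch: check that items from the structured case data
# actually appear in the draft narrative. Real systems would use
# entity resolution rather than substring matching.

def qa_narrative(narrative: str, case_data: dict) -> list[str]:
    """Flag structured-case items the draft narrative never mentions."""
    findings = []
    text = narrative.lower()
    for party in case_data.get("key_parties", []):
        if party.lower() not in text:
            findings.append(f"Key party not mentioned: {party}")
    for amount in case_data.get("amounts", []):
        if amount not in narrative:
            findings.append(f"Amount missing from narrative: {amount}")
    activity = case_data.get("activity_type", "")
    if activity and activity.lower() not in text:
        findings.append("Narrative never names the categorized activity type")
    return findings

draft = "Between January and March, John Smith routed $9,900 deposits through the account."
issues = qa_narrative(draft, {
    "key_parties": ["John Smith", "Acme Imports LLC"],
    "amounts": ["$9,900"],
    "activity_type": "structuring",
})
# issues flags the unmentioned party and the unnamed activity type
```

Notice the direction of the check: the structured evidence is the reference, and the prose is what gets audited, never the other way around.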

So it's not hard to see what the AI value proposition is here. Fill in the obvious fields ahead of time. Write a draft based on real evidence. Look for things that are missing or don't match up. Let the human analyst handle the filing.

This is the good side of the story.

The Risks: Hallucination, Accuracy, and Regulatory Acceptance

Now comes the hard part.

GenAI can write text that sounds real but is not true. That is the main risk.

The Co-Investigator AI paper notes that large language models can improve fluency, but they also suffer from factual hallucination, poor alignment with financial crime typologies, and weak explainability. That's annoying when the stakes are low. In a SAR, it's a big deal. A model that invents a transaction date, fabricates a customer link, misstates an account relationship, or quietly adds a fact that was never in the case file is not just being careless. It may be making a regulatory filing inaccurate.

That really matters.

If false information gets into the SAR, it can waste law enforcement time and resources. It can send an investigator in the wrong direction. It can change how the suspicious activity is understood. It can also raise questions about whether the institution's filing controls actually work. FinCEN's public guidance doesn't mention hallucination by name, but it leaves little room for that kind of mistake. SARs are expected to be complete, sufficient, and timely. FinCEN's broader public language about reporting also stresses accuracy. Against that standard, a fabricated detail is not a small writing error. It is a failure of the filing process.

There is another, less obvious risk here. GenAI can produce text that sounds authoritative even when the logic behind it is weak. That is dangerous because it changes how people review. A tired analyst may question a polished draft less. The more polished the writing, the easier it is to assume the facts behind it are true. This is why "human review" isn't enough as a phrase by itself. The human review must be grounded in facts and sources. Not in style. Not a quick read for tone.

This is why every fact the AI produces needs to be checked against the source data.

If the draft says the customer sent money to a linked account, the case file should show it. If it says the institution found a pattern inconsistent with the customer's stated business, KYC data and transaction history need to back that up. If it says the customer's activity involved a cluster of related counterparties, those counterparties should appear in the supporting evidence. The issue in a SAR is not only that the model may phrase something awkwardly. The issue is that it could say something wrong and say it well.
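One way to make that verification mechanical is to require every extracted claim to point at evidence records before a human reviews the draft. The sketch below assumes claim extraction has already happened (a hard problem in its own right) and uses invented identifiers like `txn-4417`.

```python
# Sketch: every factual claim in an AI draft must trace to at least
# one evidence record in the case file before the draft is reviewable.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    evidence_ids: list = field(default_factory=list)  # records the claim rests on

def untraceable_claims(claims: list, evidence_index: set) -> list:
    """Return claims with no supporting record in the case file."""
    return [c for c in claims
            if not c.evidence_ids
            or not all(e in evidence_index for e in c.evidence_ids)]

claims = [
    Claim("Customer wired $25,000 to a linked account on 2025-02-01", ["txn-4417"]),
    Claim("Counterparty is a shell company"),  # model asserted this; no source
]
bad = untraceable_claims(claims, {"txn-4417", "kyc-88"})
# bad contains only the unsourced shell-company claim
```

An unsourced claim doesn't get silently dropped or silently kept; it gets surfaced, and a human decides whether the evidence exists or the sentence goes.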

Then there's regulatory acceptance, which is a bit less clear.

FinCEN doesn't seem to have any official, public rules right now about AI-generated SAR narratives. That doesn't mean you can't use AI. It means that there isn't a special set of rules for it yet. The institution still meets the same old expectations. The story has to be finished. Correct. On time. Helpful. The first draft came from a model, but those standards don't change. Also, the duty to be correct doesn't change.

That last point is important because companies sometimes mix up what they can do with what they are allowed to do by law. They are not the same.

What FinCEN Has Not Said (Yet) About AI-Assisted SAR Writing

This is one of the most important parts of the whole conversation because it's where businesses can get into trouble.

FinCEN has pushed for responsible innovation in programs that fight money laundering and terrorism financing. The innovation page says it wants to encourage responsible new ideas in the financial services industry that help the goals of the Bank Secrecy Act, which was changed by the Anti-Money Laundering Act of 2020. It also says that new technologies and innovations in the private sector can help institutions make their compliance programs better and keep better records and reports. That is clearly helpful. It gives businesses the chance to update. It doesn't tell them to stay away from AI.

But that's not the same as advice on how to write SARs that were made by AI.

As of April 2026, FinCEN does not appear to have released an official public document explaining how institutions should use AI to write SARs, what controls an AI-assisted narrative workflow requires, or whether institutions must disclose that AI was used. The current public requirements still focus on the outcome, not on how the draft was produced. SARs must be complete, accurate, and timely. Narratives should make clear the who, what, when, where, and why. Filers should include all the information they have. That is the rule. FinCEN has not made a public statement that says, "This is the approved AI process."

That means two things at the same time.

First, there is room to use AI. FinCEN does not mandate a manual drafting process; nothing requires a person to write every sentence without software assistance. In fact, its broader approach to innovation points the other way. FinCEN wants to modernize where modernizing makes programs work better. The June 2024 AML/CFT program proposal fact sheet says that banks and other financial institutions should be able to update their programs with responsible innovation while still managing illicit finance risks. It also says that internal controls may include developing, testing, and implementing new approaches.

Second, and this is the most important point, the institution still owns the filing.

Nothing in FinCEN's public documents suggests otherwise. If the narrative is wrong, the filing institution is responsible. If the model adds information that isn't true, the institution still owns the output. If the writing is polished but incomplete, the institution can't hide behind the tool. FinCEN's language about innovation is encouraging, but it is not a safe harbor.

So the practical reading is cautious. Get help from AI. Use it to work faster and better. But don't mistake the absence of an explicit ban for regulatory approval of whatever workflow a company comes up with.

It isn't approval.

Best Practices for AI-Assisted SAR Writing

If an institution wants to use AI to write SARs without going too far, the rules have to be very clear.

First, check every fact against the source data. Not some of them. Not just the ones that "look important." Every factual claim. Dates, amounts, account numbers, counterparties, linked entities, customer identifiers, investigative steps, relationship descriptions, and statements about suspicious patterns must all be traceable back to the case record. What the narrative says, the evidence should support. That sounds strict. It has to be strict. FinCEN's rules don't allow for "approximately true" reporting; they require reliable filings.

Second, never file an AI-prepared draft without human review and approval. Not a quick look, and definitely not just a grammar check. The human analyst must compare the draft to the file, question the language and tone, confirm the information, and decide whether the final report accurately reflects the evidence. The Co-Investigator AI framework keeps humans in the loop for exactly this reason. That isn't a design weakness. It's a safety feature.

Third, document that AI was used in drafting. As of now, FinCEN does not require SARs to carry a line saying "AI helped with this narrative." But from an internal governance standpoint, organizations should know when AI was used in the workflow, where it was used, and who approved the result. If a later audit asks how the institution ensured a given report was complete and accurate, a clear record of the workflow is far safer and far more credible.

Fourth, don't let the narrative blur into the evidence chain. This one is underrated. A good draft can make a case look stronger than the evidence behind it. All case data, including source documents, transaction history, customer records, and investigator notes, should be preserved and accessible outside the narrative. The narrative is the filing. The evidence is the support. They should line up neatly, but they shouldn't blend together.

Fifth, add FinCEN's required key terms and field selections manually. This is not the place to be casual. FinCEN advisories often direct institutions to use specific suspicious-activity field selections and include specific key terms in the filing. A model shouldn't guess at those from style or context. A person who knows which advisory or typology framework applies should verify them carefully. A model can help organize the narrative. It shouldn't be trusted to work out the filing requirements on its own.
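A small sketch of the supporting control, using a placeholder term rather than any real advisory language; the point is that a human selects the terms and the software only verifies their presence.

```python
# Sketch: confirm that advisory-driven key terms, chosen by a human
# who knows which FinCEN advisory applies, actually appear in the
# narrative. The term below is a placeholder, not a real advisory term.

def missing_key_terms(narrative: str, required_terms: list) -> list:
    """Return required terms the narrative never mentions."""
    text = narrative.lower()
    return [t for t in required_terms if t.lower() not in text]

required = ["example advisory key term"]  # selected by a human, per advisory
gaps = missing_key_terms("Draft narrative text goes here.", required)
# gaps == required: the draft omitted the term, so it is flagged before filing
```

The model never populates `required`; it can only fail the check. That keeps the judgment call with the person and the drudgery with the software.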

Sixth, the human analyst should still be the one who files. This is the base for everything else. AI can fill out forms, write drafts, make suggestions, and check them. The person who is responsible for the content should still be a person. That is the safest place to be in terms of the law, business, and morality. It also fits with what FinCEN is saying in public right now: Responsible innovation is welcome, but the institution is still responsible.

AI can be helpful if a company follows those rules. Very helpful, in fact. If a company ignores them, it might save time for a while, but it will make the problem much worse later.

That's how these things usually go.

So, Can AI Write Your SARs?

Yes. But the most honest answer is probably this: AI can help write them, but you shouldn't leave it alone with them.

It can definitely help with parts of the work. Pre-filling structured fields with case data makes sense. Drafting a first narrative from verified evidence can make sense. Running a quality-assurance pass to find missing elements or consistency gaps makes sense. These are useful things to do. They match the real pain points in the SAR workflow. They cut repetitive work without pretending the filing is just a writing exercise. The Co-Investigator AI paper makes a strong case that a human-in-the-loop, agentic setup can improve speed and consistency while still allowing expert review.

AI shouldn't quietly become the author in everything but name.

That's where the risk shifts. A model is good at making a narrative flow, which is exactly why institutions need to be careful with it. Writing well is not the goal in the SAR context. The goal is prose that is correct, defensible, and useful to regulators. Those are not the same thing. A model can help bring the first closer. The second still needs human checking and judgment.

FinCEN has said, clearly and repeatedly, that SARs must be complete, accurate, and timely, and that innovation is welcome as long as it strengthens AML/CFT programs. Put those together and the practical answer is clear. If AI helps prepare your SAR, let it write the draft. But a human reviewer should remain responsible and accountable, and every fact should be checked against the source record before anything is filed.

That isn't very exciting. It doesn't look very futuristic either.

But it is probably the right thing to do.