Executive Summary. Baran Ozkan explains how AI-native systems, false-positive reduction, and workflow clarity are redefining how institutions scale regulated operations without losing audit defensibility.
Financial crime compliance is moving from rule-heavy oversight to operational infrastructure. As fintech and banking systems scale in complexity, institutions are being forced to rethink how monitoring, investigations, and audit readiness function in real time.
Baran Ozkan, co-founder and CEO of Flagright, works at the intersection of transaction monitoring architecture, AI-native compliance tooling, and operational workflow design. His perspective comes from building inside high-growth financial systems where alert volume, investigation speed, and regulatory defensibility collide.
This conversation focuses on operational clarity, false-positive reduction, AI-native compliance systems, and what it takes to scale regulated infrastructure without losing audit discipline.
AITJ: Baran, what was the tipping point that inspired you to build Flagright and how did your past roles at TransferGo and Forto shape that vision?
The tipping point was a long, frustrating search for a real-time, risk-based transaction monitoring solution when I was leading product at TransferGo. I was close to the reality of how compliance teams operate, especially when a business is scaling fast across corridors and customer segments. What I saw was a huge gap between what legacy vendors promised and what actually worked in production. Integrations were slow, tooling was rigid, alert quality was poor, and when teams needed to change something quickly, they often could not without lengthy vendor cycles. That is when it clicked for me that compliance tooling was holding fintechs back, not because teams did not care, but because the infrastructure was not designed for modern financial products.
TransferGo shaped my view of the problem from a fintech operator perspective. If you are moving money for real customers, you cannot afford brittle systems that create massive queues of low quality alerts. You need controls that can adapt as your product changes while still staying defensible for regulators and partners. That tension between growth and control is permanent, and I wanted to build a platform that lets companies scale without feeling like compliance is a constant drag on the business.
Forto shaped my view from a systems perspective. In logistics, you learn quickly that real world operations are messy and data is never perfect. You are building real time systems that have to stay reliable under pressure, and you need observability, clean data models, and clear workflows if you want decisions to be repeatable. That experience reinforced a core belief I carried into Flagright: the answer is not more rules or more alerts. The answer is operational clarity and an architecture that makes good decisions possible at scale.
AITJ: The 180 page Financial Crime Compliance Handbook you released is ambitious. What is one insight in that handbook you believe could fundamentally shift how compliance teams operate today?
One insight I keep coming back to is that most compliance programs do not fail because teams do not know the rules. They fail because the operating system behind the program is weak. Teams may have policies, training, and a set of controls, but they lack the practical workflows, evidence trails, and feedback loops that make the program resilient day to day.
In the handbook we tried to translate regulation into operations. That means being explicit about what a good investigation looks like, what information should be captured, how decisions are documented, how quality assurance is run, and how reporting is prepared in a repeatable way. When teams treat compliance as an operating system rather than a checklist, two things happen. First, they reduce noise, because controls become measurable and tunable. Second, they become audit ready by design, because evidence is produced naturally through the workflow rather than retroactively assembled when an audit is looming.
If I had to summarize it in one sentence, it is this: compliance maturity is less about knowing more regulations and more about building a system that produces consistent decisions and defensible evidence at real world speed.
Compliance maturity is less about knowing more regulations and more about building a system that produces consistent decisions and defensible evidence at real world speed.
Baran Ozkan
AITJ: You often speak about operational clarity over regulatory overload. How do you define operational clarity, and why is it missing in most fincrime strategies?
Operational clarity means a compliance team can answer a few simple questions at any point in time without guesswork.
What risks are we actually seeing in our business right now?
Which controls are firing, and why?
What is the quality of those alerts?
How long does it take to investigate and resolve them?
Where are the bottlenecks?
What changed this week, and what impact did it have?
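The questions above lend themselves to direct measurement. As an illustration only (the record fields and values here are hypothetical, not Flagright's data model), a minimal Python sketch of the kind of per-rule alert quality and resolution metrics a team with operational clarity would track:

```python
from statistics import mean

cases = [  # illustrative investigation records, not real data
    {"rule": "large-transfer", "true_positive": False, "hours_to_resolve": 4},
    {"rule": "large-transfer", "true_positive": True,  "hours_to_resolve": 9},
    {"rule": "velocity",       "true_positive": False, "hours_to_resolve": 2},
]

def alert_quality(cases):
    """Share of alerts per rule that turned out to be real risk: the
    measurable signal the questions above are asking for."""
    by_rule = {}
    for c in cases:
        by_rule.setdefault(c["rule"], []).append(c["true_positive"])
    return {rule: sum(hits) / len(hits) for rule, hits in by_rule.items()}

print(alert_quality(cases))  # {'large-transfer': 0.5, 'velocity': 0.0}
print(mean(c["hours_to_resolve"] for c in cases))  # mean hours to resolve: 5
```

With numbers like these in hand, a rule firing mostly on noise becomes a tuning target rather than a source of emotional reaction to volume.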
When a team has operational clarity, they are not reacting emotionally to volume. They can see signal and trend. They can defend why they made decisions. They can tune controls based on evidence, not intuition.
It is missing because the industry has historically optimized for regulatory coverage, not operational performance. Many stacks are fragmented, with one tool for monitoring, another for investigations, another for reporting, and a lot of manual glue in between. Data gets siloed. Rules get changed without good testing. Models get deployed without transparent reasoning. The result is a program that may look fine on paper, but feels chaotic in practice.
My view is that the best compliance teams operate like high performing product and engineering teams. They measure, they iterate, they test on real data, and they continuously improve. Operational clarity is what allows that mindset to work in compliance.
AITJ: Can you walk us through the journey of helping a client reduce their false positive rate, and what made that transformation possible?
Let me ground this in a public example. In one client deployment, false positives dropped from 99.1 percent to 15.3 percent, a drop of 83.8 percentage points, and investigation time fell by more than half. What made that transformation possible was not a single magic model. It was a combination of data, workflow, and control design working together.
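It is worth being precise about those two figures, since "percentage points" and "percent reduction" are often conflated. A quick worked calculation:

```python
# The quoted false-positive rates: 99.1% before, 15.3% after.
before, after = 0.991, 0.153

absolute_drop = before - after             # drop in the rate itself
relative_drop = (before - after) / before  # fraction of the original rate

print(f"{absolute_drop * 100:.1f} percentage points")   # 83.8 percentage points
print(f"{relative_drop * 100:.1f}% relative reduction") # 84.6% relative reduction
```

Either framing is dramatic, but they answer different questions: how much the rate fell, versus what share of the original noise was eliminated.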
First, we focused on controllability. The client needed to be able to change detection logic without waiting weeks for vendor intervention. They were operating across multiple jurisdictions, and when fraud patterns shift, time matters. We gave them the ability to create and modify custom rules independently, and to test configurations on live production data without impacting real alerts. That ability to experiment safely is a big part of how teams reduce noise without increasing risk.
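The "test on live data without impacting real alerts" pattern is often called shadow-mode evaluation. The sketch below is not Flagright's actual API; it is a minimal Python illustration of the idea, with hypothetical rule and transaction shapes:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]  # fires when a transaction matches
    shadow: bool = False               # shadow hits are logged, never alerted

def evaluate(rules: list, txn: dict):
    """Run every rule against one transaction. Live hits become alerts;
    shadow hits are only recorded, so candidate logic can be measured on
    real traffic without touching the analyst queue."""
    live_alerts, shadow_hits = [], []
    for rule in rules:
        if rule.predicate(txn):
            (shadow_hits if rule.shadow else live_alerts).append(rule.name)
    return live_alerts, shadow_hits

rules = [
    Rule("large-transfer", lambda t: t["amount"] > 10_000),
    # candidate logic under test, invisible to analysts
    Rule("new-corridor-velocity",
         lambda t: t["amount"] > 3_000 and t["corridor"] == "new",
         shadow=True),
]
live, shadow = evaluate(rules, {"amount": 5_000, "corridor": "new"})
# live == [], shadow == ["new-corridor-velocity"]
```

Once a shadow rule's hit quality is measured and acceptable, flipping `shadow` to `False` promotes it to production, which is what makes safe experimentation routine rather than risky.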
Second, we focused on transparency. If an alert fires, an analyst needs to understand the reason quickly. In this case, the client had experience with proprietary systems that behaved like a black box, which made it hard to justify decisions. By making triggers and context clearer, analysts can work faster and quality improves because decisions are easier to validate.
Third, we focused on workflow efficiency. When teams have to jump between systems to understand one case, investigation times inflate and quality suffers. Unifying monitoring and case management helps analysts move through cases with less friction and more context, which directly improves throughput and consistency.
What this says about legacy systems is that many are built around vendor control and static assumptions. They often make change expensive and slow, and they produce alert volume that looks like coverage but does not translate into real risk reduction. When teams reclaim control, measure outcomes, and iterate on live data, performance improves fast.
AITJ: Flagright positions itself as an AI native solution for compliance. How do you differentiate AI native from AI enabled, and what advantages does that deliver?
To me, AI enabled often means AI has been bolted onto an existing product. You might have a chatbot interface, a model that runs as an optional feature, or a workflow that still depends on manual work for most outcomes. The foundation remains the same, and AI is a layer on top.
AI native means AI is designed into the system from the beginning. It affects how data is structured, how decisions are made, how workflows are automated, and how outputs are governed. It also means the system is built to learn from investigations, because the feedback loop is where the value compounds.
The real world advantage is not theoretical. When AI is native, it can reduce low value manual work in investigations, surface context that analysts would otherwise spend time assembling, and help teams produce consistent narratives and evidence trails. It also allows teams to scale without simply hiring more people to clear alerts. In compliance, scale without losing defensibility is the whole game.
In compliance, scale without losing defensibility is the whole game.
Baran Ozkan
AITJ: With the handbook covering regional breakdowns across the EU, US, APAC, Middle East, and Africa, what was the most surprising nuance you encountered?
The most surprising nuance was how much of the real complexity comes from differences in operational expectations, not from the headline regulations. Many regions share the same broad goals, and global standards influence them, but the way teams are expected to operationalize those goals varies.
For example, reporting regimes can differ in what is required, how information is structured, and the internal evidence teams need to retain to defend the decision. Data privacy expectations also create very practical design constraints, especially when you are operating across regions and need to balance information sharing with customer protections.
What that taught me is that the winning approach is not to build a separate compliance program for every region from scratch. It is to build a core operating model that is consistent, and then make the regional differences explicit in workflows, documentation, and control configuration. That is what makes scaling possible without turning compliance into a patchwork.
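The "core operating model with explicit regional differences" idea can be sketched in code. Everything below is illustrative: the field names, retention values, and config shape are placeholders, not real regulatory parameters or Flagright internals.

```python
def prepare_report(case: dict, region: str, config: dict) -> dict:
    """Illustrative only: one shared reporting workflow, with regional
    differences (required fields, retention) looked up as configuration
    instead of forked into separate per-region programs."""
    regional = config[region]
    return {
        "case_id": case["id"],
        "narrative": case["narrative"],
        "fields": [f for f in regional["required_fields"] if f in case],
        "retain_until_year": case["year"] + regional["retention_years"],
    }

CONFIG = {  # placeholder values, not real regulatory parameters
    "EU":   {"required_fields": ["amount", "counterparty"], "retention_years": 5},
    "APAC": {"required_fields": ["amount"], "retention_years": 7},
}

report = prepare_report(
    {"id": "c1", "narrative": "documented decision", "amount": 100, "year": 2024},
    "EU", CONFIG,
)
# report["fields"] == ["amount"]; report["retain_until_year"] == 2029
```

The design point is that adding a region means adding a config entry and reviewing the deltas, not rebuilding the workflow, which is how the patchwork problem is avoided.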
AITJ: From your perspective as both a builder and an investor, what signals do you look for in startups solving complex regulated problems?
The first signal is respect for the domain. In regulated industries, the product is not just a feature set, it is a promise of reliability and accountability. Founders need to understand the operational reality of the teams they serve, not just the regulation in abstract.
The second signal is evidence orientation. I look for teams that can explain how their solution will be measured in production, what outcomes they will improve, and what tradeoffs they are making. In compliance, there is no room for vague claims. You have to show how you reduce noise, improve detection, and strengthen audit readiness.
The third signal is trust maturity. Security posture, governance, and responsible deployment are not nice to have. They are the product. If a team treats those as an afterthought, they will struggle to win serious customers.
Finally, I like founders who are patient about distribution. In regulated markets, trust and adoption compound over time, and the best founders build for the long term rather than chasing short term hype.
AITJ: Flagright emphasizes a no code, API first platform. What were the toughest challenges in maintaining simplicity without sacrificing depth?
The hardest challenge is that simplicity and depth often pull in opposite directions. Compliance teams need expressive power because real risk is nuanced. At the same time, if the system feels like a complex engineering tool, you lose the people who have to operate it every day.
Technically, building a no code scenario builder that is powerful and safe is hard. You need guardrails so teams do not accidentally create controls that explode alert volume or create gaps. You need strong testing capability so changes can be validated before they go live. You need performance so the system can score activity quickly, and you need observability so teams can understand how changes affect outcomes.
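One concrete form such a guardrail can take is a pre-deployment backtest that rejects a proposed rule if it would flood analysts. This is a hypothetical sketch of that check, not Flagright's implementation; the threshold and transaction shape are assumptions:

```python
def backtest_alert_rate(predicate, historical_txns, max_alert_rate=0.05):
    """Hypothetical guardrail: replay a proposed rule over historical
    transactions and reject it if the projected alert rate would
    explode analyst queues. Returns (accepted, projected_rate)."""
    hits = sum(1 for t in historical_txns if predicate(t))
    rate = hits / len(historical_txns)
    return rate <= max_alert_rate, rate

history = [{"amount": a} for a in (50, 120, 9_500, 80, 15_000, 60, 70, 90, 40, 30)]
ok, rate = backtest_alert_rate(lambda t: t["amount"] > 10_000, history)
# one hit in ten transactions: projected rate 0.1, rejected at a 5% ceiling
```

A complementary guardrail would flag the opposite failure, a change that silences a rule entirely, since a projected rate of zero can signal a coverage gap rather than an improvement.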
Philosophically, you have to be willing to say no to complexity that does not create real value. A lot of enterprise software adds features that create options, not outcomes. We try to keep the mental model clear. A good control should be understandable, testable, measurable, and defensible.
AITJ: If you had to future proof compliance for the next 10 years, what three innovations would you bet on?
First, continuous controls with simulation and measurement. Teams will increasingly test changes on real data before deploying them, measure impact on false positives and true risk detection, and treat compliance as a living system.
Second, privacy preserving collaboration across institutions. Financial crime is networked, and defense needs to become more networked. Better ways to share typologies, signals, and patterns without exposing sensitive data will raise the baseline for the whole ecosystem.
Third, AI assisted operations with strong governance. AI will help investigations, triage, documentation, and quality assurance, but only if institutions can explain decisions, monitor performance, and keep accountability with humans. The winners will combine AI capability with audit readiness by design.
AITJ: What personal philosophies or leadership principles do you return to when facing ambiguity or resistance?
I come back to three principles.
Be unreasonably customer obsessed. If I am unsure what to do, I go talk to the people living the problem. Reality is the best strategy document.
Take responsibility for security, accuracy, and outcomes. In compliance, trust is fragile. If we ship something, we own it, including the hard edge cases.
Move with urgency, but do not confuse speed with recklessness. I believe in iteration and action, but also in building the evidence and governance that make fast systems safe.
When I face resistance, I try to separate signal from noise. If multiple clients and practitioners say the same thing, that is signal. If the pushback is based on fear of change rather than data, we acknowledge it but we do not let it drive the roadmap.