AI Tiptoes Into BFSI, Cautiously

Newspoint

After disrupting the consumer app landscape, artificial intelligence is now seeking to penetrate India’s banking and financial services industry (BFSI). However, the progress is measured. Here, AI assists, and humans run the show. It is the same script across India’s BFSI ecosystem — AI is being layered onto existing systems rather than embedded at the core.

From fraud prevention and risk scoring to document verification and customer servicing, the technology is enhancing efficiency without replacing human judgement. In an industry built on trust, capital protection and regulatory oversight, no algorithm is allowed to override human judgement.

While fraud detection models run quietly in the background, document processing is automated, and customer queries are resolved faster than ever, humans are still the anchors of every decision when it comes to moving money or approving credit.

However, this does not mean innovation is being discouraged in core finance operations. The industry simply expects explainable, auditable and accountable stacks.

Founders, bankers and infrastructure providers agree on one principle: data foundations, governance frameworks and oversight mechanisms must mature before AI is allowed deeper into decision-grade roles.


Now, before explaining why such caution is being exercised, it is important to understand the current state of AI in core financial systems.


AI Guides, But Humans Govern With An Iron Fist

In the payments and fintech segments, AI is widely used in fraud detection, anomaly tracking and risk scoring. But autonomy remains limited.

Siddharth Mehta, cofounder of Kiwi, said, “We see AI as an enabler, not an autopilot.” Kiwi relies on layered checks, multiple data signals and escalation mechanisms beyond automated systems.

Model accuracy, false positives and behavioural drift are continuously monitored. “AI should quietly improve safety in the background, not create anxiety for users,” Mehta said.

He also indicated that the reluctance to give AI full control stems from the asymmetric risk in payments. A wrongly blocked transaction can immediately damage customer trust, while a missed fraud case carries regulatory implications.
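The layered-checks pattern Mehta describes can be sketched as a simple routing rule: clearly safe transactions pass, clearly fraudulent ones are blocked, and the ambiguous middle band escalates to a human reviewer rather than triggering an automatic block. This is a minimal illustrative sketch; the thresholds and function names are assumptions for the example, not Kiwi's actual system.

```python
def route_transaction(risk_score: float,
                      low: float = 0.2,
                      high: float = 0.9) -> str:
    """Route a fraud-scored transaction: approve, block, or escalate.

    The wide middle band reflects the asymmetric risk described above:
    a wrong auto-block damages customer trust, so ambiguous cases go
    to a human instead of being decided by the model alone.
    """
    if risk_score < low:
        return "approve"            # clearly safe: no friction for the user
    if risk_score > high:
        return "block"              # clearly fraudulent: stop immediately
    return "escalate_to_human"      # ambiguous: a person makes the call
```

In practice, the `low` and `high` thresholds would themselves be monitored and retuned as model accuracy, false-positive rates and behavioural drift are tracked.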

This philosophy is visible across the ecosystem.

AI is being deployed in document processing, customer assistance, fraud prevention and internal productivity. Core money movement and credit-level decisions remain under structured oversight.

At large banks, enabling AI inside regulated systems begins with restructuring how legacy data is organised and governed.

At Axis Bank, this has meant building curated data layers, introducing semantic frameworks and knowledge graphs to make structured banking data AI-readable, and operationalising privacy guardrails from the outset.

Prasad Lad, executive vice president & head of Business Intelligence Unit at Axis Bank, said access to legacy data requires disciplined transformation rather than experimentation.

“Data must be organised into curated layers and made AI-readable… What traditionally required a multi-year transformation can now be achieved in 12-15 months through a disciplined approach that leverages AI with human-in-the-loop oversight.”

Axis Bank’s AI governance framework is aligned around trust in data handling, defined accountability, privacy by design, inclusivity across technologies, human oversight, auditability, ethical use, and operational resilience. These principles are embedded across approval workflows, cybersecurity systems, fraud management, and technology resilience processes.

Rohit Mahajan, the founder and managing partner at Plutos ONE, said even though his venture has transitioned into an AI-first organisation, “innovation is always anchored in governance”.

He added that AI systems must remain secure, auditable, and regulator-ready from day one. The current phase across BFSI is about readiness rather than replacement, he said.

Why Are Fintech Players Cautious?

Financial services operate within strict regulatory guardrails. Institutions must navigate both established financial regulations and evolving AI governance frameworks.

Reeju Datta, cofounder of Cashfree Payments, said, “The objective is not simply faster adoption, but responsible deployment that strengthens product value and sustains customer trust.”

He argued that the deeper bottleneck is ecosystem maturity.

Datta suggests that the challenge is not algorithmic capability but the absence of a mature, trusted infrastructure. In his view, fragmented data systems and limited interoperability between institutions make enterprise-wide AI deployment difficult.

Until shared benchmarks for fairness, accountability and model validation stabilise, institutions will remain cautious about embedding AI into high-impact financial decisions.


Fragmented financial data, limited interoperability across legacy systems and evolving standards for fairness and accountability slow down enterprise-wide deployment. In such an environment, high-impact AI systems are introduced cautiously.

From a payments ecosystem perspective, Manas Mishra, the chief product officer at PayU, said compliance cannot be an afterthought in AI-led financial infrastructure. Mishra pointed to the company’s fully compliant, user-in-the-loop AI payments solution for hosted checkouts on AI platforms as an example of how AI deployment can remain auditable while supporting merchant innovation.

Saurabh Vijayvergia, the founder and CEO of CoverSure, believes the biggest structural barrier to AI scaling in BFSI is not the technology itself, but the surrounding ecosystem, comprising legacy core systems, weak data foundations, governance uncertainty and organisational inertia.

He noted that many institutions still operate on fragmented infrastructure, where data lacks consistent standards and ownership. “Without a clean and unified data architecture, AI scalability remains limited.”

For Vijayvergia, compliance cannot be retrofitted, and AI integration must treat governance as a design principle and not the final checkpoint.

What Will It Take For AI To Move Deeper?

If AI is to move into decision-grade roles in core banking, structural gaps must close.

According to Maaz Ansari, the cofounder at Oriserve, infrastructure and data silos are primary barriers. He said that most banks and financial institutions still operate on decades-old core systems that are not designed for seamless AI integration.

He added that data across disconnected platforms makes standardisation and large-scale deployment complex and costly. Even where pilots succeed in collections or lead qualification, enterprise scaling remains constrained by integration and data protection concerns.

Another emerging concern he shared was model sovereignty.

Many global AI systems are not fully aligned with local data protection requirements or sector-specific compliance standards. Until financial institutions gain confidence in domestically governed or regulator-vetted AI stacks, large-scale autonomy in core systems will remain a distant dream.

Regulatory clarity will also need to shape the next phase.

At Axis Bank, every AI use case is evaluated against a structured five-point framework that covers governance compliance, performance benchmarks, user validation, operational efficiency gains, and measurable financial value.

Lad explained that governance approval precedes development.

“Any process changes require formal approval from relevant assurance functions such as operational risk, compliance, information security, and the data privacy office before detailed development begins.”

However, beyond compliance, solutions must demonstrate accuracy and low latency, win trust in controlled user environments, improve turnaround times and first-time-right metrics, and deliver net financial value after accounting for model and infrastructure costs. “Only then can deployment expand in scope.”

Lad also argues that while data, regulatory baselines, and technology maturity are advancing, deeper AI adoption depends on mindset shifts, continuous reskilling, and fintech partnerships that help institutions leverage existing capabilities more effectively.

Datta opines that structured sandboxes and clearer model governance frameworks could accelerate responsible innovation.

Similarly, Mahajan calls for clearer guidance around AI accountability — for example: a principle-based AI governance framework and model explainability standards — arguing that innovation and consumer trust must evolve together.

Across stakeholders, the conclusion is consistent. BFSI is not resisting AI. It is sequencing it.

Until data frameworks mature, governance standards stabilise and regulators define clearer boundaries for AI autonomy, deployment will remain layered rather than deeply embedded.

Top Stories From India & Around The World
  • TechM Launches Hindi-First LLM: In collaboration with NVIDIA under Project Indus, Tech Mahindra unveiled an 8 Bn parameter Hindi-first LLM to democratise AI-powered education and enable autonomous Hindi-speaking agents.
  • RIL To Invest ₹10 Lakh Cr In AI: Reliance Industries’ chairman Mukesh Ambani announced ₹10 Lakh Cr worth of AI investments via Jio Platforms over the next seven years, unveiling Jio Intelligence, green data centres and an OpenAI partnership to power AI-driven streaming on JioHotstar.
  • G42, C-DAC To Deploy Exaflop Supercomputers: The Centre for Development of Advanced Computing has finalised a term sheet with UAE’s G42 to deploy a supercomputer cluster in India. The supercomputer cluster will be open for access to startups as well as other public and private sector entities for research, application development, and commercial use.
  • Activate, NVIDIA To Back Early AI Startups: Early stage VC fund Activate has teamed up with NVIDIA to provide funded startups access to Nemotron open-source models, GPU infrastructure, training resources and ecosystem support.
  • Anthropic, Infosys Join Hands: Anthropic has partnered with Infosys to build AI agents for telecom, financial services and other regulated sectors, combining Claude models with Infosys Topaz to enable compliant, enterprise-grade AI deployment.
  • India Joins Pax Silica AI Supply Chain Pact: India has signed the US-led Pax Silica Declaration to strengthen semiconductor and AI supply chain security, alongside a new India-US AI Opportunity Partnership to boost compute access and R&D.
The Weekly Buzz: Claude Sends Stocks Tumbling, Again!

AI’s next frontier isn’t just code; it’s the market itself.

Last week, Anthropic unveiled Claude Code Security, a new capability within Claude Code that scans entire codebases for hidden vulnerabilities and suggests patches the way a human security researcher would.

Designed as a defender-first tool in a limited preview for enterprise and team customers (with fast-track access for open-source maintainers), it goes far beyond traditional rule-based scanners by reasoning about data flows and component interactions.

But the launch triggered something bigger than product chatter. As word spread, cybersecurity stocks slid sharply on Friday — with CrowdStrike, Cloudflare, Datadog and Fortinet among the names in the red — as investors fretted that AI-native tools could encroach on niches long dominated by specialised cyber vendors.

The market reaction echoed the broader “SaaSpocalypse” narrative. AI advancements (especially from Anthropic and peers) are increasingly being priced as disruptive to traditional software business models, not just incremental tools.

Meanwhile, social media buzz reflected a more mixed reality. Defenders celebrated the promise of AI-powered vulnerability discovery, while critics warned the same capabilities could empower attackers if misused.

A product announcement, still in research preview, became a market event that shook confidence in legacy cybersecurity stocks, underscoring how quickly AI innovation can ripple beyond tech demos into broader financial and strategic anxieties.

Startup In The Spotlight: Bolna

Enterprise voice automation in India has largely revolved around either black-box solutions or stitched-together tools that struggle to scale beyond pilots.

While businesses experiment with AI callers for sales, onboarding or collections, many hit bottlenecks around latency, multilingual accuracy and orchestration across speech-to-text, LLMs and telephony stacks. Most deployments still require heavy technical support, limiting true self-serve adoption.

Bolna is built to address this gap. Founded in early 2024 by IIT Delhi alumni Maitreya Wagh and Prateek Sachan, the Y Combinator-backed startup provides a voice AI orchestration platform that allows enterprises to build and deploy production-grade AI calling agents in minutes, without writing code.

Rather than pushing a single model stack, Bolna integrates multiple speech-to-text, LLM and text-to-speech providers, enabling businesses to switch models through a simple dashboard as performance benchmarks evolve.
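The provider-agnostic design described above can be sketched as a pipeline object whose speech-to-text, LLM and text-to-speech layers are swappable independently. This is a hypothetical illustration of the orchestration idea, not Bolna's actual API; the class, field and provider names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VoicePipeline:
    """One AI calling agent's stack: each layer names a pluggable provider."""
    stt: str   # speech-to-text provider
    llm: str   # language model provider
    tts: str   # text-to-speech provider

    def swap(self, **providers: str) -> "VoicePipeline":
        """Return a new pipeline with one or more layers replaced."""
        return VoicePipeline(**{**self.__dict__, **providers})

# Initial stack, then switch only the LLM layer as benchmarks evolve.
pipeline = VoicePipeline(stt="stt_provider_a", llm="llm_a", tts="tts_a")
upgraded = pipeline.swap(llm="llm_b")
```

Keeping each layer behind a uniform interface is what lets a dashboard toggle replace a model without touching the telephony or speech layers.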

The company focuses on solving hard, India-specific challenges such as multilingual switching, noisy call environments and regional accuracy gaps. It is also investing in niche ML layers like improved Bengali ASR and vernacular noise detection to unlock revenue opportunities.

Bolna operates on a per-minute pricing model and has scaled to roughly $60K in monthly revenue, serving over 50 clients across ecommerce, BFSI, recruitment and emerging use cases like matchmaking and SME export onboarding.

As voice AI adoption accelerates, Bolna is positioning itself not as an application-layer bot company, but as the underlying orchestration infrastructure powering the next generation of enterprise AI callers.

Prompt Of The Week

What prompts and hacks are CTOs, CEOs and cofounders using these days to streamline their work?

Here’s Jayanth Neelakanta, CEO of Equip, on using Claude Code with HubSpot MCP as his daily sales co-pilot:








  • The post AI Tip Toes Into BFSI, Cautiously appeared first on Inc42 Media.