Innovation and trust will steer our AI future: Credo AI CEO
As artificial intelligence (AI) adoption accelerates across industries, AI governance is emerging as a boardroom priority rather than a compliance afterthought. Navrina Singh, founder and chief executive of US-headquartered Credo AI, believes the future of AI leadership will be defined not just by innovation, but by trust.
She told ET on the sidelines of the India AI Impact Summit that Credo AI has entered into a strategic partnership with Abu Dhabi-based G42 to operationalise responsible AI across the Global South. The collaboration aims to embed governance frameworks into AI systems deployed at national and enterprise scale, with a focus on agentic AI, literacy and tooling.

For organisations just starting out, Singh defines AI governance in practical terms as the systems, processes and frameworks that enable companies to develop and deploy AI in a trusted and effective manner. It includes surfacing risks, recommending controls, documenting compliance and aligning to both internal policies and global standards.
“Trusted use of AI comes from aligning to internal policies and externally recognised standards,” she said, adding that governance allows businesses to maximise benefits while reducing legal, reputational and operational risks.
Contrary to common apprehensions, Singh argued that ethical AI does not stifle innovation. “Responsible AI actually unlocks more sustainable innovation,” she said. Companies that understand how AI systems are used, where decisions occur and who interacts with them are better positioned to manage risks and scale confidently, according to her.
Spending on off-the-shelf AI governance software is projected to quadruple between 2024 and 2030, reaching $15.8 billion at a 30% compound annual growth rate, reflecting growing enterprise demand for structured oversight.
Founded in 2020 at the start of the Covid-19 pandemic, Credo AI has raised about $50 million and works with Fortune 500 companies across healthcare, financial services, government and retail. Singh said the inflection point came after the launch of generative AI tools like ChatGPT, which turned AI governance into a board-level priority.
“The risk surface area drastically changed,” she said. The rise of shadow AI use, increased third-party AI procurement and embedding AI into core business functions forced enterprises to seek structured oversight.
Rather than engaging Credo AI solely to prepare for evolving regulations such as the EU AI Act, Singh said enterprises sought governance because they were already struggling to explain, defend or scale AI systems responsibly.
Three drivers consistently pushed adoption: rapid uptake of third-party AI systems, deployment of AI into business-critical use cases rather than pilots, and the need for long-term accountability as systems evolved.
ISO as the global backbone
With regulatory frameworks diverging across jurisdictions, from Europe’s AI Act to sectoral approaches in the US and guidance-driven models in India, Singh advises companies to anchor themselves in interoperable standards rather than wait for legal harmonisation.
She highlighted emerging ISO/IEC standards such as ISO/IEC 42001 (AI management systems) and related lifecycle and conformity frameworks as becoming the common governance language across borders. These standards, she said, provide repeatable structures for evidence generation, audit readiness and assurance.
“Governance cannot wait for legal harmonisation,” Singh said. “In a fragmented regulatory world, trust becomes a strategic asset.”
Preparing for the agentic AI era
While much of current regulation focuses on model outputs, Singh believes the next frontier lies in agentic AI: systems that autonomously plan, call tools and execute tasks.
“In agentic systems, risks scale through autonomy, multi-step workflows and tool access,” she said. Governance must therefore extend beyond transparency and documentation to enforce boundaries, permissions, identity management and continuous monitoring.
India’s strategic role
Credo AI is set to establish its first major overseas presence in Bengaluru, reflecting what Singh described as the “huge significance” of the Indian market.
Unlike Western markets focused on building ever-larger models, India’s ecosystem is prioritising contextual, multilingual and demographically inclusive AI systems, she said. With 1.4 billion citizens and rapidly expanding AI adoption, trust must be foundational.
“You can’t deliver technology here that doesn’t work for all,” Singh said. “It has to work for a farmer in a village, a nurse in Mumbai and a researcher at IIT Madras.”
Measuring responsible AI
Singh argued that ethical impact should be measured through embedded governance structures rather than standalone documentation. Clear ownership, consistent evidence and shared understanding of risk enable faster deployment.
According to Credo AI, clients using its platform in 2025 reported 70% faster AI use-case reviews, a 60% reduction in manual compliance effort and a threefold improvement in executive-level risk reporting.
Yet a recent joint report with the International Association of Privacy Professionals found that while 98% of organisations expect employees to assume AI governance responsibilities within a year, only 8% are hiring dedicated staff, highlighting a widening skills gap.
As AI systems become more autonomous and globally interconnected, Singh maintained that governance will determine whether AI’s promise translates into sustained societal and economic value.
“AI will not be held back by technology,” she said. “What will determine whether AI scales is whether organisations, governments and citizens have confidence in it.”
In her view, the winners in the AI race will not be those who move fastest alone, but those who move fastest with trust built into the foundation.