Experts see opening for IT in Anthropic's big COBOL code jolt

As IBM made a steady recovery after the stock’s historic plunge on February 23, IT industry experts emphasised that AI startup Anthropic’s advances in reading and modernising legacy COBOL code are more likely to unlock opportunities for businesses than to threaten them.

Legacy code has been built upon for more than five decades, creating a ‘spaghetti’ structure that is hardly conducive to a plug-and-play AI model-as-a-solution. So, even as pricing per line of code shrinks, the scope of work will increase for tech services firms as enterprises make this transition, the experts said, adding that rewriting COBOL code entirely could unlock a $1.6 trillion opportunity.

“Our sense is that there is $3 trillion worth of tech debt sitting over there…60% of it is in mainframes,” said Ravi Vasantraj, global delivery head at Mphasis. “Even if you were to convert 10% of it, you are talking about $300 billion.”

The real question, he said, is not whether AI can convert code, but how complex that conversion will be. “The difference is between the what and the how,” he added.

COBOL systems are rarely standalone. “The code base is not just a code base; it's a spaghetti of surrounding systems which is coming together to give an end outcome,” Vasantraj explained, pointing to decades of regulatory patches, regional modifications, and application layers built on top.

Automated agents trained for months can’t simply be “allowed to go on duty” without oversight, especially in sensitive use cases such as credit card billing. “The consequence of getting it wrong is very high,” he said.

On Monday, a single blog post from Anthropic erased nearly $30 billion from IBM’s market value. Anthropic said its AI model Claude can read and modernise COBOL, the decades-old programming language that still runs 95% of global ATM transactions and critical banking, airline and government systems.

Industry experts, however, believe markets were too quick to react.

Nithin Seth, CEO at AI transformation company Incedo, cautioned against premature conclusions. “It’s too soon to write obituaries,” he said. While frontier models have made rapid progress, “a promise of what Claude is able to do… is not necessarily a reality that has been achieved.”

What remains critical is contextualisation. “Whatever requires enterprise-level data, enterprise-level specificity…will still need to be built on top of the LLM,” said Seth.

Vikash Jain, managing director and senior partner at Boston Consulting Group, said one of the biggest hurdles in legacy transformation is lost “tribal knowledge.”

“It was not documented…there is a high level of perceived risk in making changes,” he said. Describing legacy systems as a “spaghetti ball,” he said pulling a single thread risks unravelling everything else.

AI is compressing timelines and improving quality, Jain said. “This is actually one of the opportunities out there. It’s an old opportunity which had slower execution because of the risk and the business case, which is now becoming attractive.”

Ankur Dhingra, CEO at workforce management company ProHance, pointed out that AI agents aren’t operating independently. “It’s not that simple. It has to get rewritten; it has to run in parallel. There has to be a lot of A/B testing, and then it has to move to production… at scale,” he said.

(With contribution from Himanshi Lohchab)