Beyond Sovereign Models: The AI Software Layer India Should Build

Most of India's sovereign AI conversation has centered on foundation models.

That focus is understandable. Models are visible. They make for clear milestones. They are also necessary. India should build strong foundation models, and the ecosystem support around them is a good thing.

But model building is not the full story.

I come from the application side. I came into data centers almost accidentally because, when we started Jarvis Labs, not many people in India were working on GPU infrastructure for developers. From that vantage point, the question that keeps bothering me is simple: when India succeeds in building strong models, how do we actually use them?

How do we deploy them? How do we serve them at scale? How does a company that does not have a deep AI infra team use these models in production? Who builds the inference layer, the search layer, the agent tooling, and the developer platform around them?

Training a model is one thing. Turning it into a reliable product that companies can use every day is another thing entirely.

That is the layer I think we should focus on next.

The Layer Above the Model

When people talk about AI, they often talk as if the model is the product. Sometimes it is. Most of the time, it is not.

For a model to become useful, a lot of software has to exist around it.

Inference is one example. A model sitting on a GPU is not the same thing as an API that can serve thousands or millions of users reliably. The inference layer has to handle latency, batching, scaling, failures, cost, and developer experience. Companies like OpenAI and Anthropic have spent years improving this layer. It is not a small problem.
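To make one of those problems concrete, here is a minimal sketch of request micro-batching, the basic trick serving layers use to keep a GPU busy: collect requests that arrive close together and run them through the model as one call. The names (`BatchingQueue`, `fake_model`) are illustrative, not any particular server's API, and the model call is a stand-in.

```python
import threading
import queue

def fake_model(prompts):
    # Stand-in for a real batched model call; batching amortizes per-call overhead.
    return [p.upper() for p in prompts]

class BatchingQueue:
    """Collect incoming requests and run them through the model in batches."""

    def __init__(self, max_batch=8, max_wait=0.01):
        self.requests = queue.Queue()
        self.max_batch = max_batch
        self.max_wait = max_wait

    def submit(self, prompt):
        # Each caller gets a slot it can wait on for its result.
        slot = {"prompt": prompt, "done": threading.Event(), "result": None}
        self.requests.put(slot)
        return slot

    def serve_once(self):
        # Pull up to max_batch requests, waiting briefly for stragglers.
        batch = [self.requests.get()]
        try:
            while len(batch) < self.max_batch:
                batch.append(self.requests.get(timeout=self.max_wait))
        except queue.Empty:
            pass
        results = fake_model([s["prompt"] for s in batch])
        for slot, result in zip(batch, results):
            slot["result"] = result
            slot["done"].set()

q = BatchingQueue()
slots = [q.submit(p) for p in ["hello", "world"]]
q.serve_once()
print([s["result"] for s in slots])  # both requests answered by one model call
```

Real inference servers layer continuous batching, KV-cache management, and failure handling on top of this idea, which is why the layer takes years to get right.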

Search is another example. As agents become more useful, they need a way to access fresh information from the web and from other sources. A human can open a browser, read ten pages, and decide what matters. An agent needs a different interface. It needs search as an API. It needs information in a form that software can consume.
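"Search as an API" can be sketched in a few lines: the agent receives structured fields it can filter and rank in code, instead of a page rendered for human eyes. The response shape below is hypothetical, not any real provider's schema.

```python
import json

# A hypothetical search-API response: structured fields, not a web page.
raw = json.dumps({
    "query": "inference engines",
    "results": [
        {"url": "https://example.com/a", "title": "Paged attention explained",
         "snippet": "How modern servers manage the KV cache...",
         "published": "2024-06-01"},
        {"url": "https://example.com/b", "title": "Serving LLMs",
         "snippet": "A survey of inference engines...",
         "published": "2023-01-15"},
    ],
})

def freshest(response_json, k=1):
    """An agent can rank and filter results in software -- no browser needed."""
    results = json.loads(response_json)["results"]
    return sorted(results, key=lambda r: r["published"], reverse=True)[:k]

top = freshest(raw)
print(top[0]["title"])  # the most recent result
```

The hard parts, of course, are everything behind that interface: crawling, freshness, ranking, and serving it reliably at API prices.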

Then there is agent tooling and the developer platform around all of this. Most companies will not want to stitch together GPUs, model servers, inference engines, search APIs, monitoring, and workflow tools by themselves. They will want platforms that make AI usable.

These are not separate from the AI opportunity. They are the AI opportunity.

If Indian companies only build models and rent compute, but the product and platform layer is built elsewhere, most of the value will be captured elsewhere.

The Lesson From IT Services

The framing I keep coming back to is this: Indian organizations need to create IP and solve these problems so that we do not become cheap infrastructure for the world, the way we became cheap labor in IT services.

I want to be careful with that analogy, because it can be misunderstood.

IT services was not a bad business. In dollar terms, it was often a very good business. Billing in dollars and paying in rupees made the economics work for Indian companies for decades. I am not against services.

The problem is what we did not build enough of.

When an Indian services company builds an optimized system for a foreign customer, the service company gets paid. Sometimes it gets paid well. But the customer usually owns the IP. The customer owns the product. The customer captures the value for years.

The Indian company built the thing and walked away with the invoice.

That is the part I worry about repeating in AI.

If we become the place where data centers are hosted, where compute is rented, where models are trained for someone else, but the products and platforms are built elsewhere, we will again be doing important work while someone else owns the highest-value layer.

The goal should not be only to participate in AI. The goal should be to move up the value ladder.

We should build products for the world.

Build for the World

There is one place where India-first can make a lot of sense: models for Indian languages.

Training a model that works well across Indian languages is a hard technical problem. The data is harder. The scripts are harder. The distribution across languages is harder. In that sense, Indian companies working on Indian-language models are solving a genuinely difficult problem.

But the same logic should not automatically apply to every part of the AI stack.

Software is often language-agnostic.

Take search. The crawler, the ranking system, the API design, the reliability layer, the pricing model, and the developer experience are not fundamentally different just because the user is in India. The same is true for inference platforms and developer tooling. These are global software problems.

So the market should be global.

If we build a search API only for Indian languages, the market becomes small very quickly. The same Indian customer can still use global alternatives. The global player can still compete in India. But the Indian company has limited itself before the fight has even started.

For language-specific models, India-first can be a strength.

For language-agnostic software, global-first is the better strategy.

This is not just a market-size argument. It is also a quality argument. If you build for the world, you cannot hide behind local context. The product has to be good. The API has to be reliable. The documentation has to be clear. The pricing has to make sense. The developer experience has to compete with the best options available anywhere.

That is a higher bar, but it is also how we build higher-value companies.

What I See From Jarvis Labs

At Jarvis Labs, we sit close to developers who are trying to use AI in real work. That has shaped how I think about this.

One recent example: I was working on marketing for Jarvis Labs, and I did not want AI to give me generic advice. I wanted it to understand the kind of marketing thinking I cared about. So I took some of the best marketing videos I could find, converted the transcripts into structured data, and created context that an AI system could use.

If I had used a hosted coding assistant or chat product for all of that processing, I would have burned through my subscription very quickly. Instead, I spun up an instance on Jarvis Labs, used Qwen, and processed the data myself. The cost was a few hundred rupees.
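The transcript-to-context step above can be sketched roughly as below. This is a simplified, pure-Python illustration: the model call is replaced by a placeholder (the real pipeline ran Qwen on a rented GPU instance), and all function names are mine, not a library's.

```python
def chunk_transcript(text, max_words=50):
    """Split a raw transcript into fixed-size chunks a model can process."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize(chunk):
    # Placeholder for a local model call (Qwen on a rented GPU, in the
    # real pipeline); here we just truncate at a word boundary.
    return chunk[:60].rsplit(" ", 1)[0]

def build_context(transcript):
    # Structured records an AI system can retrieve over later.
    return [{"chunk_id": i, "text": c, "summary": summarize(c)}
            for i, c in enumerate(chunk_transcript(transcript))]

records = build_context("word " * 120)
print(len(records))  # 120 words -> 3 chunks of up to 50 words
```

Running thousands of such chunks through a hosted assistant gets expensive fast; on a rented GPU with an open model, the marginal cost per chunk is close to the electricity and instance time.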

For me, that was manageable because I understand the stack.

But most users will not want to think about GPUs, model serving, batch jobs, token costs, or deployment. They will want a product. They will want an API. They will want a platform.

That is the opportunity.

The same is true for inference. There are teams that can rent a GPU, run vLLM, expose an endpoint, and manage scaling. But most companies do not want to do that. They want the model as a service. They want something they can call from their product.
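For readers unfamiliar with that workflow, "run vLLM and expose an endpoint" looks roughly like this. The model name and port are illustrative, and this is the easy part; the gap between this and a production service is exactly the opportunity being described.

```shell
# Start an OpenAI-compatible server on a rented GPU (model name illustrative).
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000

# Any product can now call it like a hosted API:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-7B-Instruct",
       "messages": [{"role": "user", "content": "Summarize this transcript"}]}'
```

Everything a business actually needs on top of this (autoscaling, monitoring, billing, uptime guarantees) is what "model as a service" means.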

Search is similar. Agents will need fresh information. They will need search interfaces that are built for software, not just humans.

These are not abstract policy categories for me. These are things I see developers needing.

No Easy Advantage

I do not want to pretend this is easy.

Indian teams do not automatically have an unfair advantage just because they are Indian. In fact, we have many disadvantages. Global competitors may have more capital, better distribution, stronger networks, and easier access to early customers.

I have seen this directly.

When we were building Jarvis Labs, investors would ask: how will you compete with AWS or GCP? What if they reduce prices? What if they make this easier?

Those were fair questions.

At the same time, large incumbents do not solve every problem. Sometimes they are too broad. Sometimes they are too expensive. Sometimes they do not care enough about the developer workflow that a smaller company can obsess over.

The one structural advantage Indian founders do have is runway.

An engineer in the Bay Area may need around $100,000 a year to live. In a city like Coimbatore, a founder or engineer can live decently on a fraction of that. That difference matters. It gives Indian teams more time to iterate, survive, and compound.

But runway only matters if we use it to build ambitious products.

If we use the cost advantage only to become cheaper service providers, we repeat the old pattern. If we use it to build global software products, it becomes a real advantage.

What We Should Build

The immediate opportunity I see is not to build more Indian versions of every US SaaS company.

The opportunity is to pick AI software layers where the market is global and the need is growing: inference, search, agent tooling, and developer platforms.

These are areas where Indian teams can build from India and sell to the world. They do not need to be limited to Indian customers. They do not need to be positioned as cheaper copies. They should be excellent products in their own right.

This also changes how we should think about policy and ecosystem support.

Support for compute is useful. Support for models is useful. But we should also support the companies building the software layer that turns AI capability into usable products. Not because models are unimportant. Because models become more valuable when a strong ecosystem exists around them.

The point is not to criticize what is already being built.

The point is to expand the ambition.

Moving Up the Ladder

India has the talent to build serious AI software companies.

But we have to choose the layer carefully.

If we only provide services, we get paid for effort. If we only provide infrastructure, we take on capital risk. If we build products and platforms, we create IP. We own more of the value we create.

That is the direction I want more Indian founders to think about.

Build models, yes. Build compute, yes. But also build the software layer that makes AI usable. Build inference platforms. Build search APIs. Build agent tooling. Build developer platforms.

And build them for the world.

That is how we move up the ladder.