I Was Naive to Start Jarvis Labs

Work in progress

I was naive to start Jarvis Labs.

I had left my job. I wanted to start a company. I was reading Paul Graham's essays on how to find startup ideas, especially the advice that you should work on the edge of something, face problems before other people face them, and solve them.

I turned right and saw my workstation.

That workstation was one or two years old. I had used it heavily for Kaggle, but even then it was not enough. During competitions, I wanted to run more experiments, faster. I had also tried to order a server from Lambda Labs in the US. The server itself was expensive, customs made it more expensive, and if anything broke, shipping it back would have been painful.

In India in 2019, even getting a proper multi-GPU server was not straightforward. Local shops could help with single-GPU workstations, but multi-GPU machines were a different thing. Even if I bought one, where would I keep it? A four-GPU server could draw serious power. Backup was hard. Power cuts were real.

People in the US could buy such machines and keep them somewhere. In India, for many researchers, Kagglers, and deep learning people, that was much harder.

So the thought became: is there a business around it?

Could we build something like Lambda Labs in India? Lambda Labs had not entered the cloud market yet.

That is how Jarvis Labs started. Not with a polished deck. Not with deep infrastructure experience. Mostly with naive conviction, my own pain, and a belief that more people in the world would need GPUs for deep learning.

The First Version Broke Before It Began

In the beginning, we explored gaming GPUs like the 2080 Ti. But the first real Jarvis Labs platform was planned around 16 RTX 5000 GPUs and 8 RTX 6000 GPUs.

It was my first business. Even "we" was not very clearly defined then. It was me and one or two friends, and we were figuring out company structure, co-founder roles, and everything else as we went.

We started looking for people who could sell us GPUs. We did not even know a term like "national distributor" existed. We could buy from retail shops in Bangalore or from Amazon, but prices were much higher than the USD prices we saw online. Pricing was not transparent, so we wanted to buy as high up the chain as possible.

Eventually we found RP Tech. Osborne from RP Tech replied to us and supported us. Alok and Jay supported Jarvis Labs in a great way throughout the journey. We were very small. They were a much larger business. But they said they liked supporting startups, helped us understand what the servers would need, and were very accommodating when we missed things.

Then we hit the next problem. If we bought from distributors, we had to buy in bulk. What if we could not sell the GPUs? If we bought retail, the economics did not work.

The cloud idea started becoming more attractive. I was not even sure I was happy calling it a cloud at that time. The idea was simple: create a product where people could rent GPUs at a lower price than companies based in the Bay Area.

The math worked out in Excel, not in the real world.

The first public version was almost embarrassingly simple. It was a website with two boxes: images of two server or workstation builds. We were deciding what case to use, what a one-GPU workload should look like, what a four-GPU workload should look like, and what should go inside. Even then, we were thinking about a small software layer around it.

Looking back, that small software layer was an early clue. I thought we were selling access to machines. The real work was already moving toward something else.

We wanted 10 or 20 GPUs at first, but only three or four were available then. In hindsight, it was good that GPUs were not available.

Soon after, NVIDIA made a policy change around gaming GPUs in data centers. Universities had some exception, but otherwise gaming GPUs could not be used in a data center. We had a choice: follow the rules or break them. We chose to follow the rules.

That decision hurt the economics for the next few years.

The model we had in mind was built around cheaper gaming GPUs hosted in a cloud-like way. That broke. Some of the GPUs we had bought later went to universities for free or to friends.

Then COVID hit.

We had requests to buy servers. One IIT asked for six or eight servers. That was our first RFP experience. We learned that payment would come only after delivery, sometimes two or three months later. We learned that service obligations meant being on-site within 24 or 48 hours if anything went wrong. Professors negotiated hard. Margins were a joke.

This was my first real lesson in hardware business in India.

Even when people wanted servers, we could not deliver because shipments were blocked. That experience left a scar. We dropped the idea of selling servers and jumped fully into cloud.

The Data Center We Built Ourselves

We ordered servers thinking they would arrive in three months. They took more than six or seven months.

The original plan was to put them in a data center in Hyderabad. Internet blogs made hosting servers in the US look simple. I assumed India would be similar.

It was nothing like that.

One Hyderabad data center quoted around Rs 5,000 per server, and the Excel math looked good again. Then we shared the server specs. They said the power requirement was too high and they could not host it. A server with multiple V100 or RTX 5000-class GPUs could need close to four kilowatts. Rack power density was not enough.

Tata and NTT quoted around Rs 1.5 lakhs per server per month. That was probably the profit we would make by selling one server. The economics did not work.

I remembered seeing servers at my father's bank when the bank was being computerized. There was a room, AC running all the time, and a generator. I thought this should be doable independently.

That thought was also naive.

We started looking for space near Coimbatore. First, we considered a residential house. The broker said he would handle power. We waited for a month and realized it would not happen.

Then we saw a warehouse. It was industrial, so power was not a problem, but it was too far and too large. It was not a place where we could work.

Then we looked at a studio-like place. The owner was desperate to rent it out, and we were tired after all the searching.

One rainy day, while returning home, we accidentally saw a "tolet" board in a commercial building. We called the number. The owner came after a few hours and showed us a hall of around 2,500 square feet.

It was just a big empty hall with pillars.

It was too big for us. For the next three or four years, we ended up playing cricket, badminton, PlayStation, and many other things there. I have very good memories in that place.

An early server rack in the Jarvis Labs office

One of the first racks in the empty Coimbatore office.

The owner was a visionary in his own way. There was a bank in the building, so he had a transformer, and the bank added extra security. Years earlier he had worked hard to get a committed load of around 100 kilowatts. The floor had only around 7 or 8 kilowatts available, but he promised to upgrade it to around 40 kilowatts.

We said we would take the place if he could assure that.

He did it in about 15 days.

Then we had to make an empty hall run GPU servers.

Putting together the early Jarvis Labs rack

A lot of the early work was physical before it became software.

We split the office into two parts: one for people to sit and work, and one for servers. We knew vaguely that we needed a UPS and a generator. We called suppliers for an 80 kilowatt UPS and learned about modular UPS systems. We started with 20 kilowatts and bought a generator.

Probably none of us had seen such a large generator up close before.

Electrical wiring was another scare. We got quotations from vendors, but none seemed to have real experience with what we needed. My father's friend, who used to run an ice cream factory, referred us to a senior electrical engineer who had built electrical systems for large companies. He agreed to help and his team did the wiring.

All this was costing us a bomb before we even made our first rupee.

Jarvis Labs started in 2019. COVID consumed the first several months. By the end of 2020, we moved from Bangalore to Coimbatore. The servers had arrived earlier, but they were stuck in Bangalore because Tamil Nadu required an e-pass for travel during COVID, and we could not get it for months.

I don't know how we survived even for six years. It was a very hard thing.

The First Users

Some money came from one of my friends. I did not take a salary for the next five years.

Friends who joined were kind enough to skip raises and take home only the salary they needed to survive in the first few years.

When I tell the Jarvis Labs story, it is easy for the story to sound like one founder pushing through one problem after another. That would not be true.

Poonam, my life partner, played an important role in Jarvis Labs and in my ability to keep going through it. A company like this does not only take money and time from the founder. It takes emotional space from the people closest to the founder. There were many years where the company came first, salary did not come, uncertainty was normal, and every new problem felt like it could become existential. Poonam lived through that with me.

Selva Kumar and Vishnu Kumar also played a significant role in building Jarvis Labs. They were not just people who worked on a startup for some time. They carried the company through the hard years with me, when the work was not glamorous and the rewards were not obvious. When I say "we" in this story, it includes them in a very real way.

Poonam and the Jarvis Labs team with family in the Coimbatore office

Poonam, family, and the early Jarvis Labs team in the Coimbatore office.

Jarvis Labs went live in January 2021.

The first $10 or $20 probably came from a customer in Finland. He was a pilot who had lost his job during COVID and was trying to learn AI through fast.ai. He stayed with Jarvis Labs for quite some time before joining as a pilot again.

That is still one of my favorite memories. We were sitting in Coimbatore, with our own small server setup, and the first customer was someone in Finland trying to rebuild his life by learning AI.

The early Jarvis Labs team in the Coimbatore lab

The early Jarvis Labs team in the Coimbatore lab.

fast.ai played a huge role in my life.

I learned deep learning, problem solving, breaking problems into smaller pieces, and tenacity from fast.ai and Jeremy Howard. In the early fast.ai course, Jeremy encouraged students to participate in Kaggle competitions and spend five minutes a day from the start of a competition to the end.

I took that advice literally.

In my first Kaggle competition, I ended up near the top but was disqualified. I had used my partner's account to submit because I was confident I would never rank high. I was just excited to participate. I usually try to be ethical, but I was careless because I assumed there was no way I would do well.

I was close to gold, around 12th or 13th when 10 or 11 places got gold, before being kicked off the leaderboard.

That taught me two things.

When you keep hitting a problem every day with all the might you have, you have no idea where you can reach.

And stick to ethics. Don't abandon them carelessly.

Because I was a fast.ai student, I could post in the community about Jarvis Labs and why we started it, and people doing deep learning and Kaggle experiments started using our GPUs through JupyterLab. fast.ai and Kaggle were the first places where I could talk about Jarvis Labs. Twitter did not work much then. Very few people knew me. Reddit was brutal.

Later, Jarvis Labs became one of the recommended platforms in fast.ai. That gave us another boost.

People loved the product because we listened. Larger companies were not always friendly to small users. They did not care if one student or one Kaggle user had a problem. We cared.

The first version was simple. We launched a container server. When someone paused it, we saved the container. When someone resumed it, we started it again.

It was good for users, but the architecture was fragile. If a user resumed and the original server was occupied, we had to copy the data from that server to another server with available GPUs. This worked early on and broke later with scale.
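The pause/resume flow above can be sketched with plain Docker commands. This is a hedged illustration of the general pattern, not the actual Jarvis Labs tooling; the registry address and names are made up:

```shell
# Pause: snapshot the user's container as an image, then free the GPU.
docker stop user123-instance
docker commit user123-instance registry.example.com/jarvis/user123:paused
docker rm user123-instance

# If the original server is occupied on resume, the snapshot has to move:
docker push registry.example.com/jarvis/user123:paused   # from the old host
docker pull registry.example.com/jarvis/user123:paused   # on a host with free GPUs

# Resume on whichever server has capacity.
docker run -d --gpus all --name user123-instance \
    registry.example.com/jarvis/user123:paused
```

One reason this pattern breaks at scale is visible in the sketch: `docker commit` captures only the container filesystem, so any data on volumes or large datasets must be copied between hosts separately, and that copy step grows with the number of users and the size of their work.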

I was the only person on the team who really understood data science and deep learning when we started. I had to keep my laptop next to me while sleeping. Whenever someone pinged, I would wake up and respond. I also personally messaged several people asking them to try Jarvis Labs. I started understanding what cold emails are, how brutal the response rate is, and why being shameless matters.

We had customers in the US, Europe, and countries we had probably never heard of. AI was not as famous as it is today. People were figuring things out, and we were figuring things out with them.

The Customer That Nearly Broke Us

After launch, we almost ran out of money in February or March 2021.

We needed money to pay salaries and rent. I was close to making a tough call, maybe taking a loan from my father.

Then we got an educational customer.

They had been using a platform built on AWS and spending a lot. They had around 700 students learning deep learning, and a new cohort every month. The students would all come to use the platform, especially near assignment deadlines. Jarvis Labs had never been stress-tested at that level.

The students had different workloads. They would spin up PyTorch instances quickly, stop them quickly, and expect everything to work. Many of them were still learning deep learning, so they could not always tell whether something was a platform issue or their own misunderstanding.

There was a lot of support. One bug could make things go haywire because so many users were active at the same time. We had to fix issues live while everything was hot.

We cursed ourselves for accepting this.

But after a few weeks, the platform became much more stable. Most bugs got fixed. We became more mature in handling customers, especially sensitive customers under pressure.

Inside the company, it became a joke. Whenever we faced a tough customer, we would tell ourselves, "We faced those folks, so we should be able to face anything."

That customer also helped money start flowing. Around $5,000 to $6,000 per month was enough for us to run the show.

It was the first time Jarvis Labs felt less like a fragile experiment and more like a real company.

Hardware, Software, Platform

For the first one or two years, everything was hard.

If I had known how complicated this would become, I probably would never have started. That is the strange thing about naivety. It was the reason we made so many mistakes, but it was also the reason Jarvis Labs existed at all.

Hardware was a nightmare. Maintaining a generator meant filling diesel in the rain and making sure we had enough diesel for a full-day power cut. Once the UPS went down and all servers shut down for some time.

Internet was unreliable. I originally imagined internet would cost something like Rs 5,000 for one gigabit. In reality, we were paying Rs 2 lakhs for one connection, and we still did not reliably get what was promised. We needed public IPs, which meant leased lines, and those are very different from home connections.

Hiring was also hard. We did not have the luxury of hiring people easily. Strong candidates were getting hired by companies like Weights & Biases, Hugging Face, and bigger companies. We did not have the charisma or financial muscle to hire such people.

I talked to many customers and helped many people. Jarvis Labs was not the biggest platform, but people loved using it because we were always there for them, regardless of time zone.

At first, I thought I was building a product.

While building it, I realized there is something called a platform, and what we were building was actually a platform, not a product.

The user wanted a simple experience: start a GPU instance, open JupyterLab, train a model, pause it, resume it, and trust that the work would still be there.

To make that feel simple, we had to solve hardware procurement, power, cooling, networking, containers, storage, billing, support, and reliability. None of those problems stayed separate. Everything touched everything else.

The AI boom after 2022 made this even clearer.

Demand increased. But GPUs became harder and more expensive to buy. The first A100s we bought were around Rs 6.5 lakhs. Later, the next A100s became hard to get and cost around Rs 13 to 14 lakhs. When H100s came, the price was much more absurd, around Rs 30 to 35 lakhs.

It became clear that without raising funds, we could not keep buying infrastructure.

Our own infrastructure also started showing signs of failure as we added more servers. The old resume architecture, where we copied data between servers, could not scale.

By then, decision fatigue had started to affect me. We had been running this for three or four years, and fear had started kicking in. We did not search for the perfect deal. We found a deal that seemed like it could move us forward, and we took it.

We moved to a data center. It increased cost compared with our own lab, but it let us add more GPUs without worrying every day about electricity, cooling, generator diesel, and internet.

But even there, promises did not always become reality.

One thing that bothered me was how late many larger players entered this space. For years, it felt lonely. Many cloud providers in India did not want to take the risk until AI hype peaked, or until government programs made GPU access a visible opportunity.

Then we had to solve the resume problem properly. We moved to Ceph.

I had a data science and Kaggle background, not a deep infrastructure background. Earlier in my career at Wipro, I had done some infrastructure work for Mastercard and set up big data clusters using Cloudera. I had set up Hadoop clusters before. But distributed storage systems were scary. I did not know RBD block storage or file systems deeply.

We tried to outsource it to the data center team. That turned out to be a bad idea.

We had two options: die or figure it out.

Over a week or 15 days, we built a Ceph cluster on the same GPU servers because we did not have money to buy separate storage servers. We used disks on the GPU servers and reserved some CPU cores and RAM for Ceph.
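Colocating storage daemons with GPU workloads comes down to capping what the storage side can consume. A minimal sketch of the idea, using Ceph's real `osd_memory_target` option with an assumed value (the actual Jarvis Labs configuration is not public):

```ini
; ceph.conf fragment: run OSDs on the GPU servers, but keep them small
[osd]
; cap each OSD daemon's memory (in bytes, here ~4 GiB) so training
; jobs keep most of the server's RAM
osd_memory_target = 4294967296
```

CPU reservation is typically handled outside Ceph itself, for example by pinning OSD processes to a few cores with cgroups or systemd `CPUAffinity`, leaving the rest for training workloads.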

Jarvis Labs team with early servers

The infrastructure was never separate from the people carrying it.

Customers felt the platform had become extremely slow. It took time to make Ceph performant. This was before ChatGPT and Claude, so learning meant reading books, documentation, watching videos, and figuring things out the old-school way.

Somehow, we figured it out.

Why We Sold

Jarvis Labs was sold to E2E Networks in August 2025.

I want to be clear: this was not an exit announcement type of decision for me.

In 2025, Jarvis Labs was comfortable. We were lean. Some hiring in 2024 did not work out, and the team became three people again. We had started with three or four people, and we had returned to three.

The company was making money on its own. I started taking a salary in 2025 because the team was small and revenue was accumulating. I had also decided not to buy new servers and instead lease capacity from other providers.

On paper, that was comfortable.

But two things bothered me.

First, what would Jarvis Labs look like after two or three years? If we did not buy infrastructure or build new things, what would happen to the customers who trusted us? I was scared that Jarvis Labs would get rusted. We would not do justice to the people who trusted us.

That was the scariest version of failure to me. Not that Jarvis Labs would die suddenly. That it would slowly get rusted. Customers would still trust us, but we would stop deserving that trust.

Second, Selva Kumar and Vishnu Kumar trusted me with this. They left their careers at their peaks. I could not pay them market salaries when the market was growing. The acquisition meant the people who trusted me could finally take the salaries they deserved and not worry every month about managing their lives.

Those were the two main reasons I had to sell Jarvis Labs.

It was not easy. I was not initially sure if E2E would be the right place.

But Tarun Dua had been a risk-taker. He started buying GPU infrastructure before the hype hit. Tarun and Srishti were excited about Jarvis Labs even earlier, around 2022 or 2023, though it did not work out then. E2E had access to much larger infrastructure, including a large H200 cluster in India, and was building a large B200 cluster.

For Jarvis Labs, it felt like a way to get back on a growth trajectory.

Comfort can become dangerous. We had survived the hard part, but I did not want survival to become stagnation.

I had wanted to write this article for a long time, but I did not have the emotional strength to look back at all of it. Writing this meant admitting how close we came to giving up, how much people trusted me, and how much of the journey I still had not processed.

Now it feels like time.

What Continues

I still want to build something from India that stands a chance globally.

It bothers me that even in 2026, not much progress has been made by others in this particular space. Building infrastructure and software from India is hard, but it should not be seen as impossible. Whenever I come across hard things, I tell myself that we are not building rockets; this should be doable.

I did not go to IITs. I did not come from the obvious path. But someone like me can build something good, have a chance in a very tough space, and continue building.

That is something I want to share with the world. I hope it inspires more people to take risks.

Jarvis Labs was never only an infrastructure company. It was infrastructure and software coming together. It was a platform shaped by users who wanted something simple and reliable, and by a small team that kept saying yes to problems that were bigger than us.

There are many people and communities I need to thank: Poonam, Selva Kumar, Vishnu Kumar, fast.ai and Jeremy Howard, the early Kaggle users, the first customers, RP Tech and the people there who supported us when we were tiny, friends who supported us with money and time and worked through low salaries, people who worked with Jarvis Labs and moved on, my family, Tarun, Srishti, and the E2E team.

Jarvis Labs is now inside E2E Networks. The founder chapter has changed shape, but the work has not ended.

I want to help grow Jarvis Labs into a major global cloud provider built from India.

The next post will be about that vision.