
Lumino AI - your open-source AI research company 👥

Plus: CEO Eshan Chordia on the opportunity around building decentralized AI infrastructure...

CV Deep Dive

Today, we’re talking with Eshan Chordia, Co-founder and CEO of Lumino AI.

Lumino AI is a startup building open-source infrastructure to simplify training, fine-tuning, and deploying AI models. Founded by Eshan and his co-founder Yogesh Darji in 2023, Lumino sources GPUs directly from data centers, cutting training and inference costs by up to 80%. The team launched their first product, an LLM fine-tuning platform, in late 2024, and it has amassed hundreds of users since launch.

Today, Lumino is helping ML enthusiasts, researchers, and startups across industries like climate, legal, and fintech experiment faster, iterate better, and deploy AI into production with ease. 

In this conversation, Eshan shares how Lumino got started, the technical challenges of building decentralized AI infrastructure, and how they’re helping everyone—from researchers to startups—build better AI.

Let’s dive in ⚡️

Read time: 8 mins

Our Chat with Eshan 💬

Eshan - welcome to Cerebral Valley! First off, give us a bit about your background and what led you to start Lumino AI? 

Excited to be here! My name is Eshan Chordia, and I’m the Co-founder and CEO of Lumino AI. A bit about my background: I started out in software engineering and UI/UX design right after college. Early in my career, I launched a consumer app called Nootry. It was all about helping people find healthy food at restaurants—something I personally needed at the time. I was in my mid-20s, eating out a lot, and trying to stay healthy as a vegetarian who hit the gym regularly. I wanted an app that could tell me how to get enough protein and stay within my calorie goals.

After Nootry, I joined ZestyAI, where we built the world’s most advanced wildfire risk model. It could predict the probability of a house being destroyed by wildfire. I spent 3 years there working on wildfire risk, hail risk, and property analytics using AI. Our wildfire product was particularly impactful—it helped insurance companies reduce their exit rates from California and reduce insurance rates for homeowners who fire-proofed their homes. All of this was powered by deep learning, computer vision, and NLP.

Right before starting Lumino, I also spent time at Protocol Labs. Protocol Labs was able to hire Google-level engineering talent and built the best open-source decentralized infrastructure in the world, which helped shape my perspective as I was thinking about Lumino. We built libp2p, a decentralized networking library used by Ethereum and other blockchains, as well as IPFS, a peer-to-peer content delivery network, and Filecoin, a decentralized storage system.

Throughout this journey, I was exposed to just how expensive compute is for AI workloads. At ZestyAI, we were on GCP, but we’d compare GPU prices across different public clouds to decide if it made sense to switch providers. We even had an Excel sheet listing all the GPUs GCP offered, their prices, and how long our models would take to run on each, so we could balance cost against performance.

That’s when I started looking into sourcing GPUs directly from data centers. Data centers were way cheaper than GCP or AWS because they don’t have the same overhead costs, but they also lack the software, managed services, virtualization, and other products that the public clouds provide. So it didn’t make sense at the time.

When I joined the Filecoin cryptoeconomics team at Protocol Labs in 2022, I saw how storage providers—over 4,000 of them—were buying hardware, setting it up in co-located data centers, and providing compute and storage to their ecosystem. That’s when it clicked for me: you could use this kind of business model to bootstrap compute supply for AI, which really helps with reducing cost. 

I actually pitched this idea to Protocol Labs during my first week there, but we were focused on distributed compute frameworks at a lower level on the tech stack, and not specifically on machine learning and AI use cases at that time. They told me, “Maybe in the future.” A year later, I decided to take the plunge and start this company. That’s how the idea for Lumino AI really came together.

How would you describe Lumino to an AI engineer or enterprise who’s slightly less familiar with what you do? 

At Lumino, we’re building an open-source AI research company, but we focus on AI infrastructure and AI ops rather than foundational models. Our goal is to provide open-source AI infrastructure for end-to-end use cases like training, fine-tuning, inference, and more.

Our first product, launched late last year, is an LLM fine-tuning platform. It lets anyone use our SDK or UI to fine-tune open-source models like LLaMA 3.3. The cool part? We’re able to lower training costs by 80%. If you’re technical, you can integrate our SDK, and if you’re not, you can use our UI to upload a dataset, pick a model, and start fine-tuning—all in under five minutes.
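
To give a rough sense of what the SDK path might look like, here’s a minimal sketch. The client, module, and parameter names below are hypothetical placeholders for illustration, not Lumino’s actual API:

```python
# Hypothetical sketch of an SDK-driven fine-tuning flow. "lumino_sdk",
# "LuminoClient", and every method/parameter name here are illustrative
# placeholders, not Lumino's real API.
from lumino_sdk import LuminoClient

client = LuminoClient(api_key="YOUR_API_KEY")

# Upload a training dataset (e.g., prompt/completion pairs in JSONL).
dataset = client.datasets.upload("support_conversations.jsonl")

# Kick off a fine-tuning job against an open-source base model.
job = client.fine_tuning.create(
    base_model="llama-3.3-70b",
    dataset_id=dataset.id,
    epochs=3,
)

# Block until training finishes, then print the fine-tuned model ID.
job.wait()
print(job.fine_tuned_model)
```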

This eliminates the need for teams to hire ML engineers or build complex ML infrastructure just to ship a product. We’ve built that for you. We also handle the key details: making sure your model is trained properly, your evaluations are accurate, and your training and inference pipelines are optimized for high throughput and cost efficiency.

Who are your users today? Who’s finding the most value in what you’re building with Lumino? 

There are three main groups of users that we see engaging with our platform.

The first group is machine learning hobbyists and enthusiasts—people who love experimenting with the latest machine learning tools and techniques. They’re always pushing boundaries, trying out new methods, and figuring out how to build and deploy models. Honestly, they’re my favorite group to talk to because I end up learning so much from them. They’ve experimented with all the new tools, read all the papers, and have a deep passion for AI.

The second group is researchers in academia. This group is focused on finding the best ways to train models affordably. They don’t have large budgets like tech companies or VC-backed startups. Instead, they have to stretch their grant money as far as possible to run experiments and prove or disprove their hypotheses. It’s really inspiring to see how they use our platform to maximize their resources and advance their research.

The third group we’re working with is pre-seed to Series B tech startups. These are companies that haven’t built out full machine learning teams or infrastructure yet. They’re building software businesses, but the founders might or might not be technical. They fall into two buckets: 1) companies with larger engineering teams that want to iterate faster, and 2) companies that might only have a couple of engineers and aren’t ready to hire machine learning engineers yet. However, both sets of companies care about how good the model is, and want someone else to handle building out the ML infrastructure.

These companies are trying to iterate quickly and figure out how LLMs or foundational models can make their product better or generate revenue. They want to move fast, deploy models into production, and improve their product quickly. That’s where our product really helps—it lets them ship and iterate fast, so they can focus on making their product as good as possible.

Are there any specific use-cases that you think best illustrate how Lumino works? 

At a high level, what we’ve seen is that our product works across multiple industries and verticals. People often ask who we’re going after, and the answer is simple: early-stage machine learning teams. But it’s not limited to one specific industry like climate, legal, or fintech.

We’ve seen a variety of use cases: one user was working on climate applications, while another company was experimenting with consumer apps, using AI to generate content within their app. We’ve also seen B2B companies in legal and fintech using it. The flexibility comes from the fact that we’re providing the tools and infrastructure to help you train your model so it works best for your app. You bring the dataset, pick the model, and we handle everything else.

What has been the hardest technical challenge around building Lumino into the platform it is today?

One of the most exciting and challenging technical problems we’re working on today is our proof-of-training algorithm. In a marketplace, there are complex challenges around trust: proving that a training job was done correctly and ensuring data was handled properly. We know, for example, that GCP and AWS won’t steal your data or fake your training results. In a marketplace, we need technical and economic mechanisms to ensure data centers aren’t committing fraud, like stopping training early.

This algorithm is designed to prove that the compute servers assigned to train a model actually did so correctly, using formal cryptographic methods. What makes this such an exciting and challenging problem to solve is that while cryptographic verification has become standard for deterministic functions, machine learning is inherently non-deterministic. Every time you train a model, the weights might differ unless you set the seed the same way every time. Changes in the dataset can also impact training outcomes, making it a fundamentally non-deterministic process.
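
To make that non-determinism concrete, here’s a minimal PyTorch sketch of my own (not Lumino’s proof-of-training code): two training runs produce bit-identical weights only if every source of randomness is seeded the same way.

```python
# Minimal illustration of training (non-)determinism. Deterministic on CPU;
# GPU kernels can introduce further non-determinism even with fixed seeds.
import torch

def train_once(seed: int) -> torch.Tensor:
    torch.manual_seed(seed)  # seeds weight init and the synthetic data below
    model = torch.nn.Linear(10, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    data, target = torch.randn(100, 10), torch.randn(100, 1)
    for _ in range(5):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(data), target)
        loss.backward()
        opt.step()
    return model.weight.detach().clone()

# Same seed -> identical weights; different seed -> different weights.
assert torch.equal(train_once(42), train_once(42))
assert not torch.equal(train_once(42), train_once(7))
```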

The challenge we’re tackling is how to use cryptographic verification to prove the integrity of the training process. This means ensuring that node providers didn’t generate fake model weights, stop training early, or use only part of the dataset. Solving these problems is critical to showing that our network is secure and that compute providers aren’t delivering incomplete or inaccurate models.

How do you plan on Lumino’s product evolving over the next 6-12 months? Anything specific on your product roadmap that your existing customers are excited about? 

We’re planning to launch our inference product shortly. So far, we’ve been focused on fine-tuning for LLMs, but adding inference is something we’ve heard a lot of demand for from customers. They’ll say, “Hey, I fine-tuned my model, but now what?” So being able to offer inference in a serverless or dedicated manner will be a big improvement.

Another big focus for us is improving the efficiency of training and inference. Our key value proposition is cutting training and inference costs by 80%, primarily because we source GPUs from data centers instead of relying on AWS or GCP. Another way to lower the cost of compute is to increase efficiency across multiple parts of the stack without degrading the quality of training or inference.

As we work on these improvements and start open-sourcing them, incremental changes will stack up. For example, one feature might decrease compute costs by 3%, another by 1%, and another by 7%. Together, these enhancements will create an end-to-end AI infrastructure that makes training and inference far more efficient.
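
As a back-of-the-envelope sketch using the example percentages above, independent efficiency gains compound multiplicatively rather than simply adding up:

```python
# Independent cost reductions compound multiplicatively. Using the
# illustrative 3%, 1%, and 7% gains from the example above.
gains = [0.03, 0.01, 0.07]

remaining_cost = 1.0
for g in gains:
    remaining_cost *= 1 - g  # each gain shaves a fraction off what's left

print(f"Total cost reduction: {1 - remaining_cost:.1%}")  # -> 10.7%
```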

As a result, your costs are cheaper without lowering performance, and that’s really the bread and butter of what we’re doing. What’s super cool is that a lot of times, people think, “Okay, my costs are 80% cheaper, so I’m going to save $4,000 a month and have more money in the bank.” Yes, you do have more money in the bank—that’s one benefit—but it’s only part of the equation.

What we see is that it also allows customers to run more experiments and improve their models faster. They’re no longer constrained by the budget they thought they had. For example, they might decide to spend a little more because their model is getting better faster, and they’re actually generating more revenue than they would have otherwise.

Teams often have these discussions with us about how much to spend and how to use OPEX strategically. It’s not just about eliminating OPEX entirely, but about using it in the right way to deliver the most value for their customers.

Lastly, tell us a little bit about the team and culture at Lumino. Are you hiring, and what do you look for in prospective team members?

We’re a team of 6 right now. Yogesh and I are the founders, and we have three engineers and one economist on the team. We have a hybrid work setup where we’re in the office three days a week and work from home the other two. Even though we’re hybrid, we think of ourselves as an in-office culture because everyone comes in on the same three days. If people came in on different days, we’d end up doing Zoom meetings anyway, and at that point, you might as well be fully remote.

Being in the office together really helps us move quickly. Sure, commuting takes time, but in-person conversations let us work through technical challenges much faster. It’s way more effective than trying to coordinate over Google Meet or Zoom, where you might book an hour but really need more, and those impromptu “quick questions” just don’t happen.

Another big benefit is the team bonding. We get to hang out, have lunch together, play ping pong, go for walks—just normal team stuff. It’s a big part of why we prefer this setup. It builds a team culture where people genuinely enjoy working together. We look for people who love being part of a team, not lone wolves who just want to code their piece and check out.

For Yogesh and me, one of the most important traits we look for in team members is a love of learning. AI and tech are evolving so fast, and we need people who are excited to keep up and grow with it. We want people who enjoy the process of learning, even if it doesn’t feel immediately useful—it could be invaluable later. Another big value for us is bringing positive energy to the team. Everyone has bad days, and we’re all about being authentic, but overall, a positive vibe makes the work way more fun.

Something we look for that might be a bit unique is a love for building—beyond just work projects. It’s an extension of loving to learn. We value people who genuinely enjoy experimenting, iterating, and creating things, whether it’s through hackathons, personal projects, contributing to open source, or building something fun with friends. During our interviews, we actually look for examples of this to see that you have that innate passion for building.

Right now, we’re hiring a machine learning engineer and a protocol engineer. If any of the challenges I mentioned resonate with you, please reach out!

Conclusion

Stay up to date on the latest with Lumino AI and learn more about them here.

Read our past few Deep Dives below:

If you would like us to ‘Deep Dive’ a founder, team or product launch, please reply to this email ([email protected]) or DM us on Twitter or LinkedIn.