Lumino AI - your open-source AI research company 🔥
Plus: CEO Eshan Chordia on the opportunity around building decentralized AI infrastructure...

CV Deep Dive
Today, we're talking with Eshan Chordia, Co-founder and CEO of Lumino AI.
Lumino AI is a startup building open-source infrastructure to simplify training, fine-tuning, and deploying AI models. Founded by Eshan and his co-founder Yogesh Darji in 2023, Lumino sources GPUs directly from data centers, cutting training and inference costs by up to 80%. The team launched their first product, an LLM fine-tuning platform, in late 2024, and it has amassed hundreds of users since launch.
Today, Lumino is helping ML enthusiasts, researchers, and startups across industries like climate, legal, and fintech experiment faster, iterate better, and deploy AI into production with ease.
In this conversation, Eshan shares how Lumino got started, the technical challenges of building decentralized AI infrastructure, and how they're helping everyone, from researchers to startups, build better AI.
Let's dive in ⚡️
Read time: 8 mins
Our Chat with Eshan 💬
Eshan - welcome to Cerebral Valley! First off, give us a bit about your background and what led you to start Lumino AI?
Excited to be here! My name is Eshan Chordia, and I'm the Co-founder and CEO of Lumino AI. A bit about my background: I started out in software engineering and UI/UX design right after college. Early in my career, I launched a consumer app called Nootry. It was all about helping people find healthy food at restaurants, something I personally needed at the time. I was in my mid-20s, eating out a lot, and trying to stay healthy as a vegetarian who had just come back from the gym. I wanted an app that could tell me how to get enough protein and stay within my calorie goals.
After Nootry, I joined ZestyAI, where we built the world's most advanced wildfire risk model. It could predict the probability of a house being destroyed by wildfire. I spent 3 years there working on wildfire risk, hail risk, and property analytics using AI. Our wildfire product was particularly impactful: it helped insurance companies reduce their exit rates from California, and it reduced insurance rates for homeowners who fire-proofed their homes. All of this was powered by deep learning, computer vision, and NLP.
Right before starting Lumino, I also spent time at Protocol Labs. Protocol Labs was able to hire Google-level engineering talent and built the best open-source decentralized infrastructure in the world, which helped shape my perspective as I was thinking about Lumino. We built libp2p, a decentralized networking library used by Ethereum and other blockchains, as well as IPFS, a peer-to-peer content delivery network, and Filecoin, a decentralized storage system.
Throughout this journey, I was exposed to just how expensive compute is for AI workloads. At ZestyAI, we were on GCP, but we'd compare GPU prices across different public clouds to decide if it made sense to switch providers. We even had an Excel sheet listing all the GPUs GCP offered, their prices, and how long our models would take to run on each, so we could balance cost against performance.
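As a rough sketch of that spreadsheet logic: total job cost is just the hourly price times the estimated runtime. The GPU names, prices, and runtimes below are made-up placeholders, not actual GCP quotes.

```python
# Rough sketch of the cost-vs-performance comparison described above.
# All prices and runtimes are made-up placeholders, not real GCP quotes.
gpu_options = {
    # name: (usd_per_hour, estimated_training_hours)
    "T4":   (0.35, 40.0),
    "V100": (2.50, 12.0),
    "A100": (3.70, 5.0),
}

# Rank options by total job cost; a faster GPU can win despite a higher rate.
for name, (price, hours) in sorted(
    gpu_options.items(), key=lambda kv: kv[1][0] * kv[1][1]
):
    print(f"{name}: ~${price * hours:,.2f} total for ~{hours:.0f}h of training")
```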
That's when I started looking into sourcing GPUs directly from data centers. Data centers were way cheaper than GCP or AWS because they don't have the same overhead costs, but they also lack the software, managed services, virtualization, and other products that the public clouds provide. So it didn't make sense at the time.
When I joined the Filecoin cryptoeconomics team at Protocol Labs in 2022, I saw how storage providers, over 4,000 of them, were buying hardware, setting it up in co-located data centers, and providing compute and storage to their ecosystem. That's when it clicked for me: you could use this kind of business model to bootstrap compute supply for AI, which really helps with reducing cost.
I actually pitched this idea to Protocol Labs during my first week there, but we were focused on distributed compute frameworks at a lower level on the tech stack, and not specifically on machine learning and AI use cases at that time. They told me, "Maybe in the future." A year later, I decided to take the plunge and start this company. That's how the idea for Lumino AI really came together.
Had a great time talking to @TwentyTwoNode about @luminoai, the future of AI, why decentralized AI is important, and about my journey into entrepreneurship!
- Eshan Chordia (@eshanchordia)
6:56 PM · Sep 20, 2024
How would you describe Lumino to an AI engineer or enterprise who's slightly less familiar with what you do?
At Lumino, we're building an open-source AI research company, but we focus on AI infrastructure and AI ops rather than foundational models. Our goal is to provide open-source AI infrastructure for end-to-end use cases like training, fine-tuning, inference, and more.
Our first product, launched late last year, is an LLM fine-tuning platform. It lets anyone use our SDK or UI to fine-tune open-source models like Llama 3.3. The cool part? We're able to lower training costs by 80%. If you're technical, you can integrate our SDK, and if you're not, you can use our UI to upload a dataset, pick a model, and start fine-tuning, all in under five minutes.
This eliminates the need for teams to have ML engineers or build complex ML infrastructure just to ship a product. We've built that for you. We also handle key aspects like ensuring your model is trained properly, the evaluation is accurate, and your training and inference pipelines are optimized for high throughput and cost efficiency.
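To give a flavor of the flow described here, below is a hypothetical sketch: the package, client class, method names, and parameters are all invented for illustration and are not Lumino's actual SDK surface.

```python
# Hypothetical sketch only: `lumino`, `LuminoClient`, and every call below
# are invented for illustration; they are not Lumino's actual SDK.
from lumino import LuminoClient  # hypothetical package and class

client = LuminoClient(api_key="YOUR_API_KEY")

# Upload a dataset, pick an open-source base model, start a fine-tuning job.
dataset = client.datasets.upload("my_training_data.jsonl")  # hypothetical
job = client.fine_tuning.create(                            # hypothetical
    base_model="llama-3.3-70b",
    dataset_id=dataset.id,
    epochs=3,
)
print(job.id, job.status)
```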
Our SDK and web console are out! Using our SDK, everyone can now fine-tune LLMs with just a few lines of code! You don't have to set up compute instances, build ML infra, or pay ridiculous fees for GPUs. Our web console also enables anyone to fine-tune LLMs, even if you don't know… x.com/i/web/status/1…
- Lumino (@luminoai)
3:46 PM · Oct 1, 2024
Who are your users today? Who's finding the most value in what you're building with Lumino?
There are three main groups of users that we see engaging with our platform.
The first group is machine learning hobbyists and enthusiasts, people who love experimenting with the latest machine learning tools and techniques. They're always pushing boundaries, trying out new methods, and figuring out how to build and deploy models. Honestly, they're my favorite group to talk to because I end up learning so much from them. They've experimented with all the new tools, read all the papers, and have a deep passion for AI.
The second group is researchers and academics. This group is focused on finding the best ways to train models in an affordable environment. They don't have large budgets like tech companies or VC-backed startups. Instead, they have to stretch their grant money as far as possible to run experiments and prove or disprove their hypotheses. It's really inspiring to see how they use our platform to maximize their resources and advance their research.
The third group we're working with is pre-seed to Series B tech startups. These are companies that haven't built out full machine learning teams or infrastructure yet. They're building software businesses, but the founders might or might not be technical. They fall into two buckets: 1) companies with larger engineering teams that want to iterate faster, and 2) companies that might only have a couple of engineers and aren't ready to hire machine learning engineers yet. However, both sets of companies care about how good the model is, and want someone else to handle building out the ML infrastructure.
These companies are trying to iterate quickly and figure out how LLMs or foundational models can make their product better or generate revenue. They want to move fast, deploy models into production, and improve their product quickly. That's where our product really helps: it lets them ship and iterate fast, so they can focus on making their product as good as possible.
Are there any specific use-cases that you think best illustrate how Lumino works?
At a high level, what we've seen is that our product works across multiple industries and verticals. People often ask who we're going after, and the answer is simple: early-stage machine learning teams. But it's not limited to one specific industry like climate, legal, or fintech.
We've seen a variety of use cases. Someone was working on climate applications; another company was experimenting with consumer apps, using AI to generate content within their app. We've also seen B2B companies in legal and fintech using it. The flexibility comes from the fact that we're providing the tools and infrastructure to help you train your model so it works best for your app. You bring the dataset, pick the model, and we handle everything else.
Here's a video tutorial on fine-tuning Llama using our SDK for those who prefer learning visually! Get started in minutes, no ML infra setup, and up to 80% cheaper than AWS, GCP, or Azure!
- Lumino (@luminoai)
5:52 PM · Oct 3, 2024
What has been the hardest technical challenge around building Lumino into the platform it is today?
One of the most exciting and challenging technical problems we're working on today is our proof-of-training algorithm. In a marketplace, there are complex challenges around trust: proving that a training job was done correctly and ensuring data was handled properly. For example, we know that GCP and AWS won't steal your data or fake your training results. In a marketplace, we need technical and economic mechanisms to ensure data centers aren't committing fraud, like stopping training early.
This algorithm is designed to prove that the compute servers assigned to train a model actually did so correctly, using formal cryptographic methods. What makes this such an exciting and challenging problem to solve is that while cryptographic verification has become standard for deterministic functions, machine learning is inherently non-deterministic. Every time you train a model, the weights might differ unless you set the seed the same way every time. Changes in the dataset can also impact training outcomes, making it a fundamentally non-deterministic process.
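To make that non-determinism concrete, here is a minimal PyTorch sketch (a toy linear model, nothing Lumino-specific): pinning the seed reproduces bit-identical weights, while unseeded runs diverge.

```python
import torch

def final_weights(seed=None):
    # Fixed dataset via a dedicated generator, so only initialization varies.
    g = torch.Generator().manual_seed(0)
    x = torch.randn(64, 8, generator=g)
    y = torch.randn(64, 1, generator=g)
    if seed is not None:
        torch.manual_seed(seed)  # pins weight init (and any later sampling)
    model = torch.nn.Linear(8, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    for _ in range(100):
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return model.weight.detach().clone()

# Same seed on the same stack -> identical weights; no seed -> different
# weights every run, which is exactly what makes verification hard.
print(torch.equal(final_weights(seed=42), final_weights(seed=42)))  # True
print(torch.equal(final_weights(), final_weights()))                # False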
The challenge we're tackling is how to use cryptographic verification to prove the integrity of the training process. This means ensuring that node providers didn't generate fake model weights, stop training early, or use only part of the dataset. Solving these problems is critical to showing that our network is secure and that compute providers aren't delivering incomplete or inaccurate models.
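One naive way to picture the verification problem (this is a toy replay check, not Lumino's proof-of-training algorithm): if the seed and data are pinned, a provider can commit to a hash of the final weights and a verifier can re-run the job and compare digests. Replaying an entire training job is far too expensive in practice, which is part of why a real protocol needs something smarter.

```python
import hashlib
import torch

def weights_digest(model: torch.nn.Module) -> str:
    # Commit to the full parameter state with a single hash.
    h = hashlib.sha256()
    for name, param in sorted(model.state_dict().items()):
        h.update(name.encode())
        h.update(param.detach().cpu().numpy().tobytes())
    return h.hexdigest()

def train_job(seed: int, steps: int) -> torch.nn.Module:
    # Toy deterministic run: the seed pins init, data, and update order.
    torch.manual_seed(seed)
    model = torch.nn.Linear(8, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    x, y = torch.randn(64, 8), torch.randn(64, 1)
    for _ in range(steps):
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return model

# The provider publishes a digest; the verifier replays the identical job
# (same seed, same data, same software stack) and checks the commitment.
claimed = weights_digest(train_job(seed=7, steps=100))
assert claimed == weights_digest(train_job(seed=7, steps=100))
```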
🔥 Important update in the world of AI
Below is a demo of our decentralized ML training protocol working! Data centers and others will be able to supply compute to the network, and run training jobs for a fee based on the requirements of the training / fine-tuning job. This is… x.com/i/web/status/1…
- Eshan Chordia (@eshanchordia)
5:51 PM · Nov 5, 2024
How do you plan on Lumino's product evolving over the next 6-12 months? Anything specific on your product roadmap that your existing customers are excited about?
We're planning to launch our inference product shortly. So far, we've focused on fine-tuning for LLMs, but adding inference is something we've heard a lot of demand for from customers. They'll say, "Hey, I fine-tuned my model, but now what?" So being able to offer inference in a serverless or dedicated manner will be a big improvement.
Another big focus for us is improving the efficiency of training and inference. Our key value proposition is cutting training and inference costs by 80%, primarily because we source GPUs from data centers instead of relying on AWS or GCP. Another way to lower the cost of compute is to increase efficiency across multiple parts of the stack without degrading the quality of training or inference.
As we work on these improvements and start open-sourcing them, incremental changes will stack up. For example, one feature might decrease compute costs by 3%, another by 1%, and another by 7%. Together, these enhancements will create an end-to-end AI infrastructure that makes training and inference far more efficient.
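Those increments compound multiplicatively rather than simply adding up. Using the hypothetical 3%, 1%, and 7% figures above:

```python
# Each improvement scales the remaining cost: 3%, 1%, and 7% compound
# to roughly a 10.7% total reduction, not 11%.
savings = [0.03, 0.01, 0.07]
remaining_cost = 1.0
for s in savings:
    remaining_cost *= 1 - s
print(f"combined cost reduction: {1 - remaining_cost:.1%}")  # ~10.7%
```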
As a result, your costs are cheaper without lowering performance, and that's really the bread and butter of what we're doing. What's super cool is that a lot of times, people think, "Okay, my costs are 80% cheaper, so I'm going to save $4,000 a month and have more money in the bank." Yes, you do have more money in the bank, and that's one benefit, but it's only part of the equation.
What we see is that it also allows customers to run more experiments and improve their models faster. They're no longer constrained by the budget they thought they had. For example, they might decide to spend a little more because their model is improving faster, and they're actually generating more revenue than they would have otherwise.
Teams often have these discussions with us about how much to spend and how to use OPEX strategically. It's not just about completely eliminating OPEX, but about using it in the right way to deliver the most value for their customers.
Lastly, tell us a little bit about the team and culture at Lumino. Are you hiring, and what do you look for in prospective team members who are joining?
We're a team of 6 right now. Yogesh and I are the founders, and we have three engineers and one economist on the team. We have a hybrid work setup where we're in the office three days a week and work from home the other two. Even though we're hybrid, we think of ourselves as an in-office culture because everyone comes in on the same three days. If people came in on different days, we'd end up doing Zoom meetings anyway, and at that point, you might as well be fully remote.
Being in the office together really helps us move quickly. Sure, commuting takes time, but the in-person conversations let us work through technical challenges much faster. It's way more effective than trying to coordinate over Google Meet or Zoom, where you might book an hour but really need more time, or where those impromptu "quick questions" just don't happen.
Another big benefit is the team bonding. We get to hang out, have lunch together, play ping pong, go for walks, just normal team stuff. It's a big part of why we prefer this setup. It builds a team culture where people genuinely enjoy working together. We look for people who love being part of a team, not lone wolves who just want to code their piece and check out.
For Yogesh and me, one of the most important traits we look for in team members is a love of learning. AI and tech are evolving so fast, and we need people who are excited to keep up and grow with it. We want people who enjoy the process of learning, even if it doesn't feel immediately useful; it could be invaluable later. Another big value for us is bringing positive energy to the team. Everyone has bad days, and we're all about being authentic, but overall, having a positive vibe makes the work way more fun.
Something we look for that might be a bit unique is a love for building, beyond just work projects. It's an extension of loving to learn. We value people who genuinely enjoy experimenting, iterating, and creating things, whether it's through hackathons, personal projects, contributing to open source, or building something fun with friends. During our interviews, we actually look for examples of this to see that you have that innate passion for building.
Right now, we're hiring a machine learning engineer and a protocol engineer. If any of the challenges I mentioned resonate with you, please reach out!
Conclusion
Stay up to date on the latest with Lumino AI and learn more about them here.
If you would like us to "Deep Dive" a founder, team or product launch, please reply to this email ([email protected]) or DM us on Twitter or LinkedIn.