Prodia's unique take on decentralized compute 🔋

Plus: Co-founder Shawn on the intersection of AI and blockchain...

CV Deep Dive

Today, we’re talking with Shawn Wilkinson, Co-Founder and CEO of Prodia.

Prodia is an AI inference platform built on the concept of decentralized compute. Founded by Shawn and his co-founders Mikhail Avady and Monty Anderson in 2022, Prodia’s mission is to empower developers by removing the complexity that comes with building and scaling AI applications. The platform is served to developers via an API and reduces both the latency and cost of AI inference by over 50%. Prodia currently focuses on image generation but is expanding into video and other modalities.

Today, Prodia serves a range of customers from small startups to publicly traded enterprises. The startup’s approach uses distributed systems to power compute tasks, with the goal of creating a more inclusive network where individual users can contribute to the GPU cloud (in contrast with over-reliance on large cloud providers). Prodia recently raised $15 million in a funding round led by Dragonfly and HashKey, to develop its distributed network and inference API. 

In this conversation, Shawn takes us through the founding story of Prodia, the challenges of building scalable AI infrastructure, and their roadmap for the next 12 months.

Let’s dive in ⚡️

Read time: 8 mins

Our Chat with Shawn 💬

Shawn - welcome to Cerebral Valley! First off, give us a bit about your background and what led you to co-found Prodia? 

Hey there! I’m Shawn Wilkinson, Founder and CEO of Prodia. Before Prodia, I mostly cut my teeth on distributed systems. Way back in 2012, I was mining half a bitcoin a day in my dorm room. Unfortunately, I turned that off too early because it was making my room too hot - but I fell in love with the tech and got involved with a bunch of early projects in the distributed compute space, particularly ones that needed to store a lot of data.

Back then, as now, AWS, Azure, and Google Cloud were too expensive. I thought, "I’m mining with my CPU and getting paid for it - why can’t I do that with hard drive space?" So I started a project, and then a company, called Storj that did distributed cloud storage. It started out with four of us, and our dog! We were one of the very first projects in the space, back in 2014, and Storj was one of the very first tokens on the Ethereum network. We raised $30 million, grew it into a billion-dollar company, and ended up hiring the ex-CEO of Docker, Ben Golub, to run it.

Back in 2020, my co-founder Mikhail and I were among the first 300 users of GPT-3, and we were playing with a million different ideas. One of them was a generative music app where we used AI to generate music and create covers for the tracks. We put out a little demo tool, and it instantly went viral - from 200 users to 100,000 in 2 months. This, of course, used a lot of GPU compute. Having built a company that competed directly with AWS for cloud storage, I didn’t want to spend a bunch of money on GPUs on AWS. I thought, "What the heck? We have the experience, let’s just build a distributed compute layer."

We built that, and it allowed us to increase performance by two to four times, cut costs by an order of magnitude, and scale much more easily. So we built out this app and then realized that the compute layer was the real product. Everyone is struggling with AI infrastructure, and this is significantly better - so we pivoted to providing AI infrastructure for apps and companies. We typically focus on more compute-intensive tasks like media: images, videos, and these kinds of things. For example, an image uses 100 times the compute of text/LLM generation, so we tend to focus on high-scale, heavy-compute work.

The key benefits are that we increase performance by two to four times, save our customers anywhere between 50% to 90%, and make it a lot simpler. You don’t have to sacrifice your firstborn to AWS to get an allocation of GPUs or dedicate an entire engineering team to manage it. We make it a lot better, cheaper, and faster for people, which is exactly what they need right now.

Give us a top-level overview of Prodia, for those who are less familiar. 

At Prodia, we’re providing an API for AI inference that takes all the complexity out of it. At the end of the day, you're getting a two to four times speed improvement, spending 50% to 90% less, and not having to worry about scaling it up and down. We handle all of that for you. Prodia is an AI API that eliminates the headache.
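To make that concrete, here's a minimal sketch of what calling an inference API like Prodia's might look like from a developer's side. The endpoint URL, parameter names, auth header, and response shape below are illustrative assumptions for this newsletter, not Prodia's actual API - check their docs for the real interface.

```python
import json
from urllib import request

# Hypothetical endpoint -- illustrative only, not Prodia's actual API.
API_URL = "https://api.example.com/v2/generate"

def build_generation_payload(prompt, model="sd-v1.5", steps=25, width=512, height=512):
    """Assemble the JSON body for a hypothetical image-generation request.

    All parameter names here are assumptions for illustration.
    """
    return {
        "prompt": prompt,
        "model": model,
        "steps": steps,
        "width": width,
        "height": height,
    }

def generate_image(prompt, api_key):
    """POST the job to the (hypothetical) inference service and return its JSON reply."""
    body = json.dumps(build_generation_payload(prompt)).encode()
    req = request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
    )
    # Requires a live endpoint; the service would queue the job and hand back a result.
    with request.urlopen(req) as resp:
        return json.load(resp)
```

The point of a service like this is that everything after the POST - GPU allocation, queuing, scaling up and down - is the provider's problem rather than the developer's.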

Who are your users today? Who’s finding the most value in what you’re building with Prodia? 

Today, our focus is on small to large enterprises, particularly those doing a lot of inference compute. Right now, we’re primarily focusing on images, but we're expanding to things like video and, eventually, text and other formats. We're really focused on high-end use-cases - for example, if you Google "AI Image Editor," the first result has 70 million users and is powered by Prodia. That's the kind of high-scale app we help. There are likely applications you've used that are powered by our technology under the hood.

We’re here to make scaling AI apps easier, because it can be very difficult and everyone is going through these pain points. We believe AI is going to power everything - every application, every company - but it has to be easier, faster, and cheaper. Many people integrating AI, or wanting to, are having trouble, and we just want to make it easier for developers. With our solution, you don’t have to think about the infrastructure anymore. You can just focus on what your product actually does, without having to make capacity sacrifices to Amazon or other hyperscale providers.

Right now, we're focusing on images. Next, we'll expand into video, and then probably into text. We're really interested to see what comes after that, such as continuously-running AI agents and other advanced applications. We're starting with something that has a lot of demand - you see platforms like MidJourney and Stable Diffusion being used at scale, and we think that's a good entry point for people to start out.

In the next decade, I think everything - from software to hardware, AI to blockchain - will be fully integrated. The next 10 years will be transformative, like the first time you ever used a smartphone. AI will become integral to everything we do, just as our phones are now, and it will enhance every job, every role, and every aspect of our lives.

How do you measure the impact that Prodia is having on your early users? What are some of the metrics you’re measuring? 

For the standard setups that we see, we're generally increasing performance by two to four times. For example, standard deployments of Stable Diffusion often have inference times of six to eight seconds; with our setup, we bring that down to around two seconds, save customers 50% to 90% on their AWS bills, and eliminate all the complexity of trying to scale. We're doing this for images now, but we aim to do it for any AI application. We know that if you improve inference speed by 2x, you get a 4x increase in user engagement.

If you decrease the cost, it allows you to scale up and make the technology accessible to more users. We really believe in making this accessible to everyone, not just those who can afford high-end GPUs like the H100. I read a lot of sci-fi, and bad things happen when a lot of AI power is controlled by very few people. The stories end up very positive when everyone has access to AI and their own personal AI. So there's a more philosophical aspect to this as well.

You have a launch coming up this week. Tell us a little bit about what your users can expect - why is this a big week for Prodia? 

We have an upcoming launch for our V2 API. It increases image generation performance by 400%, and the same architecture can apply to other models and modalities as well. On July 2nd, we also announced our fundraise: $15 million from some great investors, including Dragonfly and HashKey, to build out this distributed network on the back end and the inference API on the front end.

We just want to bring this to as many people as possible. As you know, people have spent around $100 billion on AI infrastructure in the last twelve months, and it's only going up. We’re developers who really struggled, and we want others to not struggle as much as we did and not break the bank.

What would you say is the hardest technical challenge around building and scaling Prodia to the levels that you're aiming for?

Scaling things is always challenging. In the AI sector, it moves so fast, and you have these applications that become extremely popular in a matter of days, reaching millions of users. But often, the code was something someone wrote as a test over a weekend, never intended for large-scale production deployment. If you look at some of the AI code and examples, like Stable Diffusion, it often started as a hobby project but became extremely popular and useful to people.

The largest challenge has been taking these viral projects and turning them into something that’s production-grade and can run at scale. This involves dealing with a lot of issues around scaling and stability. Maturing some of this stuff has been very difficult. Often, there’s no documentation, and you have to find solutions in obscure Reddit posts from months ago to figure out the right command or GPU flag to use.

Our goal is to handle these challenges for the end user and developer so they don’t have to go through all that. We want to make the process smoother and more reliable for everyone involved.

Given the API release and your fundraising announcement, what are your main focus points between now and the end of the year? What are some of your top priorities?

We're really killing it on images, and we're excited about video generation. ChatGPT has captured the world's imagination, image generation is definitely growing, and I think video will be next. Have you seen the Sora demo? It's mind-blowing. In line with our goal of making things more accessible, we want to provide tools that let people actually use and build with this tech. Video generation takes 300 to 500 times the compute of image generation, and it's not going to be used if a ten-second clip costs $30. AI video generation is going to be very powerful, and we're going to be the only platform it can run on at scale.

We want to bring more models and solve more problems for developers and people scaling things out. We have an interesting approach by using a distributed system to underpin some of the compute work. We aim to be more inclusive, allowing people to participate in this GPU cloud. Imagine having your own compute power under your desk, running AI generative tasks. We want a network that looks less like it's controlled by just Amazon, Google, and Microsoft, and more like a community-driven system.

In the next six months, we want to expand into more modalities and solve more problems for developers. We also want to be more inclusive, creating a network built for developers by the people, rather than relying on the current big providers. I haven't found anyone yet who loves paying gobs of money to these providers and thinks it's working great for them. We want to change that. This technology is really impactful, but again, we need to make it easier and more accessible.

From large companies to small developers, anyone can integrate the API and make a significant impact. One of our core principles is making this technology more accessible and easier to use, built on a base of compute power provided by people rather than just large cloud providers. More power to the big providers - they're great for things like training, when you need a bunch of GPUs for a short period. But when you want to run your app over the long term, committing to a two-year plan with limited availability isn't ideal. We shouldn't build the entire foundation of what's next on that model.

Conclusion

To stay up to date on the latest with Prodia, follow them on X and learn more about them at Prodia.

Read our past few Deep Dives below:

If you would like us to ‘Deep Dive’ a founder, team or product launch, please reply to this email ([email protected]) or DM us on Twitter or LinkedIn.