- Cerebral Valley
Daytona - Composable Computers for AI Agents
Plus: CEO Ivan Burazin on building sub-60ms AI sandboxes, scaling to hundreds of thousands of concurrent environments, and why 2026 is the year of the sandbox...

CV Deep Dive
Today, we’re talking with Ivan Burazin, Co-Founder and CEO of Daytona.
Daytona provides composable computers for AI agents. Founded by Ivan and his long-time co-founding team, it provides fast, stateful composable computers, popularly called “AI sandboxes”, that let agents run code, commands, and computer-use workflows with full control over the environment - CPU, RAM, disk, OS, and more - provisioned in under 60 milliseconds.
Today, Daytona is used by teams building coding and command-execution agents, computer- and browser-use systems, and increasingly, reinforcement learning evaluation and training infrastructure. Its deployment options span Daytona’s own Cloud, customer-managed compute, and self-hosted setups, including enterprise-friendly on-prem and EKS deployments. In 2024, Daytona raised $5 million in seed funding led by Upfront Ventures with participation from 500 Global, bringing total funding to $7 million.
In this conversation, Ivan shares how Daytona was founded, why the market is shifting toward agent “computers” and long-running sandboxes, and his view on what the next year of AI agents will demand from infrastructure.
Let’s dive in ⚡️
Read time: 8 mins
Our Chat with Ivan 💬
Introduce yourself and give us a bit of background on you and Daytona. What led you to co-found Daytona?
It’s been a long path to Daytona. The founding team has worked together for a long time. We did everything from stacking server rooms in the early 2000s to founding the first browser-based IDE in 2009. There was no VS Code or Kubernetes then, so we had to build everything - from the IDE to the orchestration and a lot of surrounding infrastructure - from scratch, which, little did we know, would prove very useful in 2025.
Later, I went off and built a developer conference company. It was acquired by a competitor of Twilio in the communications infrastructure space, where I ran Developer Experience.
Most of what we did was infra, so the logical next step was founding another infra company. In 2023, we founded Daytona, though it started slightly differently. Originally, it was a dev environment manager: we automated dev environments for human engineers inside large enterprises. Tools like this existed inside the likes of Meta, Google, and Microsoft, but weren’t available to the rest of the Fortune 500. It was a very enterprise, on-prem product - basically managing virtual machines for humans.
Then we pivoted completely into creating those virtual machines for AI. We call them AI sandboxes now. We started working on that in January of last year, launched at the end of April, and that became Daytona as it exists today.
How would you describe Daytona to the uninitiated developer or AI team?
I’d say Daytona is a composable computer for an AI agent. If you think of an AI agent as a digital knowledge worker, we’re the equivalent of the laptop or PC you’d use to get your job done. It’s composable because you can literally define what that “computer” is: what CPU, how much RAM, whether it needs a GPU, how much disk, and even which operating system, all provisioned in under 60 milliseconds.
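To make the “composable” idea concrete, here’s a minimal sketch of what such a machine spec could look like in code. The class and field names are illustrative assumptions for this article, not Daytona’s actual SDK.

```python
from dataclasses import dataclass

@dataclass
class ComputerSpec:
    """Illustrative resource spec for an agent 'computer' (not Daytona's real API)."""
    cpu: int = 2                      # vCPU cores
    memory_gb: int = 4                # RAM
    disk_gb: int = 20                 # persistent disk
    gpu: bool = False                 # attach a GPU?
    os_image: str = "ubuntu-22.04"    # which operating system to boot

# Each agent task declares exactly the machine it needs:
heavy_build = ComputerSpec(cpu=8, memory_gb=32, disk_gb=100)
gpu_eval = ComputerSpec(gpu=True, os_image="ubuntu-22.04-cuda")
```

The point is that the unit of provisioning is a declarative spec the agent (or its harness) hands over, rather than a fixed machine type.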
Tell us about your key users today. Who is finding the most value in what you’re building with Daytona?
There are three main user types.
First: code and command execution. These are agents that spend their day executing code or commands. This includes coding agents, but it’s broader than that. Any agent that needs to run code to do its job fits here. Imagine an AI architect designing a building: it still needs to run code to get the work done.
Second: anything visual. Think browser-use or computer-use. We don’t compete with those tools; in fact, Browser Use is a Daytona customer. Rather, we’re the layer underneath, providing the infrastructure so an agent can interact with human interfaces.
Third: RL environment infrastructure. This is new for us, but it’s growing fast. TerminalBench, one of the more well-known benchmarks, brought us into this. Their harness, Harbor, is used for RL evals and benchmarks, and Daytona is one of the default ways to run it. Because of that, we’ve picked up a lot of strong teams running RL workloads. Daytona enables them to spin up hundreds, thousands, or tens of thousands of machines concurrently.
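The RL pattern above is essentially a concurrent fan-out: one environment per rollout, all awaited together. A minimal sketch of that shape, with the provisioning call stubbed out (the real Daytona SDK call is an assumption not shown here):

```python
import asyncio

async def provision_sandbox(run_id: int) -> dict:
    """Stand-in for a real sandbox-provisioning call; models a fast async spin-up."""
    await asyncio.sleep(0)  # placeholder for the ~60ms provisioning latency
    return {"id": run_id, "status": "running"}

async def spin_up_fleet(n: int) -> list[dict]:
    # RL evals fan out one environment per rollout and await them all together.
    return await asyncio.gather(*(provision_sandbox(i) for i in range(n)))

fleet = asyncio.run(spin_up_fleet(1000))
```

The property that matters for RL workloads is that total wall-clock time is dominated by the slowest single spin-up, not the sum of all of them.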
Tell us about some existing use cases for Daytona. Are there interesting customer stories you’d like to highlight?
One customer is doing something very, very interesting: an AI scientist. I’d say this is the year AI scientists start to become a real category. Think genome work, drug discovery, and other scientific workflows where agents run tasks instead of humans.
Like humans, those agents still need a computer to do the work. Most people think of agents as scraping the internet, coding apps, or doing support, legal, or similar workflows. Scientists are a different kind of use case, and it’s a whole new market.
It’s meaningful to us that Daytona can enable that kind of work. We’ll probably never know whether Daytona helped accelerate a discovery that saved lives, because we don’t see the content of what people run. But at a high level, it's a great feeling to know that we are helping to accelerate things that can literally save lives.
Walk us through Daytona’s platform. Which use cases should new customers experiment with first, and how easy is it for them to get started?
Daytona is straightforward: there’s an SDK and a dashboard. And honestly, today you often don’t even need to read the docs. You can tell Claude Code to set it up, and it will do most of the work. You just log in, and let AI do the rest.
That being said, we've been investing in demo apps, mostly for inspiration. We have examples with frameworks such as LangChain, Mastra, or even Google’s ADK, but also a few demos showing how to spin up Claude Code or Open Code inside Daytona, and how to run multi-agent systems.
If you’re new, start by having Claude Code wire things up. Then browse the guides to get ideas for what’s possible with sandboxes.
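For orientation, the basic lifecycle is: create a sandbox, execute code or commands inside it, then tear it down (or keep it running, since sandboxes are stateful). The sketch below mimics that flow with an in-memory stand-in class; the names are placeholders, not the real Daytona SDK.

```python
class FakeSandbox:
    """In-memory stand-in for a sandbox; illustrates the lifecycle only."""

    def __init__(self, spec: str):
        self.spec = spec      # e.g. "2vcpu/4gb" - what machine was requested
        self.alive = True

    def run(self, code: str) -> str:
        # A real sandbox would execute this inside the isolated machine.
        return f"ran {len(code)} bytes on {self.spec}"

    def delete(self) -> None:
        # Or skip this and keep the sandbox stateful across sessions.
        self.alive = False

sandbox = FakeSandbox(spec="2vcpu/4gb")
result = sandbox.run("print('hello from the agent computer')")
sandbox.delete()
```

Check the official SDK reference for the actual method names and parameters; the shape of the flow is what carries over.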
How are you measuring impact and results for your key customers? What are you focused on?
There are a few metrics we care about, depending on the customer.
One is speed. Daytona now gets to sub-60 millisecond spin-up time, which is extremely fast.
Another is concurrency. Spinning up one sandbox quickly is not the same as spinning up sandboxes quickly at scale. If you’re running RL, you might need thousands or tens of thousands of environments at once.
In terms of customer outcomes, we’ve seen teams save anywhere from 6 hours to 20 hours on certain workflows, especially in RL. That’s a big deal.
There are a number of companies working on sandboxes. What sets Daytona apart from a product or technical perspective?
First, the term “sandbox” is a bad name for what we’re doing. It’s ambiguous. People call everything a sandbox: WASM isolates, containers, microVMs, full VMs, and more. Those isolation layers have different tradeoffs and features, but they’re all getting lumped into the same label. That creates a lot of confusion.
Second, I believe strongly that an AI sandbox should be extremely fast, fully stateful, and long-running. No primitive currently delivers that combination, regardless of their feature set.
I’d say you can think about sandboxes along two axes: primitives and tooling.
The primitives axis is: how fast does it start, how many can run concurrently, can it run forever, and is it effectively equivalent to a full machine or VM.
The tooling axis is: what tools do you provide to the agent? A laptop ships with first-party apps like a browser, file explorer, and terminal. An agent “computer” should have equivalents. In Daytona, we include things like a headless terminal, a Git client, an LSP, firewalling, and other tools. Some of those make agents more productive, and others reduce risk, which matters a lot.
Directionally, Daytona is ahead across many of these areas. But the market is new, and we don’t have a clean shared definition yet, which is expected at this stage.
Daytona Cloud runs on bare metal. That lets us go a layer deeper: we’re not running inside a VM, so we avoid that latency and overhead.
For orchestration, we don’t use Kubernetes, Nomad, or other off-the-shelf orchestration systems. Those were built for deploying applications at scale - not for agent runtimes. That mismatch is why most “sandbox” products lack important capabilities.
So we built the entire Daytona stack ourselves, specifically for AI agents, and we run it on our own metal. The decisions were first-principles: agents are faster than humans, so the computer has to be fast. It also has to be stable, stateful, and long-running. We couldn’t get there without rebuilding the stack.
What has been the hardest technical challenge in building Daytona into what it is today?
Scale. It’s fairly trivial to get something working at n=1. The hard part is making it work at massive scale - think millions of concurrent sandboxes.
At that point, you hit problems you don’t anticipate. As an example, we currently add 100 million audit logs a day! Just managing those audit logs is a technical feat of its own - and it has nothing to do with anything a user might appreciate or even be aware of.
Growth tends to look like: steady growth, then a 5x jump, then steady, then another 5x jump. Every 5x jump hurts in the moment.
2025 was called the year of AI agents. How does that concept factor into your product vision or internal process at Daytona?
Product-wise, I feel the market consensus is that 2026 is the year of the sandbox. Agents are arriving in volume, and they need somewhere to run. They need “computers” that can execute tasks, run code, use browsers, and so on.
I’d also say this is the year the IDE starts to die. Sure, people will still use IDEs to a degree, but the vast majority of code that will get deployed will not be from classic IDEs, but rather from natural language interfaces.
Are there ways you’re using sandboxes internally?
Yes. We have an internal multi-agent orchestrator. You give it a task, and it spins up N agents in sandboxes to try solving it independently. Then, after they attempt a solution, it spins up a separate set of agents to review and verify the PR before anything gets committed.
Once you get used to it, it’s powerful. It’s not one Claude Code instance, it’s five, ten, or fifty working in parallel across multiple projects.
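The fan-out-then-review pattern described above can be sketched in a few lines. Everything here is a hypothetical stand-in: the worker and reviewer functions are toy placeholders for real agent runs in sandboxes.

```python
def worker_agent(task: str, seed: int) -> str:
    """Stand-in for one agent attempting the task in its own sandbox."""
    return f"candidate-{seed}-for-{task}"

def reviewer_agent(candidate: str) -> bool:
    """Stand-in for a reviewer agent; a real one would run tests and verify the PR.
    Here, as a toy rule, we accept candidates with even seeds."""
    return int(candidate.split("-")[1]) % 2 == 0

def orchestrate(task: str, n: int) -> list[str]:
    # Fan out: n independent attempts. Then gate: only reviewed work survives.
    candidates = [worker_agent(task, i) for i in range(n)]
    return [c for c in candidates if reviewer_agent(c)]

approved = orchestrate("fix-flaky-test", 10)
```

The structural point is the two-phase gate: nothing a worker produces is committed until a separate reviewer pass accepts it.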
One interesting, still-unsolved problem is coordination. If Agent Three figures something out, how do you inform Agents One, Two, and Five so they stop wasting cycles and continue from the right knowledge? There’s no easy way to do that today. That creates product ideas: should we build something to support that, or will someone else?
That’s the kind of internal workflow we use sandboxes for.
How do you foresee Daytona evolving over the next 6 to 12 months? What should users be excited about?
Two things.
First: any OS - from Linux to Windows to even Mac. We’ve already shipped this to our first user: a fast microVM spin-up with snapshots, but running Windows. There are real use cases for that.
Second: running Daytona inside your existing cloud Kubernetes setup. You can run Daytona on-prem today, but we’re enabling deployments into EKS, AKS, or GKE. The important detail is: we don’t use Kubernetes to run the sandboxes. We use Kubernetes to run the nodes, and then our orchestrator runs the sandboxes on those nodes. It’s essentially two orchestration layers.
People are excited because it’s a clean security story and a clean DevOps story. You deploy into your existing environment, you get strong isolation, and it can autoscale up and down.
We launched this on Christmas Eve, and people on Twitter didn’t love the timing. But we did it to support a customer deployment, and the response has been strong, especially from enterprises that want the feature set but require specific security and deployment constraints.
It works because everyone wants it to be deployable in Kubernetes. It’s weird, but it solves a real requirement. There’s no performance degradation because our orchestrator runs inside the Kubernetes node, and then the sandbox runs inside that. It’s like inception: inside of inside.
And yes, you can spin up Kubernetes inside a Daytona sandbox too. It never ends.
Tell us about the team at Daytona. How would you describe your culture? Are you hiring? What do you look for in team members?
It sounds like a buzzword, but it’s agency.
We prefer people who aren’t top-down. Ask forgiveness, not permission. The org is flat: it’s me and my co-founders, and everyone just runs with problems. There are so many problems to solve, so we want people who pick something and execute.
Ideally, teammates bother us, not the other way around. We want them to bring work, ideas, and drafts to us: “Is this good? Is this bad? What should change?” rather than us micromanaging daily plans.
We do have a high-level strategy, but within that, we want people to choose what they’re excited about and drive it. Sometimes there are big launches and it’s all hands on deck: GTM, engineering, DevOps, security. Everyone ships together.
Some people love that intensity, and some don’t. If you do, it’s a lot of fun.
Anything else you’d like our readers to know about the work you’re doing at Daytona?
I believe we’re building fundamental infrastructure for the new economy. Over time, the majority of building, coding, and work will happen inside some form of agent “computer”, and that means inside sandboxes.
Conclusion
To stay up to date on the latest with Daytona, follow them here.
Read our past few Deep Dives below:
If you would like us to ‘Deep Dive’ a founder, team or product launch, please DM our chatbot here.