CopilotKit: infra for building Agent-Native Applications 🎛

Plus: Co-founder and CEO Atai on why copilots are becoming a default UX staple...

CV Deep Dive

Today, we’re diving into a conversation with Atai Barkai, co-founder and CEO of CopilotKit

CopilotKit is an open-source platform for building best-in-class AI copilots into any product: UI components plus elegant infrastructure for deeply integrated assistants. Check out their GitHub (12k stars).

They just released CoAgents alongside LangChain / LangGraph: infrastructure for building agent-native applications (think Replit's AI agent, or OpenAI's Canvas, but for any vertical). They are also hosting an in-app agents hackathon in collaboration with AI Tinkerers, sponsored by Google, Weights & Biases, E2B, and more, with over $10k in prizes. Nov 2nd-3rd in SF.

As the novelty of AI wears off, users and employees don't just want to chat – they want help actually getting things done. But building great Copilots from scratch is a multi-month, multi-million-dollar nightmare that so far only the most 'hardcore' engineering teams have managed to pull off (e.g. Replit, Vercel, Perplexity).

That's why, after leaving Meta's infra team, Atai founded CopilotKit: to let anyone build best-in-class Copilots, and now CoAgents, tuned to handle complex tasks while checking in with the user for feedback and course correction, on top of any of the agent orchestration platforms out there.

With CopilotKit, businesses can deploy highly effective AI copilots without requiring large engineering teams or deep AI expertise. The platform allows companies to build both internal copilots—AI tools that assist employees—and customer-facing copilots that help users navigate complex software applications. CopilotKit has already garnered significant attention with over 12,000 stars on GitHub and tens of thousands of users.

The company’s funding is not yet public, but includes top-tier backers. 

In this conversation, Atai explains why copilots are becoming a default UX staple, introduces the company's new CoAgents, and previews their upcoming hackathon on November 2nd and 3rd. The event, sponsored by Google Cloud, Weights & Biases, and others, focuses on building human-in-the-loop agents, with over $10,000 in prizes up for grabs.

Let’s dive in ⚡️

Read time: 8 mins

Our Chat with Atai 💬

Hey Atai! Introduce yourself and give us a bit of background on what led you to co-found CopilotKit.

Sure thing. I'm an 'all-around nerd', and I've been building & leading developer infrastructure my entire career. Most recently at Meta, building SDKs used across nearly every product surface at Facebook and Instagram and handling billions of operations per week. Before Meta, I was at Doximity, where I led the infra development of their flagship iOS apps and helped scale mobile engineering from a scrappy startup operation into what is now a multi-billion-dollar public company. I've been programming from a young age, and also picked up a couple of physics degrees along the way.

CopilotKit “pulled itself” out of us, after a bit of a journey through the LLM space. When GPT-3 came out, I was blown away by what computers were now able to do. My cofounder Uli and I started building an “LLM media database” that could perform intelligent content-based querying on long-form audio & video. After some uphill battles, we were processing tens of thousands of hours of audio every month – a fairly promising start.

But you know, engineers always build for engineers first - so about two years ago, all of my engineering tools started having Copilots built into them. Famously GitHub Copilot & Cursor, but eventually even my terminal had a copilot built into it (shout out to Warp). The productivity impact on my own work was dramatic, and not having an AI copilot was starting to feel increasingly archaic.

At some point, I built a Copilot for one of our internal tools, and sure enough the productivity impact for us internally was pretty dramatic. We started to think: what would it mean if these productivity boosts became universal? As a weekend project, I open-sourced our internal Copilot infrastructure, and it quickly resonated with the engineers who saw it - it clicked. We really believed in the long-term Copilot thesis, and so a little over a year ago (with a bit of a heavy heart - we had been making good progress, after all) we decided to pivot the company into CopilotKit as the core product. It's been a wild year.

By the way, I think we are continuing to see this trend play out, with engineering tools pioneering AI patterns that will go mainstream in a year or two.

How would you describe CopilotKit to the uninitiated?

It’s infrastructure for building best-in-class AI Copilots into products without requiring huge teams, long development cycles, or massive expertise.

Your audience is technical, so they're likely familiar with Cursor, Vercel's v0, and Replit's AI Agent, and of course the tech giants' AI Copilots. Some might think such products are beyond what their team could build – but they would be mistaken! With CopilotKit and our ecosystem integrations, in a few days of work you really could build a Copilot surpassing what even Google pushes out.

And to clarify, since terms change fast in this space: an AI copilot is an in-app AI assistant. The most common (but not the only) form factor is a chatbot that has access to real-time user context and can take in-app actions on behalf of users.

Who are your users today? Who is finding the most value in CopilotKit?

We’re open source and aim to be the default choice for building copilots, so we have a pretty diverse user base. We have more than 12,000 GitHub stars and tens of thousands of monthly users. Broadly speaking, our users fall into two categories: internal copilots and customer-facing copilots.

Internal use cases are like having an AI “colleague” work right alongside you, much like a pair programmer, whereas external use cases are more about helping users navigate and get things done in a complex SaaS application. It's about giving people the tools to build copilots that aren't just "drop-in" but also aren't fully bespoke; they use CopilotKit to strike the right balance between customization and rapid deployment.

Are there any customer success stories that come to mind?

I’m really excited about two groups of users. First are startups that are truly at the forefront of AI-native applications. They’re building apps where AI copilots aren’t just helpful but integral to the user experience. For example, we have companies doing AI-native education where the copilot can see all the application data and facilitate deeply interactive learning sessions—an education experience that’s completely different from anything before. These are cutting-edge products, and what they’re doing today is likely what we'll see in the mainstream in a few years.

Then we have larger companies that are early in their copilot journey, but making significant strides in transforming how customers interact with them. Imagine you’re booking a vacation—you have three kids, one needs naps, you need certain amenities. Now, instead of clicking through endless buttons, you have a conversation, and the AI helps you plan everything in a very intelligent way, considering your exact needs.

I understand you write blog posts – what topics are you interested in right now? What are you writing about?

It actually connects a lot with the topic of copilots. The important thing about copilots is the "co"—the idea that the AI works alongside a person rather than replacing them entirely. In my view, the products that have seen the strongest product-market fit are these types of co-piloted experiences, precisely because they don’t need to be perfect. The human being right there makes the AI significantly more useful.

One way to think about it is an economic Turing Test. It's not about whether an AI can fool someone into thinking it's a person. Instead, it's about whether someone would "hire" the AI to do a job autonomously. So far, what we're seeing is that AI copilots are doing 80-95% of the work, while a human stays in control of the strategic direction - a kind of CEO, if you will.

Do you think the copilot UX paradigm is just an initial way that society gets used to AI, or is this going to persist throughout the long-term adoption of AI?

You know, initially, I thought it might just be a phase. But I’ve become much more bullish on this chat paradigm lately. And by "chat," I mean both text and voice—it doesn’t matter fundamentally. What you and I are doing right now, for instance, is chat. It’s an open-ended exchange of ideas, which is what language enables. So, chat is how we all communicate when buttons aren't enough. We’re already seeing some of the most impressive AI-native apps fully embed this as a core feature—not as some sidebar pop-up, but as a central, first-class experience. 

Interestingly, we see a pattern: developers first build these experiences for other developers. It happened with GitHub Copilot, and now, just two years later, copilots are expanding to almost every aspect of our interactions with software. We think we're at the start of this next UX paradigm - one where AI copilots are deeply integrated and communicate in a human-like, open-ended way to help users achieve complex goals.

You mentioned a launch earlier. Could you talk more about what you're launching?

So we think of the copilot stack as a three-step process. We are launching step 3, CoAgents.

  1. Step one is having great AI UI components—things you can ship instantly (like a chatbot). 

  2. Step two, we call it "copilot OS," involves connecting AI intelligence to your application, making it an actor in your application that can interact with data, take actions, all while keeping the user in control.

  3. Step three is what we call CoAgents. This is for those who have AI embedded everywhere and want to build specialized, handcrafted agents for specific verticals without relying solely on LLMs. CoAgents allow these agents to become deeply embedded parts of applications, with shared state and actions, rather than isolated bots (see the sketch below). Think of it as a natural part of the app with full transparency: users can see and understand what actions the agent takes in real time. This builds trust, helps with error correction, and makes the AI much more effective.
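To make the shared-state idea concrete, here is a minimal sketch of how a React component might bind to an agent's state through CoAgents. The useCoAgent hook comes from CopilotKit's CoAgents launch materials, but treat the exact signature, the research_agent name, and the state shape as illustrative assumptions rather than a definitive API reference:

```tsx
import { useCoAgent } from "@copilotkit/react-core";

// Hypothetical shape of the state a LangGraph agent shares with the UI.
type ResearchState = {
  topic: string;
  findings: string[];
};

export function ResearchPanel() {
  // Bind this component to the agent's shared state. "research_agent" must
  // match the agent registered on the backend (an assumption for this sketch).
  const { state, setState } = useCoAgent<ResearchState>({
    name: "research_agent",
    initialState: { topic: "", findings: [] },
  });

  return (
    <div>
      {/* The user can steer the agent by editing the shared state directly... */}
      <input
        value={state.topic}
        onChange={(e) => setState({ ...state, topic: e.target.value })}
      />
      {/* ...and watch the agent's intermediate findings stream in as it runs. */}
      <ul>
        {state.findings.map((finding, i) => (
          <li key={i}>{finding}</li>
        ))}
      </ul>
    </div>
  );
}
```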

Can you take us a little bit underneath the hood of CopilotKit and tell us how it works?

Sure thing, let me break it down a bit further. The core idea behind CopilotKit is to provide a way for developers to easily integrate AI copilots into their applications—making AI interactions a native part of the application experience. To do this, we’re working with common frameworks that most developers are already familiar with, like React on the front end, which makes adoption as seamless as possible.

React applications are built around components, each of which defines its own UI and API calls. With CopilotKit, we add a third layer: copilot interactions. This means each component can now define the data it exposes to the copilot and the actions it makes available to the AI. These actions could be anything: front-end related, back-end operations, or even connections to third-party systems.

For example, if you’re working with a form or a spreadsheet component, you can add a new "copilot-readable" hook, which tells the copilot what data is available in this specific context. The copilot can then use that data to provide intelligent suggestions or take actions directly, based on what's happening in the application at that moment. Similarly, you can define backend data that's relevant for specific components, which also becomes accessible to the copilot in that context.
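As a concrete sketch, exposing a component's data with CopilotKit's useCopilotReadable hook (from @copilotkit/react-core) might look like this; the spreadsheet state here is invented for illustration:

```tsx
import { useState } from "react";
import { useCopilotReadable } from "@copilotkit/react-core";

export function Spreadsheet() {
  const [rows, setRows] = useState<string[][]>([
    ["Item", "Qty"],
    ["Apples", "4"],
  ]);

  // Tell the copilot what data exists in this component's context.
  // The description helps the model understand what the value represents.
  useCopilotReadable({
    description: "The current contents of the user's spreadsheet",
    value: rows,
  });

  return <table>{/* ...render rows, with setRows wired to cell edits... */}</table>;
}
```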

The interesting part is that the intelligence layer emerges naturally. Just as components in React locally define their UI and data, CopilotKit allows components to also define their interaction capabilities. The copilot has access to all the data and actions exposed by the different components, and because it’s embedded deeply into the application, it’s able to understand the overall context rather than just responding to isolated requests.

And then we have the concept of "skills." Skills are like specialized capabilities that the copilot might need to understand specific parts of your application. These skills can be defined separately, and then you can assign them to different parts of your application. It’s almost like giving your copilot a set of "tools" it can use, depending on where it is in the app and what it needs to do.
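The skills API itself isn't shown in this conversation, but the scoping idea can be sketched with CopilotKit's useCopilotAction hook: an action registered inside a component is only available to the copilot while that part of the app is mounted. The updateCell action below is a hypothetical example:

```tsx
import { useCopilotAction } from "@copilotkit/react-core";

// A "skill" scoped to the spreadsheet: registering it inside the spreadsheet
// component means the copilot can only call it while that UI is mounted.
export function useSpreadsheetSkill(
  setCell: (row: number, col: number, value: string) => void
) {
  useCopilotAction({
    name: "updateCell",
    description: "Update a single cell in the user's spreadsheet",
    parameters: [
      { name: "row", type: "number", description: "Row index", required: true },
      { name: "col", type: "number", description: "Column index", required: true },
      { name: "value", type: "string", description: "New cell value", required: true },
    ],
    // The copilot calls this handler when it decides to use the action.
    handler: async ({ row, col, value }) => {
      setCell(row, col, value);
    },
  });
}
```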

From a developer's perspective, the integration is surprisingly simple. We’ve designed it so that it requires just a few lines of code to make your existing React components "copilot-ready." You add hooks that define what data is shared and what actions are allowed, and suddenly you have a system where the copilot can be a true participant in your application's workflow. It can even generate suggestions autonomously—like helping users make changes similar to what they’ve done before.
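To give a rough sense of what "just a few lines" means, wiring the copilot UI into an existing React app looks something like the sketch below. Package and prop names follow CopilotKit's documentation around this time and may differ by version; the /api/copilotkit endpoint is a placeholder:

```tsx
import { CopilotKit } from "@copilotkit/react-core";
import { CopilotSidebar } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";

// Stand-in for your existing application.
function YourExistingApp() {
  return <main>{/* existing app UI */}</main>;
}

export default function App() {
  return (
    // Point the provider at your CopilotKit runtime endpoint (placeholder URL).
    <CopilotKit runtimeUrl="/api/copilotkit">
      {/* Drop-in chat UI; the copilot sees whatever components expose via hooks. */}
      <CopilotSidebar>
        <YourExistingApp />
      </CopilotSidebar>
    </CopilotKit>
  );
}
```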

What’s the long-term vision for CopilotKit—where do you see things a year from now?

Long term, we're essentially building the infrastructure for human-AI interaction. We’re exploring the optimal balance between work that needs human touch versus what an AI can handle autonomously. We want to facilitate the synthesis between these two.

In the next year, we’re focused on becoming the default choice for building copilots—basically, making CopilotKit the "Algolia of copilots." We think the future is one where copilots are embedded into most, if not all, software. It just makes sense to have that intelligence layer present, and we want to be the ones making it easy for developers to add that in an efficient, scalable way.

What is the team like at CopilotKit, and what kind of culture are you trying to build? Are you hiring?

We’re currently a team of six, soon to be ten. We’re a fully remote team, with people spread out across different locations, and with a core presence in San Francisco. We’re aiming for an "A players only" environment—everyone here is top-notch, and we’ve made incredible progress with very limited resources, which is all down to the strength of our team.

We like to keep things practical and focused. My goal is to let everyone do their best work while insulating them from startup chaos as much as possible. Of course, deadlines exist, and startups are always intense, but we’re here because we want to build something impactful—something that matters. We see this moment as one of the most significant technological transformations, and we want to be at the forefront of making it real and practical for everyone.

We have a hackathon coming up on November 2nd and 3rd, in collaboration with AI Tinkerers. It's sponsored by Google Cloud, Weights & Biases, E2B, and several other innovative startups. We're excited to have some great judges, including high-profile founders, VCs, and product leaders.

The focus of the hackathon is on human-in-the-loop agents: we want participants to create cool, innovative use cases for in-app AI and human-AI interactions. There will be over $10,000 in prizes, including hardware, credits, and gift cards.

Conclusion

To stay up to date on the latest with CopilotKit, learn more about them here.

If you would like us to ‘Deep Dive’ a founder, team or product launch, please reply to this email ([email protected]) or DM us on Twitter or LinkedIn.