Xpress AI - Your Enterprise Agents OS 🌐

Plus: CEO Eduardo Gonzalez on his vision for how AI agents can fundamentally transform enterprise operations...

CV Deep Dive

Today, we’re talking with Eduardo Gonzalez, Founder and CEO of Xpress AI.

Xpress AI is an operating system designed to manage AI agents at enterprise scale, enabling a digital workforce. Co-founded by Eduardo, Xpress AI provides the tooling to run, monitor, and manage numerous agents, offering features like budget controls, shared knowledge bases for collaborative learning, and performance tracking. It also incorporates human approval workflows to establish accountability in AI-driven tasks. Xpress AI aims to empower organizations to expand their teams with AI, automating workflows and improving efficiency.

Xpress AI is currently seeing adoption in sectors like financial institutions, particularly due to its ability to work with local models and maintain stringent data privacy. Its flexible deployment options, including on-premise solutions, cater to enterprises with strict security and compliance needs.

In this conversation, Eduardo shares his background, the genesis of Xpress AI, the technical challenges and breakthroughs in developing autonomous agents, and his vision for how AI agents can fundamentally transform enterprise operations.

Let’s dive in ⚡️

Read time: 8 mins

Our Chat with Eduardo 💬

Eduardo, welcome to Cerebral Valley! First off, introduce yourself and give us a bit of background on you and Xpress AI. What led you to co-found Xpress AI?

My name is Eduardo Gonzalez and I'm the CEO and co-founder of Xpress AI. Before starting Xpress AI, my background was mainly in enterprise software and AI infrastructure. Just before founding Xpress AI, I worked at a Y Combinator startup from the 2016 batch called Skymind, where we were doing deep learning for enterprise—specifically taking the best deep learning research at the time and making it run on large Hadoop clusters and similar systems for various companies.

Before that, I worked in Japan for a Japanese system integrator called Japan Business Systems. I started there as a programmer, eventually became a manager, and led product development, where we built enterprise iPhone applications and other tools.

My entire career has been in software, enterprise, and machine learning. I wouldn’t say machine learning was just a hobby—I’ve always been interested in it. My goal has always been to figure out how we can take machine learning and apply it to business so people don’t have to work so hard. That’s been my focus the entire time.

How would you describe Xpress AI to the uninitiated developer or AI team?

Xpress AI is where you go to put AI to work for you. We had done a bit of automation with deep learning at my previous job, but when I decided to start Xpress AI, I wanted that to be a key focus—how do we make deep learning easier for people so they can actually use it to automate work? That’s why we’ve worked on visual programming tools and also leveraged LLMs, combining those two things. We were really early in creating agents that could actually do things.

Today, if you look at the landscape, agents are clearly the next big thing. We’ve been working on them for a while. We recently released Xaibo, our new agent framework. It’s not our first agent framework—it’s the latest iteration, as LLM capabilities have improved. We’ve also expanded our scope. We feel none of the current libraries fully meet the need, so we’ve been building the Xpress AI OS—an operating system for agents to run fully autonomously, and even operate entirely on-premise, behind the enterprise firewall.

It’s not just a simple “here’s an event and then an agent responds.” The agent has idle cycles, managed context windows, and all the other features you need to create an agent that can run 24/7, improve over time, and become a real asset to your organization.

Tell us about your key users today. Who would you say is finding the most value in what you're building with Xpress AI? 

A lot of our customers right now are financial institutions. I wouldn’t say we’re limited to that sector, but they do have use cases that are a really good fit for us. One interesting aspect of Xpress AI OS is that we don’t just give you access to proprietary cloud models like OpenAI and Anthropic—we also have extensive tooling for local models.

Our customers have been pretty adamant about where their data goes. They can’t have data leave their data centers under any circumstances, especially with the kind of financial transaction data they’re dealing with and the privacy laws they’re required to comply with. Because we support local models and have built-in auditing, they’re kind of an ideal customer profile for us.

We’re targeting medium to large enterprises—especially those that care deeply about privacy and are looking to move toward being AI-native. The Duolingo CEO talked about this idea of the AI-native company, and I’d say he’s onto something. I’d love for the first single-person unicorn to be one of our customers. That’s still a goal. If you're a builder, we have tools to help you create an entire workforce with Xpress AI OS so you can operate solo.

At the same time, if you're a larger enterprise with compliance and privacy requirements, the platform is just as relevant. It helps you grow your team, even in challenging times. You can temporarily scale with AI to handle surges or accelerate your business by augmenting your team with the latest in AI.

You describe Xpress AI OS as an operating system for managing agents at an enterprise scale. Can you unpack that for us? What are the core capabilities you provide to help a company build and manage a digital workforce? 

Xpress AI OS is an operating system that allows you to manage agents at the scale of an entire enterprise—not just individual users. If you're looking to build a digital workforce, nothing out there really helps manage that at scale. Xpress AI provides the tooling you need to run, monitor, and manage agents across an organization.

That includes giving your agents budgets to make sure they’re staying within resource limits, and assigning them workspaces with knowledge bases. Right now, when a single person uses ChatGPT, all that work and knowledge stays with them. It doesn’t become institutional knowledge, and other agents can’t benefit from it. With Xpress AI, you have shared repositories of knowledge that agents can access, so they learn together and become part of a collective intelligence.

We also help ensure agents stay effective over time by tracking performance. And there's support for human-in-the-loop workflows—where the AI does part of the job and a human approves it—so there's clear accountability and ownership.

If you’re serious about expanding your team with AI, Xpress AI gives you the essential tools to do that.
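To make the budget and shared-knowledge ideas above a little more concrete, here is a minimal sketch in Python. The class and field names are purely hypothetical illustrations of the concept, not the Xpress AI OS API.

```python
from dataclasses import dataclass, field


@dataclass
class AgentBudget:
    """Illustrative per-agent resource limits (hypothetical, not the Xpress AI OS API)."""
    monthly_usd_limit: float
    spent_usd: float = 0.0

    def charge(self, cost_usd: float) -> None:
        # Stop the agent from spending past its limit until a human raises it.
        if self.spent_usd + cost_usd > self.monthly_usd_limit:
            raise RuntimeError("Budget exceeded: pausing agent until a human raises the limit")
        self.spent_usd += cost_usd


@dataclass
class SharedKnowledgeBase:
    """A workspace-level store so what one agent learns is available to the others."""
    notes: list[str] = field(default_factory=list)

    def add(self, note: str) -> None:
        self.notes.append(note)

    def search(self, query: str) -> list[str]:
        # Naive keyword match; a real system would use embeddings or full-text search.
        return [n for n in self.notes if query.lower() in n.lower()]


# Usage: every model call is charged against the agent's budget, and results worth
# keeping go into the workspace knowledge base where other agents can find them.
budget = AgentBudget(monthly_usd_limit=50.0)
kb = SharedKnowledgeBase()
budget.charge(0.12)
kb.add("Customer onboarding checklist requires two forms of ID.")
print(kb.search("onboarding"))
```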

Walk us through Xpress AI’s platform. Which use-cases should new customers experiment with first, and how easy is it for them to get started? 

The fastest way to get started is through the Xpress AI Cloud. You can sign up and immediately deploy turnkey agents from our templates to solve common problems. These are designed to provide value from day one.

For example, you can deploy our personal assistant, Harmony, to manage your Google calendars and tasks. You could also use Lexi, an agent that can read and answer questions about your PDFs, for things like HR policies. We even have a template for a Slack Translator to bridge language gaps in your team channels. Another popular use case is an agent that joins your meetings, summarizes the transcript, extracts action items, posts them to Slack, and follows up on them. These are fantastic starting points that solve real problems immediately.

But the real power comes when you expand these simple agents into more sophisticated, deeply integrated members of your team. That’s where our visual programming interface, Xircuits, comes in. You can start with a template and then use Xircuits to connect it to your company's internal APIs, legacy systems, or other tools.

We made a conscious decision to design Xircuits as a low-level, highly flexible layer that sits just above programming itself. The reason is simple: we never want our users to feel limited by the tool. As LLMs become more capable, you can move from defining granular steps to orchestrating high-level business processes. Our platform is built to handle both. And to make this powerful system accessible, we also built the Xircuits Assistant—an AI that can generate the components you need just by asking.
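For a feel of what a low-level layer "just above programming" means in practice, here is a small custom component roughly following the pattern used in the open-source Xircuits library. Treat the imports and decorator as an approximation of that pattern, and the REST endpoint as a hypothetical internal API.

```python
import requests
from xai_components.base import Component, InArg, OutArg, xai_component


@xai_component
class FetchCustomerRecord(Component):
    """Looks up a customer in a (hypothetical) internal API so downstream
    components, or an agent, can work with the result."""
    customer_id: InArg[str]
    record: OutArg[dict]

    def execute(self, ctx) -> None:
        # ctx is the shared execution context passed between components in a workflow.
        resp = requests.get(
            f"https://internal.example.com/customers/{self.customer_id.value}"
        )
        resp.raise_for_status()
        self.record.value = resp.json()
```

A component like this can then be wired visually into a larger flow, or generated on request by the Xircuits Assistant, without the user ever being boxed in by the UI.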

And for the enterprises we mentioned earlier with strict compliance or privacy needs, we have the Xpress AI OS. It’s a full operating system you can deploy on a virtual machine or a spare desktop, giving you the power to run everything fully on-premise, ensuring your data never leaves your control.

This approach allows you to start with an immediate win from a simple template and then build out a truly autonomous digital workforce that is tailored to your exact needs. It’s a journey from a quick fix to a fundamental transformation.

How are you measuring the impact and/or results that you’re creating for your key customers? What are you most heavily focused on, metrics-wise?

When we're targeting workflow automation, usually the first thing we want to determine is the average time it takes a person to do the task we're now trying to automate with AI. Let's say it's an hour or two. Then we look at the size of the backlog. Generally, when a customer comes in, how long are they waiting for this process to finish? For example, onboarding is a pretty good one. There are many cases where to onboard a customer, especially for financial processes, various eKYC steps need to be done, including taking screenshots or scans of different documents and then keying that information in.

We usually take a workflow like that and see how much time the customer is waiting and how much time is actually being spent on it. Once it's automated, you can do the math to see how much time customer support is now spending on the exceptions the AI couldn't handle, which is ideally zero but rarely is. Then, how much less time are customers waiting? How much time customer service saves is a cost-center type of calculation. But how long our customers are waiting is more of a profit-center consideration, and that has a much bigger ROI.

We look at the difference between those two things. For example, we might have reduced customer service's workload by 95% because now, instead of processing 100 things a day, they only need to handle five exceptions. And our customers are now getting a response and being onboarded, with their account actually created, within minutes instead of waiting days. Consequently, customers are much happier, and more are signing up.
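As a back-of-the-envelope version of that calculation, here is the arithmetic in Python. The task volume, exception count, and "hour or two" per task come from the example above; the specific wait times are illustrative assumptions, not real customer data.

```python
# Cost-center view: hours of manual handling per day, before vs. after automation.
tasks_per_day = 100
hours_per_task = 1.5                                      # "an hour or two" per onboarding case
manual_hours = tasks_per_day * hours_per_task             # 150 hours/day across the team
exceptions_per_day = 5                                    # cases the AI could not finish
automated_hours = exceptions_per_day * hours_per_task     # 7.5 hours/day of human work left
workload_reduction = 1 - automated_hours / manual_hours   # 0.95, i.e. the 95% figure

# Profit-center view: how long the customer waits to be onboarded (illustrative numbers).
wait_before_days = 3
wait_after_minutes = 15

print(f"Workload reduced by {workload_reduction:.0%}")
print(f"Customer wait: {wait_before_days} days -> {wait_after_minutes} minutes")
```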

Agents are still a novel concept for many enterprises, which can feel risky for them. How do you bridge that gap? What's your strategy for getting them over the line from curiosity to adoption? 

When we approach a large enterprise, we know that “agents” can sound a bit abstract or even risky, especially if the company isn’t AI-native. So we usually start with a very familiar concept for them: automation.

Most enterprises we talk to are already exploring or using RPA (Robotic Process Automation) tools, and while those tools claim to incorporate AI, what they often find is that the AI piece is minimal—usually just some OCR—and the rest is rigid, manual programming in a drag-and-drop interface. So we position agents as the next evolution of automation.

With agents, the pitch is simple: you’re not scripting every step. You give the agent the tools it needs and the goal—and it intelligently figures out how to get from A to B. For example, let’s say you're onboarding a new customer. Instead of a CS rep spending hours pulling data from documents and feeding it into systems, an Xpress AI agent can do that autonomously and contextually. And we can often get that kind of solution up and running in weeks, not months.
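To contrast that with scripted RPA, here is a deliberately minimal sketch of the "goal plus tools" loop. Everything in it is hypothetical: the `call_llm` helper stands in for whatever model and tool-calling API a real deployment would use, and the tool stubs stand in for real integrations.

```python
def call_llm(goal: str, history: list[dict], tools: dict) -> dict:
    """Placeholder for a real chat-completion call with tool calling.
    It would return either {"tool": name, "args": {...}} or {"done": summary}."""
    raise NotImplementedError


def run_agent(goal: str, tools: dict, max_steps: int = 20) -> str:
    history: list[dict] = []
    for _ in range(max_steps):                 # a step budget instead of scripting every step
        decision = call_llm(goal, history, tools)
        if "done" in decision:
            return decision["done"]
        # The agent, not a script, decides which tool to use next and with what arguments.
        result = tools[decision["tool"]](**decision["args"])
        history.append({"tool": decision["tool"], "result": result})
    return "Escalated to a human: step limit reached"


# Hypothetical tool stubs for the onboarding scenario.
tools = {
    "extract_document_fields": lambda file_id: {"name": "...", "dob": "..."},
    "create_account": lambda **fields: {"status": "created"},
}
```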

That’s the “quick win” strategy—deliver measurable impact right away. Once that’s in place, it’s much easier for internal stakeholders to justify broader adoption: “If agents can save us X hours and Y dollars on onboarding, imagine what they can do for IT ticketing, HR inquiries, internal data retrieval, and more.”

Eventually, it opens the door to much more ambitious transformation—like giving every employee their own Xpress AI-powered assistant that knows your org’s systems, policies, and workflows. It becomes a “land and expand” model that gradually builds a foundation for becoming an AI-native enterprise.

So to answer your question: we lower the barrier by starting with automation, proving value fast, and then expanding into deeper use cases. From there, the results speak for themselves.

Speaking of integration, the Model Context Protocol (MCP) has generated a lot of buzz. What’s your perspective on MCP? Do you see it as a critical piece of infrastructure for bringing AI agents into the mainstream?

MCP is definitely a meaningful development—and to be honest, my feelings about it have evolved quite a bit. I was at one of the first hackathons right after Anthropic introduced it, and at that time, I wasn’t totally sold. It felt like there were still a lot of gaps, especially around authentication, auditing, and enterprise-grade concerns.

But now? It’s shaping up to be a real infrastructure shift—similar to what SOAP was for web services back in the early 2000s. Back then, SOAP enabled systems to talk to each other in a standardized way. Eventually, that evolved into JSON, REST APIs, etc. MCP is doing the same thing, but for AI agents.

So yes, I do think MCP is on track to become the common protocol for agent interoperability. If you're an enterprise software vendor and your platform supports MCP, it signals you’re agent-ready—which opens the door to AI-native automation and intelligent integrations with minimal effort.

That said, just having MCP support doesn’t mean the job is done. You still need governance layers—things like audit logs, agent permissions, and oversight of what tools agents accessed, when, and why. But the existence of a standard like MCP means we're entering a phase where plug-and-play AI automation is viable, and that’s a big deal for enterprise adoption.

We were early in integrating tools into LLMs, and to see that concept now crystallizing into an open standard is super validating—and exciting for what comes next.
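For readers who haven't looked at the protocol, the interoperability Eduardo describes comes down to a small set of standardized JSON-RPC messages. The sketch below shows roughly what a tool invocation looks like, with an invented tool name and arguments, plus the kind of audit record a governance layer might attach (the audit fields are hypothetical).

```python
import json
from datetime import datetime, timezone

# Roughly the shape of an MCP tool call: JSON-RPC 2.0 with a "tools/call" method.
# The tool name and arguments here are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "lookup_invoice",
        "arguments": {"invoice_id": "INV-1093"},
    },
}

# A governance layer would wrap every call with a record like this, so you can
# answer "which agent used which tool, when, and why".
audit_entry = {
    "agent": "harmony-assistant",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tool": request["params"]["name"],
    "arguments": request["params"]["arguments"],
    "approved_by": None,  # filled in when a human-in-the-loop step applies
}
print(json.dumps(audit_entry, indent=2))
```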


Looking ahead, what are the key priorities for Xpress AI for the remainder of the year? What can your customers and the community expect to see from you on the product front? 

Our main focus right now is expanding our library of pre-built Bots in our Agent Store. We refer to them as 'Bots' and give them their own identities (such as HAF-82, the frontend developer bot) so they have distinct roles. We want our customers to have as many turnkey agents as possible that are capable from day one, so we're really focused on that.

The other piece is the OS. We are going to release a community edition of the Xpress AI OS, and I'd say that's going to be an interesting way for hobbyists and home developers to really push the limits of what agents can do completely autonomously in their own lives.

I'd say that will help build the community toward the idea that agents don't have to be purely reactive: you received an email, now do something; I talked to you in this chat window, now do something, and then they go away. We're trying to build the platform that lets an agent say, "Hey, I noticed there's a bug in this repository. Do you want me to fix it?" And I can say, "Yeah, please, that would be great." That kind of loop, where the agent can do that without you spending a trillion tokens having it look at your repository all the time.

It intelligently keeps its budget in check, finds things to do, and becomes a proactive agent and a real collaborator with humans. We want the OS to be the framework that makes that happen.

What has been the hardest technical challenge around building Xpress AI into the platform it is today? 

It’s been far from a linear journey. One of the hardest technical challenges we’ve faced at Xpress AI has been getting agents to be truly autonomous, especially in real enterprise workflows. Early on, we realized that context management was a huge bottleneck because large language models tend to degrade in performance over long context windows—a problem Microsoft even highlighted in a recent paper. To address this, we developed our Xaibo framework, which efficiently manages memory by keeping the LLM’s working memory light while offloading long-term memory to persistent storage. This allows agents to operate effectively over extended periods without losing track or degrading in quality.
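The idea of keeping the model's working memory light while pushing older material into persistent storage can be sketched very simply. This is an illustration of the general technique, not the internals of Xaibo; the summarizer and the storage backend are placeholders.

```python
from collections import deque


class ManagedContext:
    """Keeps only the most recent exchanges in the prompt; older ones are
    summarized and archived so they can be retrieved later instead of
    bloating the context window."""

    def __init__(self, archive, summarize, max_recent: int = 20):
        self.recent = deque(maxlen=max_recent)   # working memory kept deliberately small
        self.archive = archive                   # persistent store, e.g. a database or vector index
        self.summarize = summarize               # e.g. an LLM call that compresses old messages

    def add(self, message: str) -> None:
        if len(self.recent) == self.recent.maxlen:
            oldest = self.recent[0]              # about to be evicted from working memory
            self.archive.append(self.summarize(oldest))
        self.recent.append(message)

    def build_prompt(self, query: str) -> str:
        # Relevant archived summaries could be retrieved here (e.g. via search) and
        # prepended, so long-term memory is available without a huge prompt.
        return "\n".join(self.recent) + "\n" + query


# Minimal usage with placeholder components.
ctx = ManagedContext(archive=[], summarize=lambda m: m[:80], max_recent=3)
for i in range(5):
    ctx.add(f"message {i}")
print(ctx.build_prompt("What did we decide earlier?"))
```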

Another significant challenge was deciding how much to rely on visual programming. We opted for a low-level, highly flexible visual layer that sits close to actual programming, ensuring users are never limited by the UI but still have transparency and control. As the capabilities of LLMs have improved, we’ve shifted from requiring detailed step-by-step instructions toward higher-level business process orchestration, where agents can figure out the “how” on their own based on a described workflow.

Unlike traditional RPA, which requires strict programming of each step, agentic automation is inherently non-deterministic, meaning the agent decides how to accomplish goals dynamically. To make this reliable, we had to build robust tools and environments that provide the right balance of autonomy, oversight, and access to tools, while also ensuring failover mechanisms, observability, and security. Looking back, I’m most proud of how we’ve evolved from simply wiring LLMs into workflows to creating a foundation for truly autonomous enterprise systems capable of learning, remembering, and orchestrating complex processes across departments with minimal manual intervention. This is what truly sets Xpress AI apart and makes it powerful.

As we wrap up, what’s the core message you want to leave with our readers? What is the ultimate vision for Xpress AI and the future of the AI-native enterprise? 

We're aiming for Xpress AI to be the foundation that the next generation of intelligent enterprises actually uses. Just as a lot of companies built their stack on top of Microsoft Windows or Linux, you should think of Xpress AI and Xpress AI OS as the foundation for creating a whole digital workforce. So if you're curious about it, if you've heard the term "AI native" and want to know what that would be like, or if you want to embark on that journey, please reach out to us. We're happy to help some of our initial customers get those kinds of case studies out there. We want to create these success stories because we know AI can be more than just a copilot.

It's a waste of the AI's capabilities and of human time to have to babysit an AI. Instead, give it the tools it needs so it can work on its own and actually collaborate with you, taking a load off of you instead of adding to it, as is often the case today. So if you're interested in that and want to give it a try, please talk to us. As long as we have capacity, we're happy to help customers get on that journey, get started, and begin talking about their success in harnessing AI and making it truly productive within the community.

Lastly, tell us a bit about the team at Xpress AI. How would you describe your culture, and are you hiring? What do you look for in prospective team members joining Xpress AI? 

The Xpress AI team includes a few people I worked with at Skymind, as well as others. Culturally, Xpress AI is a remote and very global company. We have people around the world, in Germany, San Francisco, Japan, and Malaysia, among other places. We're not really limited by geography; it's mainly about the talent. What we're looking for in our engineers is people who understand AI very deeply, not just people who have made a GitHub repository work, but people who could actually implement the algorithm themselves. We need that level of understanding to make real progress on this stuff. We don't want to just use a particular configuration to make something happen.

We need to make that algorithm general and automatable in real time, which is different from the usual IPython Notebook approach where you click and maybe finagle the data a bit. We need to automate that entire process in a way that works. So, people who have worked with machine learning fundamentals and automating those kinds of things are the engineers we are really interested in. Enterprise sales is a main focus of hiring right now, as is customer success. I'd say the main challenge with Xpress AI right now is educating customers that this is actually doable and really achievable with these tools. People with strong engineering understanding who can explain that in a way that customers understand are the hardest to find, and that's one of our main focuses.

Conclusion

Stay up to date on the latest with Xpress AI by following them here.

Read our past few Deep Dives below:

If you would like us to ‘Deep Dive’ a founder, team or product launch, please reply to this email or DM us on Twitter or LinkedIn.