
V7 Go - AI Agents for Complex Document Workflows 🎛

Plus: CEO Alberto Rizzoli on why the real ROI of AI right now lies in solving admin work...

CV Deep Dive

Today, we’re talking with Alberto Rizzoli, Co-Founder and CEO of V7.

V7 is an AI platform built to handle unstructured data at scale—everything from complex medical imagery to dense financial documents. With V7 Go, the company is focused on practical agentic automation in high-stakes verticals. Built specifically for teams in document-heavy industries such as financial services, insurance, and real estate, V7 Go helps companies deploy AI agents that automate intricate workflows like underwriting, due diligence, claims processing, and more.

Today, V7 is used by some of the world’s largest companies in healthcare, insurance, and private markets. Headquartered in London with over $50 million raised and 80 employees, the team has quietly become a category leader in AI-powered document workflows. In 2024, they launched their next major push: Go’s agentic orchestration layer, designed to act like an AI teammate that can execute multi-step tasks, collaborate across departments, and tie results back into centralized knowledge hubs. The ambition is bold: to build a workspace where AI agents interact with each other and with employees, operating not just as tool users but as intelligent collaborators across verticals.

In this conversation, Alberto shares why the real ROI of AI right now lies in administrative work, how V7 achieves grounded, high-precision outputs in compliance-heavy sectors, and why 2025 might be remembered as the year we outgrew the term “agent” entirely.

Let’s dive in ⚡️

Read time: 8 mins

Our Chat with Alberto 💬

Alberto, welcome to Cerebral Valley! First off, introduce yourself and give us a bit of background on you and V7. What led you to co-found V7?

Hey there! A bit about me: I’m a second-time co-founder in AI. My first company was called Aipoly, which I started back in 2015 during the early days of deep learning. We were building general-purpose computer vision models that could run on smartphone CPUs. The idea was that you could wave your phone around, and it would analyze the camera feed in real-time—about five frames per second. At the time, we were using the VGG-19 architecture, later ResNet34, and we had to do all this low-level optimization to make it run on something like the iPhone 5 CPU.

This was our first foray into building AI systems that could automate specific visual tasks - my co-founder Simon, who’s also my co-founder now at V7, was there as well. We’ve really seen the whole arc of the field, from those early deep learning days to where we are now.

We started V7 in 2019 to first tackle one of the biggest bottlenecks in AI: data labeling. Labeling is basically how you teach AI, and it shows up in a bunch of different forms depending on the domain. We specialize particularly in healthcare, life sciences, and some video-based workflows—especially robotic or embodied video. The document AI side of our data labeling business later turned into V7 Go as a standalone knowledge-work experience for business users.

We've become incredibly good in the scientific domains—to the point where our labeling platform is essentially a fully-fledged DICOM viewer with the same feature set you’d expect from a radiology tool, plus all the AI functionality layered on top. You can run models, triage their outputs, and apply additional annotations, either AI-assisted or semi-automated.

Then about a year ago, we launched V7 Go, an agent platform designed not just for extracting information from documents but for executing full end-to-end process flows—a natural evolution of the product for us. We started within the healthcare industry, and particularly in health insurance. In the UK, for example, half of the healthcare budget goes to care delivery, while the other half goes to administration. And interestingly, the measurable impact AI can have on people’s lives is actually greater today on the boring administrative side of healthcare than on the much cooler side, like detecting cancer in medical images.

We built this product to massively accelerate those workflows and make them accessible to non-technical users—so you don’t have to be a researcher to set up an agent that can parse insurance forms or information memorandums, and then do the actual analytical work using LLMs and any custom guidelines you give it. We’ve raised over $50 million, we’re headquartered in London, and while our team is about 80 people, roughly 70% of our customer base is in the US.

How would you describe V7 to the uninitiated developer or AI team?

We believe that, at its core, V7 is a productivity platform. You can build full production lines on top of it that take any unstructured data—whether that’s complex documents like 10-Ks and 10-Qs (common report types in the financial services space), detailed information memorandums, or messy medical data—and use AI to turn that into structured outputs. That process typically mirrors some internal workflow inside the organization. To go a bit more abstract: the underwriting process for an insurance provider is a great example. It's usually very well-documented, but each company has its own playbook—its own guidelines, rules, and nuances that a general model like ChatGPT just doesn’t know out of the box.

With V7, you can teach an AI agent to follow your specific process. You can encode those rule sets, structure, and logic directly into the system and have it execute that workflow end-to-end as if it were a member of your team. This is where V7 Go shines.

Who are your key users today? Who is finding the most value in what you're building with V7 Go? 

A simple example from the V7 Go side: the world of private finance runs on processing (very accurately) huge volumes of paperwork during acquisitions—and it starts early, during due diligence, with an information memorandum. These are long, 50 to 100-page pitch decks filled with complex data, charts, tables, and context.

You can’t just drop one into ChatGPT and expect a “Should we buy this company?” answer. This is analyst work—painstaking, detail-driven work that usually takes 5 to 10 hours per document. With V7 Go, an asset management firm can build an agent that completes the same workflow in 15 minutes, including human review. All you need to do is define the input—an information memorandum—and the reasoning steps: extract total revenue, EBITDA, industry classification, conduct deep web research, identify competitors, pull internal benchmark data, and evaluate market share. All of that usually happens manually. For any founders reading this—every time you send a pitch deck, a VC goes through this exact process. We’re just automating it with agents now.

That’s something AI can already do pretty well—triaging whether you're a fit for a fund based on stage, revenue size, category, founder experience, and so on. All of that can be codified into a thesis doc and turned into an AI agent. But things get even more interesting during active fundraising. When you’re sharing a data room with a VC, it’s often hundreds of documents—employment contracts, customer info, legal docs—and none of that is trivial to sift through. In finance, speed is everything, and agents that can process a data room, extract all the standard due diligence answers, and do it autonomously are a massive unlock for both investors and founders.

Our vision is that two years from now, every fund and every enterprise will want V7 Go internally to automate these kinds of workflows. Even for us, when we raise again, we’ll just drop our entire data room into Go, send it to investors, and they’ll be able to ask any question they want. The agent won’t just skim with RAG—it’ll follow a rigorous, methodical process to retrieve accurate answers from complex document sets without missing anything.

Which existing use-case for V7 Go has worked best? Any customer success stories you’d like to share? 

So we have two layers of mitigation for folks who are worried about the risk of adopting AI for high-stakes workflows. Mortgage lending is a great example—no one wants to deny someone a mortgage because of an AI hallucination or error. There’s a tech solution and a human solution.

On the tech side, the key is to never rely on a single LLM or a single LLM call. If you just toss a mortgage application and some guidelines into GPT-4 and ask, “Should we approve this?” it won’t go deep enough. Even with a reasoning model like o3, it’ll usually miss things. The right approach is to break the process into a chain of thought: extract every relevant piece of information from the application, and for each parameter, look through the actual underwriting guidelines and reason whether it meets the criteria. If it’s something numerical, don’t leave it to the LLM—run it in Python, which Go supports natively. The platform can dynamically switch from the LLM to a deterministic engine for things like financial calculations, so you know the math is always right.
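To make that routing idea concrete, here is a minimal, hypothetical sketch—not V7 Go's actual API—of how numeric underwriting criteria can be evaluated deterministically in Python while qualitative ones are left to an LLM:

```python
# Illustrative sketch only: all function and field names are hypothetical.
# The idea: numeric criteria are computed deterministically in code, so
# only qualitative judgments are delegated to an LLM call.

def check_debt_to_income(monthly_debt: float, monthly_income: float,
                         max_ratio: float = 0.43) -> dict:
    """Deterministic numeric check: no LLM involved, so the math is exact."""
    ratio = monthly_debt / monthly_income
    return {"criterion": "debt_to_income",
            "value": round(ratio, 4),
            "passes": ratio <= max_ratio}

def route_criterion(criterion: dict) -> dict:
    """Send numeric criteria to Python; everything else would go to an LLM."""
    if criterion["kind"] == "numeric":
        return check_debt_to_income(**criterion["inputs"])
    # Qualitative criteria (e.g. "is the employment history stable?")
    # would be answered by a model call here instead.
    return {"criterion": criterion["name"], "passes": None,
            "note": "needs LLM reasoning"}

result = route_criterion({
    "kind": "numeric",
    "name": "debt_to_income",
    "inputs": {"monthly_debt": 2100.0, "monthly_income": 6000.0},
})
print(result)  # ratio 0.35, which passes under a 0.43 cap
```

The dispatch point is the important part: the orchestration layer decides per parameter whether a deterministic function or a model call produces the answer.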

We also allow cross-checking across similar cases. So if the system sees three previous mortgage applications with nearly identical parameters that were approved, it can ground its answer using that precedent. And finally, every output in Go has to be source-grounded.

There are a few different ways to handle this technically. The most common is chunk retrieval as part of a RAG system—pulling the chunk of text that contains the answer. But that approach has limits. For example, if you're chunking a financial statement, you might abstract away a lot of the actual numbers just by summarizing them. What Go does instead is scan the entire document page by page, pinpointing the exact passage all the way down to a bounding box. We OCR every page and find the precise location of a figure or clause. So if you're pointing to a specific financial stat or legal term, you're not just referencing it—you’re clicking straight to its exact position in a long document. That’s the tech layer of mitigation.

Of course, we’ve got all the enterprise compliance stuff—SOC 2 Type II, GDPR, deploy models wherever, etc.—but the real differentiator is the human side. A lot of our customers don’t actually have in-house AI talent. They’re just getting started. And the dirty secret of this space is that most AI companies need to do a lot of professional services to help customers get going. We have a team of solutions engineers who are all ex-ML engineers. Thanks to Go’s frontend, they can build something in two hours that would normally take a month. All the primitives are there—just configure them with the right prompts. So setting up a first agent to tackle a high-value task is actually a pretty light lift. 

Over time, the customer gains confidence, takes the reins, and starts building agents themselves. Eventually, they’re using Go to delegate all the repetitive, boring internal tasks across their team.

How have you approached enterprise adoption in highly-regulated industries like finance and healthcare?  

Usually the only real requirement is that the data is in the cloud. If it’s already there, we can process it under any compliance regime, using models deployed either on their end in their private VPC, or on ours.

The other historical blocker used to be around document complexity—Excel spreadsheets with multiple tabs, for example, were traditionally a no-go. But we’ve now solved that. At this point, we’ve essentially covered the full range of gnarly document types that used to be difficult to handle.

And finally, we look at whether there are at least three people regularly doing the task in question. If there are, the ROI is basically guaranteed. The cost of licensing the platform and setting up an agent is typically paid back with a 5x return.

How are you measuring the impact that V7 is having on your customers’ AI workflows? 

It’s usually about speed. The best application areas for us are where there's an internal headcount performing work that touches the top-line revenue of a business. Take insurance as an example—premiums are influenced not just by the cost of risk but also by the cost of processing the application, analyzing that risk, and presenting it back to the customer. When you add AI to that workflow, it can drastically reduce administrative costs—sometimes by up to 10x. That allows the insurer to underwrite 10x more policies and do it with greater diligence.

Accuracy can be a bit of a trap. AI, on average, is about as accurate as humans, but it still makes mistakes. So instead, we focus on speedups and always set the expectation that some level of human review is still necessary—especially for high-value or sensitive documents. The mindset shift is: the AI does all the heavy lifting before you even get into the office, and now your job is to review its work in one hour instead of spending 10 hours doing it from scratch.

The good news is that AI is now very good at spotting ambiguity, edge cases, or anything that looks out of distribution. So most companies already know how long a task typically takes—say, processing an offer memorandum—and we just measure how much time is saved when an agent does 80–95% of the work ahead of time.

Could you share a little bit about how V7 Go actually works under the hood? 

We have an internal technology called Index Knowledge, which allows us to treat any unstructured input as if it were a small database. Within this database, we lay out every component of the file—metadata, graphs (which we extract as images), and the structure of the content itself. For instance, if there's a long series of paragraphs followed by a dense table of numbers, those sections need to be indexed and chunked differently. This architecture removes a lot of the technical risk for any company trying to automate a document-heavy workflow. And once structured this way, that mini-database becomes the ideal playground for an LLM to query and retrieve information with high reliability.

When it comes to LLMs, we're completely model-agnostic. We support all the major providers—Gemini, Claude, GPT—and handle their quirks and capabilities under the hood. We allow customers to choose the model that fits their needs, and we can even recommend the best-performing model for specific use cases. For instance, Gemini 2.5 might outperform Claude 3.7 on certain types of financial analysis. Another really important feature is our table system for V7 Go. Instead of interacting with AI through a one-to-one chat interface, everything is processed in the form of tables. Think of each row as an entity, like a lease or an asset, and V7 Go processes all files tied to that entity. This is critical for use cases like hedge funds performing public market research—not just analyzing one document, but thousands at scale. The AI can "carpet analyze" a huge corpus and zero in on the rows and excerpts that truly matter.
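The table mental model described above can be sketched as follows—a hypothetical toy, not V7 Go's API, with a fake lookup standing in for the per-cell LLM extraction call:

```python
# Illustrative sketch: each row is an entity (e.g. a lease), each column
# is an AI-computed field, and every (row, column) cell is processed
# independently—which is what lets this scale to thousands of documents.

def extract_field(document_text: str, field: str) -> str:
    """Stand-in for an LLM call that extracts one field from one document."""
    # A real system would prompt a model; here we fake it with a lookup.
    fake = {"tenant": "Acme Ltd", "annual_rent": "$120,000"}
    return fake.get(field, "unknown")

rows = [
    {"entity": "lease_001", "files": ["lease_001.pdf"]},
    {"entity": "lease_002", "files": ["lease_002.pdf"]},
]
columns = ["tenant", "annual_rent"]

table = [
    {"entity": row["entity"],
     **{col: extract_field(" ".join(row["files"]), col) for col in columns}}
    for row in rows
]
print(table[0])  # one structured record per entity
```

Since each cell is an independent unit of work, the grid can be filled in parallel across an arbitrarily large corpus, rather than through one sequential chat.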

How do you see V7 Go evolving over the next 6-12 months? Any specific developments that your users/customers should be excited about? 

By mid-year, we’ll have figured out everything agents can actually do—we might even look back on 2025 as the year we discovered whatever comes after agents. Right now, we’re working on enabling agents to work with one another, passing tasks back and forth. One system we’re building is what we call the Agent Concierge: a centralized AI that represents all the work an individual is doing and can delegate it to specialist agents. These concierges are tuned to each employee and their workload. Eventually, these agents will even communicate with each other and assist on the human collaboration side—like suggesting you talk to someone else who's worked on something similar.

Another big component of our work this year is something we call Knowledge Hubs. We’re working with companies that have massive internal datasets—sometimes large enough to train their own LLMs—and we’re building systems to index and restructure that data to be far more human-readable. We’ll be releasing this in May. The idea is similar to a CRM like Salesforce, where people, companies, and deals are loosely tied together. But today, the data’s indexed pretty crudely—usually based on a domain or email address. Instead, if you and I mentioned something insightful about Llama in this conversation, that insight should live in the file that represents that entity. 

We’re building a system that restructures this kind of knowledge in a way that’s far more elegant and makes sense for agents to work within. So when an agent is processing a lease, a mortgage, or an insurance app, it knows to extract key info and enrich related files—just like a diligent human would in a CRM, but never does. This becomes a kind of “super brain” for internal AI agents.

How would you describe the culture at V7? Are you hiring, and what do you look for in prospective team members joining? 

We're a team of 80, and I think what makes working at V7 particularly fun is that everyone is very technical—even our account executives and salespeople genuinely love the underlying technology. We’ve built a culture that prioritizes intellectual honesty, which means breaking things down to first principles, especially when trying to understand the real problems our customers face. We approach solutions in a fundamentally technical way.

Another part that makes it exciting is how often we’re willing to throw away our work. Every six months, we challenge ourselves to build the product that would kill our own. It’s a way to keep the team fresh and hungry—like hitting refresh on the startup mentality. With new developments like computer-use agents popping up, even our own automation workflows are under pressure. But instead of fearing that, we see it as an opportunity to reinvent. We’re proud to be small and mighty—punching well above our weight in terms of both revenue and product impact. We aim to hire insanely smart people who want to do meaningful work, pay them well—especially by European standards—and lean heavily on AI internally so we can stay lean while moving fast.

We are hiring folks on our go-to-market team—so if you’re someone who wants to sell great AI products that actually work, and agents that actually deliver real value, come talk to me. We're especially looking for Solutions Engineers, which is honestly one of the most fun jobs at V7. You're constantly working on new projects, but not so many in parallel that it becomes chaotic. Plus, you get to use a killer product, dogfood it daily, and directly influence improvements across the board!

Conclusion

Stay up to date on the latest with V7; learn more about them here.

Read our past few Deep Dives below:

If you would like us to ‘Deep Dive’ a founder, team or product launch, please reply to this email ([email protected]) or DM us on Twitter or LinkedIn.