V7 Go - AI Agents for Complex Document Workflows
Plus: CEO Alberto Rizzoli on why the real ROI of AI right now lies in solving admin work...

CV Deep Dive
Today, we're talking with Alberto Rizzoli, Co-Founder and CEO of V7.
V7 is an AI platform built to handle unstructured data at scale - everything from complex medical imagery to dense financial documents. With V7 Go, the company is focused on practical agentic automation in high-stakes verticals. Built specifically for teams in document-heavy industries - think Financial Services, Insurance, and Real Estate - V7 Go is helping companies deploy AI agents that automate intricate workflows like underwriting, due diligence, claims processing, and more.
Today, V7 is used by some of the world's largest companies in healthcare, insurance, and private markets. Headquartered in London with over $50 million raised and 80 employees, the team has quietly become a category leader in AI-powered document workflows. In 2024, they launched their next major push: Go's agentic orchestration layer, designed to act like an AI teammate that can execute multi-step tasks, collaborate across departments, and tie results back into centralized knowledge hubs. The ambition is bold: to build a workspace where AI agents interact with each other and with employees, operating not just as tool users but as intelligent collaborators across verticals.
In this conversation, Alberto shares why the real ROI of AI right now lies in administrative work, how V7 achieves grounded, high-precision outputs in compliance-heavy sectors, and why 2025 might be remembered as the year we outgrew the term "agent" entirely.
Let's dive in!
Read time: 8 mins
Our Chat with Alberto
Alberto, welcome to Cerebral Valley! First off, introduce yourself and give us a bit of background on you and V7. What led you to co-found V7?
Hey there! A bit about me: I'm a second-time co-founder in AI. My first company was called Aipoly, which I started back in 2015 during the early days of deep learning. We were building general-purpose computer vision models that could run on smartphone CPUs. The idea was that you could wave your phone around, and it would analyze the camera feed in real time - about five frames per second. At the time, we were using the VGG-19 architecture, later ResNet-34, and we had to do all this low-level optimization to make it run on something like the iPhone 5 CPU.
This was our first foray into building AI systems that could automate specific visual tasks - Simon, my co-founder then and my co-founder now at V7, was there as well. We've really seen the whole arc of the field, from those early deep learning days to where we are now.
We started V7 in 2019 to first tackle one of the biggest bottlenecks in AI: data labeling. Labeling is basically how you teach AI, and it shows up in a bunch of different forms depending on the domain. We specialize particularly in healthcare, life sciences, and some video-based workflows - especially robotic or embodied video. The document AI side of our data labeling business later turned into V7 Go as a standalone knowledge-work experience for business users.
We've become incredibly good in the scientific domains - to the point where our labeling platform is essentially a fully-fledged DICOM viewer with the same feature set you'd expect from a radiology tool, plus all the AI functionality layered on top. You can run models, triage their outputs, and apply additional annotations, either AI-assisted or semi-automated.
Then about a year ago, we launched V7 Go - an agents platform designed not just for extracting information from documents, but for executing full end-to-end process flows. It was a natural evolution of the product for us, especially starting within the healthcare industry, and particularly in health insurance. In the UK, for example, half of the healthcare budget goes to care delivery, but the other half goes to administration. And interestingly, the measurable impact AI can have on people's lives is actually greater today on the boring administrative side of healthcare than on the much cooler side, like detecting cancer in medical images.
We built this product to massively accelerate those workflows and make them accessible to non-technical users - so you don't have to be a researcher to set up an agent that can parse insurance forms or information memorandums, and then do the actual analytical work using LLMs and any custom guidelines you give it. We've raised over $50 million, we're headquartered in London, and while our team is about 80 people, roughly 70% of our customer base is in the US.
How would you describe V7 to the uninitiated developer or AI team?
We believe that, at its core, V7 is a productivity platform. You can build full production lines on top of it that take any unstructured data - whether that's complex documents like 10-Ks and 10-Qs (common report types in the financial services space), detailed information memorandums, or messy medical data - and use AI to turn that into structured outputs. That process typically mirrors some internal workflow inside the organization. The underwriting process for an insurance provider is a great example: it's usually very well-documented, but each company has its own playbook - its own guidelines, rules, and nuances that a general model like ChatGPT just doesn't know out of the box.
With V7, you can teach an AI agent to follow your specific process. You can encode those rule sets, structure, and logic directly into the system and have it execute that workflow end-to-end as if it were a member of your team. This is where V7 Go shines.
Who are your key users today? Who is finding the most value in what you're building with V7 Go?
A simple example from the V7 Go side: the world of private finance runs on processing huge volumes of paperwork, very accurately, during acquisitions - and it starts early, during due diligence, with an information memorandum. These are long, 50 to 100-page pitch decks filled with complex data, charts, tables, and context.
You can't just drop one into ChatGPT and expect a "Should we buy this company?" answer. This is analyst work - painstaking, detail-driven work that usually takes 5 to 10 hours per document. With V7 Go, an asset management firm can build an agent that completes the same workflow in 15 minutes, including human review. All you need to do is define the input - an information memorandum - and the reasoning steps: extract total revenue, EBITDA, and industry classification, conduct deep web research, identify competitors, pull internal benchmark data, and evaluate market share. All of that usually happens manually. For any founders reading this - every time you send a pitch deck, a VC goes through this exact process. We're just automating it with agents now.
That's something AI can already do pretty well - triaging whether you're a fit for a fund based on stage, revenue size, category, founder experience, and so on. All of that can be codified into a thesis doc and turned into an AI agent. But things get even more interesting during active fundraising. When you're sharing a data room with a VC, it's often hundreds of documents - employment contracts, customer info, legal docs - and none of that is trivial to sift through. In finance, speed is everything, and agents that can process a data room, extract all the standard due diligence answers, and do it autonomously are a massive unlock for both investors and founders.
Our vision is that two years from now, every fund and every enterprise will want V7 Go internally to automate these kinds of workflows. Even for us, when we raise again, we'll just drop our entire data room into Go, send it to investors, and they'll be able to ask any question they want. The agent won't just skim with RAG - it'll follow a rigorous, methodical process to retrieve accurate answers from complex document sets without missing anything.
Which existing use-case for V7 Go has worked best? Any customer success stories youâd like to share?
So we have two layers of mitigation for folks who are worried about the risk of adopting AI for high-stakes workflows. Mortgage lending is a great example - no one wants to deny someone a mortgage because of an AI hallucination or error. There's a tech solution and a human solution.
On the tech side, the key is to never rely on a single LLM or a single LLM call. If you just toss a mortgage application and some guidelines into GPT-4 and ask, "Should we approve this?" it won't go deep enough. Even with a reasoning model like o3, it'll usually miss things. The right approach is to break the process into a chain of thought: extract every relevant piece of information from the application, and for each parameter, look through the actual underwriting guidelines and reason whether it meets the criteria. If it's something numerical, don't leave it to the LLM - run it in Python, which Go supports natively. The platform can dynamically switch from LLM to a deterministic engine for things like financial calculations, so you know the math is always right.
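To make that LLM-to-deterministic handoff concrete, here is a minimal sketch in plain Python. The field names and thresholds are invented for illustration and are not V7 Go's actual schema; the point is simply that the extracted numbers flow into ordinary code rather than back into the model.

```python
from dataclasses import dataclass

# Hypothetical example: the LLM extracts these fields from a mortgage
# application, then deterministic Python performs the arithmetic checks.

@dataclass
class Application:
    monthly_income: float
    monthly_debt: float
    loan_amount: float
    property_value: float

def debt_to_income(app: Application) -> float:
    # Plain arithmetic - never delegated to the LLM.
    return app.monthly_debt / app.monthly_income

def loan_to_value(app: Application) -> float:
    return app.loan_amount / app.property_value

def numeric_checks(app: Application) -> dict:
    # Each check is a hard pass/fail against an (illustrative) guideline
    # threshold; the LLM only extracts the inputs and reasons about results.
    return {
        "dti_ok": debt_to_income(app) <= 0.43,
        "ltv_ok": loan_to_value(app) <= 0.80,
    }

app = Application(monthly_income=8000, monthly_debt=2400,
                  loan_amount=300000, property_value=400000)
print(numeric_checks(app))  # {'dti_ok': True, 'ltv_ok': True}
```

The design choice is simply that anything computable is computed, so the model's only job is extraction and judgment, where it actually adds value.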
We also allow cross-checking across similar cases. So if the system sees three previous mortgage applications with nearly identical parameters that were approved, it can ground its answer using that precedent. And finally, every output in Go has to be source-grounded.
There are a few different ways to handle this technically. The most common is chunk retrieval as part of a RAG system - pulling the chunk of text that contains the answer. But that approach has limits. For example, if you're chunking a financial statement, you might abstract away a lot of the actual numbers just by summarizing them. What Go does instead is scan the entire document page by page, pinpointing the exact passage all the way down to a bounding box. We OCR every page and find the precise location of a figure or clause. So if you're pointing to a specific financial stat or legal term, you're not just referencing it - you're clicking straight to its exact position in a long document. That's the tech layer of mitigation.
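The shape of that kind of source grounding can be sketched with a couple of small types. These are hypothetical, not V7's internal format: the idea is just that every answer carries a page number and a bounding box, so a reviewer can click through to the exact passage rather than trusting a summarized chunk.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    x: float       # left edge, in page coordinates
    y: float       # top edge
    width: float
    height: float

@dataclass
class GroundedAnswer:
    value: str           # the extracted figure or clause text
    page: int            # 1-based page number in the source document
    box: BoundingBox     # where the OCR engine found it on that page

def cite(answer: GroundedAnswer) -> str:
    # Render a human-readable citation for a review UI.
    return f'"{answer.value}" (p. {answer.page}, at x={answer.box.x:.0f}, y={answer.box.y:.0f})'

revenue = GroundedAnswer(
    value="Total revenue: $42.7M",
    page=17,
    box=BoundingBox(x=102, y=340, width=180, height=14),
)
print(cite(revenue))
```

A chunk-retrieval citation would stop at "somewhere in this passage"; carrying coordinates makes the claim verifiable at a glance.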
Of course, we've got all the enterprise compliance stuff - SOC 2 Type II, GDPR, deploy models wherever, etc. - but the real differentiator is the human side. A lot of our customers don't actually have in-house AI talent. They're just getting started. And the dirty secret of this space is that most AI companies need to do a lot of professional services to help customers get going. We have a team of solutions engineers who are all ex-ML engineers. Thanks to Go's frontend, they can build something in two hours that would normally take a month. All the primitives are there - just configure them with the right prompts. So setting up a first agent to tackle a high-value task is actually a pretty light lift.
Over time, the customer gains confidence, takes the reins, and starts building agents themselves. Eventually, they're using Go to delegate all the repetitive, boring internal tasks across their team.
Create and manage real estate listings 80% faster by processing property data, documents, images, and videos with V7 Go.
Focus more on closing deals and leave the administrative tasks to GenAI.
Check out V7 Go for Property Managers and Real Estate Brokers today:
- V7 (@V7Labs)
2:43 PM • Jan 23, 2025
How have you approached enterprise adoption in highly-regulated industries like finance and healthcare?
Usually the only real requirement is that the data is in the cloud. If itâs already there, we can process it under any compliance regime, using models deployed either on their end in their private VPC, or on ours.
The other historical blocker used to be around document complexity - Excel spreadsheets with multiple tabs, for example, were traditionally a no-go. But we've now solved that. At this point, we've essentially covered the full range of gnarly document types that used to be difficult to handle.
And finally, we look at whether there are at least three people regularly doing the task in question. If there are, the ROI is basically guaranteed. The cost of licensing the platform and setting up an agent is typically paid back with a 5x return.
How are you measuring the impact that V7 is having on your customersâ AI workflows?
It's usually about speed. The best application areas for us are where there's an internal headcount performing work that touches the top-line revenue of a business. Take insurance as an example - premiums are influenced not just by the cost of risk but also by the cost of processing the application, analyzing that risk, and presenting it back to the customer. When you add AI to that workflow, it can drastically reduce administrative costs - sometimes by up to 10x. That allows the insurer to underwrite 10x more policies and do it with greater diligence.
Accuracy can be a bit of a trap. AI, on average, is about as accurate as humans, but it still makes mistakes. So instead, we focus on speedups and always set the expectation that some level of human review is still necessary - especially for high-value or sensitive documents. The mindset shift is: the AI does all the heavy lifting before you even get into the office, and now your job is to review its work in one hour instead of spending 10 hours doing it from scratch.
The good news is that AI is now very good at spotting ambiguity, edge cases, or anything that looks out of distribution. So most companies already know how long a task typically takes - say, processing an offer memorandum - and we just measure how much time is saved when an agent does 80-95% of the work ahead of time.
Could you share a little bit about how V7 Go actually works under the hood?
We have an internal technology called Index Knowledge, which allows us to treat any unstructured input as if it were a small database. Within this database, we lay out every component of the file - metadata, graphs (which we extract as images), and the structure of the content itself. For instance, if there's a long series of paragraphs followed by a dense table of numbers, those sections need to be indexed and chunked differently. This architecture removes a lot of the technical risk for any company trying to automate a document-heavy workflow. And once structured this way, that mini-database becomes the ideal playground for an LLM to query and retrieve information with high reliability.
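A rough sketch of the idea, under stated assumptions: Index Knowledge is V7's internal technology, and this code only mimics the description above - classify each block of a document by type, store tables and prose differently, and keep everything queryable by type. All names here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    kind: str      # "paragraph", "table", or "figure"
    content: str
    page: int

@dataclass
class DocumentIndex:
    blocks: list = field(default_factory=list)

    def add(self, block: Block) -> None:
        # Tables and figures are stored whole; long paragraph runs could
        # be split further. (Splitting logic omitted for brevity.)
        self.blocks.append(block)

    def query(self, kind: str) -> list:
        # Retrieve every block of one type - e.g. all tables for a
        # numeric cross-check - independent of the surrounding prose.
        return [b for b in self.blocks if b.kind == kind]

index = DocumentIndex()
index.add(Block("paragraph", "Management discussion of Q3 results...", page=3))
index.add(Block("table", "Revenue by segment: ...", page=4))
print(len(index.query("table")))  # 1
```

The payoff of typed blocks over one flat chunk list is that a query like "every table in the filing" is exact rather than a retrieval guess.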
When it comes to LLMs, we're completely model-agnostic. We support all the major providers - Gemini, Claude, GPT - and handle their quirks and capabilities under the hood. We allow customers to choose the model that fits their needs, and we can even recommend the best-performing model for specific use cases. For instance, Gemini 2.5 might outperform Claude 3.7 on certain types of financial analysis. Another really important feature is our table system for V7 Go. Instead of interacting with AI through a one-to-one chat interface, everything is processed in the form of tables. Think of each row as an entity, like a lease or an asset, and V7 Go processes all files tied to that entity. This is critical for use cases like hedge funds performing public market research - not just analyzing one document, but thousands at scale. The AI can "carpet analyze" a huge corpus and zero in on the rows and excerpts that truly matter.
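The row-as-entity idea can be sketched in a few lines. Here each row is an entity (a lease, an asset) and each column is a question answered by some model; `ask_model`, the column-to-model routing table, and the model names are all stand-ins for illustration, not V7 Go's actual API.

```python
def ask_model(model: str, question: str, document: str) -> str:
    # Placeholder for a real provider call (Gemini, Claude, GPT, ...).
    return f"[{model}] answer to {question!r}"

# Assumed routing: each column can be served by whichever model is
# strongest for that kind of question.
COLUMN_MODELS = {
    "monthly_rent": "model-a",        # e.g. a model strong on tables
    "termination_clause": "model-b",  # e.g. a model strong on legal text
}

def process_rows(entities: list[dict]) -> list[dict]:
    # Fill every column of every row. Rows are independent, so in a real
    # system this loop would fan out in parallel across the corpus.
    results = []
    for entity in entities:
        row = {"name": entity["name"]}
        for column, model in COLUMN_MODELS.items():
            row[column] = ask_model(model, column, entity["document"])
        results.append(row)
    return results

rows = process_rows([{"name": "Lease #481", "document": "...lease text..."}])
print(rows[0]["monthly_rent"])
```

Because rows are independent, scaling from one document to thousands is just a wider fan-out, which is what makes the "carpet analyze" pattern tractable.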
How do you see V7 Go evolving over the next 6-12 months? Any specific developments that your users/customers should be excited about?
By mid-year, we'll have figured out everything agents can actually do. We might even look back on 2025 as the year we discovered whatever comes after agents. Right now, we're working on enabling agents to work with one another - passing tasks back and forth. One system we're building is what we call the Agent Concierge: a centralized AI that represents all the work an individual is doing and can delegate it to specialist agents. These concierges are tuned to each employee and their workload. Eventually, these agents will even communicate with each other and assist on the human collaboration side - like suggesting you talk to someone else who's worked on something similar.
Another big component of our work this year is something we call Knowledge Hubs. We're working with companies that have massive internal datasets - sometimes large enough to train their own LLMs - and we're building systems to index and restructure that data to be far more human-readable. We'll be releasing this in May. The idea is similar to a CRM like Salesforce, where people, companies, and deals are loosely tied together. But today, the data's indexed pretty crudely - usually based on a domain or email address. Instead, if you and I mentioned something insightful about Llama in this conversation, that insight should live in the file that represents that entity.
We're building a system that restructures this kind of knowledge in a way that's far more elegant and makes sense for agents to work within. So when an agent is processing a lease, a mortgage, or an insurance app, it knows to extract key info and enrich related files - just like a diligent human would in a CRM, but never does. This becomes a kind of "super brain" for internal AI agents.
How would you describe the culture at V7? Are you hiring, and what do you look for in prospective team members joining?
We're a team of 80, and I think what makes working at V7 particularly fun is that everyone is very technical - even our account executives and salespeople genuinely love the underlying technology. We've built a culture that prioritizes intellectual honesty, which means breaking things down to first principles, especially when trying to understand the real problems our customers face. We approach solutions in a fundamentally technical way.
Another part that makes it exciting is how often we're willing to throw away our work. Every six months, we challenge ourselves to build the product that would kill our own. It's a way to keep the team fresh and hungry - like hitting refresh on the startup mentality. With new developments like computer-use agents popping up, even our own automation workflows are under pressure. But instead of fearing that, we see it as an opportunity to reinvent. We're proud to be small and mighty - punching well above our weight in terms of both revenue and product impact. We aim to hire insanely smart people who want to do meaningful work, pay them well - especially by European standards - and lean heavily on AI internally so we can stay lean while moving fast.
We are hiring folks on our go-to-market team - so if you're someone who wants to sell great AI products that actually work, and agents that actually deliver real value, come talk to me. We're especially looking for Solutions Engineers, which is honestly one of the most fun jobs at V7. You're constantly working on new projects, but not so many in parallel that it becomes chaotic. Plus, you get to use a killer product, dogfood it daily, and directly influence improvements across the board!
V7 has been ranked as one of the Fastest-Growing Startups in Europe by Sifted!
We're proud to be featured as the only GenAI startup in the Top 50.
Huge congratulations to the entire team, and a big shout-out to all the other startups on the list!
- V7 (@V7Labs)
2:19 PM • Dec 5, 2024
Conclusion
Stay up to date on the latest with V7, learn more about them here.
Read our past few Deep Dives below:
If you would like us to "Deep Dive" a founder, team or product launch, please reply to this email ([email protected]) or DM us on Twitter or LinkedIn.