DeepL - The World’s Best AI for Translation and Communication 💬
Plus: Founder/CEO Jarek Kutylowski on building purpose-built AI, why voice translation today feels like text translation in 2017, and what’s next for Language AI...

CV Deep Dive
Today, we’re talking with Jarek Kutylowski, Founder and CEO of DeepL.
DeepL is one of the most advanced Language AI companies in tech, quietly powering the critical language infrastructure behind over 200,000 global enterprises, from Softbank and Mazda to Harvard Business Publishing and Panasonic Connect. Founded in 2017—well before generative AI went mainstream—DeepL has built a specialized, full-stack AI platform engineered for high-precision translation and communication. Its offerings include written and real-time speech translation, advanced AI-powered writing tools, and a robust API. Unlike general-purpose offerings, DeepL’s platform is powered by proprietary AI models specifically tuned for language, delivering higher accuracy, fluency and precision, with significantly reduced risk of hallucinations or misinformation.
Today, businesses rely on DeepL to translate everything from internal Slack messages and technical documentation to legal contracts and live voice conversations. One of its newest products, DeepL Voice, launched in late 2024, is already transforming multilingual collaboration for global teams. DeepL’s SaaS and API products are embedded across critical workflows—from customer support and research to real-time communication in industries like healthcare, logistics, and finance—where precision, latency and data privacy are non-negotiables.
We are very proud to announce that DeepL is on @TIME’s list of the 100 Most Influential Companies of 2025!
The list is chosen by TIME’s editors after polling its global network of contributors, correspondents and outside experts. We're recognized in the "Innovators" section,
— DeepL (@DeepLcom)
1:14 PM • Jun 26, 2025
In this conversation, Jarek shares how DeepL’s early decision to build proprietary models led to a defensible product edge, why voice translation today feels like text translation did in 2017, and how DeepL is building a team that blends deep research with a sharp product mindset.
Let’s dive in ⚡️
Read time: 8 mins
Our Chat with Jarek 💬
Jarek, welcome to Cerebral Valley! Let’s kick our chat off with a quick intro — how did you first enter into the world of AI-powered language translation and communication, and what’s the founding story behind DeepL?
Hey there! I’m Jarek, founder and CEO of DeepL. Back in 2017, my team and I recognized a pivotal moment when AI started to make its way into language workflows. We knew early on that there was huge potential to make an impact. This was before AI went mainstream, but translation was already showing early promise and reaching a level of quality that could meaningfully change how people worked with language.
So we jumped in and started building something specifically for businesses and enterprises. We've always seen language as critical to international business, whether it's internal communication between global teams or external communication with customers across continents. From day one, we've built all our AI and models with those business use cases in mind, and have embedded them into products that truly make a difference.
For someone hearing about DeepL for the first time, how would you describe what your team is building — especially within the broader universe of AI-native translation tools?
DeepL offers a solution for businesses that want to ensure that their communication flows naturally and efficiently, regardless of the language. Whether it’s a casual email between team members or more formal interactions, the assumption that everyone speaks English just isn’t the reality. You’ll have employees across different countries who may not be fluent, and for them, it’s often much faster and more effective to write in their native language and translate it. Our technology is now at a point where it outperforms most people – including myself, as a non-native English speaker – so it becomes a real productivity tool.
A staggering 69% of US executives report that language barriers are undermining their business' bottom line and hampering their growth. That’s one of the key findings in a new DeepL study that’s making waves across the US business media.
In the same study, 75% of execs say that
— DeepL (@DeepLcom)
5:52 PM • May 21, 2025
At the same time, for businesses looking to go global, which is most companies today, language is key to this. If you want to enter a new market, say from the US into Brazil, you can either find someone local who speaks the language, try to build that capability in-house, or use technology to kickstart the market. With DeepL, you can translate your marketing materials, contracts and other core assets and start building momentum right away.
There are so many high-value use cases for our technology across businesses – everything from casual communication to legal and technical documents. DeepL is built to handle all of it and help businesses eliminate language barriers entirely.
DeepL’s reputation for best-in-class written translation is well known. With the launch of DeepL Voice in late 2024, you’ve made the leap into live audio. What drove that expansion, and how are you approaching ensuring the same level of user experience in the voice space?
When we launched DeepL Voice in November 2024, it felt like a turning point, similar to how 2017 marked the breakthrough moment for text translation. The technology has, for the first time, really reached a level where it can be used effectively in real-world scenarios. Even for us, before launching DeepL Voice, we often had to rely on our own team for translation when speaking with customers – for example, when I was in Japan trying to communicate – or bring in interpreters to handle the conversation.
But the technology has advanced, and now, with DeepL Voice, we’ve been able to make those conversations flow much more naturally. We’re using tech to understand every sentence and every word being said in the room, and we’re doing it at a speed that’s pretty incredible. That speed is crucial when it comes to translating spoken language. You need to minimize latency as much as possible. The tech is now at a point where it’s not just impressive but actually useful and applicable. And we’re hearing directly from our customers that it’s making a big difference in how they work.
Today, we’re taking you to a new frontier in Language AI. Our new real-time speech translation technology is set to transform the way businesses communicate. Welcome to the family, DeepL Voice! deepl.com/blog/deepl-voi…
— DeepL (@DeepLcom)
11:35 AM • Nov 13, 2024
From a product and engineering perspective, what have been some of the biggest challenges transitioning from text to real-time voice translation? Which innovations on the model front have made this possible for your team?
The first big challenge is speech-to-text. That still remains one of the hardest problems in this space. You really need to clearly understand what’s being said before you can even begin to translate it. But beyond that, we've seen major improvements in translation models, especially ours, which can now handle the far more complicated, less structured language of speech compared to written text.
The progress in both areas has made this possible: better translation models and speech-to-text technology that we've developed in parallel, which is tightly tuned to work well with our translation systems. That pairing helps structure sentences properly so they can be translated accurately.
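The two-stage design Jarek describes — a speech-to-text stage that structures complete sentences before handing them to a tightly coupled translation model — can be sketched roughly as follows. This is purely an illustrative sketch, not DeepL's actual architecture: `transcribe` and `translate` are toy stand-ins for real models, and the sentence-boundary buffering is a simplification of what a production streaming system would do.

```python
from typing import Iterable, Iterator

def transcribe(chunk: str) -> str:
    """Toy stand-in for a streaming speech-to-text model."""
    return chunk.strip()

def translate(sentence: str, target_lang: str) -> str:
    """Toy stand-in for a translation model (here it just tags the text)."""
    return f"[{target_lang}] {sentence}"

def voice_pipeline(chunks: Iterable[str], target_lang: str) -> Iterator[str]:
    """Buffer partial transcripts into complete sentences, then translate.

    Emitting only well-formed sentences to the translation stage mirrors
    the pairing described above: the speech-to-text stage is tuned to hand
    the translation stage properly structured input, while still yielding
    results as early as possible to keep latency low.
    """
    buffer = ""
    for chunk in chunks:
        buffer += transcribe(chunk) + " "
        # Translate as soon as a sentence boundary appears.
        while "." in buffer:
            sentence, _, buffer = buffer.partition(".")
            yield translate(sentence.strip() + ".", target_lang)
    if buffer.strip():  # flush any trailing partial sentence
        yield translate(buffer.strip(), target_lang)

print(list(voice_pipeline(["hello everyone.", "let's begin."], "DE")))
```

A real system would use acoustic and prosodic cues rather than punctuation to segment speech, but the latency tradeoff is the same: the longer you buffer, the better the sentence structure, and the slower the conversation feels.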
Let’s talk about customer use-cases for Voice — which areas of your customer base are using it most today, and which workflows are you seeing the most uptake in?
I’d say there are two main use cases. The first is real-time video conferencing, like meetings on Microsoft Teams, Zoom, and so on, where international teams are involved. What we’ve heard from customers is that participation in these meetings was usually limited to those who spoke English fluently. The barrier was even higher in live conversations, which effectively excluded some people from taking part. In many cases, they’d rely on one team representative to join and relay information. Now, with DeepL Voice, they can bring in the actual subject matter experts, which creates so much more efficiency. It allows people to speak directly, cutting down the overhead of relaying messages through multiple people to figure out next steps.
The second use case we're seeing more and more is with frontline workers using DeepL on their phones to communicate with people who don’t speak their language. This shows up in healthcare settings, customer service, or warehouse environments – places where teams often don’t all speak the same language. Here, DeepL Voice enables more meaningful, and often more complex, conversations. In these contexts, there’s also a strong focus on security and data privacy, especially for customers in healthcare and finance, which makes the quality and trustworthiness of our solution even more important.
In January, you launched a major upgrade to your API, including a new LLM and write functionality. Could you break down what your new release unlocks for your customers?
A lot of people use DeepL as a SaaS application going to the website or using our apps. But what we’re seeing more and more with enterprises is that they want translation embedded directly into their internal workflows. That could be customer support, where inquiries get automatically translated, or research tools that scan the internet for updates on certain topics and translate content in the background.
So there's growing demand for our API to act as this ubiquitous translation layer, available everywhere. In response, we’ve been expanding its capabilities, bringing it to the same level of translation quality you’d get from our SaaS app and adding more functionality that matters to customers. One of the key additions is API access to DeepL Write, which allows teams to bring our AI-powered writing assistant into those same workflows, improving things like accuracy, clarity and tone.
Seamlessly translating websites is just one way that the DeepL API enables global growth.
It’s reviews like this that earned DeepL our 2025 G2 Best Software Award. We'd love to hear what you're creating with the DeepL API!
#BestSoftware2025 #AIAwards #G2 #API
— DeepL (@DeepLcom)
3:56 PM • May 2, 2025
You also launched Clarify earlier this year — an interactive assistant baked into the translator. What inspired that move toward more back-and-forth interactivity? 2025 is the year of the agent, and Clarify feels like a compelling step in that direction for the language space. What is your view on DeepL evolving into an agent-like translation layer in enterprise workflows?
Translation has historically been a very streamlined, linear workflow. You’d input a sentence, get a translated sentence back, and then either tweak it or accept it as-is. But we need to go further. Speed and efficiency really matter, and we want users to be able to work with translations in a way that is even more effective, dynamic and context-aware.
That’s something our customers really care about, so we’ve been building customization and interactivity features into our translation experience. Clarify is one of those features. It acts like a helper, highlighting parts of a translation that might be unclear or ambiguous to the reader and prompting further edits. For example, if you’re translating something about basketball and mention the NBA, Clarify might flag that as potentially confusing to a reader unfamiliar with the term and prompt you to explain the abbreviation.
Clarify identifies those edge cases where human input is essential to produce a clear, contextually accurate translation.
At a technical level, could you share some of the key differentiators in how you’ve built DeepL? Specifically, what would you say is the philosophy driving the model architecture?
We have a pretty unique approach to AI and how we build it. That goes back to our 2017 launch, when the AI market wasn’t really formed yet. From the start, we believed that to produce the best solutions for specific use cases, you need to control the entire tech stack. So we’ve always built our own models and architectures, designing them specifically for translation rather than relying on off-the-shelf transformer models, whether commercial or open-source.
That includes everything from collecting training data to deciding on training methodologies. For instance, we ask: how much should we train on bilingual translated data versus monolingual data written in the target language? That tradeoff impacts how accurate the translation is versus how fluent it sounds in the target language.
We put a lot of work – real academic-level research – into designing model architectures that are tightly aligned with the product. It's not just about general-purpose models or throwing an LLM at the problem. It’s about building something purpose-built for translation that actually performs where it matters.
A lot of folks think of Language AI as a layer of infrastructure. How do you see DeepL positioning itself in that stack? Additionally, what do you believe DeepL does better than other players in the space?
I’ll have to be a bit secretive about the internal workings of our technical architecture, but I’d say the most important thing is that it’s constantly evolving. This is a highly innovative space, and we’re regularly changing our models, rethinking how we use them, and making tradeoffs, especially between model size, latency, and quality across both text and voice.
What really differentiates us is the focused nature of our models. They’re not just general-purpose like many other LLMs. That focus lets us keep the models more concentrated on the task at hand. We also invest heavily in research specific to this domain, which I’d say is pretty unique in the market.
Which product metrics matter most to you as DeepL scales? How are you measuring quality, adoption, and customer success across your suite of products?
That really differs depending on the specific application. Translation is super broad, which is why we run different model sizes and use different methods for operating those models. If you're translating a batch of 1,000 documents, the latency requirements are completely different compared to running a real-time conversation where latency has to be as low as possible.
We’re constantly making those tradeoff decisions, and at the same time, quality is always top of mind. We’re pushing for the highest quality possible within any given latency requirement. Sometimes that even means upgrading the hardware we’re using or deploying specific infrastructure for customer tasks. Accuracy and translation quality are what matter most to our customers, and are what we've built our reputation on.
Looking ahead to the next 12-24 months, what product updates should your customers be most excited about? Anything you’d like to highlight?
Two points come to mind here. First, I remain incredibly excited about speech translation and DeepL Voice. I’d say 2024 – and even now – feels a lot like 2017 did for text translation, meaning that we can expect to see a lot of quality improvements. DeepL Voice already supports a wide range of business use cases and is highly applicable across different situations, but the quality is only going to keep improving over the coming months and years. I think we’ll be blown away by how this technology will help us communicate.
Second, we're investing heavily in giving our customers the ability to customize how they translate and work with text. The idea is that translation shouldn’t just be accurate – it should also reflect the unique way a company speaks to its customers. That’s incredibly important, and we’re focused on building models that can be easily customized; that can be fed detailed guidance on how a company’s tone and language should come across.
And we're also doing this in a way that’s extremely easy to use. Our customers don’t want to train models or deal with technical complexity. It’s just not scalable. So the challenge is building AI that delivers deep levels of customization without placing a technical burden on the user.
Final one — tell us a bit about the DeepL team. What kind of talent do you look for, and how would you describe the internal culture that powers your company? What makes the DeepL team special?
We’re all super curious about technology, but also very product-oriented. There’s this unique mix at DeepL of academic research – on the one hand, being able to dive deep into the math behind the models and figure out the best training method. But at the same time, we’re always asking: how is this actually going to impact users? How will our enterprise customers use this technology in the real world?
That balance is incredibly important in AI today. It’s such a fast-moving space, and it’s easy to get distracted by the pace of innovation or be fascinated by what’s technically possible, and lose sight of what’s truly valuable for the customer.
We look for people who bring both mindsets – technical depth and real-world product thinking. DeepL is also a very international team, which makes sense given the diversity of languages we work with. But that global perspective extends beyond just the products we're offering – we also carry that through culturally and really value having a broad mix of perspectives and ideas coming from different countries and backgrounds.
“Our product really stands for being global, and for the ability to work across different places and languages. We’re proud of being founded in Europe, and part of building a great AI company is expanding beyond that.”
That’s DeepL founder Jarek Kutylowski discussing what it
— DeepL (@DeepLcom)
8:33 AM • Jun 24, 2025
Conclusion
To stay up to date on the latest with DeepL, follow them here.
Read our past few Deep Dives below:
If you would like us to ‘Deep Dive’ a founder, team or product launch, please reply to this email ([email protected]) or DM us on Twitter or LinkedIn.