Rad AI is Transforming Healthcare with AI 🏥

Plus: VPE Ken Kao on Omni, Continuity and AI x Healthcare...

CV Deep Dive

Today, we’re talking with Ken Kao, VP of Engineering at Rad AI.

Rad AI is a startup powering the intersection of AI and radiology. The startup aims to empower healthcare professionals with its LLM-based tools, Omni and Continuity - a radiology reporting tool and a patient follow-up management tool, respectively - which are designed to streamline the day-to-day reporting and data-management tasks that weigh on large portions of the healthcare industry.

Today, Rad AI counts more than one third of US radiology groups and health systems as customers, and is generating eight figures in ARR from its two product ecosystems, used by thousands of physicians and hospitals around the US. The startup has raised $50m from investors including ARTIS Ventures, OCV Partners, Kickstart Fund, and Gradient Ventures (Google's AI-focused fund). In January, Rad AI announced a partnership with Google Cloud to collaborate on expanding healthcare applications of LLMs.

In this conversation, Ken walks us through the founding premise of Rad AI, its two fast-growing product ecosystems, and its rapid transition to an AI-first company in 2024.

Let’s dive in ⚡️

Read time: 8 mins

Our Chat with Ken 💬

Ken - welcome to Cerebral Valley. Firstly, tell us a little bit about your background and what led you to join Rad AI?

Hey there! My name is Ken and I’m the VP of Eng for Rad AI. I joined Rad AI 4 months ago, and previously, I was at Meta working on AR/VR - specifically, the core tech around GPUs, CPUs and memory optimization for their headsets. There, I also worked on developer infrastructure and emulation - in a sense, deep systems-level engineering. Before that, I was at Airbnb where I ran hosting tools, and I’ve also worked with healthcare and blockchain startups, as well as at Palantir doing big data and ML. 

I joined Rad AI for a couple of key reasons. Firstly, this is one of the few healthcare AI companies that is ‘right here, right now’ - we have over 3,000 physicians using our product daily, and we have models deployed at scale where AI is actually making physicians more efficient, saving them a median of 1 hour per 9-hour shift, while reducing fatigue and burnout. We also use our AI for follow-up management - because of our software, we now help hundreds of patients a year detect early-stage cancer before it grows. This is hugely important to me.

From a business perspective, our ARR is growing 3x year over year, and we have thousands of doctors as paying users. Strategy-wise, we also have a strong data moat with over 500 million radiology reports, mostly with data exclusivity - no other AI company can build a model fine-tuned and customized to this degree. On the culture side, I love our founders - strong visionaries who found the right intersection between radiology and AI - as well as our executive team.

Lastly, I realized that ours is a product that's AI-differentiated. At the end of the day, the biggest factor if we were to fail would be execution of product strategy, not market risk. To me, joining as an engineering exec, that's something I know I can contribute to. If a company's biggest problem is that it can't sell, I'm not going to be able to help with that. Our company's biggest risk is technical execution, and that's something I can really contribute to.

How would you describe Rad AI’s core product offerings to a new healthcare professional or AI engineer? Give us a top-level overview of Rad AI’s main products. 

We have two product ecosystems. The first ecosystem is what we call Continuity, which is our patient follow-up management software. For many patients who get radiology exams, there are actionable findings on those exams that could turn out to be new cancer, or enlarging aneurysms – but typically only 25% - 35% of those patients return for their recommended follow-up imaging.

There are a number of reasons for this – in the ER, the ER physician would not be responsible for ordering a follow-up exam in 6 or 12 months, as it’s not the original reason you went to the ER, and your primary care physician would typically never see the report. The patient often is never told that they need follow-up imaging, or may forget to come back. And sometimes the follow-up recommendation isn’t included in the radiology report, in which case follow-up rates are typically 0%.

With Continuity, we take in all of the radiology reports and pass them through our NLP models to automatically categorize and subcategorize them based on national consensus guidelines, and identify the correct timeframe for the follow-up. We then automate communications with both the provider and the patient, fully customizable by the health system or radiology practice – in Epic, for example, we can send MyChart app messages to patients, or InBasket messages to providers, and automatically update the status as the follow-up exam is ordered, scheduled and completed.
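To make the shape of that pipeline concrete, here is a minimal, hypothetical Python sketch of a follow-up workflow. The categories, timeframes, and message formats are illustrative assumptions, not Rad AI's actual models or Epic integration:

```python
# Hypothetical sketch of a Continuity-style follow-up pipeline.
# Categories, timeframes, and notification targets are illustrative only.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class FollowUpRecommendation:
    category: str          # e.g. "pulmonary nodule", per consensus guidelines
    due_date: date         # when the follow-up exam should happen
    notify_provider: bool
    notify_patient: bool

def extract_follow_up(report_text: str, exam_date: date) -> Optional[FollowUpRecommendation]:
    """Stand-in for the NLP step: categorize the finding and pick a timeframe."""
    text = report_text.lower()
    if "nodule" in text:
        # e.g. a 6-month follow-up window for an incidental nodule
        return FollowUpRecommendation("pulmonary nodule", exam_date + timedelta(days=180), True, True)
    if "aneurysm" in text:
        return FollowUpRecommendation("aneurysm surveillance", exam_date + timedelta(days=365), True, True)
    return None  # no actionable finding detected

def route_notifications(rec: FollowUpRecommendation) -> list:
    """Stand-in for the EHR integration step (e.g. MyChart / InBasket messages)."""
    messages = []
    if rec.notify_provider:
        messages.append(f"InBasket: order {rec.category} follow-up by {rec.due_date}")
    if rec.notify_patient:
        messages.append(f"MyChart: please schedule your {rec.category} follow-up imaging")
    return messages

rec = extract_follow_up("Incidental 7 mm pulmonary nodule noted.", date(2024, 3, 1))
if rec:
    for msg in route_notifications(rec):
        print(msg)
```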

We can typically increase successful follow-up rates from 30% up to 75-90% – more than a 2x increase. This markedly improves the quality of patient care – ensuring patients have their new cancers diagnosed and treated early, reducing liability risk for health systems, and generating new appropriate imaging revenue for the health system. In one of our client sites, 100 of the follow-ups each year turned out to be new cancer diagnoses! 

The second product offering is our reporting ecosystem. Radiologists actually spend most of their time dictating reports – typically, 75-80% of their time is spent doing manual dictation using speech recognition, with much less time needed to review the images themselves. Our LLMs allow radiologists to dictate only the key positive findings from the images, and we automatically generate part or all of the report in their preferred templates and language, thus saving them a great deal of time.

For example, if they only dictate the new changes between the current report and the prior report, we can generate the entire report in their preferred templates and language. Often, as our LLMs are fine-tuned on their historical reports, the entire report is generated exactly as they would have phrased it, and radiologists share that we’re “reading their mind”.
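As a rough illustration of how sparse dictation plus a prior report could drive templated generation, here is a hedged sketch; the prompt format and model call are assumptions, since Rad AI's actual reporting system is not public:

```python
# Hypothetical illustration of templated report generation from sparse dictation.
# The prompt layout and the fine-tuned model call are assumptions, not Rad AI's system.

def build_prompt(template: str, prior_report: str, dictated_findings: str) -> str:
    """Combine the radiologist's preferred template, the prior report, and the
    newly dictated positive findings into a single generation prompt."""
    return (
        "You are drafting a radiology report in the radiologist's own style.\n"
        f"Preferred template:\n{template}\n\n"
        f"Prior report:\n{prior_report}\n\n"
        f"Dictated changes since prior:\n{dictated_findings}\n\n"
        "Write the full report, filling every template section."
    )

prompt = build_prompt(
    template="EXAM:\nCOMPARISON:\nFINDINGS:\nIMPRESSION:",
    prior_report="Stable 5 mm right upper lobe nodule.",
    dictated_findings="Nodule now measures 8 mm.",
)
# report = fine_tuned_model.generate(prompt)  # model fine-tuned on this radiologist's prior reports
print(prompt)
```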

With this, what we’ve found is that we can actually reduce the number of words radiologists dictate by up to 90% and save them up to 50% of their time, which is very exciting. Within that ecosystem, there's a particular product we started out with, which generates the Impressions summary section. With just that one single field that we generate, we've already proven that we can save doctors 1 hour out of each 9-hour shift. That one single product is giving us close to 8-digit ARR. Now, we're moving to provide even more value-add for physicians, by saving them more time across the entire report.

Talk us through Rad AI’s journey from a health-focused startup into the world of LLMs and generative AI. How has that arc shaped itself since you’ve been at the company? 

Rad AI started in 2018, so we’ve been around for 6 years. For the first 4-5 years, we were building our own language models based on Transformers, before LLMs were a thing. It was heavy work - we had to build our own frameworks for evals and inference just to serve our model. Over time, of course, the proprietary and open-source communities accelerated, and we no longer had to build our own models - around the time I joined, we started pushing to instead focus on our main competitive advantage: our strong data moat.

So, we decided to take open-source models and fine-tune them with our own data, while still maintaining our highly developed preprocessing and post-processing pipelines. What we found for our Impressions product was that accuracy went up, serving costs went down, and inference times dropped by 50%. This was a big turning point for us, in moving away from our initial proprietary models and towards fine-tuning open-source models.

We currently don’t believe in one big model that’s going to rule them all, because model complexity grows in a non-linear way - for example, the cost of training a 100b-parameter model compared to a 50b-parameter model is not 2x, but probably 4x or 8x, as it scales roughly quadratically. So we actually have 5-6 models we use for different tasks - a combination of in-house models and fine-tuned open-source models - and we also leverage the proprietary models from Google and OpenAI for certain tasks. That’s the hybrid approach to models that we’re taking internally.
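A hybrid, multi-model setup like this is often implemented as a simple task-to-model router. The sketch below is a hypothetical illustration; the task names and model labels are assumptions, not Rad AI's actual lineup:

```python
# Hypothetical sketch of a task-based model router; task names and model
# labels are illustrative, not Rad AI's actual models.

MODEL_REGISTRY = {
    "impression_generation": "fine-tuned-open-source-llm",
    "follow_up_classification": "in-house-transformer",
    "free_text_qa": "proprietary-api-model",  # e.g. a hosted Google or OpenAI model
}

def route(task: str) -> str:
    """Pick the smallest model that handles the task well, rather than one big model."""
    try:
        return MODEL_REGISTRY[task]
    except KeyError:
        raise ValueError(f"No model registered for task: {task}")

print(route("impression_generation"))  # -> fine-tuned-open-source-llm
```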

How does Rad AI exist at the intersection of AI and healthcare, from a culture perspective? Do you have a specific way that you learn internally as an organization? 

Rad AI is different from many of the other AI companies I've talked to, in the sense that it wasn't founded by ex-researchers from OpenAI or DeepMind. It was predominantly founded by people in the radiology space - one of our co-founders is a radiologist himself, though with prior experience in ML. Of course, we brought on skilled engineers and AI scientists, but historically, our culture looked a lot less like today’s AI startups and much more like a radiology product-focused company.

That was key in building a strong brand within radiology, and deep trust from our partners – Rad AI is now the best-known AI company in radiology, in use by over one-third of all health systems and radiology groups in the US, and recently won “Best New Radiology Software of 2023” from AuntMinnie – the best-known annual awards in radiology.

That said, we did announce at the company’s January offsite that we’re evolving into an AI-first company, and we’re doing this for a key reason. We realize that a lot of our products and technologies can be applied to neighboring disciplines - for example, patient follow-up management in Pulmonary and Cardiology workflows, a wide range of cancer screening programs, or tracking follow-up and subspecialty referrals for labs, vitals and pathology, in order to ensure prompt diagnosis of diabetes, high blood pressure and many other chronic conditions. We're starting our first expansion efforts into other healthcare specialties, and so we’re shifting from being a radiology company that does AI into a healthcare AI company that built its foundation in radiology. This has been a big shift for us.

The pro of starting as a radiology product-focused company is that we understand the radiology space really well, respond and adapt quickly to physician feedback, and have very on-point product insight. The con, however, is that our tech ends up slightly behind the insight - we first get the insight, and then engineering is often playing catch-up. We're trying to flip that script, and the questions I’m asking our team to think about are: what are the foundational technology pieces we need for the future? How do we invest in platform and AI research so that we’re ready to apply them to new ideas and business opportunities? That’s one of the big shifts we're making this year.

Could you talk us through how you’ve navigated training models on patient data, before and since the ongoing LLM data licensing debates? How have you overcome the inevitable privacy and data concerns in this space? 

There are two elements to this that set us apart from others, in a way. First, as the only company in radiology training Transformer models back in 2018, we’ve had a very long head start in building proprietary datasets for LLM training and working with healthcare customers across the US. Back then, these models weren’t yet called LLMs or generative AI. Due to our specific product use cases, historical data from each customer is required, in order to provide the exact language customization and user time savings our products are best known for.

At this point, the clear value-add and ROI of our products are very well-known across healthcare, and the Rad AI brand is trusted by 9 of the 10 largest US radiology practices, which helps us expand more quickly to new customers. Rad AI is of course SOC 2 Type II HIPAA+ certified, with ongoing annual third-party audits. We are also known for having the most robust de-identification pipeline in the industry for radiology reports, given the massive size of our report dataset and our years of work in this space.

The second piece here is about using the data itself. We have a strong flywheel going - whenever we generate radiology report text and it’s not perfect, a radiologist may make minor modifications, sign off and send it out. We’ll take that modification data and feed it back to our system and retrain the model, so that the more physicians use our product, the better we can make their exact language customization, and the more time they can save.
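In practice, a feedback flywheel like this can be as simple as capturing model output alongside the signed report and turning the edits into new fine-tuning pairs. The following is a minimal, hypothetical sketch; the field names and filtering logic are assumptions, not Rad AI's actual retraining pipeline:

```python
# Hypothetical sketch of the feedback loop: captured edits become fine-tuning pairs.
# Field names, filtering, and storage are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ReportEdit:
    model_output: str      # what the model generated
    signed_report: str     # what the radiologist actually signed off on
    radiologist_id: str

training_pairs = []

def record_edit(edit: ReportEdit) -> None:
    """Keep only examples where the radiologist changed something; those edits
    carry the signal for per-radiologist language customization."""
    if edit.model_output != edit.signed_report:
        training_pairs.append({
            "input": edit.model_output,
            "target": edit.signed_report,
            "radiologist": edit.radiologist_id,
        })

record_edit(ReportEdit("No acute findings.", "No acute cardiopulmonary findings.", "dr_042"))
print(len(training_pairs))  # 1 -> queued for the next fine-tuning run
```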

We get this network effect, similar to Google Search - the more you search on Google, the more feedback it collects, and the better it can serve you accurate results. For us, the more you use our product, the more we can improve the exact customization of your model, and then the better the product becomes for you. This is a super valuable flywheel to have.

How have you approached the problem of building breakthrough technology within a legacy industry that’s heavily regulated? Have your products found exceptional traction with any specific niche in the market? 

We're seeing strong traction with both products. Our revenue is growing rapidly year over year, and we actually have a lot more customers on the reporting side, where we generate time savings for radiologists, primarily because we provide such strong immediate ROI for radiology groups and radiologists at health systems, and also given our in-depth experience with radiology workflow.

For patient follow-up management, we're working with multiple clinical champions and key stakeholders at each health system, which tends to be more complex, but we also have quite a few customers there. That said, for patient follow-up management, each contract size can be several times as large as the contract size of reporting, and it also drives very strong benefit for the patients we serve.

From a healthcare perspective, radiologists tend to be more tech-forward compared to other specialties in healthcare. You can see this from their current software usage - they use PACS image viewers and advanced visualization software to look at scans, and have a wide range of annotation and reformatting tools. Our reporting solutions also allow open integration of imaging AI results directly into the report, and fit seamlessly into existing workflow, which makes it much easier for radiologists to adopt.

With hospital systems, these can be harder because the buyer is not the person using the software. What we have to do is prove specific ROI that matters more for health systems – for example, our reporting solutions significantly improve overall report accuracy, thus improving the quality of patient care, and also reduce radiology exam turnaround times, which matter a great deal for health systems.

Rad AI Continuity, our patient follow-up management solution, also has very tangible ROI for health systems – driving new appropriate follow-up imaging revenue, ensuring patients’ new cancers are diagnosed and treated early, and reducing liability risk for health systems. The message and the value-add for each product and each audience is distinct.

What has it been like making the shift from a health-first company to an AI-first one? 

I have to give a lot of credit to our two co-founders here, because I think sometimes I get a little bit of undeserved glory since I pushed for us to be an AI-first company. Actually, I think they deserve much more of the credit because they are the founders of the company. This is their company, and they decide what to do with it. As far as AI, they were very open to the shift and to new ideas, and were fully behind this transformation for us. 

It's very hard to find founders who are very willing and adaptive to change, and once you convince them of a good idea, they are bought in and are fully behind it. So, a lot of credit really goes to our founders for being bought into the shift.

How big is your team today? And how would you describe your culture in 3 words? 

We've been growing really quickly. When I joined four months ago, we were fewer than 80 full-time employees, and right now, we’re at nearly 100 people, with more than half in Engineering.

As far as culture, the three words I’d use are transparent, curious, and collaborative. We are very transparent - we celebrate wins and share progress and challenges across each part of the team every week in Town Hall meetings. We're very collaborative - radiologists, engineers and product managers work closely together to develop and improve our products. And we're very curious - engineers are always learning more about radiology, and our sales team members are always learning more about AI. We are a group of very curious people.

Conclusion

To stay up to date on the latest with Rad AI, follow them on X (@radai) and learn more about them at Rad AI.


If you would like us to ‘Deep Dive’ a founder, team or product launch, please reply to this email ([email protected]) or DM us on Twitter or LinkedIn.