
Kindo is your enterprise's secure AI management software 🔒

Plus: Founder Ron Williams on AI regulation, SB 1047 and more...

CV Deep Dive

Today, we’re talking with Ron Williams, founder and CEO of Kindo.

Kindo is a secure AI management software for enterprises. It was born in 2022 out of Ron’s recognition of the growing need for enterprises to manage and control AI applications, much like previous waves of technology required dedicated management solutions. Kindo provides IT and security teams with centralized control over AI capabilities within an organization, ensuring that sensitive data is protected while enabling faster AI deployment.

Ron’s journey to founding Kindo began in the 90s as a programmer in the Air Force, where he developed advanced combat planning software. Over the years, he transitioned into scaling large systems and establishing dedicated security teams at companies like Earthlink. His experience in building and securing infrastructure was further honed at Riot Games, where he played a key role in scaling the infrastructure and building the security team for the world’s largest video game, League of Legends. Following Riot’s $8.5 billion acquisition by Tencent, Ron returned to the startup world, leading infrastructure and security at companies like Clover Health and Bird Scooters.

Kindo recently closed a $20.6M Series A led by Drive Capital, with participation from RRE Ventures, Riot Ventures, Marlinspike Capital, Eniac Ventures, Sunset Ventures, and New Era Ventures.

In this conversation, Ron discusses the vision behind Kindo, the current regulatory landscape for AI enterprise data, SB 1047, and the techniques Kindo uses to ensure secure AI deployment. 

Let’s dive in ⚡️

Read time: 8 mins

Our Chat with Ron 💬

Ron - welcome to Cerebral Valley! First off, can you give us some background on yourself and the journey to founding Kindo? 

My journey started as a programmer in the Air Force in the 90s, working on advanced combat planning software using technologies from Sun Microsystems, Oracle, and Cisco. After the Air Force, I moved into scaling large systems and eventually into management roles at companies like Earthlink. As the Internet grew, so did the need for dedicated security teams, which I helped establish and lead.

In 2011, Riot Games approached me to help scale their infrastructure and build their security team. At that time, Riot was transitioning from an on-premises setup to building data centers globally to support the growth of League of Legends, which became the world's largest video game until Fortnite overtook it. I joined Riot when they were a small team and stayed until Tencent acquired the company in an $8.5 billion deal.

After Riot, I transitioned back into the startup world, joining Series A companies like Clover Health, Bird Scooters, and eventually a startup focused on fixing Internet latency for real-time communications. That startup built one of the most interconnected networks in the world, but we had to shut it down in June 2022 due to changes in the VC market.

I think that the release of Stable Diffusion was a pivotal moment because it spurred AI labs to start exposing their AI to the world, such as OpenAI with ChatGPT. I realized that AI and the open-source ecosystem had reached a point where a couple of smart engineers could build powerful startups using these technologies. This reminded me of previous technological waves I’ve experienced, like the rise of mobile devices and SaaS products, where every new wave required enterprises to adopt solutions to manage and control these technologies. Just as enterprises needed mobile device management tools (like Jamf) when smartphones became ubiquitous, and SaaS management tools as cloud applications proliferated, I saw the need for a similar solution in AI. That’s when I decided to start Kindo. We began in October 2022, backed by Venice, California-based Riot Ventures, with the goal of building a platform to help enterprises manage the wave of AI applications that are now emerging.

Can you describe Kindo to someone who has never heard of your platform before? 

Kindo is an IT and security tool designed to give teams centralized control over the AI capabilities that a company wants to bring in. The key value propositions include controlling what happens to AI and where it happens, which is crucial for security and risk management. This level of control allows companies to move faster in deploying AI while ensuring that sensitive data isn't inadvertently sent to third-party AIs.

Kindo centralizes the use of AI onto a unified platform, including those from OpenAI, Google, Anthropic, Cohere, IBM, and even hundreds of thousands of open source models on Hugging Face. It’s not limited to large language models—any AI can be integrated into Kindo easily. We also provide integration tools that allow IT and security teams to connect their data sources, like SaaS apps, file systems, Google Drive, OneDrive, and Box, to Kindo. This gives them the ability to control who can connect which data source to which AI, audit usage centrally, and implement data loss prevention filters to protect sensitive information.
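To make the data loss prevention idea concrete, here is a minimal sketch of what a DLP filter sitting between users and a third-party model might look like. Kindo's actual filters and rules are not public; the patterns, function names, and redaction format below are illustrative assumptions, not its real implementation.

```python
import re

# Hypothetical DLP filter: scrub sensitive patterns from a prompt
# before it is allowed to reach an external AI model. The rule set
# here is a small illustrative sample, not a production policy.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace matches with a tag and report which rules fired."""
    fired = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            fired.append(name)                 # record for the audit log
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, fired

clean, fired = redact("Contact jane@example.com, SSN 123-45-6789")
# fired -> ["ssn", "email"]; both values are masked in `clean`
```

The list of fired rules is what would feed the centralized audit trail described above, so security teams can see not just that a prompt was filtered, but which category of data nearly left the perimeter.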

What sets Kindo apart is that we touch every employee using AI by shipping a chatbot that works across all chat models, providing a consistent interface for the enterprise. We also offer a no-code agent builder, enabling teams like marketing or HR to create human-in-the-loop agents to automate repetitive tasks, share them across the company, and even build AI assistants or GPT-style chatbots. This toolset allows IT and security teams to deploy AI consistently across the enterprise, manage policies more effectively, and maintain visibility and control over AI usage.

Can you give us an overview of the current regulatory space for AI enterprise data, where it stands now, and where it might be heading?

The key thing to understand is that there’s already a significant amount of regulation impacting AI, mainly because of existing software and data privacy laws. For example, federal regulations like Sarbanes-Oxley require public companies to adhere to strict security and data auditing standards, which naturally extend to AI. While AI-specific legislation is still developing, there's a mix of necessary and problematic proposals.

For instance, AI introduces new risks, such as the need for clear distinctions between human-created content and AI-generated content, as well as ensuring proper data control. These issues will likely require targeted laws. However, there's also concerning legislation like California’s SB 1047, which has the tech community alarmed. The bill attempts to legislate aspects of AI that aren't fully understood or feasible today, which could ultimately be harmful.

On the positive side, I support ideas like AI lab whistleblower protections, which is the only good part of SB 1047. These could be vital as AI systems become more powerful and AI labs handle more user data, raising significant privacy and competitive risks. However, the majority of the current legislative efforts aren’t ideal for the industry, and many share this concern.

You mentioned that SB 1047 could be detrimental, can you elaborate?

SB 1047, in my opinion and in the opinion of many others, is largely a political maneuver by its sponsors to appear proactive on AI because it's a hot topic right now. There's an anti-tech sentiment in California and among certain politicians nationwide, and I think they see this as an opportunity to show they’re being tough on AI. However, that approach is shortsighted, especially considering that AI is set to become central to every business and even personal life, as we'll see with developments like Apple’s AI technologies.

The bill is flawed because it addresses hypothetical future dangers, such as the 'Terminator' scenario, without acknowledging the significant benefits AI brings. It aims to cap the amount of AI power a company can deploy, giving unelected officials the authority to halt AI projects based on vague and poorly defined criteria. This could create a chilling effect on innovation, making businesses and investors hesitant to advance AI due to the fear of legal repercussions, including criminal charges for tech executives.

The bill essentially allows regulators to control anything they want under the guise of preventing some undefined, speculative AI threat, ignoring the immense potential of AI to solve critical problems like cancer, aging, and climate change. It’s a poorly conceived law that would stifle AI innovation. People in the tech industry, or any industry really, need to get informed about this bill, engage with their legislators, and advocate for its defeat. We need regulations that address real, current risks, not ones based on fear and misinformation.

Could you take us under the hood of Kindo and explain some of the techniques you’re using to ensure a company's data is secure when inferencing AI models? 

Absolutely. The core of our approach is observability—knowing what the AI is doing with your data and who is interacting with it. It’s crucial to determine if the right person or AI has access to specific data and whether they’re allowed to send it to a particular AI model. This problem is becoming more complex as AI models proliferate within enterprises.

Once you have visibility into what’s happening, you need control. You must be able to say, for example, that a specific AI can’t access certain data or that a particular user isn’t authorized to send data to a particular AI. This fine-grained control over data interactions with AI is the foundation of everything we’re building.
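The fine-grained control Ron describes can be pictured as a policy table keyed on who is asking and how the data is classified. This is a sketch under assumed names only; the roles, classifications, and model identifiers are hypothetical and do not reflect Kindo's actual schema.

```python
# Hypothetical policy table: (user role, data classification) maps to
# the set of models that pair is permitted to reach. Anything not
# explicitly granted is denied by default.
POLICY = {
    ("marketing", "public"):   {"gpt-4o", "llama-3-70b"},
    ("marketing", "internal"): {"llama-3-70b"},        # self-hosted only
    ("security",  "internal"): {"whiterabbitneo"},
    # no entry for "restricted" data: it never leaves the perimeter
}

def is_allowed(role: str, classification: str, model: str) -> bool:
    """Return True only if policy explicitly grants this route."""
    return model in POLICY.get((role, classification), set())

print(is_allowed("marketing", "internal", "gpt-4o"))  # denied route
```

Default-deny is the key design choice here: a new model or a new data source grants nothing until an administrator writes a rule for it, which is what makes the visibility layer actionable.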

On top of that, we provide productivity tools like chatbots and agent builders. As AI increasingly integrates into businesses, traditional SaaS apps might start to fade because AI can handle data manipulation directly, removing the need for manual data entry. For example, an AI can listen to a sales call and automatically update the database, eliminating the need for a human to input that data.

As AI becomes central to your operations, it's essential to provide users with an interface that you can control, whether through voice or text, and to monitor how AIs interact with each other and with humans. Understanding where your data goes, what part of it was AI-generated, and maintaining data provenance are critical, especially in scenarios like intellectual property disputes.

At Kindo, we operate at the top layer of the AI stack, close to the end user. There are still important security concerns lower down in the stack, like ensuring data integrity before it enters an AI system or securing the infrastructure that runs the AI. We focus on the implementation and deployment of AI, managing the risks associated with integrating these AI 'entities' into your organization, much like HR manages risks associated with human employees.

There are numerous startups racing to provide secure AI for enterprise customers, what sets Kindo apart?

One of the key things that sets Kindo apart is our approach to security from the ground up. Our entire company starts with the question, ‘How do we secure this?’ and then works towards productivity, rather than the other way around. Many startups build a productivity tool first—like what happened with ChatGPT—get it into millions of people’s hands, and then realize they need to bolt on security later. However, it’s much harder, if not impossible, to secure something effectively after the fact. We’ve taken a different approach by integrating security into the foundation of everything we do.

Another critical factor is our team. Our VP of Product Andy Manoske, was the founding product manager for HashiCorp and the creator of Vault, one of the leading infrastructure security products for storing secrets, API keys, and passwords for cloud infrastructure. His experience in working with security teams is invaluable in helping us ensure that our users can securely use AI across various departments, including security, marketing, and HR.

Another significant differentiator is our involvement in an open-source security AI project called WhiteRabbitNeo, the leading cybersecurity red team AI model. WhiteRabbitNeo pairs a proprietary dataset of security tasks and security-related coding tasks with the best open-source coding models, which are fine-tuned on that data. This tool allows organizations to pentest their infrastructure and code, identify security issues, rewrite code, and explain log entries or config file changes in a way that's accessible to both security engineers and developers focused on security. It’s currently integrated into Kindo, but as an open-source product, we hope other companies will adopt and expand on it.

We’re also strong believers in the importance of transparency, especially when it comes to security. That’s why we’ve invested heavily in getting the WhiteRabbitNeo community off the ground, because we think transparency is key to securing AI. This is another reason why we’re concerned about legislation like SB 1047, which could concentrate AI in the hands of just a few big companies and stifle powerful open-source initiatives like WhiteRabbitNeo.

What kinds of customers do you serve? How do you balance on-prem vs. external AI? 

One of the key value propositions of Kindo is that we also serve the on-premise market, which still represents about half of the server market. Most servers sold by companies like Dell or HP aren't going to cloud providers—they're going to enterprises that deploy these servers in their own data centers or IT closets. Kindo is maybe the only VC-backed product that allows you to take the entire Kindo stack and run it with whichever large language models you prefer, whether open source or very useful licensable models like Cohere or IBM. This can all be done completely in-house, disconnected from the internet. In fact, Kindo and powerful models can run on something as small as a MacBook, making it viable even for the most secure, high-security environments—you could theoretically run Kindo on a nuclear submarine deep in the ocean.

For highly secure customers, we recommend running the models yourself, in your own cloud or on-premise environment, rather than using a third-party AI model for sensitive data. Many of the AI startups today, including major players like OpenAI, are still developing their security posture. For example, OpenAI's head of security doesn't list any extensive security credentials on their LinkedIn profile, and this is common among AI companies that originated as research labs. It takes time for startups that aren't inherently security-focused to build a robust security framework, so we believe companies should maintain control over their models and data.

For scenarios where a third-party AI is necessary, such as using ChatGPT for marketing purposes, Kindo allows you to treat that third-party AI as an untrusted model. You can determine which users are allowed to interact with it and specify what kind of data can be sent to it. All sensitive data remains within the models that you control. Given the recent advances in open source models, particularly with Meta's support, there's no longer a strong reason to trust third-party AI when you can easily run these models yourself. Kindo makes this process turnkey, providing a seamless experience for companies that want to retain full control over their AI deployment.
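The "treat the third-party AI as untrusted" posture can be sketched as a simple router: sensitive requests are pinned to a model you host yourself, and only cleared traffic may go to an external API. The model names and the naive keyword-based sensitivity check below are illustrative assumptions, not Kindo's routing logic.

```python
# Hypothetical router: anything flagged as sensitive never leaves the
# perimeter; only non-sensitive prompts may reach the untrusted model.
SELF_HOSTED = "llama-3-70b"   # runs in your own cloud or on-prem
THIRD_PARTY = "gpt-4o"        # external API, treated as untrusted

# A real system would use classifiers or DLP rules; keywords suffice
# to illustrate the control flow.
SENSITIVE_MARKERS = ("confidential", "ssn", "patient", "secret")

def route(prompt: str, allow_third_party: bool) -> str:
    """Pick a model so that sensitive data stays on trusted infrastructure."""
    sensitive = any(m in prompt.lower() for m in SENSITIVE_MARKERS)
    if sensitive or not allow_third_party:
        return SELF_HOSTED
    return THIRD_PARTY

print(route("Draft a blog post about our launch", True))   # external ok
print(route("Summarize this confidential report", True))   # stays in-house
```

The per-user `allow_third_party` flag stands in for the policy decision Ron describes: which users may talk to an untrusted model at all, independent of what the content check says.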

How do you see Kindo progressing over the next 6-12 months? 

We recently completed our Series A round in June, so we're now scaling up the team, particularly with engineers, and focusing on increasing awareness, marketing, and sales efforts. Right now, we're at DEF CON, educating the market about WhiteRabbitNeo, which is a significant part of our strategy.

Our big goal in the next six to twelve months is to expand our current enterprise customer base, which primarily consists of mid-sized companies—typically ranging from 400 to 4,000 employees, with some going up to 10,000. These companies have substantial technology needs but might not be as deeply rooted in tech as a typical Silicon Valley firm. They may have smart data teams and skilled engineers, but tech isn't their core business. So, our turnkey solutions, which simplify secure implementation and use, are particularly valuable to them. We're aiming to add more of these customers and continue scaling the company. 

Conclusion

To stay up to date on the latest with Kindo, learn more about them here.  


If you would like us to ‘Deep Dive’ a founder, team or product launch, please reply to this email ([email protected]) or DM us on Twitter or LinkedIn.