
Stytch is AI's full-stack solution for fraud prevention 🔐

Plus: Founder-CEO Reed on AI agents, fraud prevention, and Stytch's culture...

CV Deep Dive

Today, we’re talking with Reed McGinley-Stempel, Co-Founder and CEO of Stytch.

Stytch is a developer platform that provides authentication, authorization and fraud prevention services via API and SDKs. Stytch was founded in 2020 by two former Plaid employees, Reed and his co-founder, Julianna Lamb.

Reed notes that as AI startups scale, they deal with ‘big company’ questions sooner than ever before: preventing fraud and abuse of AI endpoints, supporting enterprise security requirements like RBAC and SSO, and ensuring availability of application infrastructure. An early, well-thought-out approach to authentication and fraud prevention can alleviate a lot of pain and engineering effort down the road.

Today, Stytch has thousands of developers and organizations using its API-first authentication and fraud prevention platform, including notable AI companies such as Groq, Replit, Hex and Tome. At the end of 2021, the startup raised a $90 million Series B round from investors like Coatue, Benchmark, Thrive Capital and Index Ventures. You may have also seen their ads around SF over the last few weeks.

In this conversation, Reed walks us through the founding premise of Stytch, why proactive fraud prevention is so important for AI startups, and his goals for the company in the next 12 months.

Let’s dive in ⚡️

Read time: 8 mins

Our Chat with Reed 💬

Reed - welcome to Cerebral Valley! First off, give us a bit of background on yourself and your time at Plaid.

Hey - I'm Reed, co-founder and CEO at Stytch. My co-founder Julianna and I met back in 2017, while at a fintech company called Plaid. We both worked on the authentication team, which laid the foundation for what we ended up building at Stytch. We were responsible for all the consumer authentication - for example, how do you connect a bank account to Venmo, Coinbase, or Robinhood? We also had a B2B authentication surface area, which was for cloud developers logging into their dashboard with SSO or MFA.

On the consumer side, Plaid cared a lot about conversion and being able to allow ‘good’ users to log in seamlessly. After all, we only made money when a user successfully connected their bank account. The second piece was preventing fraud and account takeovers - passwords aren’t particularly secure, and people were always forgetting them. So we were the team that was consistently experimenting with concepts like biometrics, email, magic link, ‘sign up with Google’ and other higher-converting alternatives that were just as or more secure than passwords. 

Since Plaid was an API that connected 10,000 banks, we also had to deal with a lot on the fraud side. Whenever there was a large data breach elsewhere, attackers would immediately try to use those credentials with Plaid to see what percentage of those users reused the same password at Chase, Bank of America, etc. So, we realized from our time at Plaid how important it is to integrate auth and fraud prevention closely from the start.

What was the original insight from your time at Plaid that led you to think about co-founding Stytch? 

Back in 2019, prior to starting Stytch, we ran into a bunch of issues as we were considering migrating off our in-house authentication solution and evaluating other options on the market. First, we couldn’t find anyone that offered the kind of integrated auth and fraud protection solution we were looking for. As a result, we were going to have to pay an auth vendor and then still either build fraud tools in-house or layer an additional bot-detection vendor on top. We wanted Stytch to be an integrated platform that would provide auth and security monitoring, all in one.

We also wanted to create an authentication solution that was more like Stripe than PayPal. From a technical architecture standpoint, companies like Auth0/Okta are very focused on selling widgets as a service. In that case, the core product is a hosted page with a redirect, and it doesn’t expose the API-first primitives you'd want in order to own the UX design and use-case customization in more detail. At Stytch, everything was built API-first, with pre-built UIs and SDKs on top of that.

These issues we faced are what led to the differentiated pillars Stytch is built upon today.

Give us a top-level overview of Stytch for those who may not be familiar with your products.  

Stytch provides full authentication, authorization and fraud prevention services via API and SDKs. For developers in particular, we focus on not just giving you a full-stack solution for auth and fraud mitigation in your app, but also on allowing you to build it in a highly customizable way - so that it feels like your own code, UI, and UX - and with as little user friction as possible.

Stytch gives you a lot of that power via our API and SDKs, but we allow you to control your own destiny in terms of how you build it.

Who are your users today? Who’s finding the most value in what you’re building with Stytch? 

Stytch is definitely designed with software engineers in mind, but our users are any function that ends up integrating it into their app. In terms of the companies that those software engineers belong to, it's definitely varied - we serve a lot of startups when they’re first building auth at the pre-seed and seed-stage (for example, YC). I’d also say we have a lot of mid-market enterprise customers, as those are typically the ones that are looking for more sophisticated B2B authentication settings than they had in their MVP - things like SSO, SCIM, custom session duration by customer. Usually, fraud prevention becomes increasingly important as you scale, because as you grow and get more users, your company also becomes a more lucrative target for fraud.

What’s been interesting with AI tools is that we see them needing both fraud prevention and more sophisticated enterprise auth features much earlier than before. A lot of companies using Stytch’s fraud prevention tools are actually AI startups - for example, Groq, Replit, Hex and Tome. Those are all startups in terms of employee size and how long they’ve been around, but because they have such outsized demand on their AI products, they also have a lot of abuse vectors. People are trying to reverse-engineer their apps in order to abuse the compute and effectively get free AI credits. Traditionally, Stytch’s value prop for earlier-stage companies was around setting up and scaling their auth, but for these AI customers, fraud prevention has become a bigger need earlier in their lifecycle.

Auth is actually the same way in some respects. AI startups are seeing individual users adopt and become advocates, and then that product-led growth unlocks enterprise deals earlier than ever - but also enterprise requirements. So suddenly you may need to support authentication protocols like SAML or offer enforced MFA way sooner than you would otherwise expect, because you’re signing bigger deals earlier in the startup lifecycle.

Tell us a little bit about your flagship fraud prevention service - how has it evolved since the advent of generative AI, and how are your biggest AI customers utilizing it today? 

We built our fraud prevention service prior to the real AI explosion, and it happened to be very well adapted for AI-first use-cases. At Plaid, in addition to defending against reverse-engineering from people trying to abuse Plaid’s APIs, we also had a lot of experience with reverse-engineering mobile APIs ourselves. That's how Plaid connected to banks for the longest time before banks offered official OAuth. So, we knew the defensive side really well, but also had studied the offensive angles - if you're trying to reverse engineer a sign up or login flow, what are the different tools available to avoid bot detection?

AI is a different paradigm in terms of what you're exposing to your end users - you're exposing client-side endpoints that give them AI compute capabilities. You might have heard of a popular GitHub repo called gpt4free, which is effectively a list of sites that have been reverse-engineered so that their AI endpoints can be abused. Beyond those open-source tools, we see a lot of custom attacks. Using a headless browser like Puppeteer to churn through accounts and access free trial credits is another common vector.

What AI startups found was not only a ton of consumer demand for AI, but also a ton of bad actors looking to steal or piggyback on their AI compute - and that’s where our fraud suite comes in. Stytch has two primary fraud prevention products, and both of them are invisible to the end user. That way you aren’t adding additional friction, but get very deterministic results about whether you’re dealing with a real user, or a Python crawler, or a headless browsing script. Is this someone I want to let through, or is this traffic coming from a Tor exit node? 

The first product is device fingerprinting. For each piece of traffic that comes to a site’s protected points - if somebody's signing up, logging in or hitting a ‘create AI content’ button - we generate network, browser, and hardware fingerprints. These uniquely distinguish that user, and flag suspicious properties that might warrant a block or a challenge. 
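To make the fingerprinting idea concrete, here's a minimal sketch of how signals like these could feed an allow/challenge/block decision. The signal names, categories, and API shape below are our own illustration, not Stytch's actual implementation:

```python
# Illustrative sketch of a fingerprint-based verdict (not Stytch's real API).
from dataclasses import dataclass


@dataclass
class Fingerprint:
    network: str   # e.g. derived from IP, ASN, and TLS characteristics
    browser: str   # e.g. derived from user agent, fonts, canvas rendering
    hardware: str  # e.g. derived from GPU, screen, CPU concurrency


def verdict(fp: Fingerprint, signals: set[str]) -> str:
    """Map suspicious properties on a request to ALLOW / CHALLENGE / BLOCK."""
    # Deterministic automation markers: block outright.
    hard_blocks = {"headless_browser", "tor_exit_node", "known_bot_ua"}
    # Plausibly human but worth an extra check: show an invisible challenge.
    soft_flags = {"datacenter_ip", "fingerprint_reuse", "clock_skew"}
    if signals & hard_blocks:
        return "BLOCK"
    if signals & soft_flags:
        return "CHALLENGE"
    return "ALLOW"


fp = Fingerprint(network="net_1", browser="brw_1", hardware="hw_1")
print(verdict(fp, {"datacenter_ip"}))  # CHALLENGE
```

The key property is determinism: rather than a fuzzy risk score alone, known automation signals map to hard outcomes at the protected endpoint.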

The second product is an invisible CAPTCHA solution which is built around the same concept as device fingerprinting. The CAPTCHA is encrypted with device-based signals, which are invisible to the end user but mean the CAPTCHA can’t be solved by a bot or passed off to another device. 

There’s actually a thriving industry of ‘CAPTCHA farms’ - companies like anti-captcha.com or 2captcha. 

CAPTCHA farms basically have human workers at the ready, usually in countries with low cost of labor, who solve CAPTCHAs on demand for a small fee. That’s why the device-based signals are so important, because they ensure that CAPTCHAs can only be solved where they’re shown, and by real humans.
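The device-binding idea can be sketched with a keyed MAC: the CAPTCHA token is tied to the fingerprint of the device it was shown on, so a solution farmed out to another machine fails verification. This is an assumed design for illustration, not Stytch's actual scheme:

```python
# Sketch: binding a CAPTCHA solution to a device fingerprint so it can't be
# relayed through a CAPTCHA farm. Assumed design, not Stytch's real protocol.
import hashlib
import hmac

SERVER_KEY = b"server-side secret"  # never leaves the server


def issue_token(challenge_id: str, device_fingerprint: str) -> str:
    """Server mints a token bound to this challenge AND this device."""
    msg = f"{challenge_id}:{device_fingerprint}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()


def verify_solution(challenge_id: str, device_fingerprint: str, token: str) -> bool:
    """A solution only verifies if presented from the original device."""
    expected = issue_token(challenge_id, device_fingerprint)
    return hmac.compare_digest(expected, token)


tok = issue_token("ch_123", "fp_device_A")
assert verify_solution("ch_123", "fp_device_A", tok)      # same device: passes
assert not verify_solution("ch_123", "fp_device_B", tok)  # farmed elsewhere: fails
```

Because the fingerprint is an input to the token, a farm worker solving the puzzle on their own machine produces a solution that is useless on the attacker's.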

Those are a couple of the features that are most popular with AI companies like Groq, because if I'm Groq and I'm exposing this open chat interface, one of the first things I'm going to deal with as it gets any notoriety is people creating fake accounts.

The concept of AI agents has taken off in the last 6-12 months - how are you thinking about this shift internally? Will having thousands or potentially millions of bots navigating the internet change how you build fraud prevention products? 

The concept of AI agents was what got us looking at how we may have to change our product set for customers. It's not just users vs. bots anymore, but users vs. good bots vs. bad bots. Certain agents will be used for malicious purposes, but others will be extremely useful for everyday mundane tasks that humans don’t want to do. There will probably need to be net-new protocols and paradigms that we, and our customers, support in the future. Luckily, a lot of what we’re doing already aligns well with this world. For example, with device fingerprinting, you as a developer can set rules that allow-list a trusted agent, directly via your account settings.
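That users/good-bots/bad-bots split can be sketched as a simple rule layered on top of bot detection. The rule shape and names here are hypothetical, just to show the idea:

```python
# Sketch of a developer-configured allow-list for "good" agents, layered on
# top of automation detection. Hypothetical rule shape, not Stytch's API.
ALLOWED_AGENT_FINGERPRINTS = {"fp_my_scheduling_agent"}  # set in account settings


def classify_traffic(fingerprint: str, is_automated: bool) -> str:
    """Split traffic into human / good bot / bad bot."""
    if not is_automated:
        return "human"
    if fingerprint in ALLOWED_AGENT_FINGERPRINTS:
        return "good_bot"  # allow-listed agent acting on a user's behalf
    return "bad_bot"       # unrecognized automation: block or challenge


assert classify_traffic("fp_real_person", False) == "human"
assert classify_traffic("fp_my_scheduling_agent", True) == "good_bot"
assert classify_traffic("fp_scraper_xyz", True) == "bad_bot"
```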

A lot of what we hear from customers is that there’s interest in allowing AI agents to do more on their sites, but with guardrails. As you can imagine, there's a huge burden on any application developer to make their app easy to use for an end user, in terms of onboarding flows and activation. It would be really nice if you had the ability to outsource this to an AI agent that knows how to navigate on behalf of the user. The concern, though, is whether these AI agents are on the right guardrails in specific scenarios - what if they try to do something like move money from an account, for example? Or, what if the end user actually wants a human in the middle for any particularly sensitive actions? That's where we've thought a lot about how the role-based access control model has to change to accommodate this.

I'd say a lot of people we talk to are excited about this problem, and are themselves trying to figure out what they expect from an AI agent system and the protections around it. It’s an interesting top-of-mind topic for us that we're still trying to figure out - what’s the exact way to productize some of the things that we're prototyping there? I do think there's a pretty interesting opportunity both for developers and for Stytch within that ecosystem.

How do you measure success metrics-wise? And are there any customer stories that you’d like to share? 

We see uptime as a bare minimum - almost a foundational necessity - simply because we're in the critical path. We hold ourselves to five-nines-plus uptime and sign contracts based on that. That matters, but it's also expected by our customers. So, typically the metric they are more curious about is ‘How many engineering sprints in weeks or months can you save me, both from building authentication and fraud prevention, and then from maintaining it?’ I'd say that’s what we’re focused on in auth in particular.

On the B2B SaaS side, I mentioned that we make it really easy to adapt your B2B flow to all of the different enterprise auth requests that you get. One of our customers, an AI presentation tool called Tome, migrated tens of millions of users from Auth0 to Stytch and was able to reallocate 3 engineers who had been permanently dedicated to adapting to new enterprise auth requests. I'd say the big improvement there was the engineering hours saved. Other folks are very conversion-centric and look at whether they can get a 20-30% conversion increase from natively embedding auth versus going through a hosted redirect page.

On the fraud side, the big metric is typically the number of abuse events that you can reduce. Typically, customers like Replit or Groq think about it in a couple of different buckets - first, how much automated traffic can I block? That's a very easy answer with Stytch, where we give you a promise that all of the automated traffic will be kept off your site. The second component they care a lot about is how many abuse events by actual humans they can prevent.

That one is definitely a more interesting balance, because you want to make sure you're not generating false positives, since there’s a real human on the other side. The big thing we're actually working on this quarter is clustering. For example, maybe there’s a click farm that all shares the same network. We can send you a webhook that lets you know these are 50 real humans that signed up, but they're all engaged in a similar type of fraud.
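The clustering idea described above can be sketched as grouping signups by a shared network fingerprint and flagging any group above a threshold - the kind of result you'd surface via a webhook. The function and threshold below are our own illustration:

```python
# Sketch of network-based clustering: many "real human" signups sharing one
# network fingerprint is a click-farm signal. Illustrative, not Stytch's API.
from collections import defaultdict


def find_clusters(signups: list[tuple[str, str]], threshold: int = 50) -> dict:
    """signups: (user_id, network_fingerprint) pairs.

    Returns {network_fingerprint: [user_ids]} for clusters at/above threshold.
    """
    by_network: dict[str, list[str]] = defaultdict(list)
    for user_id, network_fp in signups:
        by_network[network_fp].append(user_id)
    return {net: users for net, users in by_network.items()
            if len(users) >= threshold}


signups = [(f"user_{i}", "net_clickfarm") for i in range(50)]
signups += [("user_x", "net_home_1"), ("user_y", "net_home_2")]
flagged = find_clusters(signups)
assert list(flagged) == ["net_clickfarm"]  # 50 real humans, one shared network
```

Each account individually looks legitimate; it's only the shared network across all 50 that reveals the coordinated abuse.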

So, at the highest level, I'd say everyone typically cares about engineering resources. On the fraud side, everyone cares about how many abuse vectors and abuse incidents can I reduce, and how much money can that save me. And then sometimes people care about conversion. It truly depends on the team and whether there's a conversion-centric angle for them.

What’s the toughest technical challenge around building Stytch in the age of AI? 

The game theory of fraud is fascinating - if there's an expensive and valuable-enough resource for somebody to abuse, they're going to work to figure out a lot of automated and non-automated ways to get around your gates in order to abuse that resource. 

One of the more interesting things about that from a technical perspective is how deep our fraud team has had to go on zero-day feature detection. If somebody is trying to get around headless-browsing detection by pulling the latest Chromium release and making their own custom build, there are a lot of honeypots we've built to detect that, where attackers don’t realize what signals they’re revealing with their behavior. We want real users with new browser configurations to be completely unimpeded, but we also want to make sure we're catching the bad actor who is just churning enough of their browser characteristics to get around a block or a deterministic risk score. The other thing a lot of AI companies go through is scaling to pretty massive traffic increases very quickly.
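The spirit of those honeypot traps is consistency checking: a spoofed or custom-built browser often reports combinations of properties that a genuine build never would. The specific signals below are hypothetical examples we've chosen for illustration, not Stytch's real detectors:

```python
# Illustrative consistency checks on reported browser properties.
# Signal names are hypothetical examples, not Stytch's actual detectors.
def inconsistency_signals(props: dict) -> list[str]:
    signals = []
    # Claiming Chrome on macOS while reporting a software GPU string
    # (llvmpipe is a software rasterizer typical of headless Linux hosts).
    if props.get("claims_platform") == "macOS" and "llvmpipe" in props.get("gpu", ""):
        signals.append("gpu_platform_mismatch")
    # navigator.webdriver is set to true under automation frameworks.
    if props.get("webdriver"):
        signals.append("webdriver_flag")
    # Real desktop Chrome ships with built-in plugins; stripped builds may not.
    if props.get("claims_chrome") and props.get("plugin_count", 0) == 0:
        signals.append("missing_plugins")
    return signals


bot = {"claims_platform": "macOS", "gpu": "llvmpipe", "webdriver": True,
       "claims_chrome": True, "plugin_count": 0}
human = {"claims_platform": "macOS", "gpu": "Apple M1", "webdriver": False,
         "claims_chrome": True, "plugin_count": 5}
assert len(inconsistency_signals(bot)) == 3
assert inconsistency_signals(human) == []
```

A user with a genuinely new browser configuration trips none of these, while an attacker churning individual characteristics tends to break the internal consistency between them.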

For example, when a customer with 25M MAUs migrates to you from another provider while you're still a startup, that's a huge influx of traffic on your application. This is really promising, but you also have to make sure your infrastructure is completely hardened against that type of increase. The same thing applies on the fraud prevention side - we've had customers go from around 500K monthly device fingerprints to 100M device fingerprints per month, because their AI product just took off. We have a current customer that's done that over the last three weeks - that type of scale is very exciting for them. But we’re always focused on scale and reliability, because we know we're in the critical path for all of our customers and want them to know they’re covered.

Tell us a little bit about the culture at Stytch - are you hiring, and what do you look for in prospective hires? 

The two things that I’d say characterize the folks at Stytch are ambition and kindness - this is what I look for when I’m interviewing someone and analyzing whether they’d be a good fit. When I think about what ambition means at Stytch, it's actually two-fold - first, how big are we thinking, and how aggressive are we trying to get in terms of what we're building product-wise? The second component of ambition is urgency, which is about compressing timeframes and cycles, and that matters a lot in terms of startup success and velocity. 

The second element is kindness. You may be giving people tough feedback sometimes, but it has to be given with care, and in the context of ‘how can I help this colleague work better, and make sure that we're all succeeding?’ The combination of ambition and kindness typically gets you folks who are the right types of people to build great things.

Any last things you want to share? 

If you got this far, we’re actually offering Cerebral Valley readers a limited-time 2-month free trial of our device fingerprinting solution, which companies like Replit and Hex use for fraud prevention. We’ve never actually offered free trials before, but we’ve seen firsthand how PLG AI companies often struggle to protect their AI resources, so we’re excited to have more people try our product and see what it can do.

You can read a bit more about Stytch’s fraud prevention tools in our docs, but we don’t make it self-serve (as we do with our auth products) to provide an additional layer of obfuscation for attackers. If you want to try them out, fill out this form by May 17 to sign up for the free trial.

Conclusion

To stay up to date on the latest with Stytch, follow them on X and learn more about them at Stytch.


If you would like us to ‘Deep Dive’ a founder, team or product launch, please reply to this email ([email protected]) or DM us on Twitter or LinkedIn.