Cerebral Valley
InterviewGuard is Making Interview Fraud Impossible 🛡️
Plus: Founder & CTO Michael Kuczynski and Brandon Bowsky on building the only fraud detection tool that doesn't guess, why Fortune 500s can't afford to ignore interview cheating, and how a $150K scam sparked the idea...

CV Deep Dive
Today, we're talking with Michael Kuczynski, Founder and CTO of InterviewGuard, alongside Brandon Bowsky, who launched the company through Bowsky Ventures and is deeply involved in its go-to-market and strategic direction.
InterviewGuard is a real-time interview fraud detection platform, and the only product on the market that gives you a definitive answer on whether a candidate is cheating, not just a guess. While existing tools rely on heuristic signals like eye movement and speech patterns, InterviewGuard works at the system level: it detects AI tools, remote desktop takeovers, deepfakes, VPN usage, and more, even when they're hidden, minimized, or running in stealth mode. Think of it as an anti-cheat for job interviews, built the same way anti-cheats are built for competitive video games.
The problem they're solving is accelerating fast. Gartner projects that 1 in 4 candidate profiles will be entirely fake by 2028. Google and McKinsey have gone back to mandatory in-person interviews to counter AI-assisted cheating. Tools like Cluely and InterviewCoder now use invisible screen overlays that are undetectable by standard screen sharing, and a $20/month subscription to one of these tools can land someone a $150K engineering job. InterviewGuard is built for the Fortune 500s, defense contractors, and enterprise teams that can't afford to let bad hires slip through at scale.
Michael built InterviewGuard's detection architecture from the ground up, drawing directly from the video game anti-cheat world. His background in system-level security and reverse engineering, specifically dismantling the same techniques game cheats use to evade detection, became the foundation for InterviewGuard's approach. During development, he stress-tested the product with feedback from senior engineers and managers at AWS, Oracle, Fifth Third, and other enterprise teams to find the sweet spot between comprehensive detection and candidate privacy.
Brandon is a serial entrepreneur who previously built the first generative conversational AI product in the market and hired the first full-time prompt engineer, all pre-ChatGPT, back when the world was still on GPT-2. Over his career he has generated hundreds of millions of dollars in lead generation, which translated into billions in insurance premiums, and built VAgents, an AI voice agent platform for telephony, sales, and customer service. When Michael brought the InterviewGuard concept to him, shortly after Brandon lost $150,000 to a fraudulent DevOps hire, he greenlit it immediately.
In this conversation, Michael and Brandon walk us through why they built InterviewGuard, how the technology actually works under the hood, and why the return to in-person interviews is just a Band-Aid on a much bigger problem.
Let's dive in ⚡️
Read time: 8 mins
Our Chat with Michael & Brandon 💬
Michael, Brandon, welcome to Cerebral Valley! What made you look at the hiring process and say "this is broken"?
Brandon: A couple of things. We have a ton of great developers that are always thinking of good ideas, and I like to partner with my team when they come up with something strong. This was very timely.
Something that happened to me in the past was I hired somebody to do DevOps. If you know anything about DevOps, there's an absurd amount of fraud in freelance DevOps, where people will take on 10 gigs at once, do an hour or two of work, and say "look, I did it." It's even worse now with AI. I hired a guy who pretty much took me to the cleaners for about $150,000, and I ended up suing him on principle.
So when Michael brought this idea to me, I was like: this is great. Nobody's doing this. The only other products we could find in the market were making heuristic guesses. What do the eye movements look like? How's their speech pattern? You could be penalized for thinking. Some people just look to the left or right when they think. Or some people pause when they speak. One of the most brilliant people I know takes a very long time to process information. She's a brilliant engineer, but she'll sit there for 20 seconds and you're like, "did she hear me?" Then she responds and you're like: okay, yeah, she's still brilliant.
That would be flagged as cheating with these other products. The Googles, the Metas: everyone's using tools that just guess, as opposed to tools that really tell you what's going on. We are the only product on the market that doesn't guess.
You previously built the first generative conversational AI product and hired the first full-time prompt engineer, all pre-ChatGPT. How has that technical DNA carried over into building InterviewGuard?
Brandon: As you build a team and everyone grows and constantly builds new things, it gets faster. We have probably $20 million of shelved IP that we've never even released to the world just because we like to build stuff.
Michael and I, along with some other team members, built the first generative conversational AI product on the market. We hired the first ever full-time prompt engineer. This was mid-to-late 2022: pre-ChatGPT, pre-AI boom. The world was at GPT-2 at the time. I was taking tens of millions of phone calls and trying to turn those into models where we could respond to any insurance-related question for customer service and qualification purposes.
I've always built things out of necessity. I've built a ton of companies in different spaces, everything from 7- and 8-figure service-based businesses to hundreds of millions in lead gen that generated billions in insurance premiums. Every time I build something, I know it's valuable to me. So people like me will find it valuable.
This space is a nightmare of a cat-and-mouse game. How are you staying ahead of the constantly evolving cheating landscape?
Michael: It's actually really interesting. The way we designed this is we didn't want to go the route of targeting one specific tool, like "here's something that detects people using ChatGPT." We don't actually care what tool people use. We want to look at how these tools act, what they do, what they do differently, and just mark that.
That's how we're able to be all-encompassing. I see every single day people signing up with free trial accounts, and when I check back on them, they're clearly testing their own cheating tools against us. There are tools that have existed for a day: someone came in, tried one out, and it immediately got caught.
Brandon: These are companies we don't even know exist. We find out they exist because they try to use the tool.
For our technical audience: how does InterviewGuard's system-level monitoring work differently from tab-switching detection or browser lockdowns?
Michael: Are you familiar with how anti-cheats work in the video game space? InterviewGuard, for all intents and purposes, is an anti-cheat, from the ground up. It's designed and works exactly the way anti-cheats do.
Most of the tools that are meant to get around browser lockdowns and tab-switching detection go the exact same route that video game cheats go. Which works great: unless you have something like this, you're free and clear to do whatever you want and no one will ever know. Probably 90% of the tools in this space follow the exact same formula that video game cheats follow.
Brandon: Think of the interview as a video game. You have two players and one of them's trying to cheat. The one that cheats without being detected is going to win. The one that cheats and gets kicked out for being a cheater is going to lose.

One of the more striking features is detecting when keyboard input is coming from a remote desktop, meaning someone else is literally typing for the candidate. How did you arrive at that signal?
Michael: I actually know a number of people who are professional interviewers for other people. That's half of what they do: they bring in an extra $5,000 a month just sitting in and clearing technical interviews for people. All the person has to do is sit there, pretend to be typing, go through the motions, while someone else connected through a remote desktop does the actual work. You can make $500 for a single interview, and people pay cash.
Some of the people on the team have experience reverse-engineering video game anti-cheats, and that's where we initially got the idea; there was detection for alternate inputs a few years back. The more we looked into it, the more we realized we could categorize any input that comes in. We know where it came from, whether it's your AI tool injecting characters, a remote desktop, your physical keyboard, or even a secondary keyboard. Even if you had someone under the desk with another physical keyboard attached, we can detect which one it's coming from.
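As a rough illustration of the idea Michael describes (not InterviewGuard's actual implementation), operating systems can expose per-keystroke metadata: Windows low-level keyboard hooks set an LLKHF_INJECTED flag on synthesized input, and the Raw Input API reports a per-device handle. A toy classifier over that kind of metadata might look like this; the event fields and the trusted-device set are assumptions made for the sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KeyEvent:
    """Keystroke metadata as reported by a hypothetical OS hook."""
    device_handle: Optional[int]  # per-device handle; None for synthesized input
    injected: bool                # LLKHF_INJECTED-style flag on software input
    session_is_remote: bool       # event arrived via a remote-desktop session

# Device handle(s) enumerated at interview start (assumed value)
TRUSTED_DEVICES = {0xA1}

def classify(event: KeyEvent) -> str:
    """Categorize where a keystroke came from."""
    if event.injected or event.device_handle is None:
        return "software-injected"   # e.g. a tool calling SendInput
    if event.session_is_remote:
        return "remote-desktop"      # someone else typing remotely
    if event.device_handle not in TRUSTED_DEVICES:
        return "secondary-keyboard"  # a second physical keyboard
    return "primary-keyboard"

print(classify(KeyEvent(device_handle=None, injected=True, session_is_remote=False)))
```

A real detector would enumerate devices at session start and correlate events with remote-session state, but the categorization logic follows this general shape.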
Brandon: The number one driver of new features is user feedback. As we speak to more big companies with more unique problems, we've figured out that everyone has an experience that becomes a catalyst: "I know this guy cheated and it cost me money." It could be $200K, $500K, or more. Everybody's experienced it: they hired somebody they shouldn't have, and that person had to have cheated.
You've taken a privacy-first approach with no audio or video recording. That's unusual for a monitoring tool. What's the impact been on adoption?
Michael: From day one, privacy was the number one thing. I would never want to build something that I wouldn't personally feel comfortable using. Throughout the entire development process, I was getting feedback from senior engineers and managers at AWS, at Oracle, at Fifth Third, and a number of other places. The question was always: what are the immediate blockers? How much are we able to detect without going overboard? Where's that sweet spot?
At the end of the day, if it's not something I would personally feel good using, I would never feel good putting it in front of other people.
Brandon: The interviewer side is entirely web-based; you don't have to download anything. On the interviewee side, they do download something, but we have very clear privacy terms. We don't store any data, we don't capture any audio, we don't capture any video. All we do is log events of things that occur during the interview. Once the interview is over, we're done. We don't access their files, their folders, anything of that nature. None of their confidential information discussed during the interview would ever be obtained by us. That's super important to every big tech company we've talked to.
Gartner says 1 in 4 candidates will be fake by 2028. Google and McKinsey have gone back to in-person interviews. Who's coming to InterviewGuard first?
Brandon: The Fortune 500s and enterprise: those are the first two. We thought more startups would want protection, but what we learned is that small teams don't really need this. A recruiting agency told us: "This product is amazing, but we're working with three-person, five-person teams. They intimately know that person. There's no risk there because that person can't get away with cheating."
When you look at the Fortune 500, even Fortune 1000 companies, they're so big that they have no way to police it. The bigger problem isn't just their size β it's that they're recruiting for so many different roles at any given time that the number of wasted man-hours screening candidates who are cheating is enormous.
One of the interesting dynamics with our product is that when people see they have to use it, they know what's coming. They can look at InterviewGuard and think, "I'm not going to be able to cheat my way through this." Sometimes they just don't show up, which actually saves real human hours.
It's kind of like an insurance policy for your interviews. Whether you use it or not, it benefits you. If candidates don't show up and you don't pay us, it still benefits you. If they do show up and you pay us, it benefits you.
"Detects all 'Undetectable' AI Tools, Audio/Video Augmentation or Deepfake Tools, GeoLocation/VPN detection, Sandbox Environments, Remote Desktops, Proxy Candidate controls, and tampering. Every attack vector a candidate can make on their machine is covered."
The economics massively favor cheating right now: $20/month for a tool that could land you a $150K job. How do you position InterviewGuard as the necessary fix?
Brandon: When you look at $5 an interview, and you're going to put 15 to 20 candidates through to hire one, and they each go through five interviews, that's a maximum of 100 interviews. You're spending $500 to hedge against a quarter-million to half-million dollar average risk. It's a no-brainer for anybody.
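Brandon's back-of-the-envelope math works out; here it is spelled out (the per-interview price and candidate counts are his figures from this conversation, not an official pricing model):

```python
price_per_interview = 5   # dollars per screened interview (Brandon's figure)
candidates_per_hire = 20  # upper bound of candidates screened per hire
rounds_per_candidate = 5  # interview rounds each candidate goes through

total_interviews = candidates_per_hire * rounds_per_candidate  # 100 interviews
total_cost = total_interviews * price_per_interview            # $500

# vs. Brandon's quoted average bad-hire risk of $250K-$500K
print(f"${total_cost} spent to hedge a $250,000-$500,000 risk")
```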
There are a few different types of fraud we see. There's wage arbitrage β people living in Arkansas or Mississippi where things are cheap, using a VPN to say they live in California, and collecting a $300K California wage when it would have been $130K where they actually live. That happens a lot.
Then you have bad actors from adversarial countries, or with adversarial agendas. Trade secrets, intellectual property, destruction of property: these are real things happening every day. It's cyber terrorism in the workplace.
Then there's the sheer cost of a bad hire. You train them, onboard them, pay a recruiter, do all the right things. Six months later you find out they're not qualified. Now you fire them, pay unemployment, deal with the exposure, and, most important, eat the opportunity cost and the hit to team morale. The team is going to be pissed that this person doesn't deliver. Everyone's afraid to fire people because of lawsuits.
Some companies are responding by simply going back to in-person interviews. Is that a threat to your business?
Brandon: Let's take a step back and discuss why they're going back to in-person interviews. They're going back because of cheating.
When you have in-person interviews, your talent pool is limited. You lose the ability to save money on wages. You lose the ability to attract top talent. Some of the best talent in our country is geo-locked. Maybe they're getting out of school, maybe they came here on a visa, maybe they have a family, maybe they're in an area where they're happy and they don't want to be in SF. There are real reasons why people don't want to be there.
Your talent pool is limited only to the people who know you exist. It's like being a home services business in tech: you have to stay within a radius for your business to survive. In tech, that's not the future. The in-person interview process at this point is a Band-Aid. Post-COVID, everything's remote. The only reason people have gone back in-person is because of this problem. Solving it gives people access to a bigger, better talent pool, and more freedom and flexibility within their business.
Michael: There's no reason to give up all the velocity, all the convenience, and the limitless talent pool that comes with remote hiring. There's no reason to give that up.
AI agents talking to each other during early-stage screening is already happening. How are you thinking about that shift?
Brandon: The original vision when I built my first conversational AI product was that everybody would eventually have a virtual agent that acts on their behalf. "Your people calling my people" turns into your AI talking to my AI. That truly is the future.
Our world is so fast-paced now. 25 years ago we had landline phones. Here's an interesting stat: people born after 1998 answer the phone less often; people born before 1998 answer more often; and people born before 1990 answer most often. Why? Because caller ID wasn't widespread until about 1998–2000. If you grew up before that, you had to answer the phone and say "hey, who's this?"
Now we exist in a time where AI is advancing so rapidly that it's going to make the last 20 years of growth look like 20 minutes. But the playing field gets reset when the real human has to come to the table, so we don't see AI agent interviewers as a threat.
What feels different about building InterviewGuard versus previous projects? And what's on the roadmap for the next 12 months?
Brandon: Building this is interesting because there are two types of blue oceans: proven blue oceans, where there's just no saturation yet, and unproven ones. We exist in the almost-proven stage. People are experiencing the problem, nothing on the market solves it, but most aren't fully problem-aware yet. They're just starting to become aware and starting to look for a tool like this.
We're finding billion-dollar and multi-billion-dollar companies on a regular basis reaching out and saying "this is the first thing I found." We're like, cool: you won't find anything else, because we're the only ones doing it.
This is the first project we haven't had to sell. We literally show it to people, talk about it, and nobody says no, because it just doesn't make sense to say no. The cost is so low and the value is so high. We've never had a product where people are instantly like, "I need this and nothing else exists."
Michael: It's really nice to be able to show people this cool thing we built and have it be immediately obvious the second you see it. There's no long back-and-forth. There's a problem. This solves it and it solves the problem well.
Brandon: We're the first automobile in a world where people are still riding horses.
As for the roadmap: a lot of the feedback we've been getting is about pre-interview validation. Our product only works once candidates get to the interview, but pre-interview there are so many screening calls, so many candidates who move through and have to be evaluated. We're building tools as a value-add for our customers to make them more efficient during the screening process, so that by the time candidates reach InterviewGuard, we've already weeded out a bunch of people who could be bad.
We'll be able to give you a clearer picture of who the candidate is before you even talk to them. You get on the call, start asking them about the relevant experience they claimed, experience we've already proven to be false, and suddenly you're in a significantly different position during the interview.
Other than that, just watching the industry evolve and seeing what's next. We build out of necessity, and client feedback is the number one thing that creates new features. As we have more clients and they have more requests, we'll build more things.
To stay up to date on the latest with InterviewGuard, follow them here.
Read our past few Deep Dives below:
If you would like us to "Deep Dive" a founder, team or product launch, please DM our chatbot here.