Most companies think a webcam is enough. It’s not. Candidates have figured out exactly where the gaps are. That’s why they use cheating practices like angled phones, printed notes just below the frame, and second monitors sitting out of view. A webcam alone sees a face. It doesn’t see what’s happening around that face.
This post breaks down exactly how Xobin’s advanced AI-based proctoring features work, layer by layer, so you can understand what’s actually being monitored, what gets flagged, and why it matters for your hiring process.
If you’re curious about the specific techniques candidates use to bypass webcam monitoring, that context makes the solutions here land harder.
TL;DR – Key Takeaways!
- EyeGazer tracks gaze direction in real time and flags off-screen focus beyond a set threshold.
- Screen sharing and browser activity monitoring capture every tab switch, shortcut, and unauthorized app access.
- Webcam recording with AI analysis detects multiple faces, absenteeism, and unauthorized devices.
- Audio analysis monitors ambient sound for signs of external assistance or dictation.
- A lockdown browser + copy-paste prevention creates a sealed test environment at the OS level.
- Non-Googleable and randomized questions remove the value of any content a candidate finds externally.
- A trust score generated per candidate gives recruiters a single integrity signal, not just raw flags.
A 2024 systematic review published in the Journal of Academic Ethics found that online exam cheating was self-reported by 54.7% of students during remote assessment periods, making assessment integrity tools a direct business need, not a nice-to-have.
Why a Single Webcam Isn’t a Proctoring Strategy
The assumption that recording a face equals preventing cheating costs companies real money. A 2023 study by the Association of Test Publishers found that 52% of candidates admitted to attempting some form of assistance during unproctored remote assessments. That’s not a fringe problem; that’s the majority.
Webcam-only setups have three obvious blind spots. They don’t see the screen. They don’t see what’s off-camera. And they don’t analyze anything in real time. A candidate can stare straight at the camera while reading from a phone propped against their keyboard. The webcam records it all and flags nothing.
The methods candidates use to cheat in assessments have evolved well beyond a second phone propped against a keyboard. And if you want the full picture of why cheating in assessment is rising, the trend data is worth reading before you evaluate any proctoring tool. Eye position. Browser state. Audio environment. Screen content. Device presence. When these signals run together, a single workaround stops working.
How Xobin’s EyeGazer Actually Works
Gaze tracking is one of the hardest cheating signals to fake. You can’t control where your eyes move as naturally as you can control where your face points.
Xobin’s EyeGazer runs directly from the candidate’s webcam using AI-based facial landmark detection. It maps the candidate’s eye position relative to the screen and establishes a baseline within the first few seconds of the test. From that point, every significant deviation gets logged.
If a candidate looks down repeatedly (toward a phone), glances left or right (toward a second screen or notes), or holds their gaze off-screen for longer than a defined threshold, the system flags it. Not just once, but as a pattern. A single glance away isn’t a flag. Ten in twenty minutes is.
This matters because it removes subjectivity from reviewers. Instead of watching hours of video hoping to spot something off, recruiters receive a structured gaze-deviation report with timestamps. You know exactly when and how often a candidate’s attention left the screen.
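The pattern logic described above, where a single glance is ignored but repeated off-screen glances inside a time window raise a flag, can be sketched as a sliding-window counter. This is an illustrative sketch under assumed values (ten deviations in twenty minutes), not Xobin’s actual implementation.

```python
from collections import deque

def make_gaze_flagger(max_deviations=10, window_seconds=1200):
    """Flag when off-screen glances exceed a threshold inside a sliding window.

    Assumed values: 10 deviations within 20 minutes (1200 s) trigger a flag.
    """
    events = deque()  # timestamps of recorded off-screen glances

    def record_deviation(timestamp):
        events.append(timestamp)
        # Drop glances that have fallen out of the window
        while events and timestamp - events[0] > window_seconds:
            events.popleft()
        return len(events) >= max_deviations  # True means raise a flag

    return record_deviation

flag = make_gaze_flagger()
# Nine glances spread over 16 minutes: no flag yet
results = [flag(t) for t in range(0, 1080, 120)]
```

A tenth glance inside the same window would then trip the flag, which is exactly the difference between logging behavior and judging it.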
What Screen Activity Monitoring Captures (It’s More Than You Think)
Here’s a question worth asking: what’s happening on the candidate’s screen while you’re watching their face?
Xobin’s screen monitoring works through a browser extension that captures and streams the candidate’s screen in real time. It logs every tab switch, every keyboard shortcut, every attempt to open DevTools, every right-click, and every copy-paste action. All of it. Timestamped.
The copy-paste prevention isn’t just a UI restriction. It operates at the JavaScript level, blocking the clipboard API and disabling standard keyboard shortcuts during the assessment window. Even if a candidate finds a workaround, the attempt itself gets logged and flagged.
Browser activity monitoring goes one level deeper. It detects whether the candidate opened a new window, minimized the browser, or attempted to access restricted content. Each violation generates an automated warning to the candidate and a logged entry in the recruiter’s proctoring report. The final report shows exactly how many violations occurred, when they happened, and what category they fell into.
Recruiters using Xobin’s browser monitoring report a 40% reduction in post-hire skill mismatch compared to assessments run without behavioral proctoring (Xobin Internal Customer Data, 2024).
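The violation report described above, with per-category counts and timestamps, can be sketched as a simple aggregation over logged browser events. The category names here are hypothetical; Xobin’s actual taxonomy and report format may differ.

```python
from collections import Counter

def summarize_violations(events):
    """Group timestamped browser events into per-category counts.

    `events` is a list of (timestamp, category) tuples,
    e.g. ("00:04:12", "tab_switch"). Category names are illustrative.
    """
    counts = Counter(category for _, category in events)
    first_seen = {}
    for timestamp, category in events:
        first_seen.setdefault(category, timestamp)
    return {
        cat: {"count": counts[cat], "first_at": first_seen[cat]}
        for cat in counts
    }

report = summarize_violations([
    ("00:04:12", "tab_switch"),
    ("00:09:30", "copy_paste_attempt"),
    ("00:17:45", "tab_switch"),
])
```

The point of the structure: a reviewer reads counts and first occurrences, not a raw event stream.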
Multiple Detection Layers: Face, Audio, and Device Signals
Xobin’s webcam recording does more than capture video. The AI layer running on top of that footage is doing real work.
Multiple-user detection uses object recognition to identify when more than one face appears in the camera frame. A friend leaning in to help, a coach standing behind the candidate, or a second person off to the side all trigger this flag. It also addresses a different fraud type: proxy test takers, where another person entirely sits for the assessment on the candidate’s behalf.
Absentee detection identifies when the candidate leaves their seat or becomes inactive. It’s not a simple motion detector. It uses facial presence detection to confirm whether the test-taker is physically present and engaged.
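In its simplest form, absentee detection reduces to a run-length check over per-frame face-presence results from the detector. The sketch below assumes one sampled frame per second and a 30-second absence threshold; both values are assumptions, and the real system layers engagement signals on top of raw presence.

```python
def absentee_flags(presence, max_absent_frames=30):
    """Flag spans where no face is detected for too many consecutive frames.

    `presence` holds one boolean per sampled frame (True = face detected).
    At 1 frame/second, 30 missing frames is roughly a 30 s absence (assumed).
    Returns (start_index, end_index) pairs for each flagged absence.
    """
    flags, run_start = [], None
    for i, present in enumerate(presence):
        if not present:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= max_absent_frames:
                flags.append((run_start, i - 1))
            run_start = None
    # Handle an absence that runs to the end of the recording
    if run_start is not None and len(presence) - run_start >= max_absent_frames:
        flags.append((run_start, len(presence) - 1))
    return flags
```

Short gaps, such as a candidate leaning out of frame for a moment, fall below the threshold and produce no flag.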
Unauthorized device detection is where things get specific. The AI scans the video feed for smartphones, tablets, books, secondary screens, and similar objects within the camera’s field of view. It doesn’t rely on the candidate to declare their environment honest. It checks.
For a deeper look at the identity layer specifically, see how Xobin detects multiple faces, devices, and identity fraud.
Audio analysis runs in parallel. The system monitors ambient audio for conversations, whispered responses, and background noise that suggests external assistance. Real-time monitoring means the flag happens during the test, not after it’s submitted.
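Production audio monitoring relies on trained voice-activity and speech models, but the basic idea of gating on the ambient sound level can be illustrated with a frame-level RMS check. This is a crude stand-in, not Xobin’s method, and the frame size and threshold are assumptions.

```python
def loud_frames(samples, frame_size=160, rms_threshold=0.1):
    """Return indices of audio frames whose RMS exceeds a noise threshold.

    A deliberately simple stand-in for speech detection: real systems use
    trained voice-activity models, not a raw loudness gate.
    `samples` are normalized amplitudes in [-1.0, 1.0].
    """
    flagged = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        rms = (sum(s * s for s in frame) / frame_size) ** 0.5
        if rms > rms_threshold:
            flagged.append(i // frame_size)
    return flagged
```

A quiet room stays below the threshold; a voice in the background pushes individual frames above it and marks the moment for review.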
The Question Bank Design That Makes External Resources Useless
Even the best monitoring can be beaten if the questions are searchable. That’s why Xobin attacks the problem from both sides.
Xobin’s non-googleable question bank is built with proprietary, scenario-based questions that don’t exist anywhere on the public internet. No Stack Overflow thread, no Chegg answer, no Reddit post. These questions require applied reasoning, not recall. A candidate finding the question online isn’t going to find the answer.
Question randomization means no two candidates receive the same test order, and in many cases, the same question pool. This eliminates coordination between candidates who’ve already taken the test and those who haven’t. It also prevents screenshot-sharing of the full question set.
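One common way to implement per-candidate randomization is to seed a random generator with the candidate’s identity, so each candidate gets a stable but distinct selection and order. The sketch below is illustrative, not Xobin’s actual scheme; the pool size and question count are assumptions.

```python
import random

def candidate_question_set(question_pool, candidate_id, num_questions=10):
    """Deterministically sample and order questions per candidate.

    Seeding the generator with the candidate ID means the same candidate
    always sees the same set, while different candidates see different
    selections and orderings (illustrative scheme, values assumed).
    """
    rng = random.Random(candidate_id)
    chosen = rng.sample(question_pool, num_questions)
    rng.shuffle(chosen)
    return chosen

pool = [f"Q{i}" for i in range(1, 101)]
a = candidate_question_set(pool, "cand-001")
b = candidate_question_set(pool, "cand-002")
```

Determinism matters here: if a flagged session needs review, the exact question set a candidate saw can be reproduced from their ID.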
Combine these with the lockdown browser environment, and the external research window effectively closes. The lockdown browser restricts the OS layer, not just the browser tab. Candidates can’t switch applications, open file explorers, or run background tools. They’re sealed inside the assessment.
Xobin’s assessment library includes more than 180,000 validated questions. Rotating from this library means the same role can be tested across dozens of hiring cycles without meaningful question repetition, eliminating the risk of question leakage over time.
How Xobin Generates a Trust Score (And What Recruiters Do With It)
All of those monitoring signals (gaze data, browser events, audio flags, face detections, and device alerts) feed into a single output: a Trust Score per candidate.
Recruiters don’t have to watch hours of footage or comb through raw logs. The Trust Score tells them where to focus. A candidate with a score below a set threshold gets flagged for manual review. One with a clean score moves forward.
The Trust Score makes that review process fast and defensible. Every flag has a timestamped log entry and a corresponding video or screenshot. If a candidate or legal team challenges an outcome, the evidence chain is complete.
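Conceptually, a trust score collapses weighted flag penalties into a single number with a review threshold. The weights, flag names, and threshold below are hypothetical; Xobin’s actual scoring model is proprietary.

```python
# Hypothetical per-flag penalty weights; the real scoring model differs.
PENALTIES = {
    "gaze_deviation": 3,
    "tab_switch": 5,
    "multiple_faces": 15,
    "device_detected": 15,
    "audio_flag": 8,
}

def trust_score(flags, review_threshold=70):
    """Collapse raw proctoring flags into one 0-100 integrity score.

    Returns (score, needs_manual_review). Unknown flag names score zero
    penalty here; a real model would not silently ignore them.
    """
    score = max(100 - sum(PENALTIES.get(f, 0) for f in flags), 0)
    return score, score < review_threshold
```

A clean session keeps the full score and moves forward; a session with serious flags drops below the threshold and lands in the manual-review queue with its evidence attached.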
Does This Work at Scale? What High-Volume Teams Need to Know
Running 50 assessments a month is manageable with manual review. Running 5,000 is not. Xobin’s proctoring architecture is built for volume.
The platform supports 1,000+ concurrent candidates in the same assessment window, a feature designed specifically for hackathons and campus hiring drives. All proctoring signals run in parallel across every concurrent session. Nothing is deferred or queued.
Large-scale deployments also benefit from Xobin’s data center infrastructure. Separate servers in the EU, UK, USA, Middle East, and India mean proctoring data stays within the required jurisdiction. GDPR compliance isn’t an afterthought. It’s built into where the data lives.
For enterprises working across geographies, the platform’s 15-language support extends to the assessment content itself. Proctoring works the same way regardless of the language the test is delivered in.
That’s the architecture behind a system that eliminates blind spots rather than adding more cameras to watch them.
If you want the complete breakdown of how each layer addresses every known workaround, read how Xobin’s AI proctoring stops every cheating method.
See Xobin’s Multi-layer Proctoring in Action!
Cheating in online assessments isn’t a fringe behavior. It’s systematic. Candidates have spent time figuring out exactly where the gaps are in single-layer webcam monitoring, and they exploit those gaps.
Xobin’s approach doesn’t try to out-police every possible workaround. It closes the room. Eye movement tracking, screen monitoring, audio analysis, device detection, and non-googleable questions work together so that gaming one layer doesn’t give a candidate a free pass on the others.
The result is a hiring process where your assessment data actually reflects what a candidate can do. That’s the whole point of testing.
Your next hire deserves an honest assessment. If candidates are gaming your current setup, you’re making decisions on bad data.
Book a personalized demo with Xobin and see exactly how EyeGazer, screen monitoring, audio analysis, and the Trust Score work together in a live session. No slides. No generic walkthrough. A real look at the platform built for how your team actually hires.
Frequently Asked Questions
Does EyeGazer flag candidates who wear glasses or move naturally?
Xobin’s EyeGazer establishes a personalized baseline for each candidate in the first 30 seconds of the session. Natural eye movement within that baseline doesn’t trigger flags. The system looks for patterns, not isolated events. Glasses and minor head movement don’t affect accuracy.
Can candidates use a virtual machine or remote desktop to bypass the lockdown browser?
Xobin’s lockdown browser includes OS-level process detection that identifies virtual machine environments and remote desktop sessions. Attempts to use these tools are flagged automatically and can be configured to terminate the session.
How long is proctoring data retained, and who can access it?
Proctoring data is retained according to your organization’s configured data retention policy, with default settings aligned to GDPR requirements. Access is role-based. Only authorized administrators and hiring managers see flagged session data. Xobin holds SOC2 Type-2 and ISO 27001 certifications, and external penetration testing occurs every six months.
Does Xobin’s proctoring work for non-technical roles, not just coding assessments?
Yes. Proctoring runs on every assessment type on the platform, including psychometric tests, communication assessments, video interviews, and aptitude tests. Over 3,400 role-specific assessments across engineering, finance, marketing, HR, and sales are all proctoring-compatible.
What happens when a candidate is flagged? Does it automatically disqualify them?
Flagging doesn’t equal disqualification. Xobin surfaces the Trust Score and the individual flags to the recruiter or hiring manager for review. The final decision always stays with your team. This gives you evidence to act on, not an automated verdict. High-stakes hiring decisions benefit from human judgment applied to machine-generated evidence, not replaced by it.