
Xobin AI Governance and Responsible AI Usage Policy

Version 1.2 • Applies to Xobin Assessments & AI Interviews • Inference-Only AI (No training on customer data)

GDPR Compliant • EU AI Act Ready • India DPDP Act • CCPA Compliant

1. Purpose & Scope

This policy defines secure AI development practices, responsible operation, and human accountability for AI features used within Xobin's assessment and interview products. It applies to all AI components integrated into Xobin (e.g., AI interview insights, proctoring signals, and behavioral trait mapping) regardless of whether they are built in-house or consumed via third-party APIs.

The scope covers data handling, fairness testing, privacy, compliance (GDPR, EU AI Act, CCPA, India DPDP Act, 2023), monitoring, and assurance artifacts provided to customers.

2. Definitions

2.1 Inference-Only AI

AI models that consume inputs to generate outputs but are never trained or fine-tuned on customer data.

2.2 Human-in-the-Loop (HITL)

Mandatory human review and decision authority over any AI-assisted output that may affect individuals.

2.3 Adverse Impact

Statistically significant disadvantage across protected groups with respect to outcomes (e.g., selection rates).

2.4 Disparate Impact Ratio (DIR)

Ratio of selection rates between a protected group and a baseline group (4/5ths rule threshold = 0.80).
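As a hedged illustration using the cohort figures reported in Section 4.2.3, the check reduces to a single ratio:

```python
# Illustrative DIR calculation (numbers taken from the Section 4.2.3 cohort table).
female_selection_rate = 0.312   # 31.2%
male_selection_rate = 0.318     # 31.8% (baseline group)

dir_value = female_selection_rate / male_selection_rate
print(f"DIR = {dir_value:.2f}")                 # ~0.98
print("Passes 4/5ths rule:", dir_value >= 0.80)  # True
```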

3. Governance & Roles (RACI)

Responsible: Engineering (secure integration), InfoSec & Privacy (data protection, DPIA), Product (user disclosures & consent)
Accountable: AI Governance Lead (end-to-end compliance with the EU AI Act and internal policy)
Consulted: Legal/Compliance (interpretation, cross-border controls), Customer Success (change management)
Informed: Enterprise Customers (transparency dashboards), Candidates (privacy notice and AI usage disclosure)

4. Secure AI Development Practices

4.1 Requirements, Risk & Privacy Analysis

4.1.1 Threat Modeling for AI

For every AI feature, security architects conduct STRIDE threat modeling, covering model abuse (prompt injection, data exfiltration), integrity attacks (poisoned inputs), availability risks (rate-limit exhaustion), and privacy attacks (re-identification).

4.1.2 Data Categorization & Purpose Limitation

Inputs are classified (PII, device metadata, behavioral signals, transcripts). Only data strictly necessary to meet the feature purpose is collected (GDPR Art. 5, DPDP Sec. 4).

4.1.3 DPIA & Lawful Basis

A Data Protection Impact Assessment is performed for high-risk processing (EU AI Act high-risk recruitment context, GDPR Art. 35). Lawful bases: consent for AI-assisted interviews; legitimate interests for fraud/proctoring with proportionality and safeguards.

4.1.4 Explicit Consent (Checkbox) & Notice

Before an AI-assisted interview or assessment, candidates are shown a clear consent checkbox.

The checkbox links to the AI Policy. Consent is required to proceed (opt-in), time-stamped, and stored in immutable audit logs. For India's DPDP Act compliance, the notice includes purpose, categories of personal data, retention, grievance officer, and withdrawal mechanism.
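A minimal sketch of how such a consent event could be captured as an append-only, time-stamped record; the field names and hash-chaining approach are illustrative assumptions, not Xobin's actual schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_consent(candidate_id: str, policy_version: str, prev_hash: str) -> dict:
    """Build an append-only consent record; chaining each record to the hash of
    the previous one makes after-the-fact edits detectable (illustrative only)."""
    record = {
        "candidate_id": candidate_id,          # pseudonymous ID, not raw PII
        "policy_version": policy_version,      # e.g. "AI Policy v1.2"
        "consent": True,                       # opt-in checkbox result
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

# Usage: each new record references the hash of the one before it.
first = record_consent("cand_001", "AI Policy v1.2", prev_hash="GENESIS")
second = record_consent("cand_002", "AI Policy v1.2", prev_hash=first["hash"])
```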

4.1.5 Inference-Only Enforcement

Gateways strip PII via redaction/tokenization before calling model APIs; provider data retention is disabled. No customer data is used for training, fine-tuning, or evaluation of foundation models.
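A minimal sketch of the kind of regex-based redaction and tokenization pass a gateway might apply before a model call; the patterns and placeholder tokens are illustrative assumptions, and a production deployment would combine them with a dedicated PII-detection service:

```python
import re

# Illustrative patterns only; real deployments layer ML-based PII detection
# with curated regexes per locale.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> tuple[str, dict[str, str]]:
    """Replace PII with stable tokens; return the redacted text plus a
    token->original map kept only inside Xobin's boundary (never sent out)."""
    vault: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        def _tokenize(match, label=label):
            token = f"<{label}_{len(vault)}>"
            vault[token] = match.group(0)
            return token
        text = pattern.sub(_tokenize, text)
    return text, vault

redacted, vault = redact("Reach me at jane.doe@example.com or +91 98765 43210.")
# redacted -> "Reach me at <EMAIL_0> or <PHONE_1>."
```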

4.1.6 Security Controls

TLS 1.3 in transit; AES-256 at rest; secrets in a vault; per-tenant encryption keys; signed request envelopes to AI providers; allowlists for outbound AI hosts; circuit breakers and timeouts to prevent data leakage via retries.
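A hedged sketch of two of these controls, an outbound host allowlist and an HMAC-signed request envelope; the host name, field names, and key handling are assumptions for illustration only:

```python
import hashlib
import hmac
import json
import time
from urllib.parse import urlparse

ALLOWED_AI_HOSTS = {"api.openai.com"}  # assumption: only approved provider hosts

def sign_envelope(payload: dict, secret_key: bytes) -> dict:
    """Wrap an outbound AI request in a signed envelope so an egress proxy or
    the receiving side can verify integrity and reject replays."""
    body = json.dumps(payload, sort_keys=True)
    issued_at = int(time.time())
    signature = hmac.new(secret_key, f"{issued_at}.{body}".encode(), hashlib.sha256).hexdigest()
    return {"body": body, "issued_at": issued_at, "signature": signature}

def check_outbound(url: str) -> None:
    """Raise if the destination host is not on the outbound allowlist."""
    host = urlparse(url).hostname
    if host not in ALLOWED_AI_HOSTS:
        raise PermissionError(f"Outbound AI host not allowlisted: {host}")

check_outbound("https://api.openai.com/v1/chat/completions")
envelope = sign_envelope({"prompt": "<redacted transcript>"}, secret_key=b"example-key-from-vault")
```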

4.2 Fairness, Bias & Adverse Impact Testing

4.2.1 Testing Objective

Demonstrate that AI-assisted outcomes do not disproportionately disadvantage protected groups across gender, age bands, and language proficiency. We evaluate fairness pre-deployment and quarterly thereafter, capturing confidence intervals and effect sizes.

4.2.2 Dataset (Illustrative Validation Cohort)

1,020 interview sessions across sectors (BFSI and Tech), geographies (IN, AE, and EU), and languages (EN-native 40%, EN-non-native 60% with accents including Indian English and Arabic-accented English). Demographic labels are voluntary, privacy-preserving, and used only in aggregate for fairness evaluation.

4.2.3 Target Outcomes

(a) AI-assisted Shortlist Recommendation
(b) Proctoring Flag Severity
(c) Interview Score (0–10)
HITL ensures humans finalize decisions.

Group                         Selection Rate   DIR vs. Baseline   FNR Δ (pts)   Equalized Odds Δ
Gender: Female                31.2%            0.98               +0.3          0.01
Gender: Male (baseline)       31.8%            1.00               0.0           0.00
Gender: Non-binary            30.8%            0.96               +0.4          0.01
Age: 18–24                    29.9%            0.95               +0.5          0.02
Age: 25–34 (baseline)         31.5%            1.00               0.0           0.00
Age: 35–44                    31.9%            1.01               -0.1          0.01
Lang: EN-native (baseline)    31.6%            1.00               0.0           0.00
Lang: EN-non-native           30.7%            0.97               +0.2          0.01

4.2.4 Interpretation

All Disparate Impact Ratios fall within the 0.80–1.25 guardrail (4/5ths rule). Observed differences in FNR and equalized odds are within ±0.5 percentage points and ±0.02, respectively, indicating no material adverse impact in this cohort. Confidence intervals (95%) for DIR exclude values <0.85 for all listed groups in this sample.
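A minimal sketch of how the DIR and FNR deltas in the table above could be recomputed from per-candidate outcomes; the column names and the pandas-based approach are assumptions, not the actual evaluation pipeline:

```python
import pandas as pd

def fairness_summary(df: pd.DataFrame, group_col: str, baseline: str) -> pd.DataFrame:
    """Compute selection rate, DIR vs. baseline, and FNR delta per group.
    Expects columns: group_col, 'selected' (0/1 AI recommendation),
    'qualified' (0/1 ground-truth label from the validation cohort)."""
    base = df[df[group_col] == baseline]
    base_rate = base["selected"].mean()
    base_fnr = ((base["qualified"] == 1) & (base["selected"] == 0)).sum() / max((base["qualified"] == 1).sum(), 1)
    rows = []
    for group, sub in df.groupby(group_col):
        rate = sub["selected"].mean()
        fnr = ((sub["qualified"] == 1) & (sub["selected"] == 0)).sum() / max((sub["qualified"] == 1).sum(), 1)
        rows.append({
            "group": group,
            "selection_rate": round(rate, 3),
            "dir_vs_baseline": round(rate / base_rate, 2),
            "fnr_delta_pts": round((fnr - base_fnr) * 100, 1),
            "passes_4_5ths": rate / base_rate >= 0.80,
        })
    return pd.DataFrame(rows)
```

Run pre-deployment and quarterly; any row with `passes_4_5ths == False` would trigger the mitigation workflow in 4.2.5.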

4.2.5 Mitigations Applied

  • Language-normalization prompts
  • Platt scaling calibration (see the sketch after this list)
  • Post-processing threshold alignment
  • Borderline case review nudges
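A hedged sketch of the Platt scaling step: a logistic regression fit on held-out validation scores that maps raw model scores to calibrated probabilities. The variable names, toy data, and use of scikit-learn are assumptions, not Xobin's production code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# raw_scores: uncalibrated AI interview scores on a held-out validation set
# outcomes:   binary ground-truth labels (e.g., human-confirmed shortlist)
raw_scores = np.array([0.2, 0.4, 0.55, 0.6, 0.7, 0.85, 0.9, 0.95])
outcomes   = np.array([0,   0,   0,    1,   1,   1,    1,   1])

# Platt scaling = fit a sigmoid (logistic regression) on the raw scores.
calibrator = LogisticRegression()
calibrator.fit(raw_scores.reshape(-1, 1), outcomes)

def calibrate(score: float) -> float:
    """Map a raw model score to a calibrated probability."""
    return float(calibrator.predict_proba([[score]])[0, 1])

print(calibrate(0.62))  # calibrated probability for a raw score of 0.62
```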

Figure: Illustrative fairness metric, Disparate Impact Ratio by group (relative to baseline), plotted against the 0.80 threshold. All values fall within the 0.80–1.25 guardrail range, indicating no adverse impact (4/5ths rule compliance).

4.3 Secure Model Integration & Architecture

Requests pass through an API Gateway enforcing authentication, rate limits, schema validation, and PII redaction. A policy engine checks tenant flags (e.g., 'no data retention') before routing to model providers. Responses are signed and logged with minimal metadata. Secrets management uses short-lived tokens, per-region CMKs, and quarterly key rotation.
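A hedged sketch of the tenant-flag check described above; the flag names and routing logic are illustrative assumptions rather than the actual policy-engine implementation:

```python
from dataclasses import dataclass

@dataclass
class TenantPolicy:
    tenant_id: str
    allow_ai_features: bool = True
    require_zero_retention: bool = True
    data_region: str = "EU"          # e.g. "EU", "IN", "US"

def route_request(policy: TenantPolicy, redacted_payload: dict) -> dict:
    """Decide whether and how a redacted request may be sent to the model provider."""
    if not policy.allow_ai_features:
        raise PermissionError("AI features disabled for this tenant")
    return {
        "payload": redacted_payload,
        "provider_options": {
            # assumption: retention is controlled via provider account settings
            # and surfaced here only as a routing flag
            "zero_retention": policy.require_zero_retention,
            "region": policy.data_region,
        },
    }
```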

AI Processing Data Flow (diagram): Client Request → API Gateway (auth, rate limiting, schema validation) → PII Redaction → Policy Engine (tenant flags) → Model Provider (OpenAI inference), supported by Secrets Management (short-lived tokens, regional CMKs) and a Response Logger (metadata only).

4.4 Deployment, Monitoring & Incident Response

Deployment: Blue-green with canary for AI features; automated rollback upon anomalies. Monitoring: model latency, output stability, drift (population & conditional), prompt failure rates, and PII-redaction misses (expected = 0).
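One common way to quantify the population drift mentioned above is the Population Stability Index (PSI) between candidate cohorts; the binning, thresholds, and toy data below are illustrative assumptions, not Xobin's production configuration:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference score distribution (e.g. last quarter) and the
    current one. Rule of thumb: <0.10 stable, 0.10-0.25 moderate, >0.25 drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # clip to avoid division by zero / log(0) for empty bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
psi = population_stability_index(rng.normal(6.5, 1.2, 1000), rng.normal(6.2, 1.3, 1000))
```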

Incident Response: AI incidents follow security SLAs (Critical 24h, High 72h, Medium 7d). Customer notifications for material impacts.

4.5 Human-in-the-Loop Operations & Overrides

Recruiter Workflow: AI yields a draft score with confidence bands and rationale (salient factors). Recruiters must confirm, request a re-run, or override with reason codes (policy/context/quality). Escalation triggers if override/AI disagreement rates exceed thresholds; monthly audits sample cases for qualitative review.
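A minimal sketch of how override reason codes and an escalation threshold could be tracked; the reason codes match the ones listed above, while the threshold value and data structures are assumptions:

```python
from collections import Counter

REASON_CODES = {"policy", "context", "quality"}
ESCALATION_THRESHOLD = 0.15   # assumption: escalate if >15% of decisions are overridden

overrides: Counter = Counter()
decisions = 0

def record_decision(ai_recommendation: str, final_decision: str, reason_code: str | None = None) -> None:
    """Log a recruiter decision; overrides require one of the approved reason codes."""
    global decisions
    decisions += 1
    if final_decision != ai_recommendation:
        if reason_code not in REASON_CODES:
            raise ValueError("Override must carry a reason code: policy/context/quality")
        overrides[reason_code] += 1

def needs_escalation() -> bool:
    """True when the override rate exceeds the escalation threshold."""
    return decisions > 0 and sum(overrides.values()) / decisions > ESCALATION_THRESHOLD
```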

AI Processing Data Flow (diagram): Candidate Consent (checkbox) → Secure Ingestion (PII redaction / tokenization) → AI Inference (no training) → Human Review (HITL) → Final Decision & Report.

AI Governance Lifecycle (cycle): Requirements & Risk (DPIA / RAI) → Model Integration (Inference-Only) → Validation (Fairness & Robustness) → Deployment (Security Controls) → Monitoring (Drift & Incidents).

5. Regulatory Compliance

Xobin tracks and complies with applicable data protection and AI regulations worldwide. Our compliance approach integrates obligations from the GDPR (EU), the EU AI Act (2024), CCPA/CPRA (California), and India's DPDP Act (2023) into one unified framework.

GDPR (EU)

  • DPIA (Art. 35) for all AI modules
  • Data minimization (Art. 5) and purpose limitation
  • Explicit AI disclosure with human oversight (Art. 22)

EU AI Act

  • High-Risk classification (Annex III, Employment)
  • Risk management system & technical documentation
  • Transparency notices and human oversight
  • Quarterly fairness audits

CCPA/CPRA (California)

  • Xobin functions as a Service Provider
  • Right to opt out of automated profiling
  • Access, correction, and deletion rights

India DPDP Act (2023)

  • Explicit consent via checkbox before AI-assisted interviews
  • Purpose, data categories, and retention disclosed
  • Grievance officer and withdrawal mechanism

6. Data Management & Privacy Controls

Xobin's AI runs on Google Cloud Platform (GCP) infrastructure, with OpenAI models integrated in inference-only mode. Security and privacy are enforced across all stages of the data lifecycle.

6.1 Inference-Only Mode

Customer data is never used for model training or fine-tuning. OpenAI API calls are configured with zero-retention mode; prompts and completions are not stored by the provider.
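A hedged sketch of an inference-only call through the official OpenAI Python SDK; the model name is a placeholder, and whether retention is enforced through account-level settings, the `store` flag, or both is an assumption about configuration rather than a description of Xobin's actual integration:

```python
from openai import OpenAI

client = OpenAI()  # API key read from the environment / secrets vault

def score_transcript(redacted_transcript: str) -> str:
    """Send a PII-redacted transcript for inference only; nothing is used for training."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        store=False,           # assumption: do not store the completion on the provider side
        messages=[
            {"role": "system", "content": "Summarize interview competencies. Do not infer personal attributes."},
            {"role": "user", "content": redacted_transcript},
        ],
    )
    return response.choices[0].message.content
```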

6.2 Data Residency

Candidates' data is stored and processed in GCP regions selected by customers (India, EU, US). Logs are segregated per region; cross-border transfers use SCCs (EU) or DPDP-compliant notices (India).

6.3 Data Minimization & Retention

Only essential inputs are processed (e.g., transcript text, not raw video, unless explicitly required). Default retention: 12 months for assessments, configurable by client. Derived AI outputs (scores, flags) are stored only for reporting and audit purposes, not for model improvement.

6.4 Encryption & Access Controls

Encryption at Rest: AES-256 (GCP default). Encryption in Transit: TLS 1.3 is enforced for all client-AI-provider communications. Access Management: RBAC enforced in GCP IAM; JIT access provisioning; all privileged access is MFA-protected and logged.

7. Assurance Artifacts & Supporting Documents

7.1 Fairness Validation Report

Dataset: Balanced set of 1,020 interviews spanning BFSI, Retail and Tech sectors; multilingual English (native & non-native); voluntary demographic tags.
Metrics: Disparate Impact Ratio (DIR) across gender, age, and language; calibration parity across groups.
Results: All DIRs between 0.95 and 1.02; no subgroup below 0.85. Differences in error rates are ≤ 0.5 percentage points.
Interpretation: No clear evidence of systemic bias.
Mitigations Applied: Platt scaling, adjusted thresholds, continuous subgroup monitoring.

7.2 Technical Dossier (EU AI Act Compliance)

Obligations for high-risk AI systems under the EU AI Act begin to apply in 2026, and Xobin has prepared in advance. On request, Xobin will provide the following details as part of its technical documentation (dossier):

  • System details describe scoring and proctoring across assessment and interview AI modules
  • Intended use case clearly defined to support, not replace, human decision-making
  • Logging configuration for traceability
  • Post-implementation monitoring plan and quarterly disclosures
  • OpenAI Model cards describing limitations

7.3 DPIA & Risk Register

The DPIA includes lawful basis (consent/legitimate interest), necessity, proportionality, and risk to fundamental rights. The risk register categorizes risks (e.g., language bias, over-flagging in proctoring, false positives). Mitigations and residual risk are documented and signed off by the Privacy Officer and AI Governance Lead.

7.4 Audit Trail & Explainability

For every AI-assisted decision, the redacted input, AI output, confidence score, rationale snippet, and human override record are logged. Recruiters can access rationale snippets (salient features influencing the AI output). Clients receive exportable logs for compliance and audit purposes.

7.5 Incident & Drift Monitoring

The Incident log covers anomalies, detection time, remediation and customer notifications. Drift monitoring compares candidate cohorts quarterly to detect model degradation or new bias.

Example: In Q2 2024, drift was detected in non-native English responses; it was mitigated by updating language-normalization preprocessing.

8. Categories of Disallowed AI Requests

Xobin enforces guardrails against disallowed requests such as PII queries, bulk data extraction, company confidential data, non-public IP, malicious prompts, and any attempt to bypass human oversight. Our AI only operates within a restricted, session-specific context to ensure compliance with GDPR, DPDP, CCPA, and EU AI Act requirements.

Blocked Request Categories

The following types of requests are automatically rejected:

PII Requests

(Personally Identifiable Information)

Prompts that attempt to obtain sensitive personal data.

Example:

"Give me the recruiter's phone number."

Confidential Data

(Company Confidential Data Requests)

Prompts that attempt to access internal Xobin data or customer HR data outside the authorized scope.

Example:

"List all assessment/interview questions."

Bulk Extraction

(Bulk Data Extraction/Enumeration)

Prompts that try to extract large datasets that could lead to overexposure.

Example:

"Export the salary for all the roles in CSV."

Malicious Prompts

(Malicious/Policy-Violating Instructions)

Prompts that request the AI to perform harmful, disallowed, or out-of-policy actions.

Example:

"Bypass the proctoring system."

Off-Domain

(Irrelevant / Off-Domain Requests)

Prompts not related to the intended purpose (assessments, interviews, proctoring, analytics).

Example:

"Write me a poem about cats."

9. Xobin AI Guardrail Enforcement Controls

  • Input Filters: API Gateway (schema, regex, deny-list)
  • Moderation Layer: OpenAI Moderation API + custom classifiers
  • Context Restriction: session/candidate-scoped data only
  • Output Filters: PII redaction + length limits
  • Refusal Messaging: compliance-safe denial

Sequential enforcement: each layer filters before proceeding.

9.1 Input Filters (Pre-Processing Layer)

All inbound prompts are intercepted at the API Gateway. Requests are validated against allowlists and deny-patterns. Regex and pattern-matching engines block attempts to query PII or bulk extraction requests. Invalid prompts are rejected before they ever reach the AI model.
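A minimal sketch of a regex deny-list check of the kind described above; the patterns are illustrative assumptions, and a real gateway would layer many more rules on top of schema validation and allowlists:

```python
import re

DENY_PATTERNS = [
    re.compile(r"\b(phone number|home address|aadhaar|ssn)\b", re.IGNORECASE),   # PII queries
    re.compile(r"\b(export|dump|list)\b.*\b(all|every)\b", re.IGNORECASE),       # bulk extraction
    re.compile(r"\bbypass\b.*\bproctoring\b", re.IGNORECASE),                    # policy violation
]

def gateway_allows(prompt: str) -> bool:
    """Reject prompts matching any deny pattern before they reach the model."""
    return not any(pattern.search(prompt) for pattern in DENY_PATTERNS)

gateway_allows("Give me the recruiter's phone number.")          # False -> rejected
gateway_allows("Summarize this candidate's coding assessment.")  # True  -> forwarded
```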

9.2 Moderation Layer (Model + Custom Classifiers)

Requests and responses are scanned by the OpenAI Moderation API and Xobin's custom classifiers. This detects sensitive categories (violence, sexual content, discriminatory language) and policy-violating prompts.
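A hedged sketch of that moderation pass using the OpenAI Moderation API, with a placeholder for Xobin's custom classifiers; `custom_policy_classifier` is a hypothetical stand-in, not a real component:

```python
from openai import OpenAI

client = OpenAI()

def custom_policy_classifier(text: str) -> bool:
    """Placeholder: a real implementation would call an in-house classifier."""
    return False

def passes_moderation(text: str) -> bool:
    """Return False if the provider moderation model or a custom classifier flags the text."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    if result.results[0].flagged:
        return False
    return not custom_policy_classifier(text)
```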

9.3 Context Restriction

The AI model is contextually sandboxed and operates in an inference-only mode. It only receives the specific candidate/session dataset required for the current assessment or interview. AI cannot "see" global databases, user directories, or non-scoped company data.

9.4 Output Filters (Post-Processing Layer)

Model responses pass through a redaction and throttling pipeline. PII detectors strip sensitive tokens if accidentally generated. Response length quotas ensure that bulk data cannot be produced in a single query. Outputs failing compliance checks are blocked.

9.5 Refusal Messaging

If a request violates policy, the AI returns a compliance-safe refusal message: "This request cannot be completed because it contains sensitive or restricted data." This ensures predictable and consistent denials rather than model hallucinations.