Assessment Innovation

Overview

Most assessment methods (CVs, interviews, personality questionnaires) were built for a world with a direct link between the applicant and their output. Generative AI breaks that link.

The Problem

A candidate today can:

  • Produce a CV optimised to mirror a job description with precision no human editor could match
  • Rehearse interview answers with AI that generates structured, plausible behavioral responses
  • Complete personality questionnaires by asking an LLM how to respond

Result: we are no longer measuring capability; we are measuring how well people use AI. Pre-seed investors report that pitches are starting to look alike: polished language, impeccable structure, and differentiation quietly gone.

This is not about dishonesty. It is structural. AI makes the differences between us less visible.

The Industry Response: Skills-Based Hiring

The shift toward assessing what someone can do rather than what they say they can do:

  • Work sample tests have meaningfully better predictive validity than personality questionnaires (Schmidt & Hunter, 1998)
  • Meta-analyses confirm the pattern holds (Sackett et al., 2022)

Limitations of Current Skills-Based Assessment

  • Either focuses on a narrow task (e.g., coding challenge) or requires expensive full-day assessment centers
  • Not easily deployed at scale
  • Still potentially gameable with AI assistance

Behavioral Measurement

Methods that observe cognition directly rather than relying on self-report:

  • Eye-Tracking and Decision Making — gaze patterns reveal decision processes that are far harder to fake than verbal or written responses
  • Micro-behavioral analysis during real decision tasks
  • Scalable, resistant to AI gaming, and predictive of real-world performance
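To make the eye-tracking bullet concrete, a standard first step in gaze analysis is segmenting raw samples into fixations. Below is a minimal sketch of dispersion-threshold (I-DT style) fixation detection; the `(t_ms, x, y)` sample format, the `detect_fixations` function, and the thresholds are illustrative assumptions, not a method described in this article.

```python
# Illustrative sketch: dispersion-threshold fixation detection, a common
# building block for gaze-based micro-behavioral analysis.
# Data format and thresholds are assumptions for this example.

def detect_fixations(samples, max_dispersion=1.0, min_duration_ms=100):
    """Group gaze samples (t_ms, x, y) into fixations.

    A fixation is a window whose x/y dispersion stays under
    `max_dispersion` (same units as x/y) for at least `min_duration_ms`.
    Returns a list of (start_ms, end_ms, centroid_x, centroid_y).
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # Grow the window while dispersion stays within the threshold.
        while j + 1 < n:
            window = samples[i:j + 2]
            xs = [s[1] for s in window]
            ys = [s[2] for s in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration_ms:
            window = samples[i:j + 1]
            cx = sum(s[1] for s in window) / len(window)
            cy = sum(s[2] for s in window) / len(window)
            fixations.append((samples[i][0], samples[j][0], cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations

# Two stable clusters of gaze points separated by a saccade:
gaze = [(0, 10.0, 10.0), (50, 10.1, 10.0), (100, 10.0, 10.1),
        (150, 10.2, 10.1), (200, 25.0, 25.0), (250, 25.1, 25.0),
        (300, 25.0, 25.1), (350, 25.1, 25.1)]
print(len(detect_fixations(gaze)))  # prints 2
```

In practice, such features (fixation counts, durations, transition patterns) are what make this class of measurement hard to game: they are produced by the candidate's attention during the task itself, not by a polished artifact submitted afterward.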

The Self-Report Collapse

Traditional self-report limitations were documented as early as Crowne & Marlowe (1960). AI has accelerated the collapse by making it trivial to produce socially desirable responses at scale.