Research

Applied AI research.

Technical publications on model evaluation, behavioral analysis, adversarial resilience, and the infrastructure needed to deploy AI responsibly in government and enterprise environments.


01
Behavioral Evaluation · 2026

Behavioral Fingerprinting of Large Language Models: Beyond Benchmark Scores to Operational Characterization

Traditional benchmarks measure capability but fail to characterize behavioral tendencies that determine real-world fitness. We propose a 16-dimension behavioral fingerprinting framework that captures model personality through adversarial probing, multi-turn escalation, and cross-condition delta analysis.
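The cross-condition delta analysis mentioned above can be sketched in a few lines. This is an illustrative toy, not the paper's actual rubric: the dimension names and scores are placeholders, and a real fingerprint would cover all 16 dimensions.

```python
# Hypothetical sketch of cross-condition delta analysis: compare a model's
# per-dimension behavioral scores under a baseline condition against the
# same dimensions under adversarial probing. Dimension names and scores
# are illustrative placeholders.

def behavioral_deltas(baseline: dict, adversarial: dict) -> dict:
    """Per-dimension score shifts (adversarial minus baseline)."""
    shared = baseline.keys() & adversarial.keys()
    return {dim: round(adversarial[dim] - baseline[dim], 3) for dim in shared}

def largest_shift(deltas: dict) -> tuple:
    """The dimension whose behavior moved most under pressure."""
    return max(deltas.items(), key=lambda kv: abs(kv[1]))

baseline = {"hedging": 0.42, "refusal_rate": 0.10, "verbosity": 0.55}
adversarial = {"hedging": 0.61, "refusal_rate": 0.35, "verbosity": 0.50}

deltas = behavioral_deltas(baseline, adversarial)
```

The point of the delta, rather than the raw adversarial score, is that it isolates how much the behavior *moved*: a model that hedges heavily in every condition is stable, while one whose refusal rate jumps under probing is not.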

02
Agreement Bias · 2026

The Sycophancy Problem: Measuring and Mitigating Agreement Bias in Large Language Models

Sycophancy represents the most prevalent behavioral failure mode in deployed AI. We characterize it across four distinct subtypes and demonstrate that RLHF training systematically amplifies this behavior, with implications for every domain deploying language models.
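One crude but common way to operationalize agreement bias, in the spirit of the abstract above, is a flip-rate metric: ask a question, push back with a confident (incorrect) user claim, and check whether the answer changes. This sketch is an assumption about methodology, not the paper's own measure, and the transcripts are invented.

```python
# Hypothetical flip-rate metric for agreement bias: each transcript pair is
# (initial answer, answer after confident user pushback). A flip, i.e. a
# changed answer, is treated as a crude sycophancy signal.

def flip_rate(transcripts: list) -> float:
    """Fraction of cases where the post-pushback answer differs."""
    if not transcripts:
        return 0.0
    flips = sum(1 for initial, after in transcripts if initial != after)
    return flips / len(transcripts)

# Illustrative data: two of four answers flip under pushback.
cases = [("Paris", "Paris"), ("4", "5"), ("1969", "1968"), ("blue", "blue")]
```

A real study would also control for legitimate corrections (the user is sometimes right), which is one reason the paper distinguishes multiple sycophancy subtypes rather than reporting a single rate.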

03
Federal Compliance · 2026

Bridging AI Evaluation and Federal Compliance: Toward ICD 203-Aligned Model Assessment Frameworks

Federal agencies adopting AI lack evaluation frameworks aligned with existing compliance mandates. We map AI evaluation dimensions to ICD 203, the NIST AI RMF, and NIST AI 600-1, creating a unified methodology that satisfies both technical rigor and regulatory requirements.

04
Adversarial Testing · 2026

Adversarial Resilience Testing for Production AI Systems: A Graduated Escalation Methodology

Most evaluation frameworks test only binary pass/fail resilience against adversarial attacks. We propose a graduated escalation methodology that measures failure thresholds, failure mode characteristics, and recovery behavior across prompt injection, jailbreak, and system prompt extraction vectors.
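The graduated-escalation idea can be illustrated with a toy ladder: run a fixed sequence of increasingly aggressive probes and record the first level at which the target yields, instead of a single pass/fail verdict. The ladder levels and the boolean "compromised" judgments below are placeholders, not the paper's actual attack suite.

```python
# Hypothetical graduated-escalation sketch. Each level is a progressively
# more aggressive probe; the output is a failure threshold, not pass/fail.

ESCALATION_LADDER = [
    "direct_request",       # level 0: plain disallowed request
    "roleplay_framing",     # level 1: fictional/persona framing
    "authority_claim",      # level 2: claimed override authority
    "multi_turn_grooming",  # level 3: gradual multi-turn setup
]

def failure_threshold(responses: list) -> int:
    """Index of the first compromised response, or -1 if the model
    held through every level. responses[i] is True if level i succeeded
    as an attack."""
    for level, compromised in enumerate(responses):
        if compromised:
            return level
    return -1
```

A threshold of 2 and a threshold of -1 are both "passed the direct attack," but they describe very different operational risk, which is the distinction binary testing loses.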

05
Context Analysis · 2026

Context Window Degradation in Extended AI Interactions: Quantifying Instruction Decay and Behavioral Drift

As conversations extend, models exhibit systematic degradation in instruction compliance, factual consistency, and behavioral stability. We quantify this context drift phenomenon and examine its implications for agentic AI applications and long-running workflows.
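A minimal way to quantify the instruction-decay component of this drift is to score compliance with a standing instruction at each turn and fit a linear trend; a negative slope indicates decay. This is a sketch under assumed scoring, and the per-turn scores below are invented for illustration.

```python
# Hypothetical instruction-decay measurement: least-squares slope of a
# per-turn compliance score against turn index. Negative slope = drift
# away from the standing instruction as the conversation extends.

def decay_slope(compliance_by_turn: list) -> float:
    """Least-squares slope of compliance score vs. turn index."""
    n = len(compliance_by_turn)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(compliance_by_turn) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, compliance_by_turn))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Compliance with a formatting instruction, scored per turn over 6 turns.
scores = [1.0, 0.95, 0.9, 0.8, 0.7, 0.55]
```

A linear fit is deliberately simple; in long agentic workflows the decay curve may be nonlinear, but the sign and magnitude of the slope already give a comparable drift statistic across models.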

06
Evaluation Infrastructure · 2026

The Case for Vendor-Agnostic AI Evaluation Infrastructure in Government and Enterprise

Organizations deploying AI from multiple vendors lack standardized infrastructure to compare models on equal footing. We argue for open evaluation infrastructure that standardizes model comparison through uniform execution, consistent scoring, and behavioral profiling.
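The uniform-execution and consistent-scoring argument can be sketched as a thin vendor-agnostic adapter layer: every model answers the same prompts through the same interface and is graded by the same rubric. The adapter class, the toy models, and the keyword rubric below are all hypothetical stand-ins, not any real vendor SDK.

```python
# Hypothetical vendor-agnostic evaluation harness: identical prompts,
# identical scoring function, one adapter per vendor API.

from typing import Callable, Dict, List

class ModelAdapter:
    def __init__(self, name: str, generate: Callable[[str], str]):
        self.name = name
        self.generate = generate  # would wrap a vendor-specific API call

def run_suite(adapters: List[ModelAdapter], prompts: List[str],
              score: Callable[[str, str], float]) -> Dict[str, float]:
    """Score every model on the identical prompt set with the identical rubric."""
    results = {}
    for adapter in adapters:
        total = sum(score(p, adapter.generate(p)) for p in prompts)
        results[adapter.name] = total / len(prompts)
    return results

# Toy rubric: 1.0 if the response contains the prompt's final keyword.
def keyword_score(prompt: str, response: str) -> float:
    return 1.0 if prompt.split()[-1] in response else 0.0

adapters = [
    ModelAdapter("echo_model", lambda p: p),   # placeholder "vendor"
    ModelAdapter("silent_model", lambda p: ""),
]
results = run_suite(adapters, ["define entropy", "define latency"],
                    keyword_score)
```

The design point is that comparability comes from holding the prompts and the scorer fixed while only the adapter varies, so no vendor's model is evaluated on friendlier inputs or a different rubric.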

07
Research Brief · 2026

The Persuasion-Accuracy Tradeoff: What the Largest AI Persuasion Study Means for Model Evaluation

A research brief analyzing Hackenburg et al.'s landmark 77,000-participant study on AI persuasion. We examine how their central finding, that optimizing for persuasion systematically degrades factual accuracy, validates behavioral evaluation frameworks and underscores the limits of capability benchmarks.