AI Hints and Time to Completion
Role: Lead / Quant UX Researcher
Methods: Randomized A/B experiment; telemetry logging; Cox proportional hazards (survival analysis); mixed-effects logistic regression
Type: Experimental pilot
Focus: Human–AI interaction, hint design, speed vs accuracy trade-offs, decision support
Timeline: Fall 2025
I am running a small quantitative study to answer a simple product question: do tiny AI hints help people finish tasks faster without hurting accuracy? I treat completion time as time-to-event data, randomize participants to a hint or control condition, and instrument the start, end, and correctness of each micro-task.
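As a sketch of that logging step, the snippet below shapes raw telemetry rows into a survival table. The column names (participant_id, task_id, arm, start_ts, end_ts, correct) and the 120-second censoring timeout are illustrative assumptions, not the study's actual schema.

```python
import pandas as pd

TIMEOUT_S = 120  # hypothetical per-task timeout used for right-censoring

def build_survival_table(logs: pd.DataFrame) -> pd.DataFrame:
    """Turn per-attempt telemetry into a time-to-event table."""
    df = logs.copy()
    df["start_ts"] = pd.to_datetime(df["start_ts"])
    df["end_ts"] = pd.to_datetime(df["end_ts"])  # NaT if the task was abandoned

    # Observed duration in seconds; abandoned tasks are censored at the timeout.
    duration = (df["end_ts"] - df["start_ts"]).dt.total_seconds()
    df["event"] = df["end_ts"].notna().astype(int)  # 1 = completed, 0 = censored
    df["duration_s"] = duration.fillna(TIMEOUT_S).clip(upper=TIMEOUT_S)

    df["hint"] = (df["arm"] == "hint").astype(int)  # treatment indicator
    return df[["participant_id", "task_id", "hint",
               "duration_s", "event", "correct"]]
```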
My analysis centers on inference, not anecdotes. I estimate the effect with a Cox proportional hazards model and report the hazard ratio with a 95% confidence interval, sanity-checking it against Kaplan–Meier curves. Accuracy is modeled with a mixed-effects logistic regression, backed by a nonparametric robustness check. Before data collection I simulate power and minimal detectable effects to choose a sensible sample size. Everything is reproducible in a Python notebook that uses pandas, lifelines, and statsmodels, with a small Bayesian cross-check under weakly informative priors.
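A minimal sketch of the estimation step, assuming the survival table built above. Clustering standard errors by participant and fitting the mixed logistic model by variational Bayes are illustrative choices for handling repeated tasks per participant, not the notebook's confirmed settings.

```python
import matplotlib.pyplot as plt
from lifelines import CoxPHFitter, KaplanMeierFitter
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# df is the output of build_survival_table above.

# Cox proportional hazards: hazard ratio for the hint arm with a 95% CI.
# cluster_col gives robust (sandwich) errors for repeated tasks per participant.
cph = CoxPHFitter()
cph.fit(df[["participant_id", "duration_s", "event", "hint"]],
        duration_col="duration_s", event_col="event",
        cluster_col="participant_id")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])

# Kaplan-Meier sanity check: completion curves by arm.
ax = plt.subplot(111)
for arm, grp in df.groupby("hint"):
    KaplanMeierFitter().fit(grp["duration_s"], event_observed=grp["event"],
                            label=f"hint={arm}").plot_survival_function(ax=ax)

# Accuracy: mixed-effects logistic regression with a per-participant
# random intercept, fit by variational Bayes.
acc = BinomialBayesMixedGLM.from_formula(
    "correct ~ hint", {"participant": "0 + C(participant_id)"}, data=df)
print(acc.fit_vb().summary())
```

A side note on the Bayesian cross-check: BinomialBayesMixedGLM already places Gaussian priors on the coefficients (the fe_p and vcp_p arguments set the prior standard deviations), so the accuracy model doubles as a weakly-informative-prior estimate to compare against a frequentist fit.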
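The power simulation can be sketched the same way. The generative assumptions below (exponential completion times with a 60-second baseline, a 120-second censoring point, significance at p < 0.05) are placeholders chosen to size a pilot, not estimates from real task data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)

def simulated_power(n_per_arm: int, hr: float, n_sims: int = 200,
                    censor_at: float = 120.0) -> float:
    """Fraction of simulated trials in which the Cox model detects the hint effect."""
    hits = 0
    for _ in range(n_sims):
        base = rng.exponential(60.0, size=2 * n_per_arm)  # baseline completion times
        hint = np.repeat([0, 1], n_per_arm)
        # Under proportional hazards, HR > 1 means faster completion for hints.
        t = base / np.where(hint == 1, hr, 1.0)
        df = pd.DataFrame({"duration_s": np.minimum(t, censor_at),
                           "event": (t <= censor_at).astype(int),
                           "hint": hint})
        cph = CoxPHFitter().fit(df, duration_col="duration_s", event_col="event")
        if cph.summary.loc["hint", "p"] < 0.05:
            hits += 1
    return hits / n_sims

# Sweep candidate sample sizes to locate the minimal detectable effect region.
for n in (50, 100, 200):
    print(n, simulated_power(n, hr=1.3))
```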
This is work in progress. I am iterating on hint format, task ambiguity, and logging granularity, and updating the result card as the estimates stabilize. The goal is a practical guideline for teams: when to surface a hint, how short it should be, and what trade-offs to expect between speed and accuracy.