ROCKIN' M LAB
Working in the Reliability Gap
RESEARCH
Artificial Intelligence systems are built from data that carries uncertainty. Training can reduce some of that uncertainty, but much of it remains embedded in the system and shapes how it behaves: it becomes part of the model, part of its internal process, and part of the experience people have when they rely on it. Our research focuses on understanding what these systems are doing behind the scenes and how that behavior affects the people who use them.
Our goal is to study that uncertainty in a way that helps people stay in control. We look at how systems behave in real environments and how that behavior changes over time. We pay attention to where people misunderstand what these tools are doing, where visibility is missing, and where better understanding can improve how these systems are used.
This work is not about policy or high-level principles. It is closer to the ground. We are interested in the practical questions that shape everyday use. What information do people need in order to make good decisions? What signals reveal how the system arrived at its output? What gaps exist between what the system produces and what the user assumes? How can we close those gaps in ways that feel natural inside real products and workflows?
Three ideas guide our approach.
We study uncertainty as a feature of the system, not a flaw, so people understand what the model can tell them and where it may be less stable.
We treat explanations as working tools that help people interpret what the system has done, not as decorative outputs.
We focus on methods that can be used in practice, inside real tools, by real teams, with real constraints.
This research is intentionally connected to our work in Training and Product. Questions that arise here often shape what we teach and how we design. For example, when we explore how deterministic a system can be, the answers inform both how we train people to work with that behavior and how we help teams place intelligence inside a product in ways that respect its limits.
The goal of all this work is simple. We want people to understand enough about what is happening inside these systems that they can use them safely, reliably, and with confidence. Research exists to make the invisible parts of Artificial Intelligence more visible, so people can make better decisions with the tools they already depend on.