How Object Detection Supports Museum Visitors' Observational Inquiry
Challenge: Museums provide direct access to scientific tools like microscopes, but visitors struggle to use them productively. As AI increasingly transforms scientific practice, we lack understanding of how to design AI that supports (rather than replaces) observational inquiry.
Outcome: Interaction analysis revealed how Object Classification Systems mediate visitor sensemaking during scientific inquiry, yielding design recommendations for AI-enhanced microscope exhibits.
Science museums enable direct observation and experimentation by making authentic scientific tools accessible to the public. However, visitors often find it challenging to use instruments like microscopes productively—they struggle to pose meaningful questions, identify what they're seeing, or interpret complex phenomena.
As artificial intelligence increasingly transforms scientific practice, it holds promise for supporting public engagement with scientific inquiry. However, HCI researchers currently lack a clear understanding of:
This led us to ask:
The larger team designed and studied an Object Classification System (OCS)-integrated microscope exhibit that helps visitors observe live microorganisms:
Physical Setup:
AI Functionality:
Note: This exhibit is located at the Exploratorium in San Francisco and is part of ongoing research on AI-enabled informal learning.
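The exhibit's actual OCS implementation is not detailed in this write-up, but a minimal sketch of what a real-time classification loop over a microscope camera feed might look like is below. The `classify` stub, labels, and confidence threshold are all hypothetical placeholders, not the Exploratorium system:

```python
# Minimal sketch of a real-time classification loop for a microscope feed.
# The model, labels, and threshold are hypothetical placeholders.
import cv2  # pip install opencv-python

CONFIDENCE_THRESHOLD = 0.6  # assumed; below this, show no label

def classify(frame):
    """Placeholder for the exhibit's object classifier.
    Would return a list of (label, confidence, bounding_box) tuples."""
    return []  # e.g., [("algae", 0.91, (120, 80, 60, 60))]

capture = cv2.VideoCapture(0)  # camera attached to the microscope
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    for label, confidence, (x, y, w, h) in classify(frame):
        if confidence < CONFIDENCE_THRESHOLD:
            continue  # suppress low-confidence labels rather than guess
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"{label} ({confidence:.0%})", (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("OCS microscope view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```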
Sample: 35 visitor dyads (pairs), randomly selected
Approach: Recruited museum visitors to use the exhibit and encouraged them to:
Note: Recruitment and session facilitation were led by the broader team at the Exploratorium.
Captured three simultaneous data streams for each dyad:
This multi-modal approach enabled reconstruction of each dyad's complete interaction journey—linking what they said, what they did, and how the AI responded.
Note: Data collection was led by the broader team at the Exploratorium.
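One minimal way to reconstruct that joint timeline is to align the streams on shared timestamps. A sketch assuming per-dyad CSV logs is below; the file and column names are hypothetical, not the team's actual pipeline:

```python
# Sketch of aligning three streams (talk, clickstream, OCS output)
# on a shared timeline. File and column names are hypothetical.
import pandas as pd

talk = pd.read_csv("dyad_07_transcript.csv")     # columns: t_sec, speaker, utterance
clicks = pd.read_csv("dyad_07_clickstream.csv")  # columns: t_sec, action
ocs = pd.read_csv("dyad_07_ocs_log.csv")         # columns: t_sec, label, confidence

# merge_asof requires sorted keys; it pairs each utterance with the most
# recent click and OCS event, linking what visitors said, what they did,
# and how the AI responded.
for df in (talk, clicks, ocs):
    df.sort_values("t_sec", inplace=True)

timeline = pd.merge_asof(talk, clicks, on="t_sec", direction="backward")
timeline = pd.merge_asof(timeline, ocs, on="t_sec", direction="backward")
print(timeline.head())
```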
I conducted interaction analysis to examine how visitors and the OCS jointly shaped moments of inquiry:
Step 1: Identify Significant Events
Defined significant events as user journeys where visitors conducted observational inquiry by:
Step 2: Code OCS Activation
Step 3: Identify Patterns
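As a minimal sketch of what Steps 2 and 3 could produce once events are coded, the snippet below tallies how each inquiry role co-occurs with OCS activation. The event records and role labels are illustrative, not the study's actual codebook:

```python
# Sketch of tallying coded events into patterns (Steps 2-3).
# Records and role labels are illustrative only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    dyad: int
    ocs_active: bool  # Step 2: was OCS activated during the event?
    role: str         # e.g., "naming", "locating", "verifying", "contrasting"

events = [
    Event(dyad=1, ocs_active=True, role="naming"),
    Event(dyad=1, ocs_active=True, role="verifying"),
    Event(dyad=2, ocs_active=False, role="locating"),
]

# Step 3: tally how often each role co-occurs with OCS activation.
patterns = Counter((e.role, e.ocs_active) for e in events)
for (role, active), count in patterns.most_common():
    print(f"{role:<12} OCS={'on' if active else 'off'}: {count}")
```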
What worked: Think-alouds combined with multi-modal recordings (audio, video, screen, clickstream) created rich data for nuanced analysis. Interaction analysis helped specify a concrete unit of analysis and revealed dynamic visitor-AI co-construction within that unit. The real-world exhibit context surfaced challenges not apparent in controlled settings.
Rapid A/B tests and prototyping helped us stick to our timeline while providing quick checks on how changes affected visitors.
Challenges: Rapid prototyping also meant shallow engagement with some findings. A single exhibit system limits generalizability to other AI-enhanced contexts.
All analysis was grounded in verbal talk, so inquiry that visitors did not voice aloud was not coded. As such, this work is exploratory rather than exhaustive.
I identified four primary roles that OCS plays in microscope-based observational inquiry:
Example question: "What is this round thing?"
How OCS helps: Provides labels for unfamiliar objects, enabling visitors to name and discuss what they see.
Inquiry dimension supported: Discovery—identifying elements on the screen
Example question: "Where is the algae?"
How OCS helps: Highlights specific elements visitors are seeking, directing attention in complex visual fields.
Inquiry dimension supported: Discovery—locating specific phenomena
Example question: "Am I right that this is a plastic bead?"
How OCS helps: Confirms or challenges visitor hypotheses, building confidence or prompting revision.
Inquiry dimension supported: Correction/clarification—verifying observations
Example question: "How are algae and beads different?"
How OCS helps: Provides distinct classifications that prompt visitors to examine differences and similarities.
Inquiry dimension supported: Hypothesis testing and theory formation
I also identified three contexts where OCS may limit observational inquiry or prove superfluous to it:
When OCS misidentifies objects, it can leave visitors confused or under-confident in their own observations. Visitors may over-trust the AI even when it is wrong.
OCS's active classification may be irrelevant to visitors' current line of inquiry: for example, when they are exploring movement patterns but OCS highlights object types.
In some cases, static media (like exhibit labels) may be more accessible or comprehensible than real-time OCS feedback, particularly for novice visitors.
For designing AI-integrated museum exhibits:
Make the fallibility of AI classifications transparent. Show confidence scores, explain how the model works, and encourage visitors to question AI outputs.
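A sketch of what such transparent, hedged labeling might look like is below; the thresholds and on-screen wording are assumptions, not the exhibit's actual copy:

```python
# Sketch of a transparent label display: show the top prediction with its
# confidence and hedge when the model is unsure. Thresholds and wording
# are assumptions, not the exhibit's actual design.
def render_label(predictions, sure=0.8, unsure=0.5):
    """predictions: list of (label, confidence), highest confidence first."""
    if not predictions:
        return "No confident match. What do you think it is?"
    label, conf = predictions[0]
    if conf >= sure:
        return f"Looks like {label} ({conf:.0%} confident)"
    if conf >= unsure:
        runner_up = predictions[1][0] if len(predictions) > 1 else "something else"
        return f"Maybe {label} ({conf:.0%}), or is it {runner_up}?"
    return f"Not sure. Best guess: {label} ({conf:.0%}). Do you agree?"

print(render_label([("plastic bead", 0.93)]))
print(render_label([("algae", 0.62), ("plastic bead", 0.31)]))
```

Hedged wording like this invites visitors to keep observing rather than defer to the AI, directly addressing the over-trust pattern noted above.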
Ensure that AI supports visitor-driven exploration rather than dictating the inquiry path, and allow it to deepen engagement systematically.
Position OCS as one tool among many (labels, diagrams, human facilitators). Acknowledge the strengths of real-time AI feedback while preserving other scaffolding methods.
For HCI & Human-AI Interaction Research:
For Museum Practice:
Current Analysis:
Paper under review for HCI conference (2026)
Additional publications forthcoming