AI Scaffolding for Scientific Inquiry

How Object Detection Supports Museum Visitors' Observational Inquiry

Timeline: 2025 (Ongoing)
Organization: Exploratorium, San Francisco
My Role: HCI Researcher
Type: Product Research
At a Glance

Challenge: Museums provide direct access to scientific tools like microscopes, but visitors struggle to use them productively. As AI increasingly transforms scientific practice, we lack understanding of how to design AI that supports (rather than replaces) observational inquiry.

Outcome: Produced interaction-analysis findings that reveal how an Object Classification System (OCS) mediates visitor sensemaking during scientific inquiry, along with design recommendations for AI-enhanced microscope exhibits.

The Challenge

Science museums enable direct observation and experimentation by making authentic scientific tools accessible to the public. However, visitors often find it challenging to use instruments like microscopes productively—they struggle to pose meaningful questions, identify what they're seeing, or interpret complex phenomena.

As artificial intelligence increasingly transforms scientific practice, it holds promise for supporting public engagement with scientific inquiry. However, HCI researchers currently lack an understanding of how to design AI that supports, rather than replaces, visitors' own observational inquiry.

This led us to ask:

Research Question: What roles can AI technologies play in supporting observational inquiry in museums, and how can human-AI interaction design best support that inquiry?
My Role

I led the interaction analysis described below; the broader Exploratorium team designed the exhibit, recruited participants, and collected the data.
Exhibit Prototype

The larger team designed and studied an Object Classification System (OCS)-integrated microscope exhibit that helps visitors observe live microorganisms:

Physical Setup:

  • Microscope with live specimens (marine rotifers, algae, microplastic beads)
  • Camera capturing microscope view
  • Touchscreen displaying magnified image with interactive UI

AI Functionality:

  • Visitors activate OCS during exploration
  • Machine learning model identifies elements in view (rotifer, rotifer parts, poop, algae, microplastic beads)
  • System displays identifications with confidence scores (one such classify-and-display pass is sketched after this list)
  • Visitors can explore, question, and test the AI's classifications
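
Conceptually, one pass of this classify-and-display loop looks like the minimal sketch below. It is an illustration, not the exhibit's actual implementation: the stub model, labels, confidence values, and threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "rotifer", "algae", "microplastic bead"
    confidence: float  # model confidence in [0, 1]
    box: tuple         # (x, y, w, h) in screen pixels

def classify_frame(frame):
    """Stand-in for the real model: returns hypothetical detections."""
    return [
        Detection("rotifer", 0.91, (120, 80, 60, 40)),
        Detection("algae", 0.67, (300, 210, 25, 25)),
    ]

def render_overlays(detections, min_confidence=0.5):
    """Build the labels the touchscreen would draw over the live image."""
    return [
        f"{d.label} ({d.confidence:.0%}) at {d.box[:2]}"
        for d in detections
        if d.confidence >= min_confidence  # hide low-confidence guesses
    ]

# One pass: a visitor activates the OCS, the current camera frame is
# classified, and identifications with confidence scores are displayed.
frame = None  # placeholder for a captured microscope-camera frame
for line in render_overlays(classify_frame(frame)):
    print(line)
```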

Note: This exhibit is located at the Exploratorium in San Francisco and is part of ongoing research on AI-enabled informal learning.

Research Process
Participants & Recruitment

Sample: 35 visitor dyads (pairs), randomly selected

Approach: Recruited museum visitors to use the exhibit and encouraged them to:

  • Talk with each other naturally
  • Think aloud about what they observe
  • Ask questions and explore freely

Note: This recruitment and session facilitation was led by the broader team at the Exploratorium.

Data Collection (Multi-Modal)

Captured three simultaneous data streams for each dyad:

  • Audio/video recordings: Visitor conversations and physical interactions
  • Screen recordings: Visual record of what appeared on the touchscreen
  • Clickstream data: Log of all user interactions with the interface

This multi-modal approach enabled reconstruction of each dyad's complete interaction journey—linking what they said, what they did, and how the AI responded.
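
As a minimal sketch of what that linking can look like, the snippet below merges hypothetical fragments of the three streams into one time-ordered timeline. The event structure, timestamps, and contents are illustrative assumptions, not the team's actual pipeline.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    t: float                            # seconds from session start
    stream: str = field(compare=False)  # "speech", "click", or "ocs"
    detail: str = field(compare=False)

# Hypothetical fragments of the three streams for one dyad
speech = [Event(12.4, "speech", "What is this round thing?")]
clicks = [Event(13.0, "click", "tapped the identify button")]
ocs    = [Event(13.6, "ocs", "displayed 'rotifer (91%)'")]

# Merge the time-aligned streams into a single interaction timeline,
# linking what visitors said, what they did, and how the AI responded.
for e in heapq.merge(speech, clicks, ocs):
    print(f"{e.t:6.1f}s  [{e.stream}] {e.detail}")
```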

Note: Data Collection was led by the broader team at the Exploratorium.

Analysis Method: Interaction Analysis

I conducted interaction analysis to examine how visitors and the OCS jointly shaped moments of inquiry:

Step 1: Identify Significant Events

Defined significant events as user journeys where visitors conducted observational inquiry by:

  • Identifying various elements on screen (rotifers, algae, plastic beads, etc.)
  • Asking questions about what they see (descriptive, comparative, explanatory)

Step 2: Code OCS Activation

  • Noted whether OCS was activated during each significant event
  • If activated, inductively coded the role it played
  • Wrote memos connecting patterns across dyads

Step 3: Identify Patterns

  • Analyzed recurring practices of how visitors used OCS classifications
  • Identified both supporting and limiting contexts
  • Synthesized into distinct roles and design implications (a small coding-schema sketch follows this list)
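
To make the coding schema concrete, here is a minimal sketch of how a coded significant event might be represented and tallied across dyads. The dataclass fields, example events, and role labels are illustrative assumptions, not the actual codebook.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignificantEvent:
    dyad_id: int
    question: str            # visitor question anchoring the event
    ocs_activated: bool
    ocs_role: Optional[str]  # inductive role code; None if OCS unused

# Hypothetical coded events illustrating the schema
events = [
    SignificantEvent(1, "What is this round thing?", True, "identification"),
    SignificantEvent(1, "Where is the algae?", True, "finding"),
    SignificantEvent(2, "Is this a plastic bead?", True, "validation"),
    SignificantEvent(3, "How do they move?", False, None),
]

# Step 3-style pattern check: how often does each role recur?
role_counts = Counter(e.ocs_role for e in events if e.ocs_activated)
print(role_counts.most_common())
```
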
Method Reflections

What worked: Think-alouds with multi-modal data (audio, video, screen, clickstream) created rich data for nuanced analysis. Interaction analysis helped specify a concrete unit of analysis and revealed dynamic visitor-AI co-construction in that unit. Real-world exhibit context revealed challenges not apparent in controlled settings.

Rapid A/B tests and prototyping helped us stick to our timeline while providing quick checks on how changes affected visitors.

Challenges: Rapid prototyping also meant shallow engagement with some findings. The single exhibit system limits generalizability to other AI-enhanced contexts.

All analysis was grounded in verbal talk, so inquiry that visitors did not voice aloud went uncoded. As such, this work is exploratory rather than exhaustive.

Preliminary Findings: Roles of AI in Inquiry

I identified four primary roles that OCS plays in microscope-based observational inquiry:

Role 1: Identification

Example question: "What is this round thing?"

How OCS helps: Provides labels for unfamiliar objects, enabling visitors to name and discuss what they see.

Inquiry dimension supported: Discovery—identifying elements on the screen

Role 2: Finding

Example question: "Where is the algae?"

How OCS helps: Highlights specific elements visitors are seeking, directing attention in complex visual fields.

Inquiry dimension supported: Discovery—locating specific phenomena

Role 3: Validation

Example question: "Am I right that this is a plastic bead?"

How OCS helps: Confirms or challenges visitor hypotheses, building confidence or prompting revision.

Inquiry dimension supported: Correction/clarification—verifying observations

Role 4: Comparison

Example question: "How are algae and beads different?"

How OCS helps: Provides distinct classifications that prompt visitors to examine differences and similarities.

Inquiry dimension supported: Hypothesis testing and theory formation

AI Limits

I also identified three contexts where the OCS may limit observational inquiry or be superfluous to it:

1. Erroneous Classifications

When the OCS incorrectly identifies objects, it can leave visitors confused or under-confident in their own observations. Visitors may over-trust the AI even when it is wrong.

2. Irrelevant Context

The classifications the OCS actively surfaces may be irrelevant to visitors' current inquiry context, for example when they are exploring movement patterns but the OCS highlights object types.

3. Less Accessible Than Alternatives

In some cases, static media (like exhibit labels) may be more accessible or comprehensible than real-time OCS feedback, particularly for novice visitors.

Design Implications

For designing AI-integrated museum exhibits:

1. Clarify Uncertainty

Make the fallibility of AI classifications transparent. Show confidence scores, explain how the model works, and encourage visitors to question AI outputs.
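
As one lightweight illustration of this recommendation, the sketch below maps raw confidence scores to hedged, visitor-facing phrasing. The thresholds and wording are assumptions, not tested values from the exhibit.

```python
def hedge(label: str, confidence: float) -> str:
    """Translate a raw score into hedged, visitor-facing language."""
    if confidence >= 0.9:
        return f"This is very likely {label} ({confidence:.0%})"
    if confidence >= 0.6:
        return f"This might be {label} ({confidence:.0%})"
    return f"Not sure, but possibly {label}? ({confidence:.0%})"

print(hedge("a rotifer", 0.91))  # high confidence, still not absolute
print(hedge("algae", 0.55))      # low confidence invites questioning
```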

2. Design for User-AI Collaborative Inquiry

Ensure that the AI supports visitor-driven exploration rather than dictating the inquiry path, allowing it to systematically deepen engagement.

3. Complement, Don't Replace

Position OCS as one tool among many (labels, diagrams, human facilitators). Acknowledge the strengths of real-time AI feedback while preserving other scaffolding methods.

Contributions & Impact

For HCI & Human-AI Interaction Research: An empirical account of the roles an object classification system can play in observational inquiry, and of the contexts where it limits or adds little to that inquiry.

For Museum Practice: Design recommendations for AI-integrated exhibits that clarify uncertainty, support collaborative inquiry, and complement existing scaffolds.

Ongoing Work & Next Steps

Current Analysis: Ongoing, with further findings forthcoming.

Related Publications

Paper under review at an HCI conference (2026)

Additional publications forthcoming