At a Glance
Challenge: The Explainable AI (XAI) research and design community lacks an understanding of what makes explanations of civic AI systems meaningful to diverse stakeholders affected by these tools.
Outcome: Developed the 'Good Enough Explanations' framework—a conceptual model defining four essential qualities of effective civic AI explanations.
23 Stakeholder Interviews
The Challenge
Civic AI systems increasingly inform consequential public decisions: where to deploy police, how to allocate social services, whether to grant bail. Yet these systems remain largely invisible and inaccessible to the publics who bear their consequences.
While the XAI community has made significant advances in technical explainability (model cards, feature-importance scores, and the like), current approaches leave four critical gaps:
- Not pluralistic: they fail to consider diverse values, knowledge systems, and lived experiences
- Narrowly technical: they focus on algorithmic internals while ignoring the surrounding social systems
- One-time and passive: they treat explanation as a single transaction rather than an ongoing process
- Trust-focused: they aim to build confidence rather than enable critical evaluation and action
As such, this project asks:
Research Question: What qualities underlie effective public explanations of civic predictive systems?
My Role
As lead researcher on this project, I was responsible for:
- Research conceptualization, design, and methodology development
- Participant recruitment across 7 diverse stakeholder groups
- Conducting all 23 semi-structured interviews
- Qualitative analysis (thematic analysis, affinity mapping)
- Framework synthesis
- Stakeholder presentations and dissemination
Research Process
Participants & Recruitment
I conducted semi-structured interviews with 23 participants across 7 stakeholder groups, all with direct experience thinking about, writing about, or acting on AI justice issues:
- AI researchers and academics (n=7)
- AI activists and advocacy organizations (n=5)
- Journalists covering tech and AI (n=3)
- Community and neighborhood leaders (n=3)
- Civil society organizations (n=2)
- Policymakers (n=2)
- Legal scholars (n=1)
Interview Approach
Each 30-90 minute interview explored:
- What role, if any, does citizen-centered AI transparency play in the design and deployment of just civic predictive tools? What is at stake, and what are the challenges?
- What is needed to promote meaningful citizen-centered transparency? Is partial understanding enough? Why or why not? What does democratic control over AI look like?
- Do you or your association think about AI use by cities? Why or why not? Do you see a need to?
- What do you know about the use of AI for public safety in your neighborhood? What questions do you have? What forums exist to offer information on the use of AI by cities?
Analysis Methods
I used multiple qualitative analysis techniques:
- Thematic analysis: Coded transcripts to identify patterns in explanation needs
- Affinity mapping: Clustered insights across stakeholder groups
- Literature integration: Connected findings to XAI, HCI, and STS scholarship
- Framework synthesis: Distilled core qualities through iterative refinement
Key Insights: The "Good Enough Explanations" Framework
Through this research, I developed the concept of "good enough explanations"—explanations that may not be complete or universal, but are good enough to support publics in critically engaging with AI systems. The framework identifies four essential qualities:
1. Situated in Diverse Publics' Lives
Effective explanations ground technical concepts in the lived experiences, local knowledge, and existing concerns of specific communities. Rather than generic tutorials, they connect AI workings to familiar places, problems, and power structures.
"What is needed in terms of transparency is always a function of what people are trying to accomplish...transparency needs to be molded in very specific ways so that people are being provided with particular pieces of information that are useful."
— Philosopher, Academic
2. Explain Complex Socio-Technical Systems
Explanations must go beyond the "black box" algorithm to reveal the assemblages surrounding AI: who collects data and why, how historical biases become embedded, which institutions benefit, how predictions affect different communities.
"What data is being fed into the system?.. How does that impact the predictions?"
— Case worker, Innocence Project
"Who is impacted by these tools and how? What is the cost of incorrect predictions and who bears those costs"
— Data scientist, non-profit, focused on human rights violation
3. Support Ongoing and Partial Processes
Understanding AI isn't a one-time event. Effective explanations enable continuous investigation, allow for partial knowledge, and create opportunities for collective sensemaking over time.
"I think viewing them (predictive tools) as procedures that you can assess without knowing how the nuts and bolts of everything work, that is important."
— Philosopher, Academic
4. Empower Public Action
Explanations should enable people to act—whether through advocacy, regulation, redesign, or resistance. The goal isn't just understanding, but supporting democratic oversight and intervention.
"“I think there is a little bit of like false promise of transparency.. you absolutely have to have some of that in order to even start but that.. it’s sort of like the starting point rather than the final product” [P4]"
— Sociologist, Academic
Method Reflections
What worked:
- Diverse, intentional stakeholder sampling (7 groups) surfaced a broad range of context-dependent explanation needs
- An inductive approach allowed stakeholders' own language to shape the framework
Challenges:
- Audio and video were not recorded, for participant comfort. This limited documentation, but was supplemented by automatic transcript generation and note-taking
- Breadth over depth meant fewer interviews per group
- Focus on criminal justice systems; generalizability to other domains unclear
Impact & Outcomes
- Published at CHI 2023 (Late Breaking Works)
- Published at DIS 2023 (Doctoral Consortium)
- Cited in subsequent XAI and AI transparency literature
Future work
- Include more end-users (citizens directly affected by civic AI) in addition to intermediaries
- Conduct follow-up interviews to validate the framework
- Explore how explanation needs vary across types of civic AI (not just predictive policing) and across stakeholder groups.