Good Enough Explanations

Investigating Citizens' Civic AI Explanation and Transparency Needs

Timeline: 2021-2023
Organization: Georgia Institute of Technology
My Role: Lead Researcher
Type: Exploratory Research
At a Glance

Challenge: The Explainable AI (XAI) research and design community lacks an understanding of what makes explanations of civic AI systems meaningful to diverse stakeholders affected by these tools.

Outcome: Developed the 'Good Enough Explanations' framework—a conceptual model defining four essential qualities of effective civic AI explanations.

  • 23 stakeholder interviews
  • 7 stakeholder groups
  • 3 publications
The Challenge

Civic AI systems increasingly inform civic decisions such as deploying police forces, allocating social services, and determining bail. Yet these systems remain largely invisible and inaccessible to the publics who bear their consequences.

While the XAI community has made significant advances in technical explainability (model cards, feature importance, etc.), critical gaps remain in how these systems are explained to the publics they affect.

As such, this project asks:

Research Question: What qualities underlie effective public explanations of civic predictive systems?
My Role

As lead researcher on this project, I was responsible for the study end to end: designing the interview protocol, recruiting participants across stakeholder groups, conducting and analyzing the interviews, developing the resulting framework, and publishing the findings.

Research Process

Participants & Recruitment

I conducted semi-structured interviews with 23 participants across 7 stakeholder groups, all with direct experience thinking about, writing about, or acting on AI justice issues:

  • AI researchers and academics (n=7)
  • AI activists and advocacy organizations (n=5)
  • Journalists covering tech and AI (n=3)
  • Community and neighborhood leaders (n=3)
  • Civil society organizations (n=2)
  • Policymakers (n=2)
  • Legal scholars (n=1)

Interview Approach

Each 30- to 90-minute interview explored participants' experiences with civic AI systems and the kinds of explanation and transparency they need in order to critically engage with these tools.

Analysis Methods

I analyzed the interviews using multiple qualitative analysis techniques.

Key Insights: The "Good Enough Explanations" Framework

Through this research, I developed the concept of "good enough explanations"—explanations that may not be complete or universal, but are good enough to support publics in critically engaging with AI systems. The framework identifies four essential qualities:

1. Situated in Diverse Publics' Lives

Effective explanations ground technical concepts in the lived experiences, local knowledge, and existing concerns of specific communities. Rather than generic tutorials, they connect AI workings to familiar places, problems, and power structures.

"What is needed in terms of transparency is always a function of what people are trying to accomplish...transparency needs to be molded in very specific ways so that people are being provided with particular pieces of information that are useful."
— Philosopher, Academic
2. Explain Complex Socio-Technical Systems

Explanations must go beyond the "black box" algorithm to reveal the assemblages surrounding AI: who collects data and why, how historical biases become embedded, which institutions benefit, and how predictions affect different communities.

"What data is being fed into the system?.. How does that impact the predictions?"
— Case worker, Innocence Project
"Who is impacted by these tools and how? What is the cost of incorrect predictions and who bears those costs"
— Data scientist, non-profit, focused on human rights violation
3. Support Ongoing and Partial Processes

Understanding AI isn't a one-time event. Effective explanations enable continuous investigation, allow for partial knowledge, and create opportunities for collective sensemaking over time.

"I think viewing them (predictive tools) as procedures that you can assess without knowing how the nuts and bolts of everything work, that is important."
— Philosopher, Academic
4. Empower Public Action

Explanations should enable people to act—whether through advocacy, regulation, redesign, or resistance. The goal isn't just understanding, but supporting democratic oversight and intervention.

"“I think there is a little bit of like false promise of transparency.. you absolutely have to have some of that in order to even start but that.. it’s sort of like the starting point rather than the final product” [P4]"
— Sociologist, Academic
Method Reflections

What worked:

Challenges:

Impact & Outcomes
Future work
Related Publications

1. Shubhangi Gupta and Yanni Alexander Loukissas. 2023. Making Smart Cities Explainable: What XAI Can Learn from the "Ghost Map". In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA '23).

2. Shubhangi Gupta. 2023. Mapping the Smart City: Participatory approaches to XAI. Doctoral Consortium at Designing Interactive Systems (DIS) 2023.

3. Shubhangi Gupta. 2024. Good Enough Explanations: How Can Local Publics Understand and Explain Civic Predictive Systems? PhD Dissertation, Georgia Institute of Technology.