Designing Partial Explanations of Civic AI with Local Publics
Challenge: Civic AI systems are rapidly becoming more complex and opaque. This presents an urgent challenge for the XAI community to develop explanations for AI systems and their surrounding socio-technical assemblages.
Outcome: Showcased the role diverse publics can play in generating explanations of civic AI systems and their effects. Designed a generative toolkit that helps conceptualize recommendations for public-generated AI explanations.
Existing pursuits towards transparency are overwhelmingly technical and disregard how algorithms interact with broader networks of materials, relations, cultures, institutions, and histories to affect societies in unjust and harmful ways. A meaningful examination of an AI system requires us to engage with the socio-technical assemblages in which it is placed.
There is a need to (1) overcome the epistemic barriers presented by opaque 'black-boxed' algorithms, and (2) situate algorithmic systems within the broader networks of spaces and environments that affect and are affected by them.
As lead researcher, I owned the entire research process:
Research Through Design Approach
I employed a 'research through design' methodology, using the process of designing and facilitating participatory workshops to investigate how we can generate explanations of broad socio-technical AI systems.
Conducted 2 pilot workshops at Georgia Tech Demo Day and GVU Research Showcase with ~20 participants to test initial concepts.
Key learning: Participatory mapping and "speculative personation" (asking participants to make predictions as an AI would) effectively prompted critical questions and surfaced embodied discomfort with algorithmic decision-making.
I conducted 5 workshops with diverse stakeholder groups in Atlanta, GA, bringing together different perspectives on predictive policing and algorithmic systems:
Organization supporting people experiencing poverty, substance use, and mental health concerns
Regional civic planning agency using data-driven methods
Open call: neighborhood leaders, violence prevention workers, civic researchers
Organization funding local community-centered projects across Georgia
Teachers from nonprofit focused on educator collaboration
Workshop Structure (90 minutes)
Workshops were audio/video recorded. I also gathered pre-workshop surveys and post-workshop feedback.
Participants engaging with the interactive mapping toolkit during workshops
I analyzed workshop data using multiple qualitative methods:
While my initial goal was to identify key dimensions and related questions where explainability is desired, I found that local publics are not just in need of AI explanations, but they are also well positioned to partially explain how algorithms interact with society to affect local contexts.
Design implication: Design tools and features that collect, maintain, and organize public-generated partial explanations of AI systems.
When attempting to understand and explain AI systems, participants drew on their relationships with:
Design implication: Consider how tools can help locals identify their expertise within these domains, thereby prompting them to generate grounded AI explanations.
The mapping interface and supporting workshop protocol successfully provided:
What worked:
Challenges:
1. Good Enough Explanations: How Can Local Publics Understand and Explain Civic Predictive Systems? Shubhangi Gupta. PhD Dissertation. Georgia Institute of Technology.
2. Making Smart Cities Explainable: What XAI Can Learn from the "Ghost Map". Shubhangi Gupta, Yanni Loukissas. Late Breaking Works paper at CHI 2023.
3. Mapping the Smart City: Participatory approaches to XAI. Shubhangi Gupta. Doctoral Consortium at Designing Interactive Systems (DIS) 2023.