CRAFT sessions bring together academics of all disciplines and people representing different communities of practice (including journalism, advocacy, activism, organizing, education, art, spirituality, and public authorities) in the spirit of reflection and response. CRAFT aims for interaction among participants and prioritizes formats that allow participants to share and explore starting assumptions, prior experiences, or competing values, and to foster community building, collaborative knowledge production, and future engagement.

Track 1:

Inspecting algorithms in social media platforms: bridging regulators and independent auditors

Presented by Jenny Brennan and Aparna Surendra (Ada Lovelace Institute)

New legislation is paving the way for regulatory inspection of social media platforms, including through the UK’s Online Harms Bill and the EU’s Digital Services Act. A regulator with audit powers will fundamentally reshape the inspection ecosystem, and create new opportunities for external auditors – such as civil society, journalists and academic researchers (many represented in the FAccT community) – to access platform data in reliable, legal ways. This may include the need for regulators to inspect the underlying algorithms that power these systems, such as recommendation engines, ad delivery systems, automated moderation detection systems, and others.

As of now, no regulator has conducted an algorithm inspection of this kind, which raises a number of questions:
- What could a regulatory algorithm inspection look like?
- How could it be informed by lessons from external, independent actors who have conducted these types of inspections?
- Moving forward, how could a regulator further support these external actors?

Through these questions, the workshop will begin to develop requirements for a regulatory algorithm inspection. Following expert lightning talks, participants will respond to a case study and a set of prompts in interactive breakout groups. Participants will draw on their own experiences to identify the specifications for, and hurdles to, a robust regulatory algorithm inspection.

There’s an app for that - how can digital contact tracing apps promote fair, inclusive, and just health technologies?

Presented by Richard McKay (U of Cambridge), Alexa Hagerty (U of Cambridge), Alexandra Albert (University College London), Stephanie Hare, Yewande Okuleye (U of Leicester), Andrew Trathen (London Borough of Hackney), Hattan Sallam (London Borough of Hackney), and Zoe Tyndall (London Borough of Hackney)

This panel addresses the use and effectiveness of COVID-19 contact-tracing apps. Issues of data privacy and security in the design and development of automated contact-tracing technologies have generated much discussion; less attention has been paid, however, to how their implementation may affect communities and public health efforts.

A well-established public health practice, contact tracing rapidly emerged as a critical response to COVID-19. Countries around the world, including the UK, embraced automated contact-tracing technologies to manage the first pandemic of the algorithmic age. Yet contact-tracing apps raise profound issues of access, privacy, security, and surveillance, and their potential risks and harms are unequally distributed across society.

Taking a sociotechnical (rather than techno-solutionist) standpoint, we approach automated contact tracing as a socially-contextualised and socially-situated practice. Automated contact tracing must be understood as embedded within a broad ecosystem of pandemic responses and contextual factors, including traditional contact-tracing efforts, historical precedents, and local and national governance.

Our panel focuses on the London Borough of Hackney as a case study to consider contact tracing within these rich contextual frameworks. Hackney is an ethnically and linguistically diverse London borough, with high levels of poverty and inequality, and a significant digital divide. Hackney initially experienced high rates of COVID-19 infection and has undertaken innovative approaches to meet the pandemic’s public health challenges.

In the context of contact tracing, the concept of trust is fundamental, yet multilayered. It can refer to an individual’s trust in an app, or, in a pandemic where Black Britons are 4 times more likely to die than White Britons, to minoritised groups’ fractured trust in healthcare and political systems. It can also involve tensions between local and national governments. In England, the early centralisation of contact-tracing efforts was criticised for decontextualising data, making it less useful and trustworthy. Only later, to boost the centralised NHS Test & Trace service’s struggling performance, were public health workers in local authorities like Hackney granted access to data relating to their residents. These ebbs and flows of trust and power must also be examined within longer political and economic timeframes. Relevant factors include the austerity response to the 2007-08 financial crisis, which saw declines in local authority funding and budgetary cuts for Public Health England, the agency created in 2013 to lead on infectious disease threats.

The panel brings together local public health leaders, community members, and academic researchers. Together, we will explore how digital contact-tracing apps are inextricably linked with a range of other factors; explain what local pandemic-response technologies are being developed on the ground; and consider what Hackney's experiences with COVID-19 contact tracing can teach us about fair, inclusive, and just health technologies more generally.

Participants will leave the event with a clearer understanding of how contact-tracing apps are embedded in complex social and historical contexts. They will be able to articulate several challenges facing the deployment of such apps in a public health crisis. Furthermore, they will gain conceptual tools to analyse the importance of social context in the deployment of other health technologies.

Narratives and Counternarratives on Data Practices in the Global South

Presented by Rediet Abebe (U of California, Berkeley), Abeba Birhane (U College, Dublin), George Obaido (U of Johannesburg), and Roya Pakzad (Taraaz)

This session is an interactive workshop that uses storytelling to challenge current narratives about data practices in the countries and communities that are often grouped as “the Global South.” During the workshop, the organizers will provide various theme-based stories informed by common narratives around data practices and ask participants to challenge these narratives from various angles, including historical contexts and cultures, legal limitations, accessibility, impact assessments, accountability mechanisms, and more. A few potential topics that inform the stories include: transnational technology companies’ presence and allocation of resources in a developing nation; participatory methods for designing and developing Machine Learning-based services; Data-for-Good hackathons; data-sharing practices between international NGOs, private foundations, and governments; private and public sector open data practices; and policies around localization and sovereignty of data infrastructure.

Bridging Research and Policy: An Interactive Workshop on How to Use AI and Policy to Uplift Economic Opportunity in Communities of Color

Presented by Vincent Le and Serena Oduro (The Greenlining Institute)

How do we create algorithmic transparency policy that is rooted in racial equity, but also technologically feasible and aligned with legal frameworks? How do business and policy interests and norms make it difficult to focus outright on helping communities of color? This interactive workshop will focus in particular on the harms that banking algorithms present to communities of color in the USA. We will start the session with a presentation that delves into the challenges of translating the Greenlining Institute’s racial equity perspective and AI fairness standards into policy and legal frameworks. After the presentation, participants will use GLI’s racial equity framework in breakout sessions and a large group conversation to explore what kinds of fairness, accountability, and transparency policies could be created to protect and uplift communities of color. We want this to be an opportunity for participants to use a racial equity lens to challenge norms in AI, academia, and policy that harm communities of color. By the end of the session, we aim for our participants not to be overwhelmed or dispirited by the complexities between policy, racial equity, and technology, but to get comfortable with them: as long as these complexities are ignored, communities of color will continue to be harmed by AI and emergent technologies.

Track 2:

An Equality Opportunity: Combating Disability Discrimination in AI

Presented by Lydia X. Z. Brown, Hannah Quay-de la Vallee, and Stan Adams (Center for Democracy & Technology)

As the FAccT community continues its work on bias and discrimination in AI-based systems, we hope to add another dimension to these efforts: discriminatory impacts for people with disabilities. Disability raises several unique challenges for identifying, assessing, and mitigating bias, in addition to presenting significant intersectional concerns. In this interactive workshop session, we will outline the particular challenges of auditing for disability discrimination. We will invite attendees to consider research questions such as: How do you test for and mitigate bias in the absence of data about a protected characteristic? Are there technical solutions for the classification challenges raised by the unique complexities of disability data, and for data reflecting the experience of people who are multiply marginalized? We will use structured breakout sessions to discuss these questions and consider directions for developing solutions to these challenges.

Contesting and Rethinking Demographic Data Infrastructures for Algorithmic Fairness

Presented by Sarah Villeneuve and McKane Andrus (Partnership on AI)

Most current algorithmic fairness techniques require access to data on a “sensitive attribute” or “protected category” in order to make performance comparisons and standardizations across groups. In practice, however, data on demographic categories that inscribe the most risk of mistreatment (e.g. race, sexuality, nation of origin) are often unavailable, due in part to a range of organizational barriers and concerns related to antidiscrimination law, privacy policies, and the unreliability of self-reported, proxied, or inferred demographics.
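As a concrete illustration of this dependency (our sketch, not material from the workshop), consider a basic group fairness check: it simply cannot be computed without group labels, i.e. the sensitive attribute that is so often unavailable or unreliable in practice.

```python
# Illustrative sketch (not from the workshop): a basic demographic parity
# check. Note that the metric cannot be computed at all without the
# `group` array, the sensitive attribute the workshop discusses.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Hypothetical predictions and self-reported group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_gap(y_pred, group))  # 0.5
```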

Beyond these practical constraints, however, FAccT, HCI, and Critical Studies scholars have in recent years surfaced many other issues with how technical work conceptualizes identities and communities, looking at categories such as race, gender, and disability. A key part of this work is exposing the ways in which these categories do not simply exist in nature: they are co-constructed and reproduced by the sociotechnical infrastructure built around them. Exploring this process of reproduction is thus key to understanding how, if at all, we should be infusing demographics into fairness tools. Additionally, work around issues such as data justice, big data abolition, and Indigenous Data Sovereignty has sought to center the ways in which data collection and use are wielded to exploit and further disempower individuals and their communities. These critiques point to the ways in which data centralization and ownership allow just a few individuals to determine which narratives and which economic or political projects the data will be mobilized to support. While this work does not center on demographic data or algorithmic fairness specifically, these perspectives can help identify largely unexamined risks of algorithmic fairness’ data requirements.

The goal of this workshop is to confront current demographic data collection practices, as well as the implications of continuing to design fairness interventions that presuppose demographic data availability. Through various narrative and speculative exercises, participants will build out a picture of the various underlying technical, legal, social, political, and environmental infrastructures necessary to support most proposed demographic-based algorithmic fairness techniques.

Participatory Methods in AI research (a.k.a. everything you always wanted to ask about using participatory methods in your work)

Presented by Alexandra Albert (U College, London) and Alexa Hagerty (U of Cambridge)

Participatory Methods in AI research (a.k.a. everything you always wanted to ask about using participatory methods in your work) is an interactive workshop. We aim to bring together those interested in, or already using, participatory methods in AI research to share and explore assumptions and experiences in this area, and to collaboratively challenge competing values around using participatory methods. In doing so, we hope to foster community building around the important issues that participatory methods can give rise to, cooperative knowledge production, and the potential for future engagement on the subject.

This interactive workshop is intended to deepen conversation and practice about how participatory methods and citizen science can be utilized to bring citizen voices to a range of ML fairness issues in meaningful, sustainable, and equitable ways. This includes not only questions of design, but issues of problem-framing, such as deciding which technological systems ought to be developed; countering narrow logics of technological solutionism when addressing complex social problems; and assessment processes, such as community tracking of impacts of technological systems. In addition, we will explore community participation in “meta” ethics discussions of the kind that happen at FAccT.

We will prioritize an opportunity for participants to share and explore starting assumptions around participatory methods. The workshop invites participants to share their experiences with participatory methods and/or their ideas for applying participatory methods to their work, and interrogate their research process in an open, supportive and collaborative forum.

We ask workshop participants to come with concrete projects and problems in mind, as this is not meant to be an abstract conversation. Rather, our activities and discussions will have a strong on-the-ground focus, while remaining contextualized in high-level considerations of inclusion and justice. Activities will focus on collaborating with other workshop participants on problem-solving strategies for the challenges their participatory approaches face. As part of the workshop, we will offer our “emojify project” as a case study and discuss our insights and lessons learned in designing a public-facing, interactive website that utilises citizen social science to engage the public in learning about emotion recognition systems.

By the end of the workshop, participants will have a more nuanced understanding of participatory processes and citizen social science. For example, they will understand the “spectrum of participation” and applications of “citizen science” and “citizen social science.” They will also have a clearer understanding of dynamics of power and potential limitations of participatory methods. They will be familiar with case studies and examples of how participatory processes have been applied to better understand and address complex social issues. Furthermore, they will have considered in detail some of the challenges their own participatory projects face, and have collaborated with other workshop participants on problem-solving strategies. Finally, we hope that workshop participants will continue to collaborate with each other and stay in conversation moving forward.

Civic empowerment in the development and deployment of AI Systems

Presented by Renée Sieber and Ana Brandusescu (McGill U)

Because AI/ML is labelled opaque and unknowable to the overwhelming majority of the population, it can be rendered “beyond scrutiny” for any meaningful civic engagement. Current methods in AI Ethics/Responsible AI and within the FAccT community envision citizen participation largely as stakeholder engagement, which can be highly reductionist because it can substitute representation for influence. How do we move beyond stakeholder engagement, individual manipulation (nudging), or public provision of feedback? It is relatively easy to get a member of a civil society organization onto an AI oversight board. It is far more challenging for citizens to leverage political power to exert influence over AI development and deployment. For example, how do we actively engage civil society in influencing AI decisions and technically empower them, possibly even to use AI to counter such policy?

This interactive workshop offers a primer on participatory theory from the social sciences, discusses the role of tech-enabled community advocacy, and considers the political power dynamics in using tech to influence AI development and tech policy. Our goal is to draft a questionnaire that will allow AI developers, governments, and civic tech organizations to gauge the level of civic empowerment in an AI system. This can further serve as a set of questions to augment AI certification, audit, and risk assessment tools.

In addition to primers and talks, the workshop will use a digital whiteboard and guided breakout rooms, where participants will answer a series of questions and reflect on the discussions. Our hope is that, by the end of the workshop, participants will learn about the components of civic empowerment and actively include civil society/tech organizations as AI systems are developed and deployed.

Track 3:

Accountable AI for disaster risk management - FAccT considerations on risk models, geospatial data, and inclusivity of lower- and middle-income countries

Presented by Caroline Gevaer (U of Twente), Yola Georgiadou (U of Twente), and Robert Soden (U of Toronto)

This session will introduce a new field of research and practice, disaster risk management (DRM), to the FAccT community. Presentations and discussion during the session will explore the consequences of the role of AI in DRM, focusing in particular on questions relating to geospatial data, hidden biases, and the participation of people from low- and middle-income countries (LMIC) in decisions that impact public safety.

Data Cards Playbook: Participatory Activities for Dataset Documentation

Presented by Andrew Zaldivar, Mahima Pushkarna, and Daniel Nanas (Google Research)

In developing, deploying, and using datasets, a critical aspect of maintaining a good relationship between stakeholders (from data practitioners to the public) is for them to understand how much that relationship depends on transparency. Without this understanding, it is impossible to secure an ethically appropriate measure of social transparency in dataset practices, or good-faith efforts by stakeholders to embrace all aspects of responsible transparency. If transparency efforts fail to reach consensus and evolve, the risks to society are considerable, since dataset documentation practices form the foundation for data technologies that affect the quality of people’s lives.

Transparency, which we define as sharing information about system behavior (causal or observed) and organizational process that is concise, understandable, and articulated with plain-language explanations, is captured in the form of boundary objects such as documentation and summarized reports. These objects typically provide descriptive, structural, and sometimes statistical information about a dataset. But details that go beyond metadata, including provenance, representation, usage, and fairness-informed evaluations—context stakeholders require for making responsible decisions about dataset use—are often not included or unavailable. As a consequence, stakeholders are ill-equipped to make informed decisions around dataset release and adoption.
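As a rough, hypothetical illustration of the kind of context that goes beyond basic metadata, a documentation record might carry fields like the following (the field names below are our own sketch, not the Playbook’s actual schema):

```python
# Hypothetical sketch of documentation fields beyond basic metadata.
# Field names are our own illustration, not the Playbook's schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetDocumentation:
    name: str
    description: str                 # descriptive metadata
    num_records: int                 # structural / statistical metadata
    provenance: str                  # where and how the data was collected
    representation: str              # who is (and is not) represented
    intended_uses: List[str] = field(default_factory=list)
    known_limitations: List[str] = field(default_factory=list)
    fairness_evaluations: List[str] = field(default_factory=list)

doc = DatasetDocumentation(
    name="example-survey-2021",
    description="Illustrative, entirely hypothetical survey responses.",
    num_records=10_000,
    provenance="Opt-in web survey, English only.",
    representation="Skews toward respondents with reliable internet access.",
    intended_uses=["Research on response patterns"],
    known_limitations=["Self-reported answers; no ground truth."],
    fairness_evaluations=["Response rates compared across age bands."],
)
```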

In this session, to address current limitations in this practice and obtain more desirable documentation outcomes, participants will learn about and use the Data Cards Playbook (“Playbook”) to establish a foundation for producing, publishing, and measuring transparent documentation of datasets. The Playbook is a framework-agnostic, human-centered, participatory approach to dataset documentation that we created; it offers a consistent way for stakeholders to extract knowledge distributed across the many individuals involved in creating datasets. In turn, this knowledge-acquisition process encapsulates the unique information needs of consumers and reviewers of a dataset, beyond metadata. Altogether, the Playbook provides guidance on gathering this information with usefulness, thoughtfulness, and measurability in mind.

Over the course of the workshop, participants will go through a set of group activities and engaging discussions that will assist them in creating, completing, and customizing templates for different types of dataset documentation. Using a carefully crafted vignette to facilitate in-depth and multi-faceted explorations of complex dataset documentation creation, participants will work in small groups with an appointed facilitator. First, groups will identify stakeholders and their information needs in relation to our vignette material. This activity will involve participants developing an understanding of our taxonomy of stakeholders to align on agents—stakeholders who use, evaluate, or determine how the dataset is used. Next, participants will draft details on what their dataset documentation will need to capture and how it might be organized. Then, participants will assemble a straw-man proposal that allows groups to critically examine the questions generated in the previous activities, so as to find gaps and opportunities for proper documentation. Thereafter, participants will formulate a plan for producing answers to their agreed-upon questions, followed by a recap of all the design principles introduced throughout this session.

By the end of this session, participants will have a clear understanding of the actions and commitments required to create dataset transparency reports that are accessible to diverse stakeholders. They will also leave with the foundational knowledge, design principles, and artifacts created throughout the session that are necessary to apply this approach in their respective domains.