Day 1

Ethics on the Job: A Tech Worker’s Role in Upholding Fairness, Accountability, and Transparency in Computing Systems (In-person)

Js Tan (MIT/Collective Action in Tech), Nataliya Nedzhvetskaya (UC Berkeley/Collective Action in Tech), Kristen Sheets (Collective Action in Tech), and Clarissa Redwine (Collective Action in Tech)

Extensive technical research has been done to mitigate bias and increase fairness and transparency in computing systems. But not all ethical issues can be addressed by technical means. While the roles of states, corporations, international organizations, and the idea of expertise itself have been extensively theorized in how we hold our computing systems accountable, the role of workers has received comparatively little attention.

This “unworkshop” is a collaborative examination of the role that tech workers can play in identifying and mitigating harms. The types of harms we address are those that result from normative uncertainty around questions of safety and fairness in complex social systems. They arise despite technical reliability and are not a result of technical negligence or lack of expertise. These can range from the harms of large language models, as raised by Dr. Emily Bender, Dr. Timnit Gebru et al. in “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, to the ethical issues involved in the creation of military technology, such as with Project Maven at Google.

Drawing on data from the Collective Action in Tech archive, we find that worker organizing is a dynamic way to address issues of accountability, fairness, and transparency that cannot be addressed by other means because of the imbalance of power in the creation of computing and AI systems. Our goal is to link the lived experience of practitioners with case studies of collective action. To build non-exploitative, non-predatory computing systems, it is critical to understand the landscape of power that encourages the development of harmful systems. Through our experience archiving instances of collective action in tech, we draw direct insights into how the long tradition of labor activism can combat the abuses of power that hamper ethical AI and ML.


Communication Across Communities in Machine Learning Research and Practice (In-person)

Adriana Romero Soriano (META AI Research), Caglar Gulcehre (DeepMind), Jessica Forde (Brown), Levent Sagun (META AI Research), Negar Rostamzadeh (Google AI Research), Samuel Bell (Cambridge), Seyi Olojo (UC Berkeley), and Stefano Sarao Mannelli (UCL)

Communication Across Communities will critically examine what it means to build participatory methods in contemporary machine learning.

Crossing disciplinary boundaries, this collaborative CRAFT session will focus on extant and idealized communication practices between machine learning’s broad and seemingly disparate set of stakeholders, including researchers and practitioners, creators and artists, social scientists and legal scholars, policy-makers and citizens, users and bystanders. Folding these voices into conversation with one another, we aim to highlight commonalities, conflict and moments of creative tension.

Designed with participation in mind, our whole CRAFT session will be a collective effort, broken out into three stand-alone yet interrelated sub-sessions. Session 1 will consider and critique how art communicates concepts surrounding ML to broader audiences. Session 2 will explore shared and divergent vocabulary through real-time collaborative development of a machine learning lexicon. Finally, session 3 will present an informal and open-floor panel session on the discordance between standardized evaluation and contextual sensitivity.


Fairness, Accountability, and Transparency in Hyperscale Language Models (HyperscaleFAccT) (In-person)


Jung-Woo Ha (NAVER CLOVA), Hwaran Lee (NAVER CLOVA), Matthias Galle (NAVER Labs Europe), Sangchul Park (Seoul National University), and Meeyoung Cha (Institute for Basic Science)

Extremely large-scale pretrained generative language models (LMs), called hyperscale LMs, such as GPT-3, PanGu-α, Jurassic-1, Gopher, and HyperCLOVA, show astonishing performance in various natural language generation tasks under in-context few-shot or zero-shot settings. However, although hyperscale LMs contribute substantially to both research and real-world business, many researchers have raised concerns about their severe side effects. From a FAccT perspective in particular, hyperscale LMs carry potential risks to fairness, accountability, and transparency in AI ethics, such as data and model bias, toxic content generation, malicious use, and intellectual property-related legal issues. Our CRAFT, entitled “Fairness, Accountability, and Transparency in Hyperscale Language Models (HyperscaleFAccT)”, addresses these limitations and potential risks in developing hyperscale LMs and applying them to real-world applications for users. The ultimate goal of the HyperscaleFAccT CRAFT is to explore what efforts we need to make to solve these issues. To achieve this goal, we have prepared our CRAFT as an interactive in-person panel comprising three presentations, a panel discussion, and a live Q&A session with the audience. Experts with diverse backgrounds, including machine learning, software engineering, business, law, AI ethics, and social computing, participate in our CRAFT as contributors. The three presentations address legal and ethical issues, bias problems, and data transparency in hyperscale LMs, for twenty minutes each. Seven panelists then discuss the presented topics in depth and seek solutions to alleviate the potential risks from the viewpoints of both research and application deployment. In particular, we aim to derive detailed execution policies and action items for better and safer hyperscale LM applications. The discussion in our CRAFT will be a helpful reference for other research groups and AI companies that want to leverage hyperscale AI in the world.


Designing Data in Data Science (Online)

Michael Muller (IBM), Lora Aroyo (Microsoft), Melanie Feinberg (UNC), Helena Mentis (Maryland), Samir Passi, Shion Guha (Toronto), and Heloisa Candello (IBM)

In data science, we often assume that our dataset is a fixed, authoritative objective resource. However, each of the presenters has shown that datasets are human products, reflecting human data-work to make the data fit-for-purpose. As Bowker (2006) noted, “’Raw data’ is both an oxymoron and a bad idea” (see also Gitelman’s 2013 book by the same title). If the data are not raw, then our question becomes: How do responsible people work diligently to “cook” their data, to make those data ready for analysis or other usage?

FAccT has begun to examine datasets critically (Barabas et al., 2020; Miceli et al., 2021). However, less attention has been paid to individual data records and their wrangling, perhaps because “Everyone wants to do the model work, not the data work” (Sambasivan et al., 2021). Workshops have gone further, urging that we “interrogate” our data-work practices (Muller et al., 2020; Tanweer et al., 2022). The goal of the panel is to co-explore with the audience how humans design the data that we analyze, and to work together toward better work practices for the necessary activities of designing data.

Session taking place in external Zoom room: https://utoronto.zoom.us/j/5615762973


What Could Possibly Go Wrong? Speculative Practice Towards Anticipating the Negative Consequences of Humanitarian AI (Online)

Robert Soden (Toronto), Aleks Berditchevskaia (Nesta), Erin Coughlan de Perez (Tufts), Manveer Kalirai (Toronto), Shreyasha Paudel (Toronto), Isabel Stewart (Nesta), Saurav Poudel (CCI), and Sakun Joshi (Nepal Red Cross)

As is the case in numerous domains, the adoption of artificial intelligence (AI) technologies in disaster risk management (DRM) and humanitarian response has been growing in recent years. While implementation of these technologies is still at an early stage, numerous areas of application, across several functions of disaster research and practice, are currently being explored. In domains where AI tools have already been deployed, their introduction has raised a number of concerns, ranging from privacy and bias to reduced transparency, explainability, and accountability. Some of these potential harms are in direct opposition to humanitarian principles. Considering the negative effects of AI tools that have been documented in the fields of criminal justice, healthcare, finance, and human resources, there is a need to critically and thoroughly evaluate the risks involved in deploying AI tools in disasters. Several characteristics of AI for disasters set it apart from analogous tools in other domains that have been more widely studied by the FAccT community, which makes anticipating negative impacts additionally challenging. To address these challenges, our session aims to use speculative design practice to envision potential consequences of AI in disasters, and then explore them through social and ethical lenses. In this workshop, participants will explore the humanitarian AI problem space through the development of futuristic or alternative scenarios. Building scenarios around a case study in humanitarian AI, the Collective Crisis Intelligence project, participants will attempt to uncover obscure values and unintended consequences therein. The activities of this workshop will enable us to learn more about the role of speculative design in anticipating negative consequences of technology design and deployment, develop a more thorough understanding of the potential impacts of adopting AI in humanitarian work, and grow the community of researchers and practitioners working in this space.


Responsible Innovation “On the Ground”: Lessons Learned from Accountable ML Efforts within Industry (Online)

William Isaac (DeepMind), Iason Gabriel (DeepMind), Rumman Chowdhury (Twitter), Luca Belli (Twitter), and Kristian Lum (Twitter)

There has been much public discussion and research within the FAccT community recently about the responsibilities of internal ethics teams to balance demands from both organizational and public stakeholders. While these critiques of institutional ethics and responsible innovation efforts are important, an often under-explored question is how internal teams navigate this myriad of tradeoffs in practice. To date, little has been documented on what it means for internal “ethics owners” to implement these often opposing mandates in practice and what practical tradeoffs and limitations are considered when working through concrete cases.

This session seeks to begin to address this gap by achieving two goals. We will first present a set of retrospective case studies from teams across industry. These retrospective presentations are designed to move beyond discussion of published artefacts and aim to provide a more holistic view of how each team seeks to “do ethics” within the context of its specific organization, as well as to elaborate on open challenges and questions related to larger industry-wide practices.

The second goal is to have a facilitated discussion on current efforts by internal ethics teams to serve as internal accountability mechanisms, as well as on what these teams should weigh when seeking to balance responsibilities to multiple stakeholder groups. The discussion aims to be both a space for critical reflection and one of active construction around a set of core questions: What does it mean for internal ethics teams to do "good" or impactful research? To what extent are current efforts limited by existing regulatory frameworks and (lack of) norms? How can current efforts address broader concerns around “ethics-washing” and the public disclosure of ethical concerns? How can external stakeholders such as civil society and academia be meaningfully integrated into internal accountability mechanisms?


I Audited My Algorithm and All I Got Was This Harms Report: Imagining a Collaborative Approach between Companies and Communities on Developing Harm Remediation Strategies (In-person and online)

Irene Font Peradejordi (Twitter), Liz Marquis (Michigan), Kyra Yee (Twitter), and Tomo Lazovich (Twitter)


There are significant efforts in the responsible ML community to establish systematic processes to identify algorithmic harms. While identifying algorithmic harms is challenging, understanding why these harms occur and how companies can mitigate them is a significantly more difficult question to answer, and one that is not discussed as frequently. Take, for example, the recent study on algorithmic amplification of political content on Twitter. The audit identified “what” is happening but not “why” it is happening. Twitter identified that certain political content is amplified on the platform; however, it is unclear whether the preferential treatment of a specific group is a function of how the algorithm is constructed or of the interactions people have with it. Further root cause analysis is required to determine what changes, if any, are needed to the system.

At the same time, there are multiple points of view on how to make the outcome of an algorithmic system more “fair”, and no single solution fits them all. When issues such as these arise, we believe that affected communities and other external stakeholders, such as policy experts, activists, and academic and independent researchers, should have a say in how these systems are impacting them and how they should be fixed. However, we acknowledge that there are legal and privacy challenges to opening the systems for public inspection. While these may be significant roadblocks to collaboration, we believe they can be overcome.

This workshop will bring together policy experts, activists, academic and independent researchers, and practitioners working in industry in two panels to help answer two big questions. First, how might companies collaborate with communities to develop remediation strategies to the surfaced algorithmic harms? And second, how might companies be held accountable for the implementation of such proposed remediation strategies?


Collaboratively Developing Evaluation Frameworks for Queer AI Harms (In-person)

Arjun Subramonian (Queer in AI and UCLA), Anaelia Ovalle (UCLA), Luca Soldaini (Queer in AI and Allen Institute for AI), Nathan Dennler (University of Southern California), Zeerak Talat (Digital Democracies Institute, Simon Fraser University), Sunipa Dev (UCLA), Kyra Yee (Twitter), William Agnew (Queer in AI and University of Washington), Irene Font Peradejordi (Twitter), Avijit Ghosh (Queer in AI, Northeastern University, and Twitter (intern))

AI systems are becoming increasingly ubiquitous in the human experience through their deployment in a wide variety of contexts, from individual home appliances to large-scale distributed systems such as search engines. However, these systems also suffer from a wide variety of social biases and produce discriminatory outcomes that can have significant downstream impacts. A core concern here is the lack of trust in AI systems and their creators as AI continues to automate systems of marginalization (QueerinAI et al., 2021; Noble, 2018). AI historically has served and harmed marginalized communities inequitably, and continues to do so, often in subtle ways. For instance, the normative nature in which data is collected for AI systems often erases communities at intersections of gender and sexuality, leaving them especially vulnerable to harms propagated by these systems. Even in areas of AI that try to rebuild trust through techniques in algorithmic fairness, these often egalitarian-based measures fall short because they focus on physically observed characteristics; thus, unobserved characteristics such as gender and sexuality present a challenge (Tomasev et al., 2021). To identify and expose the harms of AI systems, it is critical to include marginalized individuals. Hence, Queer in AI, UCLA NLP, and Twitter’s Bias Bounty Working Group are hosting the first CRAFT aimed at developing harm evaluation frameworks to identify, measure, and categorize AI biases against intersectional queer communities.


Digital Resilience and Pathways to Long-Term Community Investment (In-person)

Christine Phan (Greenlining Institute) and Vinhcent Le (Greenlining Institute)

By perpetuating long-standing discrimination through algorithmic bias and accelerating systemic racism, the tech industry has long struggled to engage with impacted communities, especially communities of color. Rather than consulting or engaging with community members or partners on a one-off basis, what does it mean to digitally invest in our communities? What does it mean for communities impacted by the pitfalls of emerging technologies to have the agency, resiliency, and capacity to decide what technical tools they actually need?

Over the last year, The Greenlining Institute has developed the Town Link program, a digital inclusion partnership between 15+ organizations to close the digital divide in Oakland by addressing broadband affordability, digital literacy, and workforce development. Through our work on economic and climate justice, including the digital divide and algorithmic bias, we’ve worked on multiple forms of community engagement and importantly, have developed standards on equitable community investment. These standards emphasize focusing on race-conscious solutions, prioritizing multi-sector approaches, delivering intentional benefits, building community capacity, being community-driven at all stages, and establishing paths towards wealth building.

In this workshop, we will explore the difference between harmful, ineffective, and beneficial practices in community engagement and investigate the relationship between various technical efforts and impacted communities. We will then discuss how to invest in and affirm community capacity to engage in a technical landscape, using frameworks and standards broadly used in advocacy and community safety. These will include ideas of consent in data, community education, and The Greenlining Institute's community investment standards. Participants will learn how to build deliberate, long-standing relationships with community partners and increase community capacity to collaborate with technologists and engage in digital challenges and solutions.

Mapping Data Stories in the Surveillance Ecosystem (In-person)

Jennifer Lee (ACLU of Washington)


It is difficult for many people to understand the complexity and interwoven nature of the surveillance ecosystem, given that we are surveilled by a host of private companies and government agencies in both physical and digital spaces, and by a staggering range of technologies. This workshop has been designed to be useful for individuals who may feel overwhelmed by this complexity, and might have questions such as, “How does surveillance affect me?” or “What should I be most worried about?” This workshop can also provide a useful framework for understanding what interventions (individual or collective) might be effective in protecting people’s privacy. In this workshop, participants will first learn about the different elements of the surveillance ecosystem (e.g., watchers, actions, surveillance technologies, etc.), read pre-written stories about surveillance, identify the different elements in those stories, physically map the connections between these elements, and discuss different types of interventions (e.g., policy interventions).

Mapping Automated Decision Making in Social Services in Australia and the Asia Pacific: towards regional transparency and accountability (In-person)

Lyndal Sleep (ADM+S CoE, UQ), Rino Nugroho (Universitas Sebelas Maret, Indonesia), Dang Nguyen (ADM+S CoE, RMIT), Jennifer Min (UQ), Gemma Rodriguez (UQ), Brooke Ann Coco (ADM+S CoE, UQ), Ivan Jurko (Australian Red Cross), Simone (ACOSS, Australian Council of Social Service), and Paul Henman (ADM+S CoE, UQ)

This workshop reports and discusses the findings of a research project that aims to map ADM systems in government-funded social services across the Asia Pacific, including South Korea, Australia, New Zealand, Vietnam, the Philippines, Indonesia, and Malaysia.

By the end of the event, participants will:

  • have baseline knowledge of some ADM systems being used across the region, and the different (and similar) ways they are employed in different jurisdictional contexts;
  • have discussed similarities and differences, as well as emerging patterns, among ADM and AI used in social service delivery across the Asia Pacific;
  • have shared some of their experiences with ADM and AI in social services in their jurisdiction;
  • have had the opportunity to provide feedback on the report, which may be incorporated into the final version;
  • have discussed possible future mapping of ADM and AI across jurisdictions;
  • have learned about some of the research being undertaken in the Automated Decision Making and Society Centre of Excellence;
  • have had the opportunity to form links with Automated Decision Making and Society Centre of Excellence researchers, as well as researchers outside the Centre with similar interests; and
  • have begun building a network of scholars who focus on ADM in the social services in the region, and beyond.

Research, Meet Engineering or: How I Stopped Worrying About Scaling and Love the Process (In-person)

Luca Belli (Twitter) and Aaron Gonzales (Twitter)

Ever wondered what responsible ML means in practice at a big company? In this session we are going to take a peek under the hood of Twitter’s META team, which owns Responsible ML development at Twitter. By showing a real-world example of co-development between research and engineering, and highlighting difficulties and lessons learned along the way, we hope to shed some light on the inner workings of the team and share practical areas for education, training, and product strategy.

New Models for Engagement and Deployment Strategies for FAccT AI (In-person)

Chang D. Yoo (KAIST), Michael Veale (University College London), Yoon Kim (Saehan Ventures), Gwangsu Kim (KAIST), Ji Woo Hong (KAIST), and Alice Oh (KAIST)

The advancement of AI technology creates new social and economic opportunities to enhance people's lives worldwide, from education to business to healthcare. Doing so also creates various challenges regarding fairness, accountability, and transparency (FAccT). These challenges ultimately relate to the crucial issue of how to live in harmony with AI, which involves developing FAccT-based AI technology along with regulations and policies for social justice. Despite a lack of deep understanding of and regard for the long-term consequences of deployment, businesses are incorporating AI somewhat blindly. This session invites four different stakeholders in AI technology to discuss the emerging issues concerning the development, deployment, and governance of fair AI. The stakeholders will discuss the following issues: (1) incorporation and deployment of the delayed impact of fairness in business and law, (2) adaptation to fluctuating societal values and consensus on fairness and protected attributes due to erratic political climates and world issues, and (3) disentanglement of causation, accountability, and fairness. Through the presentations and panel discussion, the session aims to elevate understanding of the complex FAccT issues surrounding the incorporation and deployment of advanced fairness algorithms under fluctuating conditions.



Day 2

Big Data and AI in the Global South (Online)

Sharifa Sultana (Cornell), Mohammad Rashidujjaman Rifat (Toronto), Syed Ishtiaque Ahmed (Toronto), Ranjit Singh (DSRI), Julian Posada (Toronto), Azra Ismail (Georgia Tech), Yousif Hassan (York), Seyram Avle (Massachusetts Amherst), Nithya Sambasivan (Google), Rajesh Veeraraghavan (Georgetown), Priyank Chandra (Toronto) and Rafael Grohmann (Unisinos)

Big Data and AI, among other data-driven applications, have recently become increasingly popular in the Global South. Research has shown that the challenges of developing and deploying Big Data- and AI-based systems in the Global South differ from those in the North due to sociocultural norms, infrastructural limits, colonial histories, and continuing oppressions and social stratification based on race/caste/tribe, among others. In this workshop, we invite researchers across the world who investigate and contend with these challenges. The objectives of this workshop are to identify the challenges of AI and Big Data in the Global South; come up with designs, policies, and methods to develop a decolonizing praxis in this area; connect with existing scholarship in HCI4D and AI4SG; and develop an active HCI community to advance these goals.


Collaboratively Developing Evaluation Frameworks for Queer AI Harms (Online)

Arjun Subramonian (Queer in AI and UCLA), Anaelia Ovalle (UCLA), Luca Soldaini (Queer in AI and Allen Institute for AI), Nathan Dennler (University of Southern California), Zeerak Talat (Digital Democracies Institute, Simon Fraser University), Sunipa Dev (UCLA), Kyra Yee (Twitter), William Agnew (Queer in AI and University of Washington), Irene Font Peradejordi (Twitter), Avijit Ghosh (Queer in AI, Northeastern University, and Twitter (intern))

Please see session on Day 1 for abstract.


Emerging Problems: New Challenges in FAccT from Research, to Practice, to Policy (Online)

Kathy Baxter (Salesforce) and Chloe Autio (Cantellus Group)

The field of Fairness, Accountability, and Transparency (FAccT) has been critical to uncovering and understanding core issues with the design, development, and deployment of artificial intelligence and emerging technologies. Enhanced awareness of these issues created by the research and findings of the FAccT community has informed further work and shaped four key areas in FAccT: applied FAccT methods and practices, organizational and cultural approaches to FAccT, regulation and public policy, and of course, concepts and research to further understand this field. As the FAccT community evolves from focusing on exposing issues with emerging technologies to centering on solution finding, novel problems are emerging across the domains of research, practice, culture, and policy.

This CRAFT session will comprise lightning talks by 15 experts from industry, academia, and NGOs, organized into four breakout groups. The goal of this workshop is to provide participants with a deeper view into the emerging problems within each of these spaces, as shared in talks from and discussion with leading experts across academia, industry, corporate governance, and public policy. Participants will take away a more thorough understanding of new challenges in the dynamics, institutions, concepts, and approaches unique to domains across the FAccT space. It is our hope that enhanced awareness of these challenges will equip attendees to recognize and confront them more quickly and more comfortably, and give them a baseline or framework they can use to begin thinking about solutions. To enable as many speakers and participants as possible to attend from around the world, this is organized as a virtual session.

Session delivered via external Zoom link: https://us02web.zoom.us/j/86396378096

Day 3

ML Assisted System for Identifying At-risk Children: A Multidisciplinary Approach to Seek a Better Way to Prevent Child Abuse (In-person)

Moon Choi (KAIST)

In 2018, the Korean government launched a computing system to identify at-risk children, using big data on school attendance rates, vaccination status, and other relevant information. The system was developed from a proactive approach that aims to identify households at high risk of child abuse. It provides a list of the highest-risk households, and frontline social workers visit their homes to carry out proactive interventions. There has been controversy over the fairness and transparency of the system, as well as other ethical issues related to it. However, discussion across disciplines and sectors has been scarce.

In this CRAFT session, panelists from different disciplines and sectors will address issues of fairness, accountability, and transparency in this computing system. There are four panelists: (a) a researcher in social welfare and technology policy, (b) a researcher in industrial and systems engineering, (c) a government officer in charge of this system, and (d) a frontline social worker. Each panelist will speak for approximately 7 minutes, followed by a floor discussion.

The primary goal of this session is not just to criticize the existing system but to navigate alternatives and policy suggestions to build a better program and system to prevent child abuse. Participants will be able to actively participate in discussion and share their thoughts and experiences.


A Sociotechnical Audit: Evaluating Police Use of Facial Recognition (In-person)

Evani Radiya-Dixit (Cambridge)

Our session first discusses the lack of accountability for police use of facial recognition technology, which has yielded numerous human rights dilemmas. Police forces are designing and deploying the technology to different standards, often without evaluating if the technology should even be used.

Next, we share a sociotechnical auditing tool that establishes a consistent mechanism to assess the extent to which police use of facial recognition is ethical and accountable. Developed for the England and Wales jurisdiction, the audit integrates technology-focused considerations (e.g. about training, accuracy, bias) with societal considerations (e.g. about harms, human decision-making, community oversight).

Our aim is for participants to use the audit to inform advocacy and policy (e.g. for a ban, mandated standards, or greater oversight on police use of facial recognition). Participants can also extend the audit to other jurisdictions and sociotechnical systems.

Additionally, through a Q&A and breakout group discussion, we plan to explore and inspire conversations on pushing the boundaries of accountability research. During and after the session, participants will have the opportunity to co-author a manifesto for organizers, activists, and researchers in the space of accountability and auditing of sociotechnical systems.

Fairer Working Futures: Co-Developing a Roadmap For Labor-Centric FAccT Research (In-person)

Dan Calacci (MIT Media Lab, Human Dynamics Group)

Opaque and unaccountable algorithms are replacing and augmenting work, managers, and bosses across industries. How will labor react, and how can researchers in the FAccT community help shape the future of work and algorithmic management? This session will prompt the FAccT community to reflect on the real-world impact algorithms have at work. It will be held in two parts. First, an open panel discussion and Q&A session with workers, organizers, and researchers will help ground the session in what has and has not worked in researcher-organizer collaborations, the relationship between researchers, organizers, and workers, and the major issues facing workers and organizers related to algorithms, surveillance, and data. Then, participants will break out into groups, each of which will produce some guiding principles for researcher involvement in labor efforts. The session will result in a community-created sketch of a research agenda and methodology that researchers and workers can use to guide impactful, community-driven work that helps build capacity in the labor movement.



Day 4

Applying a Justice Framework to Natural Language Processing (NLP): Decentering Standard Language Ideology in Pursuits of Fair and Equitable NLP (Online)

Genevieve Smith, Julia Nee and Ishita Rustagi (UC Berkeley)

This workshop introduces a linguistic justice framework for designing, developing and managing Natural Language Processing (NLP) tools and provides opportunities for participants to apply the framework through hands-on activities. In the workshop, we ask: what can be gained through using a justice lens to examine unquestioned assumptions while building and critiquing NLP tools? Often when evaluating whether a tool is or is not fair, we leave unexamined a host of assumptions related to language, and particularly which language varieties are assigned privilege, status, and power. This results in NLP tools that (1) have differential performance and opportunity allocation for users of different language varieties, and (2) advance linguistic profiling.

In this workshop, we will delve into the ways standard language ideology is currently embedded within NLP systems and how a justice framework can help us understand the hidden values and assumptions built into NLP systems that reinforce standard language ideology. We will engage participants in hands-on activities allowing them to work together to apply the framework and identify paths forward through hypothetical scenarios related to NLP development and management. Lastly, we will co-create a plan of action for the community to advance justice in NLP tool research and development.


Communication Across Communities in Machine Learning Research and Practice (Online)

Adriana Romero Soriano (META AI Research), Caglar Gulcehre (DeepMind), Jessica Forde (Brown), Levent Sagun (META AI Research), Negar Rostamzadeh (Google AI Research), Samuel Bell (Cambridge), Seyi Olojo (UC Berkeley), and Stefano Sarao Mannelli (UCL)

Please see session on Day 1 for abstract.


Addressing Ageism in AI (Online)

Robin Brewer (Michigan/Google), Clara Berridge (Washington), Mark Díaz (Google), and Christina Fitzpatrick (AARP)

In the panel, we will discuss (1) how lack of representation and misrepresentation in data, algorithms, and annotation practices are crucial to understanding ageism in AI and (2) how aging intersects with other identities that have also been discussed in FAccT literature. As such, we bring together a group of experts across disciplines to share their experiences with ageism in AI through their work. Specifically, we will engage industry, academic, and non-profit experts in aging, accessibility, computing, and policy to ask questions about fair and equitable AI experiences that better represent older adulthood, and how to do so in meaningful ways. We will apply critical gerontology and feminist approaches to also ask panelists how to make older adulthood more visible in our socio-technical systems and in what contexts hypervisibility poses risks. Lastly, we will invite attendees to contribute to this discussion and engage with central questions of AI and ageism.

The primary goals of this panel are to:

  • Increase visibility of how ageism manifests in AI design, development, and implementation
  • Provide guidance on how to address ageism in AI development broadly
  • Examine how ageism connects to other identity-based characteristics within the FAccT community
  • Discuss how ageism may manifest and be addressed in attendees’ research areas or practices
  • Brainstorm solutions to AI-based ageism

Application Denied: A Global Coalition to Expose and Resist Discrimination in Automated Decisions (Online)

Coordinators: Ushnish Sengupta (Toronto), Peaks Krafft (University of the Arts London). Presenters: Florence Okoye (Afrofutures_UK), Teanna Barrett (Howard University), Gülșen Güler (Researcher), Rob Davidson (Independent Researcher), Nicole Hendrickson (Independent Researcher), Ivana Feldfeber (DataGénero)

There is an information asymmetry, and a power imbalance, between the organisations developing and deploying algorithms and the individuals and communities impacted by those algorithms. Often, the individuals impacted do not even know which algorithms they are subject to.

This workshop will build on previous events by the Tech & Power Network designing a Register of Algorithmic Registers. The volunteer-led Tech & Power Network previously organised a two-part advocacy workshop to bring together a global coalition of social and racial justice groups to develop a resource for exposing automated decision-making systems in public use that are causing systematic social harms (e.g., in credit and banking, immigration, carceral systems, applicant recruiting software, gig work platforms, and more).

Our FAccT workshop will showcase a prototype and discuss the processes required to further build out a register of algorithmic registers. Previous events have identified four areas of focus for initial applications: Child Welfare, Gender-Based Violence, Race and Data Collection, and Facial and Image Recognition. This workshop will elicit feedback from participants on prototypes of the algorithm register, particularly with respect to their own use cases. They will be invited to join the coalition and design process beyond the session.

Participants will get hands-on experience of co-designing the Register of Algorithmic Registers, learn practical tips on researching what algorithmic surveillance and decision-making are being used in their localities without public disclosure, and share successful algorithm detection and analysis methods. We are particularly interested in holistic feedback from a Human Rights perspective.

At the end of the FAccT workshop, participants will be more informed about different types of Algorithm Registers, will have participated in co-designing an Algorithm Register, and will have taken an interest in the further development of Algorithm Registers.


Mapping Data Stories in the Surveillance Ecosystem (In-person)

Jennifer Lee (ACLU of Washington) and Micah Epstein

Please see session on Day 1 for abstract.