“Critiquing and Rethinking Fairness, Accountability, and Transparency” (CRAFT) is a dedicated track that builds bridges from the conference to people who contend with computing systems from many different angles: journalism and organizing, art and education, advocacy, governance, and beyond. Critique, reflection, and power are at its core, and it provides a unique opportunity within an academic conference to center the impact of technology on communities and the policy implications that arise from that impact.

The “Digital Trade” Sneak Attack on AI Oversight & Regulation Underway Today

Sarah Myers West (AI Now), Lori Wallach (Rethink Trade), Ben Winters (Electronic Privacy Information Center), Jai Vipra (AI Now), Patrick Woodall (AFL-CIO Technology Institute)

With policymakers increasingly focused on the outsized role that Big Tech plays in society and the economy, some dominant tech industry interests are now quietly attempting to use international trade negotiations as a backdoor means of preventing the United States and other governments from enforcing policies that confront the problems a largely unregulated digital sphere causes for workers, competing businesses, consumers, and democracy itself. Of particular concern is the industry demand for special secrecy guarantees for algorithms and source code, which could outlaw proactive monitoring and otherwise undermine attempts to safeguard against discriminatory, obtrusive, or otherwise harmful misuses of AI. Many algorithmic accountability bills with broad support, as well as the Biden administration’s own AI Bill of Rights, could run afoul of these potential trade rules. With far-reaching new “digital trade” agreements under negotiation this year, this panel brings together leading experts in both trade policy and AI policy.


Resisting the New Jim Code in the Old South: Lessons from the Field

Puneet Cheema (NAACP Legal Defense Fund), Amalea Smirniotopoulos (NAACP Legal Defense Fund), David Moss (NAACP Legal Defense Fund), Clarence Okoh (Center for Law and Social Policy)

A Black man, Randal Reid, was recently jailed for almost a week after a false match by facial recognition technology led to his arrest by police in Louisiana, in an incident involving the Jefferson Parish Sheriff’s Office and the Baton Rouge Police Department. Other predictive policing programs, such as those used by the Pasco County Sheriff’s Office, lead to increased surveillance and officer presence. Such algorithmic “risk-assessment” technologies reproduce structural racial disadvantages, enabling systemic discrimination against Black people. This session will discuss real-world examples of how the “New Jim Code” impacts Black communities. We will also discuss the movements that Black communities are building to resist algorithmic racism, and how data scientists and other actors can support these efforts. We will present case studies of the NAACP Legal Defense Fund’s work, alongside our partners, in challenging predictive policing programs in Pasco County, FL and enforcing a fair lending audit of fintech machine learning systems. We will highlight how communities and their advocates identify and combat algorithmic harm, and point to areas where further collaboration between grassroots activists, advocacy organizations, technologists, and researchers can empower those affected by algorithmic racism.


The Ethics of Impact: Lived Experiences of “Reform Tech” in Criminal Legal and Immigration Systems

Emmett Sanders (MediaJustice), Karim Golding (Freedom to Thrive), James Kilgore (MediaJustice)

In the wake of the Covid-19 pandemic, calls for decarceration have increased. Jurisdictions have turned to “reform tech” such as electronic monitoring (EM), various AI tools, and other forms of e-carceration. For those impacted, the difference between digital and traditional brick-and-mortar detention is one of degree rather than kind. Our research shows that such tech does not address the antiquated notions of criminalization and carcerality embedded in the system but instead exacerbates long-existing systemic racial and class disparities. Centering the lived realities of those impacted, this session explores “reform tech” in criminal legal and immigration systems, and its devastating impacts on family, health, and economic security. We also explore ethical conundrums regarding due process, coercion, and wholesale data collection posed by “reform tech”. We close with an exercise in radically reimagining how tech might become a vehicle of true systemic reform, led by individuals impacted by electronic monitoring.


Investigating and Challenging Algorithmic Welfare Systems [ONLINE ONLY]

Divij Joshi (University College London), David Nolan (Amnesty International), Eva Constantaras (Lighthouse Reports)

Algorithmic systems are being adopted by governments around the world in social security and welfare administration, informing decisions from fraud detection to the calculation of benefit amounts. In many cases, these systems harm those in the most marginalised and precarious positions, leading to punitive surveillance and arbitrary decision-making metrics, and adding layers of opacity to existing welfare bureaucracies. The goal of this workshop is to share methods and learnings from investigations of algorithmic systems used by governments in welfare administration across different contexts. The rise of ‘algorithmic’, data-driven methods in public administration has been well documented, but interrogating, analysing and opposing the use and effects of these systems remains a challenge. This workshop will bring together researchers, civil society organisers, lawyers and journalists who have conducted investigations into algorithmic welfare systems in various contexts, to share methods and findings relating to the use and operation of these systems.


From Research Insight to Policy Impact: How You Can Engage in Current AI Policy Debates

Alexandra Givens (Center for Democracy & Technology), Ridhi Shetty (Center for Democracy & Technology), Brian Chen (Data & Society Research Institute), Sorelle Friedler (Haverford University, former White House Office of Science & Technology Policy), Sarah Myers West (AI Now, former Senior Advisor on AI at the Federal Trade Commission)

Now is an incredibly active time for AI policy and regulation, and FAccT researchers have much to contribute. How can you plug in? This interactive session will provide an overview of major AI policy-making efforts currently underway in the US and EU, and offer guidance and connections for researchers who want to get more involved. The session leads are advocates and former government staff working on policy efforts ranging from the AI Act in Europe to next steps for the AI Bill of Rights, the NIST AI Risk Management Framework, and various agency and state initiatives in the U.S. Participants will leave the workshop with specific ideas for how they might engage in policy processes, along with tools, guidance, and relationships to act on those ideas. Attendees already working on these and other policy efforts are warmly invited to share their own experiences and build connections.


Towards an India-first Responsible AI research agenda [ONLINE ONLY]

Divy Thakkar (Google & City, University of London), Avijit Ghosh (Northeastern University), Shachi Dave (Google), Hari Subramonyam (Stanford), Balaraman Ravindran (IIT Madras), Ameen Jauhar (Vidhi Legal), Divya Vaid (Jawaharlal Nehru University), Divya Siddharth (Microsoft)

We will host a series of lightning talks and an interactive workshop that will enable dialogue across social scientists, public policy experts, and CS researchers. The goal will be to work together to define concrete research areas that are critical to developing an India-first Responsible AI agenda. The talks will include a focused dialogue to lay out areas where the emergence and use of AI pose critical questions around responsibility. We aim to spark a dialogue that bridges gaps among multidisciplinary FAccT researchers and paves the way for collaborative RAI research.


The Road to the Table: Laying the Foundation for a Black Feminist Impact Assessment

Serena Oduro (Data & Society Research Institute)

This session invites technologists, practitioners, policy experts, academics, and activists to co-create the foundation for a Black feminist impact assessment. Algorithmic impact assessments (AIAs) and similar tools are often called for in legislation in the USA and abroad as a key means of bringing about algorithmic equity. However, with these tools still in their infancy, there is much room for both opportunity and error; the legislation, procedures, and regulatory norms that surround impact assessment practices can either be constructed in a manner that protects those most at risk of algorithmic discrimination, or in a manner that leads to industry capture. In this workshop, we will explore the critical intellectual and practical contributions Black feminism can bring to the development of AIAs and the movement for algorithmic accountability. How do identity and the nature of systemic discrimination shape how harm is defined? How may a Black feminist lens enrich the AI development process? And how can a Black feminist lens help us protect and uplift Black women? The workshop will conclude with a brainstorming session that lays out the ideas and unanswered questions needed to build a Black feminist algorithmic impact assessment.


Community-collaborative visions for computing research

Sucheta Ghoshal (University of Washington), Angela D.R. Smith (University of Texas at Austin), Marisol Wong-Villacres (Escuela Superior Politécnica del Litoral), Lauren Wilcox (Google), Sheena Erete (University of Maryland College Park), Calvin A. Liang (University of Washington), Emily Tseng (Cornell University), Akeiylah DeWitt (University of Washington), Yasmine Kotturi (Carnegie Mellon University)

Community-collaborative approaches (CCA) promise a more just and societally impactful future for computing by encouraging more equitable participation in technology research and design; indeed, FAccT has shown increasing interest in participatory governance of algorithmic systems. But how can we ensure that such work builds knowledge WITH communities as meaningful partners, rather than conducting research activities ON them? And what would it take for computing research to go a step further, towards helping communities counter structural oppression? This session brings to FAccT the expertise of five researchers with a wealth of collective experience enacting, critiquing, and navigating community-based research in computing. Our panelists work not only in human-centered and responsible ML/AI, but also more broadly in human-computer interaction (HCI), computer-supported cooperative work (CSCW), and social computing. First, our panelists will discuss the structural challenges of CCA for communities, research institutions, and individual researchers, with a focus on how the values of computing research do and do not align with community-collaborative work. Attendees will then delve into breakout discussions with panelists, and come away equipped to reflect on their own structural and practical challenges in enacting CCA in computing.


Mapping the Risk Surface of Text-to-Image AI: A Participatory, Cross-Disciplinary Workshop [ONLINE ONLY]

Borhane Blili-Hamelin (AI Risk and Vulnerability Alliance), Ezinwanne Ozoani (Hugging Face), Giada Pistilli (Hugging Face), Nathan Butters (AI Risk and Vulnerability Alliance), Alexandra Sasha Luccioni (Hugging Face)*, Nima Boscarino (Hugging Face), Subhabrata Majumdar (AI Risk and Vulnerability Alliance)

Text-to-image (TTI) generative AI systems (such as Stable Diffusion, DALL-E, and Midjourney) inherit many of the risks that come with large-scale models that can be adapted for a wide variety of downstream tasks, like algorithmic monoculture and the difficulty of anticipating use cases. Yet relative to the enormous scrutiny received by the failures of large language models, shared knowledge of the novel harms and risks of TTI models remains limited. How can we build boundary objects that support meaningful collaboration between researchers, impacted communities, and practitioners in mitigating the harms of models that pose novel risks? Our hands-on workshop tackles this issue through a strategy inspired by model documentation and cybersecurity best practices (such as MITRE’s CVE and ATT&CK programs). Our aim is to combine practical experience with the potential and limitations of TTI models, to build open resources that minimize harmful misuses of ML, and to support cross-disciplinary efforts to make open-source models and datasets less harmful to impacted communities.


User Engagement in Algorithm Testing and Auditing: Exploring Opportunities and Tensions between Practitioners and End Users

Wesley Hanwen Deng (Carnegie Mellon University), Shivani Kapania (Google), Ken Holstein (Carnegie Mellon University), Motahhare Eslami (Carnegie Mellon University), Lauren Wilcox (Google Research), Su Lin Blodgett (Microsoft Research Montreal), Danaë Metaxa (University of Pennsylvania), Nicholas Diakopoulos (Northwestern University), Karrie Karahalios (University of Illinois), Shubhanshu Mishra (Instacart), Christo Wilson (Northeastern University)

Recent years have seen growing interest from researchers and practitioners in engaging end users in testing and auditing algorithmic systems for problematic behaviors. However, research points to tensions between industry AI practitioners and end user auditors, including lack of trust and accountability, asymmetric power, and potential burdens on users. In this CRAFT session, we will gather researchers and industry practitioners across diverse sectors and disciplines to examine such tensions and explore opportunities for effective forms of user engagement in algorithm auditing. We aim to explore several themes around power dynamics and accountability, user empowerment and advocacy, and safeguarding user-practitioner collaboration by discussing questions such as: What role should users play in algorithm auditing throughout the stages of designing, developing, and overseeing AI products and services? How can we support and protect both users and practitioners who are genuinely committed to mitigating or preventing problematic algorithmic behaviors in AI systems?


Responsibly Working with Crowdsourced Data

Mark Díaz (Google Research), Vinodkumar Prabhakaran (Google Research), Emily Denton (Google Research), Rachel Rosen (Google Research), Mahima Pushkarna (Google Research)

Crowdsourced datasets are commonly used to develop a range of machine learning tools, such as hate speech classifiers and object detection systems. At the same time, scholars have raised a range of related ethical concerns, from the conditions under which raters work and data is collected, to the geographic distribution of crowdworkers, to the role of sociodemographic characteristics in shaping raters’ lived experiences and data judgments. In this session, a panel will kick off a discussion of the ethical issues underlying the development and use of crowdsourced datasets, and a breakout activity will guide participants in 1) unpacking crowdsourcing tensions in their own domain, 2) identifying the strengths of different dataset development frameworks in addressing these tensions, and 3) identifying tensions that remain unaddressed. The session is open to those who are new to thinking about crowdsourced data as well as those experienced in working with it.


Digital Apartheid and the Horn of Africa

Timnit Gebru (DAIR Institute), Meron Estefanos (DAIR Institute), Asmelash Teka (Lesan AI), Richard Mathenge (former content moderator)

Just as the impacts of oil spills are first felt by people in marginalized groups, the equivalent of an oil spill in our information ecosystem is first felt by countries like those in the Horn of Africa. From genocides orchestrated on social media, to transnational repression, to recruitment by human traffickers, the internet is a hotbed of mis/disinformation and hate speech, while containing little locally relevant information for various East African communities. At the same time, those cleaning up the equivalent of this oil spill to benefit Western societies, at the cost of their own wellbeing, are also primarily located in countries like Kenya. This is what our colleague El Mahdi El Mhamdi calls digital apartheid. In this session, we bring together a mix of activists, journalists, scientists, and data workers from Ethiopia, Eritrea, and Kenya to discuss the ways in which people from the Horn of Africa experience this digital apartheid.


#DragVsAI: Exploring Facial Recognition Technologies through Embodied Algorithmic Tinkering

Janet Ruppert (CU Boulder), Joy Buolamwini (Algorithmic Justice League), José Ramón Lizárraga (Algorithmic Justice League and CU Boulder)

Get ready to drag AI! In this session, we will explore hands-on how makeup, accessories, and craft materials can be used to refuse AI-powered facial recognition technologies. The session will follow a modified version of the Drag vs. AI workshop (https://ajl.org/drag-vs-ai/), created by the Algorithmic Justice League. Please bring a laptop with a webcam. Bring your own makeup (BYOM) encouraged!


Theories of Change in Responsible AI

Daricia Wilkinson (Microsoft Research), Michael Ekstrand (Boise State University), Janet A. Vertesi (Princeton University), Alexandra Olteanu (Microsoft Research); speakers being finalized

People in and adjacent to the FAccT community hold many different and sometimes competing goals and theories of change that motivate and guide their work. For example, one perspective is that many technologies should be dismantled, not made fair or accountable. In contrast, other perspectives focus on providing just treatment to members of society by seeking to mitigate harms in existing, deployed technologies, sometimes by participating in their development. Some perspectives emphasize the need for radical changes, while others see more incremental approaches as more effective in achieving enduring change. Some believe that the tech industry and its economic structures are unfit for developing Responsible AI, and that the only way to do so is from outside a capitalist framework. The theories of change that our community holds, however, often remain only partially or implicitly stated, or not stated at all. In this CRAFT session we want to provide a discussion forum that helps unpack the different ways in which these assumptions and perspectives manifest: 1) in visions for what the desired future of responsible computing technology looks like, and 2) in strategic approaches or pathways for moving towards that future.


“AI Art” and Its Impact on Artists

Timnit Gebru (DAIR), Eva Toorenent (Artist), Oscar Araya (Artist), Reid Southen (Artist), Johnathan Flowers (CSUN), Mehtab Khan (Yale Law), Harry Jiang (Independent Researcher), Jessica Cheng (Artist), Abhishek Gupta (Montréal AI Ethics Institute)

Many popular commercial machine learning (ML)-based “generative AI art” products able to output high-quality images have entered the market in the past year. However, the corporations behind these image generators have harmed artists by profiting off their work without their consent or compensation. This session will be a panel discussion intended to highlight the voices of artists who have been impacted by the proliferation of image generation models, so that they can speak directly to the ML fairness and AI ethics community and articulate their needs in their own voices. Unfortunately, this community has not prevented the very tangible harms that artists are facing. We discuss this failure of the ML and FAccT community, and highlight, among other issues, the problem of data laundering, where datasets advertised as serving academic needs are used for profit, misleading the public and circumventing some regulation.


Swee Leng Harris (Luminate, The Policy Institute of King's College London), Katarzyna Szymielewicz (Fundacja Panoptykon)*, Thomas Streinz (NYU Law, Guarini Global Law & Tech)

Courts and regulators in Europe have challenged the legality of the business and data practices of big tech companies, particularly under the General Data Protection Regulation. Legal accountability is crucial for fair and accountable technology, and the EU has passed a wide range of laws in this space (beyond the GDPR). Yet barriers of resources and transparency remain, limiting the efficacy of Europe’s laws. The new Digital Services Act will allow greater access to data for regulators and researchers, but public-interest-oriented implementation will be key. The first part of this panel will consist of presentations situating EU regulation of data and digital platforms in global contexts and setting out key legal developments. The second part will open a discussion with the audience to explore international and multi-disciplinary perspectives on the impact of these laws on data and digital platforms, and on alternative visions of our digital future.


Humanitarian AI for the Global South [ONLINE ONLY]

Christina Last (Massachusetts Institute of Technology, AQAI), Prithviraj Pramanik (National Institute of Technology Durgapur, AQAI), Avijit Ghosh (Northeastern University, Twitter), Subhabrata Majumdar (AI Risk and Vulnerability Alliance), Vipul Siddharth (UNICEF), Rockwell Clancy (Virginia Tech), Mahealani Kauahi (Ancestral Knowledge and Hawaiian Languages)

We intend to bring together the FAccT community around the topic of AI for Humanitarian Development, in order to enable deep exploration of the challenges of developing, implementing, and regulating AI for the Global South. We will explore this topic through a case study of a social enterprise, supported by an international non-governmental organization (NGO), that is building AI to highlight the environmental and health challenges faced by children. The case study will discuss the implementation of AI to measure children's exposure to harmful air pollutants, which can lead to respiratory illness, in the Global South. We will discuss several implementation challenges for this case study, including ground truthing, privacy and autonomy, and infrastructure constraints. At the end of the session, we want participants to have a better shared understanding of the path forward in co-developing humanitarian AI solutions for the Global South while tackling these challenges collaboratively.


Irene Solaiman (Hugging Face), Zeerak Talat* (Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)), Marie-Therese Png (Oxford University, DeepMind)

No definitive resource is known to exist for determining and evaluating the social impacts of generative AI systems across their many modalities (text, image, audio, video, and potential future modalities). Social impacts can be split into two measurable dimensions: evaluating a system itself (e.g. a base model) and evaluating its impact on society. We provide a framework for assessing the social impacts of base systems and invite participants to contribute to the framework and to an open-source evaluation suite. Participants will share their own work evaluating systems, highlighting considerations per modality. We will then examine impact on society, evaluations, and actions to mitigate harm. We conclude with a discussion of the incentives and power structures that lead researchers and developers to conduct or not conduct social impact assessments, and of whether assessments lead to meaningful change. We will synthesize takeaways from both parts of the session and provide feedback toward an evaluation suite and an updated version of a social impact evaluation guide.


FUBU: Community and Care in Conferences and Research [ONLINE ONLY]

Emnet Tafesse (Data and Society Research Institute), Lauren Quigley (IBM Research), Adriana Alvarado Garcia (IBM Research), Juana Becerra Sandoval (IBM Research), Ranjit Singh (Data and Society Research Institute)

Community care and inclusion are more than a DEI initiative or a topic of discussion; they are an opportunity for FAccT, as a conference and community of researchers, to deliver on the good intentions of our work in fairness, accountability, and transparency. This interactive workshop will engage participants in reflective practice, provide an opportunity to connect with activists, artists, and justice-centered practitioners, and collaboratively explore ways to incorporate community care into FAccT research and conference experiences. The workshop will focus on creating a space that is warm, inclusive, and caring for Black, Indigenous, and People of Color as contributors to the field, and for the epistemologies and research topics that we are uniquely positioned to lead.


Automation of Elder Care

Moon Choi (KAIST), Taylor De Rosa (KAIST), Ern Chern Khor (KAIST)

With the ongoing trend of population aging, the provision of care is increasingly being automated. One such example of the “automation of elder care” is the deployment of AI-assisted phone call services for older persons living alone in South Korea. However, there is a dearth of research on these AI-assisted phone call services concerning their responsible use and risks of harm. The aim of this interactive workshop is to present AI-assisted elder care services, contextualize their uptake, and explore urgent questions about their benefits and risks, along with recommendations to better protect individuals’ rights and well-being. Participants will be grouped into four stakeholder roles: (a) engineer/developer, (b) social worker, (c) older person living alone, and (d) older person's adult child. Each group will engage in discussion about what they perceive to be the system’s benefits and risks, then issue recommendations about how to manage these risks in system design and use.


Bringing People In to High-Stakes System Design: Concrete Challenges and Emerging Lessons

David Robinson (OpenAI), Sarah Atwood (Center for New Democratic Processes), Kyle Bozentko (Center for New Democratic Processes), Kate Mays (Syracuse University), Baobao Zhang (Syracuse University)

Researchers at FAccT often highlight a need for more inclusive approaches to system design. But there have only been a few efforts to simultaneously inform and engage communities of non-experts in the development of high-stakes software. This session will offer three empirical perspectives on such efforts and address some key questions: What conditions are needed to make participation matter? What do lay people really know, and really need? How much does participation cost, and who bears those costs? How do different methods of participation balance tradeoffs associated with breadth, depth, and scale of participation? How can the expert work of explaining technical systems and forecasting their impacts be insulated from the powerful actors already designing a system? These considerations will seed a brainstorm and discussion with participants at the end of the session. We don’t suggest that our experiences are ideal. Rather, these efforts are valuable because they are *real,* and suggest practical lessons for practitioners and theorists alike.


AI ethics landscape in Chinese tech industry: regulatory policy, research, and practice

Jeff (Jianfeng) Cao (Senior Researcher, Tencent Research Institute), Xin Yao (Professor, Southern University of Science and Technology)

With the rapid advancement of AI and other emerging technologies, China is proposing new regulations and guidelines for AI governance and tech ethics – at the national level via its National Tech Ethics Committee and other relevant authorities, as well as via forthcoming guidelines and standards for industry to follow. The national tech ethics policy paper titled “Opinion on Strengthening the Governance of Science and Technology Ethics” provides a comprehensive framework for the governance and implementation of tech ethics. New regulations concerning recommendation algorithms and deep synthesis (AI-generated media) also put forward tech ethics requirements for technology and service providers. Tech ethics and AI governance are increasingly becoming an important part of AI R&D in the Chinese tech industry. In this session, scholars and researchers from China who focus on AI ethics and governance will provide an overview of these regulatory developments, research insights, and industry practices, including how tech companies like Tencent are adapting their AI development processes in response.


Language Models and Society: Bridging Research and Policy

Organizers: Stefania Druga (Microsoft Research), Rishi Bommasani (Stanford University), Mihaela Vorvoreanu (Microsoft), Ioana Baldini (IBM Research). Invited Speakers: Alex Engler (Brookings Institution), Bogdana Rakova (Mozilla Foundation), Gretchen Krueger (OpenAI), Irene Solaiman (Hugging Face)

We propose an interactive workshop to engage the multidisciplinary FAccT community in a discussion focused on enhancing the trustworthiness of language models (LMs). We are particularly interested in actionable items that bridge the gap between LM research and policy. We will ground the workshop conversation in questions such as:

  1. What is the FAccT community’s responsibility for advancing policy and governance of LMs? What and how can we best contribute? Whose voices do we represent? Whose voices might we be forgetting about?
  2. LMs may yield structural changes similar in scale to the changes brought by the Internet, social networks and increased computational power. How do we distinguish and disentangle positive societal change from potential harms?
  3. What does trustworthiness mean? Whose perspectives should we be taking into account, and how might a definition of trustworthiness evolve depending on who does the trusting?