Track 1: Technical/Methods


Translation Tutorial: Causal fairness analysis

Presented by Elias Bareinboim (Columbia U)

AI plays an increasingly prominent role in modern society, since decisions that were once made by humans are now delegated to automated systems. These systems are currently in charge of decisions about bank loans, incarceration, and the hiring of new employees, and it’s not hard to envision that they will soon underpin most of society’s decision infrastructure. Despite the high stakes entailed by this task, there is still a lack of formal understanding of some basic properties of such systems, including issues of fairness, accountability, and transparency. In particular, we currently do not fully comprehend how to design systems that abide by the decision constraints agreed upon by society, including how to avoid perpetuating prior prejudicial practices and reinforcing previous biases (possibly present in the training data). The growing interest in these issues has led to a number of criteria that attempt to account for unfairness, but choosing the metric a system must satisfy to be deemed fair remains elusive: the choice is almost invariably made in an arbitrary fashion, without much justification or rationale.

In this tutorial, we plan to fill this gap by providing a mathematical framework based on causal inference that helps the system’s designer choose a criterion that optimizes fairness in a principled and transparent fashion. We’ll start by introducing the principles of causal inference that allow us to give a formal treatment of the fairness analysis problem. One key observation is that the data that is usually the centerpiece of any fairness analysis is the output of different causal processes, which, despite not being directly observable, are the real-world mechanisms that bring about the observed disparities. This implies that fairness analysis should not be concerned only with the data, which is a realization of these mechanisms, but also with the mechanisms themselves. In fact, the multiple fairness metrics currently available capture different aspects of these underlying mechanisms. We’ll develop what we have been calling the fairness criteria map, which allows one to understand the detection and explanatory power of each fairness metric individually, as well as the tradeoffs among the different measures found in the literature. We’ll discuss the detection of biases due to discrimination, the measurement of the magnitude of these biases, and the possibility of correcting them, in the contexts of both classification and decision-making.
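To make the distinction between observed disparities and the mechanisms behind them concrete, here is a toy sketch (our own illustration, not a model from the tutorial) in which a protected attribute affects an outcome both directly and through a mediator. A purely observational metric sees the full gap, while an interventional estimate isolates only the portion flowing through the direct path.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical structural causal model (a toy illustration, not the tutorial's model):
# a protected attribute X affects the outcome Y directly and through a mediator Z.
x = rng.binomial(1, 0.5, n)                       # protected attribute
z = 2.0 * x + rng.normal(0.0, 1.0, n)             # mediator influenced by X
y = (0.5 * x + z + rng.normal(0.0, 1.0, n) > 1.5).astype(int)

# Observed disparity: what purely observational metrics (e.g., statistical parity) see.
observed_gap = y[x == 1].mean() - y[x == 0].mean()

# Direct effect: intervene on X in the outcome equation while drawing the mediator
# from its distribution under X = 0 (a crude Monte Carlo estimate of the natural
# direct effect in this toy model).
z0 = rng.normal(0.0, 1.0, n)                      # mediator as it would be under X = 0
y_do_x1 = (0.5 * 1 + z0 + rng.normal(0.0, 1.0, n) > 1.5).astype(int)
y_do_x0 = (0.5 * 0 + z0 + rng.normal(0.0, 1.0, n) > 1.5).astype(int)
direct_effect = y_do_x1.mean() - y_do_x0.mean()

print(f"observed disparity      : {observed_gap:.3f}")   # roughly 0.6 in this toy model
print(f"direct-effect component : {direct_effect:.3f}")  # roughly 0.1: most of the gap is mediated
```

In this hypothetical model most of the observed gap is mediated, so a criterion that only inspects the joint distribution of the data and one that interrogates the direct pathway can tell very different stories about the same system.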

Video

Sendhil Mullainathan (Invited tutorial)

Rediet Abebe (Invited tutorial)

Explainable ML in the Wild: When Not to Trust Your Explanations

Presented by Shalmali Joshi, Chirag Agarwal, Himabindu Lakkaraju (Harvard)

Machine learning (ML) and other computational techniques are increasingly being deployed in high-stakes decision-making. The process of deploying such automated tools to make decisions that affect the lives of individuals and society as a whole is complex and rife with uncertainty and rightful skepticism. Explainable ML (or, broadly, XAI) is often pitched as a panacea for managing this uncertainty and skepticism. While the technical limitations of explainability methods are being characterized, formally or otherwise, in the ML literature, the impact of explanation methods and their limitations on end users and important stakeholders (e.g., policy makers, judges, doctors) is not well understood. We propose a translation tutorial to contextualize explanation methods and their limitations for such end users. We further discuss overarching ethical implications of these technical challenges beyond misleading and wrongful decision-making. While we will focus on implications for applications in finance, clinical healthcare, and criminal justice, our overarching theme should be valuable for all stakeholders of the FAccT community. Our primary objective is for this tutorial to serve as a starting point for regulatory bodies, policymakers, and fiduciaries to engage with explainability tools in a more sagacious manner.
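As a small, self-contained illustration of one such technical limitation (our own example, not material from the tutorial), the sketch below trains a model on two highly correlated features, only one of which actually generates the label, and shows how two common explanation proxies, coefficients and permutation importance, can both leave a stakeholder believing the redundant feature matters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5_000

# f2 is a near-duplicate of f1, but only f1 enters the data-generating process.
f1 = rng.normal(size=n)
f2 = f1 + rng.normal(scale=0.02, size=n)
X = np.column_stack([f1, f2])
y = (f1 + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# "Explanation" 1: coefficients. Collinearity and regularization spread the weight
# across both features, so f2 receives weight comparable to f1's even though it
# plays no causal role in producing the label.
print("coefficients:", model.coef_.round(2))

# "Explanation" 2: permutation importance (mean accuracy drop when a feature is
# shuffled). Shuffling either feature leaves its near-duplicate intact, so both
# receive similar, modest scores that understate the model's reliance on the f1 signal.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("permutation importance:", imp.importances_mean.round(3))
```

Neither explanation is wrong about the model, yet either could mislead an end user about the underlying phenomenon, which is exactly the kind of gap between explanation and real-world reasoning that the tutorial contextualizes.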

Video

Translation Tutorial: Data Externalities

Presented by Rediet Abebe (Harvard), Yuan Cui (Northwestern), Mihaela Curmei (U of California, Berkeley), Andreas Haupt (MIT), and Yixin Wang (U of California, Berkeley)

Externalities shape the data economy as we experience it. Uneven market shares, power asymmetries, high levels of data sharing, and little to no reimbursement of data subjects for their contributions are characteristics of this new market. Data externalities, recently theorized in microeconomics, offer an explanation for the absence of significant reimbursements for data. In this tutorial, we will introduce models in which data externalities arise. Through a series of case studies, we will expose crucial aspects of the contracting environment that aggravate data externalities and allow participants to develop potential interventions. We aim both to translate insights from microeconomics for the FAccT community and to highlight opportunities for further research at the interface of market design, fairness, and accountability.
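To give a flavor of how such externalities arise, here is a toy simulation (the model and parameters are our own assumptions, not the presenters'). Because users' attributes are correlated, the data already shared by other users reveals most of what any one person could withhold or sell, so the marginal value of that person's own record, and hence the reimbursement a platform would rationally offer, is small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: each user's attribute is a shared common component plus
# idiosyncratic noise, so users' data are correlated with one another.
rho = 0.9            # strength of the common component
n_users = 50
trials = 20_000

common = rng.normal(size=trials)
noise = rng.normal(size=(trials, n_users))
data = np.sqrt(rho) * common[:, None] + np.sqrt(1 - rho) * noise

def residual_uncertainty(k):
    """Variance left in the platform's best linear prediction of user 0's
    attribute after observing the average of k *other* users' data."""
    if k == 0:
        return data[:, 0].var()
    others_mean = data[:, 1 : k + 1].mean(axis=1)
    beta = np.cov(data[:, 0], others_mean)[0, 1] / others_mean.var()
    resid = data[:, 0] - beta * others_mean
    return resid.var()

for k in [0, 1, 5, 20, 49]:
    print(f"others sharing: {k:2d}  residual uncertainty about user 0: "
          f"{residual_uncertainty(k):.3f}")
```

Most of the uncertainty about user 0 disappears after only a handful of other users share, which is the externality: the value of the marginal record collapses even though the data in aggregate is highly valuable.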

Video

Translation Tutorial: How to Achieve Both Transparency and Accuracy in Predictive Decision Making: an Introduction to Strategic Prediction

Presented by Benjamin L. Edelman, Chara Podimata, Yonadav Shavit (Harvard)

When an algorithmic system holds sway over a person’s life, the person has an incentive to respond to the algorithm strategically in order to obtain a more desirable decision. This is the mantra of the rapidly growing research area of strategic prediction. In short, as predictive decision-making algorithms become increasingly consequential, the standard ML assumption that data does not react to the model becomes increasingly false. Strategic adaptation to algorithms can be found all over society: e.g., a student memorizes the quirks of a standardized exam, a prospective homeowner lowers their credit card utilization, or a college tries to attract more applicants to decrease its acceptance rate.

Strategic prediction is a critical area of study because it holds the promise of making transparency more palatable to decision-makers. As those devoted to the project of just machine learning increasingly push for transparency and individual recourse as means of ensuring humane predictive systems, they will continue to run into fundamental resistance from decision-makers themselves. This is because decision-makers’ foremost priority is the accuracy of their system. Allowing someone to understand their system perfectly threatens to increase “gaming”, and thus undermine accuracy, so they calculate that it is better not to provide recourse at all. In response, the field of strategic prediction has recently sprung up within computer science to answer a fundamental question: “how can we create machine learning systems that are accurate even when made transparent, and thus gamed by individuals?” And since even opaque systems are rarely perfectly opaque, strategic prediction is relevant in almost all predictive decision-making scenarios.

While the study of strategic prediction is diverse, there are three main strands of research we will cover in this tutorial: (1) Robustness perspective. How can we design robust prediction algorithms that remain accurate against strategic adaptation (assuming all adaptation is undesirable gaming)? In other words, how can we avoid Goodhart’s law: “when a measure becomes a target, it ceases to be a good measure”? (2) Fairness and recourse. How can we make sure that these algorithms remain fair and give individuals recourse—the possibility of changing the model’s decision? (3) Causal perspective. Not all strategic responses are pure gaming—it is often the case that an individual’s strategic response to a predictor actually leads to improvement in the decision-maker’s target outcome. From this perspective, when a decision-maker changes their prediction rule, they are performing a causal intervention. How can we design algorithms that optimize for both accuracy and improved outcomes in this richer setting?
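To give a flavor of the first, robustness-oriented strand, the toy sketch below (our own illustration, not an example from the tutorial) shows a threshold rule whose accuracy drops when applicants can inflate their score by a bounded amount, and how anticipating that gaming by shifting the threshold restores accuracy while pushing effort costs onto genuinely qualified applicants, which is where the fairness and recourse questions of the second strand come in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a score in [0, 1] summarizes an applicant; true qualification
# is score >= 0.6. Applicants can inflate their reported score by up to `budget`.
# This is pure "gaming": it does not change true qualification.
n = 10_000
scores = rng.uniform(0, 1, n)
qualified = scores >= 0.6
budget = 0.1

def accuracy(threshold, strategic):
    reported = scores.copy()
    if strategic:
        # anyone within `budget` below the bar games their way just over it
        can_game = (reported < threshold) & (reported >= threshold - budget)
        reported[can_game] = threshold
    accepted = reported >= threshold
    return (accepted == qualified).mean()

naive = 0.6              # the threshold a learner would pick assuming static data
robust = 0.6 + budget    # shift the bar to anticipate bounded gaming

print(f"naive threshold, no gaming : {accuracy(naive, strategic=False):.3f}")
print(f"naive threshold, gaming    : {accuracy(naive, strategic=True):.3f}")
print(f"shifted threshold, gaming  : {accuracy(robust, strategic=True):.3f}")
# The shifted bar restores accuracy because gaming is bounded, but qualified
# applicants just above 0.6 must now spend effort gaming to be accepted.
```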

Video

Track 2: Philosophy / Law / Society


Translation Tutorial: Sociocultural diversity in machine learning: Lessons from philosophy, psychology, and organizational science

Presented by Sina Fazelpour (CMU) and Maria De-Arteaga (U of Texas, Austin)

Sociocultural diversity is a key value in democratic societies, both for reasons of justice, fairness, and legitimacy and because of its ramifications for group performance. As a result, researchers in philosophy, psychology, and the social and organizational sciences have worked to understand diversity’s varied meanings, develop appropriate measures for quantifying diversity (in some specific sense), and specify pathways by which diversity can be functionally beneficial to groups such as deliberative mini-publics, design teams, and scientific communities. Recently, there has also been a surge of interest in sociocultural diversity in machine learning (ML) research, with researchers proposing (i) diversity as a design desideratum in the construction pipeline of sociotechnical ML systems and (ii) team diversity as an organizational ideal for overcoming problems of algorithmic bias. With respect to (i), researchers have proposed various measures of diversity, developed methods for satisfying these measures via modifications in data processing and learning, and examined the interaction between diversity, accuracy, and fairness. With respect to (ii), there have been claims about the benefits of team diversity in countering biases in ML systems, with more recent efforts aiming to empirically test these purported benefits. Currently, however, there is a gap between discussions of the measures and benefits of diversity in ML, on the one hand, and the philosophical, psychological, and organizational research on the underlying concepts of diversity and the precise mechanisms of its functional benefits, on the other. This gap is problematic because different concepts of diversity are based on distinct sets of cognitive, ethical, and political rationales that should inform how we measure diversity in a given context and why, as well as how considerations of diversity might interact with other desiderata such as performance and fairness. Similarly, a lack of specificity about the precise mechanisms underpinning diversity’s potential benefits can result in espousing uninformative or easily falsifiable generalities, invalid experimental designs, and illicit inferences from experimental results.

This tutorial will bridge this gap from both angles—concepts and consequences. The first part will focus on discussions of the concepts and consequences of sociocultural diversity in philosophy, psychology, and the social and organizational sciences. The second part will situate this understanding within, and draw out its implications for, discussions of sociocultural diversity in ML.
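To make the point about measures concrete, the toy sketch below (our own example, not the presenters' material) computes two standard diversity indices for two hypothetical teams and shows that they rank the teams in opposite orders; the choice of measure already embeds a particular concept of diversity.

```python
import numpy as np
from collections import Counter

def shannon_entropy(groups):
    """Entropy-style diversity: rewards spreading members over many groups."""
    counts = np.array(list(Counter(groups).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def simpson_index(groups):
    """Probability that two randomly drawn members belong to different groups."""
    counts = np.array(list(Counter(groups).values()), dtype=float)
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())

team_a = ["x"] * 6 + ["y"] * 6              # two groups, perfectly balanced
team_b = ["x"] * 9 + ["y", "z", "w"]        # four groups, heavily skewed

for name, team in [("team A", team_a), ("team B", team_b)]:
    print(name,
          "entropy:", round(shannon_entropy(team), 3),
          "Simpson:", round(simpson_index(team), 3))
# Entropy ranks B above A (more groups are represented), while the Simpson index
# ranks A above B (a random pair is more likely to be mixed).
```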

Video

Translation Tutorial: Power in Political Philosophy: Nature and Justification

Presented by Seth Lazar (Australian National U)

Humanities and social sciences disciplines have spent a lot of time thinking about power, and there's a clear opportunity to draw on that background to inform discussions of AI. Equally importantly, we're in the middle of the most significant change in the way power is exercised since the birth of the administrative state (3). Bringing theorists of power into conversation with scholars of AI should enable us to break new ground in thinking about power too. Many different perspectives on power could benefit members of the FAccT community. However, from at least Plato onwards, the central task of political philosophy has been to understand the nature of power, and to determine whether, when, and how those who exercise power have a right to do so. Political philosophy is therefore in an excellent position to contribute to the broader interdisciplinary discussion on the nature and justification of power in AI. This is valuable in itself, and may also prompt other fields to present their own comparable normative and empirical toolkits.

Video

Sandra Wachter (Invited tutorial)

Carissa Véliz (Invited tutorial)

Translation Tutorial: Fairness and Friends

Presented by Falaah Arif Khan (NYU), Eleni Manis (Surveillance Technology Oversight Project), and Julia Stoyanovich (NYU)

Recent interest in codifying fairness in Automated Decision Systems (ADS) has resulted in a wide range of formulations of what it means for an algorithm to be “fair.” Most of these propositions are inspired by, but inadequately grounded in, scholarship from political philosophy. This tutorial aims to correct that deficit. We critically evaluate different definitions of fairness by contrasting their conception in political philosophy (such as Rawls’s fair equality of opportunity or formal equality of opportunity) with the proposed codification in Fair-ML (such as statistical parity, equality of odds, accuracy) to provide a clearer lens with which to view existing results and to identify future research directions. A key novelty of this tutorial is the use of technical artwork to make ideas more relatable and accessible, based on our ongoing work on a responsible data science comic book series, available at https://dataresponsibly.github.io/comics/.
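As a small illustration of how differently these codifications can behave, the sketch below (our own example, not part of the tutorial or the comic series) scores one set of hypothetical predictions against statistical parity and against equality of odds; the same classifier can come close to satisfying one criterion while clearly violating the other.

```python
import numpy as np

# Hypothetical group memberships, outcomes, and model decisions.
# These arrays are illustrative stand-ins, not data from the tutorial.
rng = np.random.default_rng(0)
n = 10_000
g = rng.binomial(1, 0.4, n)                      # protected group indicator
y = rng.binomial(1, np.where(g == 1, 0.3, 0.5))  # true outcome (different base rates)
y_hat = rng.binomial(1, 0.7 * y + 0.1 * g)       # model's decision

def acceptance_rate(mask):
    return y_hat[mask].mean()

# Statistical parity: compare P(decision = 1) across groups.
parity_gap = acceptance_rate(g == 1) - acceptance_rate(g == 0)

# Equality of odds: compare true-positive and false-positive rates across groups.
tpr_gap = acceptance_rate((g == 1) & (y == 1)) - acceptance_rate((g == 0) & (y == 1))
fpr_gap = acceptance_rate((g == 1) & (y == 0)) - acceptance_rate((g == 0) & (y == 0))

print(f"statistical parity gap     : {parity_gap:+.3f}")   # ~ -0.04: nearly parity-fair
print(f"TPR gap (equal opportunity): {tpr_gap:+.3f}")       # ~ +0.10
print(f"FPR gap                    : {fpr_gap:+.3f}")       # ~ +0.10: odds clearly unequal
```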

Video

Implications Tutorial: AI on the Ground Approach

Presented by Noopur Raval (NYU), Radhika Radhakrishnan (Internet Democracy Project), Ranjit Singh (Data & Society), and Vidushi Marda (Article19)

Drawing on our experiences as computational ethnographers in various global South contexts, we ask: what have we learned from adopting ethnographic approaches to AI? Beyond producing learnings and/or implications for technologists, how do we empower and broaden the ‘AI on the Ground’ research community? What have we learned about the limits, challenges, barriers to access, and tacit knowledge of the techno-bureaucratic institutions and the social, political, and capitalist norms within which AI/ML objects operate? Some of us have already done foundational work pointing to extant issues with public health infrastructures and to the colonial legacies and logics inherent in the design of legacy information architectures, but we recognize that this is a good time to pause, regather, and reflect on the unevenness of “the ground” in ethnographic AI research outside of the proverbial West. Importantly, there is an urgent need to demystify and share lessons with others who might be looking to conduct ethnographic, archival, legal, or other forms of qualitative AI research.

Some of the topics that we will cover in our tutorial include: (1) the politics of studying ‘up’ (negotiating power, establishing expertise, and gaining access to AI/ML developers and experts); (2) how to navigate trade secrets, institutional secrecy, non-disclosure agreements, threats, and liability issues while attempting to study proprietary and public-domain AI systems; (3) how to study, map, and trace highly distributed, networked systems (imbrications), where to collect data within them, and, based on those choices, what kinds of visibilities and invisibilities accompany AI ethnographies; and finally, (4) a discussion of the politics of AI ethnography – what kinds of relationships do such researchers have to build and maintain with individuals, government departments, and corporations, and what implications does such positionality have for their ability to produce impactful knowledge?

Video

Track 3: Practice


Translation Tutorial: From Publishing to Practice: Bringing AI Model Monitoring to a Healthcare Setting

Presented by John Dickerson (U of Maryland), Liz O’Sullivan (Arthur), Brian Power (Humana), and Brent Sundheimer (Humana)

The FATE and robustness in AI/ML communities, including relevant sub-communities at FAccT, have developed and continue to develop general techniques for measuring and, in some settings, partially mitigating forms of bias in families of ML algorithms. Yet, translation of those general techniques to “boots on the ground” healthcare settings comes with terminology and vocabulary challenges, data quality and paucity issues, and legal and ethical barriers. Indeed, an ad hoc feedback loop is beginning to form between practitioners in healthcare and “traditional” AI/ML researchers, wherein the preferences and needs of practitioners are fed back into AI/ML research communities, who then adjust and build new tools that translate back into practice. Our proposed tutorial will add to this nascent movement, focusing on translational issues that arise between these two communities.

The main goals of our tutorial are threefold. (1) We aim to provide a solid, introductory overview of the basic components of machine learning systems as they might be deployed in healthcare systems, targeting practitioners in healthcare and healthcare-adjacent settings. Two of our co-presenters are healthcare practitioners—one a medical doctor in leadership at Humana, a 46,000+ employee healthcare company with over 20,000,000 active members, and the other a leader in connecting data scientists with in-house applications at Humana. (2) We will present direct translations of general concepts from the AI/ML literature, with an emphasis on those recently developed by the FATE and robustness in AI/ML communities, to the healthcare setting. Examples include instantiating model cards, data sheets, and fact sheets, using the “language” of healthcare, in healthcare-specific settings. We will lean heavily on the expertise of our two co-presenter practitioners to provide one direction “from” the healthcare setting, and on our two co-presenters from the AI/ML community, both with experience fielding AI/ML in large healthcare settings, to provide the other direction “from” the traditional AI/ML space. (3) We aim to create a lasting resource for the healthcare community, one that continues to evolve after FAccT; toward that end, we will host appropriate content on GitHub that will allow for moderated, bottom-up, community-driven augmentation of the content we present.
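As one hypothetical illustration of the kind of artifact mentioned under goal (2), the sketch below lays out a minimal model-card-style record as plain structured data, phrased in the "language" of healthcare; the fields and values are our own example, not the presenters' template or Humana's.

```python
# A minimal, illustrative model-card skeleton expressed as plain data, in the spirit
# of model cards for model reporting; every field and value here is a hypothetical
# example for discussion, not a real deployed model.
model_card = {
    "model_details": {
        "name": "readmission-risk-v1",        # hypothetical model name
        "owners": ["clinical-ml-team"],
        "version": "1.0.0",
    },
    "intended_use": {
        "primary_use": "flag members at elevated 30-day readmission risk for care outreach",
        "out_of_scope": ["diagnosis", "treatment decisions", "coverage determinations"],
    },
    "training_data": {
        "source": "de-identified claims and EHR extracts",
        "time_range": "2018-2020",
    },
    "evaluation": {
        "metrics": ["AUROC", "calibration", "subgroup TPR/FPR gaps"],
        "subgroups": ["age band", "sex", "race/ethnicity (where recorded)"],
    },
    "monitoring": {
        "cadence": "monthly",
        "alerts": ["input distribution shift", "subgroup performance drift"],
    },
}

for section, fields in model_card.items():
    print(section, "->", fields)
```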

Video

Translation Tutorial: Thinking through and writing about research ethics beyond “broader impact”

Presented by Kate Sim, Andrew Brown, and Amelia Hassoun (U of Oxford)

This tutorial aims to offer a conceptual and practical primer for engineers interested in thinking more expansively, holistically, and critically about research ethics. We treat issues of “harm” and “bias” as components of a wider discussion about research ethics as it has been conceptualized and operationalized in the interpretivist traditions of the humanities and social sciences. In the interpretivist paradigm, “doing ethical research is not as simple as following a set of rules”. Assessing the ethical impacts of research necessarily implicates the researcher’s standpoint [11, 13] in a social world structured by power relations, which, in turn, informs how she conducts her work. In other words, what we see and how we see are inextricably shaped by who we are and what we do as knowledge workers. Meaningful engagement with research ethics must therefore extend beyond diagnosing harms or anticipating carbon emissions and delve into this tripartite relationship.

At the end of the workshop, we hope our participants will walk away with the following: (1) actionable steps for discussing positionality and limitations in their writing; (2) a practical primer for mapping out ethical dimensions during the research process; and (3) specific interpretivist concepts like “reflexivity” to challenge their thinking about research purpose and process.

Video

Implications Tutorial: Responsible AI in Industry: Lessons Learned in Practice

Presented by Krishnaram Kenthapadi (Amazon), Ben Packer (Google), Mehrnoosh Sameki (Microsoft), Nashlie Sephus (Amazon)

Artificial Intelligence (AI) plays an increasingly integral role in determining our day-to-day experiences. Its applications are no longer limited to search and recommendation systems, such as web search and movie and product recommendations; AI is also being used in decisions and processes that are critical for individuals, businesses, and society. With AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the personal and professional implications of AI are far-reaching. Because many factors play a role in the development and deployment of AI systems, they can exhibit different, and sometimes harmful, behaviors. For example, the training data often comes from society and the real world, and thus may reflect society’s biases and discrimination toward minorities and disadvantaged groups. For instance, minorities are known to face higher arrest rates than the majority population for similar behaviors, so building an AI system without compensating for this is likely to exacerbate this prejudice. These concerns highlight the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society.

In this tutorial, we will first motivate the need for responsible AI, highlighting model explainability, fairness, and privacy in AI from societal, legal, customer/end-user, and model developer perspectives. Then, we will focus on the real-world application of these areas and tools, present practical challenges and guidelines for using such techniques effectively, and share lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, hiring, sales, lending, and fraud detection.

Video

Implications Tutorial: Using Harms and Benefits to Ground Practical AI Fairness Assessments in Finance

Presented by Grace Abuhamad (Element AI), Marc-Etienne Brunet (Element AI and Vector Institute), Lachlan McCalman (Gradient Institute), and Daniel Steinberg (Gradient Institute)

This tutorial introduces one of the first practical algorithmic fairness assessment methodologies to be implemented (as guidance) by a major regulator. The methodology is a process for judging an AI system’s alignment with the Monetary Authority of Singapore’s "FEAT" (Fairness, Ethics, Accountability, Transparency) Principles, which promote responsible adoption of AI in Singapore’s finance sector. The presenters, along with a larger group, have spent the last year developing this methodology in close collaboration with the Monetary Authority of Singapore (MAS) and Financial Services Institutions (FSIs), including applying it to in-production systems within two large banks. The proposed tutorial aims to introduce the audience to: (1) the fairness assessment methodology itself; (2) two illustrative case studies the authors conducted with FSI partners, assessing AI systems for credit scoring and customer marketing; (3) how the authors approached designing a methodology that is broadly applicable to different systems and organisations, yet still practical to implement; (4) the operational requirements and challenges for FSIs implementing the methodology; and (5) the work done to tailor the methodology to the unique Singaporean legal and societal context.

Video

Translation Tutorial: A Crash Course in Motivating, Supporting, and Expanding Ethical Thinking in the Tech Classroom

Presented by Casey Fiesler (U of Colorado, Boulder), Solon Barocas (Cornell U), Augustin Chaintreau (Columbia U), Jessie Smith (U of Colorado, Boulder), and Michael Zimmer (Marquette U)

Despite growing calls to integrate ethics into computer science education, it is unclear what such an education should entail and how educators should go about this process. Members of the FAccT community have initiated—or been called upon to do—much of this work. Though there has been growing work in this area, there are still limited resources available to help educators cultivate ethical reasoning among computer science, data science, and other technical students. This tutorial provides a space for experts to discuss strategies and share resources for integrating ethical training into their teaching, as well as a structured environment for attendees to share successes and failures, seek practical advice, and learn from each other.

This tutorial brings together a group of organizers with experience teaching tech ethics in the context of computer science and data science, as well as expertise in ethics pedagogy. The organizers also have backgrounds in a diverse set of disciplines (computer science, law, communication, media studies) that present an interdisciplinary perspective towards teaching. Drawing lessons from an unaffiliated pre-FAT* 2018 workshop on data science ethics education (which was co-organized by Barocas and Fiesler), this tutorial will be organized as a series of short presentations followed by a discussion/workshop period where attendees can discuss the practicalities of teaching and seek advice from workshop organizers and other participants.

Video