UTC+9: 4:00-4:30
UTC+2: 21:00-21:30
UTC-7: 12:00-12:30
ONLINE: Welcome and Conference Report
General Chairs
UTC+9: 4:30-6:00
UTC+2: 21:30-23:00
UTC-7: 12:30-14:00
ONLINE: CRAFT I and Doctoral Consortium I

CRAFT 124: "What Could Possibly Go Wrong? Speculative Practice Towards Anticipating the Negative Consequences of Humanitarian AI"
Robert Soden, Aleks Berditchevskaia, Erin Coughlan de Perez, Manveer Kalirai, Shreyasha Paudel, Isabel Stewart, Saurav Poudel and Sakun Joshi

CRAFT 48: "Designing Data in Data Science"
Michael Muller, Lora Aroyo, Melanie Feinberg, Helena Mentis, Samir Passi, Shion Guha and Heloisa Candello

Doctoral Consortium I
Doctoral Consortium Chairs
UTC+9: 6:00-6:30
UTC+2: 23:00-23:30
UTC-7: 14:00-14:30
Break
UTC+9: 6:30-8:00
UTC+2: 23:30-1:00
UTC-7: 14:30-16:00
ONLINE: CRAFT II and Doctoral Consortium II

CRAFT 124 (continued): "What Could Possibly Go Wrong? Speculative Practice Towards Anticipating the Negative Consequences of Humanitarian AI"
Robert Soden, Aleks Berditchevskaia, Erin Coughlan de Perez, Manveer Kalirai, Shreyasha Paudel, Isabel Stewart, Saurav Poudel and Sakun Joshi

CRAFT 130: "I Audited My Algorithm and All I Got Was This Harms Report – Imagining a Collaborative Approach between Companies and Communities on Developing Harm Remediation Strategies"
Irene Font Peradejordi, Liz Marquis, Kyra Yee and Tomo Lazovich

Doctoral Consortium II: Online Doctoral Consortium (note: the DC returns from break at 6:05)
Doctoral Consortium Chairs
UTC+9: 7:30-8:00
UTC+2: 00:30-1:00
UTC-7: 15:30-16:00
In-Person Registration opens at COEX
UTC+9: 8:00-9:00
UTC+2: 1:00-2:00
UTC-7: 16:00-17:00
IN-PERSON AND ONLINE: Keynote I
Meeyoung Cha (Chaired by Alice Oh)
Room 202+203
UTC+9: 9:00-9:30
UTC+2: 2:00-2:30
UTC-7: 17:00-17:30
Break
UTC+9: 9:30-10:30
UTC+2: 2:30-3:30
UTC-7: 17:30-18:30
IN-PERSON AND ONLINE: Keynote Panel I
Implementing Intersectionality in Algorithmic Fairness
Cynthia Dwork, James Foulds, Youjin Kong, Olga Russakovsky, Yolanda Rankin (Chair: Alice Xiang)
Room 202
UTC+9: 10:30-11:00
UTC+2: 3:30-4:00
UTC-7: 18:30-19:00
Break
UTC+9: 11:00-12:00
UTC+2: 4:00-5:00
UTC-7: 19:00-20:00
IN-PERSON: Doctoral Consortium III + CRAFT III + Tutorials I
CRAFT 999: "Mapping Data Stories in the Surveillance Ecosystem"
Jennifer Lee, Micah Epstein
Room 209

CRAFT 148: "Fairness, Accountability, and Transparency in Hyperscale Language Models"
Jung-Woo Ha, Hwaran Lee, Matthias Galle, Sangchul Park and Meeyoung Cha
Room 205

Doctoral Consortium III (note: the DC returns from break at 12:05)
Room 201
Tutorial (Case Study):
"An intersectional approach to model construction and evaluation in mental health care"
Marta Maslej, Laura Sikstrom, Darla Reslan, and Yifan Wang
Room 206
UTC+9: 12:00-13:00
UTC+2: 5:00-6:00
UTC-7: 20:00-21:00
Lunch
UTC+9: 13:00-14:00
UTC+2: 6:00-7:00
UTC-7: 21:00-22:00
IN-PERSON: Doctoral Consortium IV + CRAFT IV + Tutorials II

CRAFT 999 (continued): "Mapping Data Stories in the Surveillance Ecosystem"
Jennifer Lee, Micah Epstein
Room 209

CRAFT 148 (continued): "Fairness, Accountability, and Transparency in Hyperscale Language Models"
Jung-Woo Ha, Hwaran Lee, Matthias Galle, Sangchul Park and Meeyoung Cha
Room 205

Doctoral Consortium IV (note: the DC returns from break at 12:05)
Room 201
Tutorial (Case Study) (Continued):
"An intersectional approach to model construction and evaluation in mental health care"
Marta Maslej, Laura Sikstrom, Darla Reslan, and Yifan Wang
Room 206
UTC+9: 14:00-14:30
UTC+2: 7:00-7:30
UTC-7: 22:00-22:30
Break
UTC+9: 14:30-16:00
UTC+2: 7:30-9:00
UTC-7: 22:30-00:00
IN-PERSON: CRAFT V
CRAFT 145: "Ethics on the Job: A Tech Worker’s Role in Upholding Fairness, Accountability, and Transparency in Computing Systems"
JS Tan, Nataliya Nedzhvetskaya, Kristen Sheets and Clarissa Redwine
Room 206

CRAFT 147: "Digital Resilience and Pathways to Long-Term Community Investment"
Christine Phan, Vinhcent Le, Patrick Messac
Room 205

CRAFT 167: "Communication Across Communities in Machine Learning Research and Practice"
Adriana Romero Soriano, Caglar Gulcehre, Jessica Forde, Levent Sagun, Negar Rostamzadeh, Samuel Bell, Seyi Olojo and Stefano Sarao Mannelli
Room 202

CRAFT 998: "Research, Meet Engineering or: How I stopped worrying about scaling and learned to love the process"
Luca Belli and Aaron Gonzales
Room 201

CRAFT 138a: "Collaboratively Developing Evaluation Frameworks for Queer AI Harms"
Arjun Subramonian, Anaelia Ovalle, Luca Soldaini, Nathan Dennler, Zeerak Talat, Sunipa Dev, Kyra Yee and William Agnew
Room 209
UTC+9: 16:00-16:30
UTC+2: 9:00-9:30
UTC-7: 0:00-0:30
Break
UTC+9: 16:30-18:00
UTC+2: 9:30-11:00
UTC-7: 00:30-2:00
IN-PERSON: CRAFT VI

CRAFT 130: "I Audited My Algorithm and All I Got Was This Harms Report – Imagining a Collaborative Approach between Companies and Communities on Developing Harm Remediation Strategies"
Irene Font Peradejordi, Liz Marquis, Kyra Yee and Tomo Lazovich
Room 203

CRAFT 996: "Mapping Automated Decision Making in Social Services in Australia and the Asia Pacific: Towards Regional Transparency and Accountability"
Lyndal Sleep, Jennifer Min, Gemma Rodregues, Rino Nugroho
Room 205

CRAFT 167 (continued): "Communication Across Communities in Machine Learning Research and Practice"
Adriana Romero Soriano, Caglar Gulcehre, Jessica Forde, Levent Sagun, Negar Rostamzadeh, Samuel Bell, Seyi Olojo and Stefano Sarao Mannelli
Room 202

CRAFT 995: "New Models for Engagement and Deployment Strategies for FAccT AI"
Chang D. Yoo, Michael Veale, Yoon Kim, Gwangsu Kim, Ji Woo Hong, Alice Oh
Room 201

CRAFT 138a (continued): "Collaboratively Developing Evaluation Frameworks for Queer AI Harms"
Arjun Subramonian, Anaelia Ovalle, Luca Soldaini, Nathan Dennler, Zeerak Talat, Sunipa Dev, Kyra Yee and William Agnew
Room 209
UTC+9: 18:00-21:30
UTC+2: 11:00-14:30
UTC-7: 2:00-5:30
Break
UTC+9: 21:30-22:00
UTC+2: 14:30-15:00
UTC-7: 5:30-6:00
ONLINE: Meet Someone!
UTC+9: 22:00-23:00
UTC+2: 15:00-16:00
UTC-7: 6:00-7:00
ONLINE: Proceedings 1
1A Fairness and Inclusivity in Theory and Practice (Session Chair: Yunfeng Zhang)
765 Critical Tools for Machine Learning: Working with Intersectional Critical Concepts in Machine Learning Systems Design
3 Interdisciplinarity, Gender Diversity, and Network Structure Predict the Centrality of AI Organizations
789 ABCinML: Anticipatory Bias Correction in Machine Learning Applications

1B Human Difference and Audio/Visual Processing (Session Chair: Borhane Blili-Hamelin)
30 Fairness Indicators for Systematic Assessments of Visual Feature Extractors
621 Gender and Racial Bias in Visual Question Answering Datasets
234 Language variation and algorithmic bias: understanding algorithmic bias in British English automatic speech recognition

1C Normative Considerations for Decisions and Recommendations (Session Chair: Amanda Bower)
58 Providing Item-side Individual Fairness for Deep Recommender Systems
706 Decision Time: Normative Dimensions of Algorithmic Speed
253 Fast online ranking with fairness of exposure
UTC+9: 23:00-23:30
UTC+2: 16:00-16:30
UTC-7: 7:00-7:30
Break
UTC+9: 23:30-0:30
UTC+2: 16:30-17:30
UTC-7: 7:30-8:30
ONLINE: Proceedings 2
2A Responsible Data Management (Session Chair: Muhammad Yusuf Khattak)
106 Caring for Datasets: A Framework for Deprecating Datasets and Responsible Data Stewardship
95 Fair Data Sharing
718 Accountable Datasets: The Politics and Pragmatics of Disclosure Datasets

2B Human-AI Collaboration: Problems and Solutions (Session Chair: Suzanne Tolmeijer)
399 On the Fairness of Machine-Assisted Human Decisions
874 Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness
9 A Data-driven analysis of the interplay between Criminological theory and predictive policing algorithms
UTC+9: 0:30-1:00
UTC+2: 17:30-18:00
UTC-7: 8:30-9:00
Break
UTC+9: 1:00-2:00
UTC+2: 18:00-19:00
UTC-7: 9:00-10:00
ONLINE: Proceedings 3
3A Fair and Equitable Data (Session Chair: Lisa Kresege)
311 Achieving Fairness via Post-Processing in Web-Scale Recommender Systems
939 Data Cards: Purposeful and Transparent Documentation for Responsible AI
772 Data augmentation for fairness-aware machine learning: Preventing algorithmic bias in law enforcement systems

3B Algorithmic Fairness and Discrimination (Session Chair: Karyen Chu)
911 Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness
320 Imperfect Inferences: A Practical Assessment
157 An Outcome Test of Discrimination for Ranked Lists

3C Risk, Responsibility, and Accountability (Session Chair: Bogdana Rakova)
116 Taxonomy of Risks posed by Large Language Models
174 Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement
366 Accountability in an Algorithmic Society
UTC+9: 2:00-2:30
UTC+2: 19:00-19:30
UTC-7: 10:00-10:30
ONLINE: Meet Someone!

UTC+9: 4:00-4:30
UTC+2: 21:00-21:30
UTC-7: 12:00-12:30
ONLINE: Meet Someone!
UTC+9: 4:30-6:00
UTC+2: 21:30-23:00
UTC-7: 12:30-14:00
ONLINE: CRAFT VII
CRAFT 140: "Emerging Problems: New Challenges in FAccT from Research, to Practice, to Policy"
Kathy Baxter and Chloe Autio

CRAFT 129: "Responsible Innovation “On the Ground”: Lessons Learned from Accountable ML Efforts within Industry"
William Isaac, Iason Gabriel, Rumman Chowdhury, Luca Belli and Kristian Lum

CRAFT 61: "Big Data and AI in the Global South"
Sharifa Sultana, Mohammad Rashidujjaman Rifat, Syed Ishtiaque Ahmed, Ranjit Singh, Julian Posada, Azra Ismail, Yousif Hassan, Seyram Avle, Nithya Sambasivan, Rajesh Veeraraghavan, Priyank Chandra and Rafael Grohmann
CRAFT 138b:
"Collaboratively Developing Evaluation Frameworks for Queer AI Harms"
Arjun Subramonian, Anaelia Ovalle, Luca Soldaini, Nathan Dennler, Zeerak Talat, Sunipa Dev, Kyra Yee and William Agnew
UTC+9: 6:00-6:30
UTC+2: 23:00-23:30
UTC-7: 14:00-14:30
Break
UTC+9: 6:30-8:00
UTC+2: 23:30-1:00
UTC-7: 14:30-16:00
ONLINE: CRAFT VIII
CRAFT 140 (continued): "Emerging Problems: New Challenges in FAccT from Research, to Practice, to Policy"
Kathy Baxter and Chloe Autio

CRAFT 61 (continued): "Big Data and AI in the Global South"
Sharifa Sultana, Mohammad Rashidujjaman Rifat, Syed Ishtiaque Ahmed, Ranjit Singh, Julian Posada, Azra Ismail, Yousif Hassan, Seyram Avle, Nithya Sambasivan, Rajesh Veeraraghavan, Priyank Chandra and Rafael Grohmann

CRAFT 138b (continued): "Collaboratively Developing Evaluation Frameworks for Queer AI Harms"
Arjun Subramonian, Anaelia Ovalle, Luca Soldaini, Nathan Dennler, Zeerak Talat, Sunipa Dev, Kyra Yee and William Agnew

CRAFT 129 (continued): "Responsible Innovation “On the Ground”: Lessons Learned from Accountable ML Efforts within Industry"
William Isaac, Iason Gabriel, Rumman Chowdhury, Luca Belli and Kristian Lum
UTC+9: 8:00-9:00
UTC+2: 1:00-2:00
UTC-7: 16:00-17:00
IN-PERSON AND ONLINE: Keynote II
Pascale Fung (Chair: Charles Isbell)
Room 202+203
UTC+9: 9:00-9:30
UTC+2: 2:00-2:30
UTC-7: 17:00-17:30
Break
UTC+9: 9:30-10:30
UTC+2: 2:30-3:30
UTC-7: 17:30-18:30
IN-PERSON AND ONLINE: Keynote Panel II
Bossware and Algorithmic Management
Frederik Zuiderveen Borgesius, Min Kyung Lee, Wilneida Negrón, Rida Qadri (Chair: Seth Lazar)
Room 202 + 203
UTC+9: 10:30-11:00
UTC+2: 3:30-4:00
UTC-7: 18:30-19:00
Break
UTC+9: 11:00-12:00
UTC+2: 4:00-5:00
UTC-7: 19:00-20:00
IN-PERSON: Tutorials III
TUTORIAL (TRANSLATION): "Platform Governance"
Robert Gorwa
Room 201

TUTORIAL (PRACTICE): "Improving Platform Transparency through Citizen Data Donation Methods"
Daniel Angus and Abdul Obeid
Room 205

TUTORIAL (TRANSLATION): "Fairness in Computer Vision: Datasets, Algorithms, and Implications"
Angelina Wang, Seungbae Kim, Olga Russakovsky and Jungseock Joo
Room 206

TUTORIAL (PRACTICE): "A Stakeholder-Oriented Case Study for Assessing Bias in a Pretrial Risk Assessment Instrument"
Emily Hadley, Rob Chew, Stephen Tueller, Megan Comfort, Matthew DeMichele and Pamela Lattimore
Room 209

TUTORIAL (IMPLICATIONS): "What Is a Proxy and Why Is It a Problem?"
Margarita Boyarskaya, Solon Barocas, Hanna Wallach, and Michael Carl Tschantz
Room 210
UTC+9: 12:00-13:00
UTC+2: 5:00-6:00
UTC-7: 20:00-21:00
Lunch
UTC+9: 13:00-14:00
UTC+2: 6:00-7:00
UTC-7: 21:00-22:00
IN-PERSON: Proceedings 4
4A Inequity in Hiring and Public Services (Session Chair: Aaron Horowitz; Room 201)
281 Don't let Ricci v. DeStefano Hold You Back: A Bias-Aware Legal Solution to the Hiring Paradox
478 Tackling Algorithmic Disability Discrimination in the Hiring Process: An Ethical, Legal and Technical Analysis
138 Equitable Public Bus Network Optimization for Social Good: A Case Study of Singapore

4B Accountability and Governance Strategies (Session Chair: Shalaleh Rismani; Room 202)
267 Normative Logics of Algorithmic Accountability
716 At The Tensions of South and North: Critical Roles of Global South Stakeholders in AI Governance
1002 How Platform-User Power Relations Shape Algorithmic Accountability: A Case Study of Instant Loan Platforms and Financially Stressed Users in India

4C Bias Correction (Session Chair: Seth Lazar; Room 203)
172 De-biasing "bias" measurement
513 An Algorithmic Framework for Bias Bounties
711 Don’t Throw it Away! The Utility of Unlabeled Data in Fair Decision Making
4D Human Factors: Bias and Agency
269 Selection in the Presence of Implicit Bias: The Advantage of Intersectional Constraints
336 Trucks Don’t Mean Trump: Diagnosing Human Error in Image Analysis
165 Causal Inference Struggles with Agency on Online Platforms
Session Chair: Divya Shanmugam
Room 205
UTC+9: 14:00-14:30
UTC+2: 7:00-7:30
UTC-7: 22:00-22:30
Break
UTC+9: 14:30-15:30
UTC+2: 7:30-8:30
UTC-7: 22:30-23:30
IN-PERSON: Proceedings 5

5A Fairness Criteria and Tools (Session Chair: Negar Rostamzadeh; Room 201)
1094 Is calibration a fairness requirement? An argument from the point of view of moral philosophy and decision theory
272 Fairness for AUC via Feature Augmentation
775 On the Power of Randomization in Fair Classification and Representation

5B Technology and Public Policy (Session Chair: Orestis Papakyriakopoulos; Room 202)
651 The Case for a Legal Compliance API for the Enforcement of the EU’s Digital Services Act on Social Media Platforms
154 Algorithmic Tools in Public Employment Services: Towards a Jobseeker-Centric Perspective (Co-Winner, Distinguished Student Paper Award)
33 FAccT-Check on AI regulation: Systematic Evaluation of AI Regulation on the Example of the Proposed Legislation on the Use of AI in the Public Sector in the German Federal State of Schleswig-Holstein

5C Proxies and Group Fairness (Session Chair: Irene Font Peradejordi; Room 203)
65 Minimax Demographic Group Fairness in Federated Learning
596 Multiaccurate Proxies for Downstream Fairness
1043 What is Proxy Discrimination?
5D Value Systems for AI (Session Chair: Christina Lu; Room 205)
74 The Values Encoded in Machine Learning Research (Co-Winner, Distinguished Paper Award)
143 Social Inclusion in Curated Contexts: Insights from Museum Practices
956 Critical Evaluation Gaps in Machine Learning Practice

5E AI Case Studies (Session Chair: Mikaela Meyer; Room 209)
298 Towards Intersectional Feminist and Participatory ML: A Case Study in Supporting Feminicide Counterdata Collection (Co-Winner, Distinguished Student Paper Award)
744 Algorithmic Fairness and Vertical Equity: Income Fairness with Tax Audit Models
75 Limits and Possibilities of "Ethical AI" in Open Source: A Case Study of Deepfakes
UTC+9: 15:30-16:00
UTC+2: 8:30-9:00
UTC-7: 23:30-0:00
Break
UTC+9: 16:00-17:00
UTC+2: 9:00-10:00
UTC-7: 00:00-1:00
IN-PERSON: Tutorials IV
TUTORIAL (TRANSLATION/DIALOGUE): "Shortcut learning in Machine Learning: Challenges, Analysis, Solutions"
Sanghyuk Chun, Kyungwoo Song, and Yonghan Jung
Room 201

TUTORIAL (CASE STUDY): "The State of Online Censorship and Efforts at Accountability in India"
Divyansha Sehgal, Torsha Sarkar, and Divyank Katira
Room 202

TUTORIAL (TRANSLATION/DIALOGUE): "Purpose Limitation and Data Minimization in Data-Driven Systems"
Asia Biega and Michèle Finck
Room 203

TUTORIAL (TRANSLATION): "Ethical Theory for Machine Learning"
Nick Schuster
Room 205

TUTORIAL (CASE STUDY): "Building a Dataset to Measure Toxicity and Social Bias within Language: A Low-Resource Perspective"
Won-Ik Cho
Room 209
UTC+9: 17:00-17:15
UTC+2: 10:00-10:15
UTC-7: 1:00-1:15
Break
UTC+9: 17:15-18:15
UTC+2: 10:15-11:15
UTC-7: 1:15-2:15
IN-PERSON: Tutorials V

TUTORIAL (TRANSLATION/DIALOGUE) (continued): "Shortcut learning in Machine Learning: Challenges, Analysis, Solutions"
Sanghyuk Chun, Kyungwoo Song, and Yonghan Jung
Room 201

TUTORIAL (CASE STUDY) (continued): "The State of Online Censorship and Efforts at Accountability in India"
Divyansha Sehgal, Torsha Sarkar, and Divyank Katira
Room 202

TUTORIAL (TRANSLATION/DIALOGUE) (continued): "Purpose Limitation and Data Minimization in Data-Driven Systems"
Asia Biega and Michèle Finck
Room 203
UTC+9: 18:15-21:30
UTC+2: 11:15-14:30
UTC-7: 2:15-5:30
Break
UTC+9: 21:30-22:00
UTC+2: 14:30-15:00
UTC-7: 5:30-6:00
ONLINE: Meet Someone!
UTC+9: 22:00-23:00
UTC+2: 15:00-16:00
UTC-7: 6:00-7:00
ONLINE: Proceedings 6
6A Algorithmic Harms and Accountability Mechanisms (Session Chair: Liz Bright Marquis)
631 The Algorithmic Imprint
243 Towards a multi-stakeholder value-based assessment framework for algorithmic systems
816 Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem

6B Power and Accountability: Technosocial Trends (Session Chair: Elizabeth Ann Watkins)
17 #FuckTheAlgorithm: algorithmic imaginaries and political resistance
195 Tech Worker Organizing
313 Making the Unaccountable Internet: The Changing Meaning of Accounting in the Design of the Early Internet

6C Adversarial Techniques (Session Chair: Sayash Kapoor)
1031 The Spotlight: A General Method for Discovering Systematic Errors in Deep Learning Models
22 Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash
179 Promoting Fairness in Learned Models by Learning to Active Learn under Parity Constraints
UTC+9: 23:00-23:30
UTC+2: 16:00-16:30
UTC-7: 7:00-7:30
Break
UTC+9: 23:30-0:30
UTC+2: 16:30-17:30
UTC-7: 7:30-8:30
ONLINE: Proceedings 7
7A Tech Literacy and Public Trust in Algorithms (Session Chair: Jake Metcalf)
270 From Demo to Design in Teaching Machine Learning
915 Testing Concerns about Technology's Behavioral Impacts with N-of-one Trials
790 Algorithms Off-Limits?: If Digital Trade Law Restricts Access to Source Code of Software then Accountability Will Suffer

7B Content Moderation (Session Chair: Ana Crisan)
188 The Model Card Authoring Toolkit: Toward Community-centered, Deliberation-driven AI Design
345 How are ML-Based Online Content Moderation Systems Actually Used? Studying Community Size, Local Activity, and Disparate Treatment
752 Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms

7C Novel Theoretical Approaches (Session Chair: Natarajan Krishnaswami)
259 Best vs. All: Equity and Accuracy of Standardized Test Score Reporting
112 Treatment Effect Risk: Bounds and Inference
299 Disclosure by Design: Document engineering for meaningful data disclosures
UTC+9: 0:30-1:00
UTC+2: 17:30-18:00
UTC-7: 8:30-9:00
Break
UTC+9: 1:00-2:00
UTC+2: 18:00-19:00
UTC-7: 9:00-10:00
ONLINE: Proceedings 8
8A AI in Government (Session Chair: Sunipa Dev)
511 Trade-offs between Group Fairness Metrics in Societal Resource Allocation
602 Learning Resource Allocation Policies from Observational Data with an Application to Homeless Services Delivery
137 South Korean Public Value Coproduction Towards 'AI for Humanity': A Synergy of Sociocultural Norms and Multistakeholder Deliberation in Bridging the Design and Implementation of National AI Ethics Guidelines

8B Ethical Guidelines and Value Systems (Session Chair: Sasha Luccioni)
430 Disentangling Research Ethics in Machine Learning
382 AI Ethics Statements - Analysis and Lessons Learnt from NeurIPS Broader Impact Statements
145 How Different Groups Prioritize Ethical Values for Responsible AI

8C Understanding Groups and Group Fairness (Session Chair: Lu Cheng)
31 Theories of Gender in Natural Language Processing
142 GetFair: Generalized Fairness Tuning of Classification Models
786 Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency
UTC+9: 2:00-2:30
UTC+2: 19:00-19:30
UTC-7: 10:00-10:30
ONLINE: Meet Someone!

UTC+9: 4:30-5:00
UTC+2: 21:30-22:00
UTC-7: 12:30-13:00
ONLINE: Meet Someone!
UTC+9: 5:00-6:00
UTC+2: 22:00-23:00
UTC-7: 13:00-14:00
ONLINE: Tutorials VI
TUTORIAL: "…contextual Integrity is the worst definition of privacy, except for all the others that have been tried…" (adapted from original quote, Winston S. Churchill, 11 November 1947)
Helen Nissenbaum

TUTORIAL: "How to Assess Trustworthy AI in practice"
Roberto Zicari

TUTORIAL: "Contesting 'Fairness' in Machine Learning"
Jenny Davis
UTC+9: 6:00-6:30
UTC+2: 23:00-23:30
UTC-7: 14:00-14:30
Break
UTC+9: 6:30-7:30
UTC+2: 23:30-0:30
UTC-7: 14:30-15:30
ONLINE: Tutorials VII
TUTORIAL: "Accountability as Operations: Methods for Holding Public Sector Tech Accountable"
Bianca Wylie and Sidra Mahmood

CANCELLED TUTORIAL: "AI and Equality"
Reuben Binns

TUTORIAL: "Competing for the Curb: The Anticipatory Politics of Micromobility"
Sarah Fox
UTC+9: 8:00-9:00
UTC+2: 1:00-2:00
UTC-7: 16:00-17:00
IN-PERSON AND ONLINE: Community Keynote
Drew Ambrogi, Vanessa Bain, Dan Calacci, Willy Solis, Danny Spitzberg (Chair: Seth Lazar)
Room 202 + 203
UTC+9: 9:00-9:30
UTC+2: 2:00-2:30
UTC-7: 17:00-17:30
Break
UTC+9: 9:30-10:30
UTC+2: 2:30-3:30
UTC-7: 17:30-18:30
IN-PERSON AND ONLINE: Keynote III
Mariano-Florentino (Tino) Cuéllar (Chair: Charles Isbell)
Room 202 + 203
UTC+9: 10:30-11:00
UTC+2: 3:30-4:00
UTC-7: 18:30-19:00
Break
UTC+9: 11:00-12:00
UTC+2: 4:00-5:00
UTC-7: 19:00-20:00
IN-PERSON: CRAFT IX + Tutorials VIII

CRAFT 997: "Fairer Working Futures: Co-Developing a Roadmap For Labor-Centric FAccT Research"
Wilneida Negron, SuMin Park, Vanessa Bain, Willy Solis, Dan Calacci
Room 201

TUTORIAL (TRANSLATION): "Philosophy of Psychology for Machine Learning"
Gabbrielle Johnson
Room 209

CRAFT 120: "A Sociotechnical Audit: Evaluating Police Use of Facial Recognition"
Evani Radiya-Dixit
Room 206
CRAFT 171:
"ML Assisted System for Identifying At-risk Children: A Multidisciplinary Approach to Seek a Better Way to Prevent Child Abuse"
Moon Choi
Room 210
UTC+9: 12:00-13:00
UTC+2: 5:00-6:00
UTC-7: 20:00-21:00
Lunch
UTC+9: 13:00-14:00
UTC+2: 6:00-7:00
UTC-7: 21:00-22:00
IN-PERSON: Proceedings 9
9A Health and Sustainability (Session Chair: Angelina Wang; Room 201)
959 Measuring Machine Learning Software Carbon Intensity in Cloud Instances
1022 Healthsheet: development of a transparency artifact for health datasets
662 Who Goes First? Influences of Human-AI Workflow on Decision Making in Clinical Imaging

9B Trust and Explainability (Session Chair: Jooyoung Lee; Room 202)
574 The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
608 Designing for Responsible Trust in AI Systems: A Communication Perspective
725 How Explainability Contributes to Trust in AI

9C Understanding and Operationalising Limitations of ML (Session Chair: Corinna Hertweck; Room 203)
428 The Fallacy of AI Functionality
133 It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy
358 Model Multiplicity: Opportunities, Concerns, and Solutions
9D Humanistic Explanation and Interpretation
892 NeuroView-RNN: It's About Time
305 Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory
139 The Conflict Between Explainable and Accountable Decision-Making Algorithms
Session Chair: Marta Maslej
Room 209
UTC+9: 14:00-14:15
UTC+2: 7:00-7:15
UTC-7: 22:00-22:15
Break
UTC+9: 14:15-15:15
UTC+2: 7:15-8:15
UTC-7: 22:15-23:15
IN-PERSON: Proceedings 10
10A Ethics and Theory of Predictions (Session Chair: Margarita Boyarskaya; Room 201)
411 Prediction as Extraction of Discretion
926 Predictability and Surprise in Large Generative Models
146 Keep your friends close and your counterfactuals closer: Improved learning from closest rather than plausible counterfactual explanations in an abstract setting

10B AI in the Law (Session Chair: Simone Zhang; Room 202)
78 AI Opacity and Explainability in Tort Litigation
170 Flipping the Script on Criminal Justice Risk Assessments
917 Adversarial Scrutiny of Evidentiary Software

10C Human-Centred Explanation (Session Chair: Euodia Dodd; Room 203)
274 Human Interpretation of Saliency-based Explanation Over Text
406 Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
461 A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods
10D Governance of Language Processing
177 Interactive Model Cards: A Human-Centered Approach to Documentation for Large Language Models
652 What Does it Mean for a Language Model to Preserve Privacy?
323 Data Governance in the Age of Large-Scale Data-Driven Language Technology
Session Chair: Isabel Chien
Room 209
UTC+9: 15:15-15:45
UTC+2: 8:15-8:45
UTC-7: 23:15-23:45
Break
UTC+9: 15:45-16:45
UTC+2: 8:45-9:45
UTC-7: 23:45-0:45
IN-PERSON: Proceedings 11
11A Bias in Image and Language Processing (Session Chair: Maximilian Fischer; Room 201)
153 Measuring Representational Harms in Image Captioning
120 Bias in Automated Speaker Recognition
314 Robots Enact Malignant Stereotypes

11B Critiquing FAccT and AI ethics (Session Chair: Joseph Donia; Room 202)
1032 CounterFAccTual: How FAccT’s acontextual framing undermines its organizing principles
424 The forgotten margins of AI ethics
678 The AI Ethics Money Problem: It's @ FAccT

11C Counterfactuals in Explanation and Prediction (Session Chair: Deep Ganguli; Room 203)
490 Rational Shapley Values
470 FADE: FAir Double Ensemble Learning for Observable and Counterfactual Outcomes
647 Understanding Lay Users' Needs of Counterfactual Explanations for Everyday Recommendations
11D Crowd Workers and System Safety
842 System Safety and Artificial Intelligence
868 CrowdWorkSheets: Accounting for Individual and Collective Identities Underlying Crowdsourced Dataset Annotation
144 The Effects of Crowd Workers Biases in Fact-Checking Tasks
Session Chair: Elizabeth Kumar
Room 209
UTC+9: 16:45-17:00
UTC+2: 9:45-10:00
UTC-7: 0:45-1:00
Break
UTC+9: 17:00-18:00
UTC+2: 10:00-11:00
UTC-7: 1:00-2:00
IN-PERSON: Proceedings 12
12A Facial Recognition Systems (Session Chair: Gauri Kambhatla; Room 201)
431 Female, white, 27? Bias Evaluation on Data and Algorithms for Affect Recognition in Faces
61 What People Think AI Should Infer From Faces
459 Regulating Facial Processing Technologies: Tensions Between Legal and Technical Considerations in the Application of Illinois BIPA

12B Intersectional Identities and Human Categories (Session Chair: Freyja van den Boom; Room 202)
439 Subverting machines, fluctuating identities: Re-learning human categorization
226 Are "Intersectionally Fair" AI Algorithms Really Fair to Women of Color? A Philosophical Analysis
155 Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation

12C Use and Abuse of Visual Analysis (Session Chair: Lydia Lucchesi; Room 203)
657 Can Machines Help Us Answering Question 16 in Datasheets, and in turn Reflecting on Inappropriate Content?
376 Promoting Ethical Awareness in Communication Analysis: Investigating Potentials and Limits of Visual Analytics for Intelligence Applications
295 Seeing without Looking: Analysis Pipeline for Child Sexual Abuse Datasets
12D Challenges in Algorithmic Governance
458 Models for Classifying AI Systems: the Switch, the Ladder, and the Matrix
357 Learning to Limit Data Collection via Scaling Laws: An Interpretation of GDPR's Data Minimization
331 Behavioral Use Licensing for Responsible AI
Session Chair: Thomas Laurent
Room 209
UTC+9: 18:00-21:30
UTC+2: 11:00-14:30
UTC-7: 2:00-5:30
Break
UTC+9: 21:30-22:00
UTC+2: 14:30-15:00
UTC-7: 5:30-6:00
ONLINE: Meet Someone!
UTC+9: 22:00-23:00
UTC+2: 15:00-16:00
UTC-7: 6:00-7:00
ONLINE: Proceedings 13
13A Technical Approaches to Fairness and Trust (Session Chair: Stephen Pfohl)
895 Fairness-aware Model-agnostic Positive and Unlabeled Learning (Co-Winner, Distinguished Paper Award)
700 Towards Fair Unsupervised Learning
342 Fair Representation Clustering with Several Protected Classes
567 Multi Stage Screening: Enforcing Fairness and Maximizing Efficiency in a Pre-Existing Pipeline

13B AI and Social Welfare (Session Chair: Wonyoung So)
464 A Data-Driven Simulation of the New York State Foster Care System
562 Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders
1044 Uncertainty Estimation and the Social Planner’s Problem: Why Sample Complexity Matters

13C Privacy and Trust (Session Chair: Shikun Zhang)
233 Goodbye Tracking? Impact of iOS App Tracking Transparency and Privacy Labels
35 News from Generative Artificial Intelligence is Believed Less
692 The Alchemy of Trust: The Creative Act of Designing Trustworthy Socio-Technical Systems
UTC+9: 23:00-23:30
UTC+2: 16:00-16:30
UTC-7: 7:00-7:30
Break
UTC+9: 23:30-0:30
UTC+2: 16:30-17:30
UTC-7: 7:30-8:30
ONLINE: Proceedings 14
14A Qualitative and Technical Approaches to Privacy (Session Chair: Rui-Jie Yew)
319 Attribute Privacy: Framework and Mechanisms
8 Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent
881 Stop the Spread: A Contextual Integrity Perspective on the Appropriateness of COVID-19 Vaccination Certificates

14B AI, Democracy, and Social Justice (Session Chair: Eleonora Viganó)
515 Auditing for Gerrymandering by Identifying Disenfranchised Individuals
856 Characterizing Properties and Trade-offs of Centralized Delegation Mechanisms in Liquid Democracy
438 Beyond Fairness: Reparative Algorithms to Address Historical Injustices of Housing Discrimination in the US

14C FAccT in Practice (Session Chair: Benjamin Laufer)
1047 Reliable and Safe Use of Machine Translation in Medical Settings
72 Automating Care: Online Food Delivery Work During the CoVID-19 Crisis in India
220 Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits
UTC+9: 0:30-1:00
UTC+2: 17:30-18:00
UTC-7: 8:30-9:00
Break
UTC+9: 1:00-2:00
UTC+2: 18:00-19:00
UTC-7: 9:00-10:00
ONLINE: Proceedings 15
15A New Approaches to Explainability (Session Chair: Leif Hancox-Li)
215 Equi-explanation Maps: Concise and Informative Global Summary Explanations
639 DualCF: Efficient Model Extraction Attack from Counterfactual Explanations
472 Counterfactual Shapley Additive Explanations

15B Aspirations and Limitations for Ethical AI (Session Chair: Wesley Deng)
250 Ethical Concerns and Perceptions of Consumer Neurotechnology from Lived Experiences of Mental Workload Tracking
262 REAL ML: Recognizing, Exploring, and Articulating Limitations in Machine Learning Research
50 When learning becomes impossible

15C Ethical AI in Theory and Practice (Session Chair: Tasfia Mashiat)
688 People are not coins: Morally distinct types of predictions necessitate different fairness constraints
469 Net benefit, calibration, threshold selection, and training objectives for algorithmic fairness in healthcare
423 German AI Start-Ups and “Ethical AI”: Using Social Practice as Basis for Assessing and Implementing Socio-Technical Innovation
UTC+9: 2:00-2:30
UTC+2: 19:00-19:30
UTC-7: 10:00-10:30
ONLINE: Meet Someone!

UTC+9: 4:00-4:30
UTC+2: 21:00-21:30
UTC-7: 12:00-12:30
ONLINE: Meet Someone!
UTC+9: 4:30-6:00
UTC+2: 21:30-23:00
UTC-7: 12:30-14:00
ONLINE: CRAFT X

CRAFT 125: "Application Denied: A Global Coalition to Expose and Resist Discrimination in Automated Decisions Online"
Ushnish Sengupta and Peaks Krafft

CRAFT 167 (continued): "Communication Across Communities in Machine Learning Research and Practice"
Adriana Romero Soriano, Caglar Gulcehre, Jessica Forde, Levent Sagun, Negar Rostamzadeh, Samuel Bell, Seyi Olojo and Stefano Sarao Mannelli

CRAFT 21: "Applying a justice framework to natural language processing (NLP): Decentering standard language ideology in pursuits of fair and equitable NLP"
Genevieve Smith, Julia Nee and Ishita Rustagi
UTC+9: 5:00-6:00
UTC+2: 22:00-23:00
UTC-7: 13:00-14:00
ONLINE: Rescheduled Tutorial
"…Contextual Integrity is the worst definition of privacy, except for all the others that have been tried…" (adapted from Winston S. Churchill, 11 November 1947)
Helen Nissenbaum
UTC+9: 6:00-6:30
UTC+2: 23:00-23:30
UTC-7: 14:00-14:30
Break
UTC+9: 6:30-8:00
UTC+2: 23:30-1:00
UTC-7: 14:30-16:00
ONLINE: CRAFT XI and Tutorials X
TUTORIAL (IMPLICATIONS): "Model Monitoring in Practice: Lessons Learned and Open Challenges"
Krishnaram Kenthapadi, Hima Lakkaraju, Pradeep Natarajan, and Mehrnoosh Sameki
TUTORIAL (TRANSLATION/DIALOGUE): "Impacts of Data Privacy and Equity on Public Policy"
Ferdinando Fioretto and Claire McKay Bowen
TUTORIAL: "Why Contact Tracing Apps Failed: a Postmortem"
Baobao Zhang
CRAFT 125 (continued): "Application Denied: A Global Coalition to Expose and Resist Discrimination in Automated Decisions Online"
Ushnish Sengupta and Peaks Krafft
CRAFT 21 (continued): "Applying a justice framework to natural language processing (NLP): Decentering standard language ideology in pursuits of fair and equitable NLP"
Genevieve Smith, Julia Nee and Ishita Rustagi
CRAFT 50: "Addressing Ageism in AI"
Robin Brewer, Clara Berridge, Mark Díaz and Christina Fitzpatrick
UTC+9: 6:30-7:30
UTC+2: 23:30-0:30
UTC-7: 14:30-15:30
ONLINE: Rescheduled Tutorial
"Competing for the Curb: The Anticipatory Politics of Micromobility"
Sarah Fox
UTC+9: 8:00-9:00
UTC+2: 1:00-2:00
UTC-7: 16:00-17:00
IN-PERSON AND ONLINE: Keynote IV
André Brock (Chair: Alice Xiang)  
Room 202, 203 (180, 180)
UTC+9: 9:00-9:30
UTC+2: 2:00-2:30
UTC-7: 17:00-17:30
Break
UTC+9: 9:30-10:30
UTC+2: 2:30-3:30
UTC-7: 17:30-18:30
IN-PERSON AND ONLINE: Keynote Panel III
Algorithmic Governance of the Public Sphere
Meredith Clark, Tarleton Gillespie, Daphne Keller, Jon Kleinberg (Chair: Seth Lazar)
Room 202, 203 (180, 180)
UTC+9: 10:30-10:45
UTC+2: 3:30-3:45
UTC-7: 18:30-18:45
Break
UTC+9: 10:45-11:45
UTC+2: 3:45-4:45
UTC-7: 18:45-19:45
IN-PERSON AND ONLINE: Keynote Panel and CRAFT
CRAFT 999: "Mapping Data Stories in the Surveillance Ecosystem"
Jennifer Lee, Micah Epstein
Room 201

Karen Hao in Conversation with William Isaac (Chair: Alice Xiang)
Room 202, 203 (180, 180)
UTC+9: 11:45-12:45
UTC+2: 4:45-5:45
UTC-7: 19:45-20:45
Lunch
UTC+9: 12:45-13:45
UTC+2: 5:45-6:45
UTC-7: 20:45-21:45
IN-PERSON: Proceedings 16
16A Identity and Visual/Linguistic Processing
16B Fair Ranking
16C Human and Machine Decision-Making
848 Surfacing Racial Stereotypes through Identity Portrayal
620 Markedness in Visual Semantic AI
622 Evidence for Hypodescent in Visual Semantic AI
637 Measuring Fairness of Rankings under Noisy Sensitive Information
278 Subverting Fair Image Search with Generative Adversarial Perturbations
1010 Fair ranking: a critical review, challenges, and future directions
973 Model Explanations with Differential Privacy
853 "There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making
986 Marrying Fairness and Explainability in Supervised Learning
Session Chair (16A): Chinasa T. Okolo
Session Chair (16B): Frederik Zuiderveen Borgesius
Session Chair (16C): Aparna Balagopalan
Room 201 Room 202 Room 203
UTC+9: 13:45-14:00
UTC+2: 6:45-7:00
UTC-7: 21:45-22:00
Break
UTC+9: 14:00-15:00
UTC+2: 7:00-8:00
UTC-7: 22:00-23:00
IN-PERSON: Proceedings 17
17A AI in Experimental Health Applications
17B Understanding and Achieving Structural Societal Change
17C Responsible Data Management
558 BCIs and human rights: Brave new rights for a brave new world
410 Multi-disciplinary fairness considerations in machine learning for clinical trials
175 Four Years of FAccT: A Reflexive, Mixed-Methods Analysis of Research Contributions, Shortcomings, and Future Prospects
929 Models for understanding and quantifying feedback in societal systems
828 Locality of Technical Objects and the Role of Structural Interventions for Systemic Change
293 The Long Arc of Fairness: Formalisations and Ethical Discourse
537 Smallset Timelines: A Visual Representation of Data Preprocessing Decisions
738 Adaptive Sampling Strategies to Construct Equitable Training Datasets
940 On the Existence of Simpler Machine Learning Models
Session Chair (17A): Frederik Zuiderveen Borgesius
Session Chair (17B): Virgínia Fernandes Mota
Session Chair (17C): Amanda Coston
Room 201 Room 202 Room 203
UTC+9: 15:00-15:30
UTC+2: 8:00-8:30
UTC-7: 23:00-23:30
Break
UTC+9: 15:30-16:30
UTC+2: 8:30-9:30
UTC-7: 23:30-0:30
IN-PERSON: Proceedings 18
18A AI and Society
18B Novel Normative Challenges and Proposals
18C Hate Speech and Mistranslation
304 The Death of the Legal Subject: How Predictive Algorithms Are (Re)constructing Legal Subjectivity
887 What is the Bureaucratic Counterfactual? Categorical versus Algorithmic Prioritization in U.S. Social Policy
228 Affirmative Algorithms: Relational Equality as Algorithmic Fairness
80 Designing Up with Value-Sensitive Design: Building a Field Guide for Ethical Machine Learning Development
514 Limits of individual consent and models of distributed consent in online social networks
754 Should attention be all we need? The ethical and epistemic implications of unification in machine learning
846 Assessing Annotator Identity Bias via Item Response Theory: A Case Study in a Hate Speech Corpus
334 Exploring the Role of Grammar and Word Choice in Bias Toward African American English (AAE) in Hate Speech Classification
326 Understanding and being understood: user strategies for identifying and recovering from mistranslations in machine translation-mediated chat
Session Chair (18A): Robert Gorwa
Session Chair (18B): Robindra Prabhu
Session Chair (18C): Mayra Russo
Room 201 Room 202 Room 203
UTC+9: 16:30-17:00
UTC+2: 9:30-10:00
UTC-7: 0:30-1:00
Break
UTC+9: 17:00-18:00
UTC+2: 10:00-11:00
UTC-7: 1:00-2:00
IN-PERSON: Town Hall
Town Hall
Room 202
UTC+9: 18:00-21:30
UTC+2: 11:00-14:30
UTC-7: 2:00-5:30
Break
UTC+9: 21:30-22:00
UTC+2: 14:30-15:00
UTC-7: 5:30-6:00
ONLINE: Meet Someone!
UTC+9: 22:00-23:00
UTC+2: 15:00-16:00
UTC-7: 6:00-7:00
ONLINE: Town Hall
Town Hall (Online)