This document is addressed to program chairs, track chairs, and program committee members. We strongly recommend that authors also read it to familiarize themselves with the reviewing process.

Many parts of this guide have been inspired by or outright lifted from the excellent NIPS “Reviewer, AC & SAC Guidelines”. We’ve marked those sections with ‡.

1. Overview

ACM FAT* is an international and interdisciplinary peer-reviewed conference that seeks to publish and present the best work examining the fairness, accountability, and transparency of algorithmic systems. The research community is young but growing fast.

We consider your role as program chairs, track chairs, and program committee members essential to ensuring that the community and the ACM FAT* conference grow to be mature, rigorous, and exciting.

The objectives of the reviewing process are:

  1. to ensure that the best content is presented at the conference, and
  2. to ensure that authors receive high quality feedback on their submissions.

The first objective ensures that ACM FAT* is considered a top tier venue for the discussion of fairness, accountability, and transparency of algorithmic systems. “Best” can be defined in a variety of ways, including quality, clarity, originality, interdisciplinarity and significance. We will expand on this below.

The second ensures that we are rigorous in our assessment and transparent in our decision-making, and that we invest in future submissions to ACM FAT* or associated conferences.

To this end, we adopt a double-blind peer review system consisting of:

  • Program Chairs: responsible for aggregating decisions from track chairs; responsible for the main conference program.
  • Track Chairs: responsible for aggregating recommendations from the program committee; responsible for the track’s program.
  • Program Committee: responsible for reviewing specific submissions and providing recommendations and feedback.

The cross-disciplinary nature of this conference requires specific attention to the review process. This year we have a dedicated LAW track and a dedicated SSH (social sciences & humanities) track alongside the CS (computer science) track, plus two further tracks dedicated to cross-disciplinary education and to practice and experience.

Each paper will be peer reviewed by three reviewers from the paper's own subdiscipline, and papers that meet a quality threshold will subsequently be reviewed by one reviewer from a different discipline (a "cross-disciplinary review"):

  • Peer Reviews: performed by three experts in the same discipline as the paper.
  • Cross-Disciplinary Reviews: performed by an academic in a different discipline.

Peer reviews have different objectives from cross-disciplinary reviews: the latter do not check the methodological integrity of the paper, but rather the extent to which it takes into account the assumptions and/or implications that concern the other discipline. Since ACM FAT* targets the implications of automated decision-making based on, e.g., machine learning, the cross-disciplinary reviews are pivotal. They should be respected and taken into account, but their comments should not be conflated with the peer reviews.

1.1. Objectives

Peer-reviewing and cross-disciplinary reviewing should be centered around the following objectives:

  1. Providing thorough peer reviews/cross-reviews for the track chair to,
    1. understand the submission at a high level, and
    2. compare the submission to other submissions,
  2. Providing thorough peer reviews/cross-reviews for the authors to,
    1. understand your opinion, and
    2. prepare the eventual rebuttal and revise their work for final presentation,
  3. Coordinating with other reviewers and track chairs by,
    1. completing peer reviews/cross-reviews in a timely manner, and
    2. being available for discussion.

1.2. Best Practices‡

  • With great power comes great responsibility! Take your job seriously and be fair.
  • Write thoughtful and constructive reviews. Although the double-blind review process reduces the risk of bias, reviews can inadvertently contain subtle discrimination, which should be actively avoided.
  • Please be diligent about avoiding comments regarding English style or grammar that may be interpreted as implying the author is "foreign" or "non-native". If English style or grammar is an issue, please write your review politely, and avoid language that could be perceived as discriminatory. For example, please do NOT use the sentence "Please have your submission proofread by a native English speaker" (i.e., avoid the phrase "native English speaker"). Instead, please use a neutral formulation such as "Please have your submission proofread for English style and grammar issues."
  • It is not fair to dismiss any submission without having thoroughly read it. Think about the times when you received an unfair, unjustified, short, or dismissive review. Try not to be that reviewer! Always be constructive and help the authors understand your viewpoint, without being dismissive or using inappropriate language. If you need to cite existing work to justify one of your comments, please be as precise as possible and give a complete citation.

Team work:

  • Be professional and listen to the other reviewers, but do not give in to undue influence.
  • Engage actively in the discussion phase for each of the submissions that you are assigned, even if you are not specifically prompted to do so by the corresponding track chair.
  • It is okay to be unavailable for part of the review process (e.g., on vacation for a few days), but if you will be unavailable for more than that -- especially during important windows (e.g., discussion, decision-making) -- you must let the Track Chairs know.

Conflicts of interest:

  • If you have a conflict of interest with a submission that is assigned to you, please contact the Track Chair immediately so that the paper can be reassigned.

Author identities:

  • Under no circumstances should you attempt to find out the identities of the authors for any of your assigned submissions (e.g., by searching on Google or arXiv). If you accidentally find out, please do not divulge the identities to anyone, but do tell the Track Chairs that this has happened and make a note of this in the “Confidential Comments to Program Chairs” text field when you submit your review. You should not let the authors’ identities influence your decision in any way.

Confidentiality:

  • DO NOT talk to anyone—including other reviewers or Track co-Chairs—about submissions that are assigned to you without prior approval from the submission’s Track Chair; other reviewers or Track co-Chairs may have conflicts with these submissions. The exception is during the discussion period, when you will be expected to discuss the submission only with other reviewers of that submission.
  • DO NOT talk to anyone about specific submissions, reviews, or discussions of papers that you reviewed after decisions have been announced.
  • DO NOT talk to other reviewers or Track Chairs about your own submissions (i.e., submissions you are an author on) or submissions with which you have a conflict.

1.3. Unintentional Identification of Authors

Reviewers should never try to identify authors, but in certain cases a reviewer may unintentionally yet correctly deduce the authors’ identities despite the authors’ best efforts at anonymity.

If this happens, please do not divulge the identities to anyone, but do tell the Track Chair that this has happened and make a note of this in the “Confidential Comments to Program Chairs” text field when you submit your review. Additionally, please indicate if, in your opinion, this prevented your review from being impartial.

1.4. Reviewing platform and bidding

ACM FAT* 2020 is using the FAT Conf 2020 HotCRP instance. Reviewers will use HotCRP to set conflicts, indicate areas of interest and sub-disciplines, bid for papers, submit reviews, and participate in discussion.

Bidding. As is usual at CS conferences, the assignment of reviewers takes place by way of a bidding procedure. This means that you provide your preferences (among other things, concerning areas of interest and possibly sub-disciplines) on the dedicated website. Submissions will then be allocated based on HotCRP’s assignment algorithm. This algorithm uses both your topic preferences and the order of your preferences to determine assignments. It is therefore in your best interest to provide your preferences during the bidding phase, as the sketch below illustrates.
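For intuition only, here is a minimal sketch of how stated preferences can drive an automatic reviewer-paper matching. It is not HotCRP's actual algorithm; the preference matrix, the reviewer and paper counts, and the one-reviewer-per-paper setup are illustrative assumptions.

    # Illustrative sketch of preference-based assignment (NOT HotCRP's algorithm).
    # Toy assumption: three reviewers, three papers, one reviewer per paper.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical bids: rows = reviewers, columns = papers;
    # higher numbers mean a stronger stated preference.
    preferences = np.array([
        [3, 0, 1],
        [1, 2, 0],
        [0, 3, 2],
    ])

    # The solver minimizes total cost, so convert preferences into costs
    # and find the matching that maximizes the total expressed preference.
    cost = preferences.max() - preferences
    reviewer_idx, paper_idx = linear_sum_assignment(cost)

    for r, p in zip(reviewer_idx, paper_idx):
        print(f"Reviewer {r} -> Paper {p} (bid strength {preferences[r, p]})")

The point of the sketch is simply that an unexpressed preference looks the same to the matcher as a zero-interest bid, so bidding carefully directly improves the set of papers you are assigned.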

2. Review content

2.1. Peer-Review/Cross-Disciplinary Review Content‡

Review content serves two purposes. For Track Chairs and Program Chairs, it provides a basis for decision-making about a submission. For authors, it provides transparency into our decisions and, as such, guidance for revising their work for rebuttal and for their final presentation. Please make your review as detailed and informative as possible; short, superficial reviews that venture uninformed opinions or guesses are worse than no review since they may result in the rejection of a high-quality submission.

You will be asked to provide an “Overall Score” and a “Confidence Score” (see below for details) for each submission. You should explain these values in the "Detailed Comments" text field. Your comments should include the following sections:

  • Summary: Summarize the main ideas of the submission and relate these ideas to previous work at ACM FAT* and in other archival conferences and discipline-specific journals. Although this part of the review may not provide much new information to authors, it is invaluable to Track Chairs and Program Chairs.
  • Quality: ACM FAT* is committed to strengthening the disciplinary and interdisciplinary areas it touches by maintaining a high standard of quality in the submissions it accepts. Are claims well supported by theoretical analysis/exploration or experimental results? Is this a complete piece of work or work in progress? Are the authors careful and honest about evaluating both the strengths and weaknesses of their work? See the description of each track for discipline-specific quality criteria.
  • Clarity: Is the submission clearly written? Is it well organized? Does it adequately inform the reader? Are the claims or arguments explored in sufficient depth? A superbly written paper provides enough information for an expert reader to reproduce its results.
  • Originality: Are the analyses, tasks, theories or methods new? Is the work a novel combination of well-known research? Is it clear how this work differs from previous contributions? Is related work adequately cited? Law track: note that the paper must be accessible to a CS audience, which may require compromises in terms of, e.g., doctrinal technicalities.
  • Significance/Impact: Are the results or conclusions important or insightful? Are the results well grounded, and do the conclusions avoid exaggerating the submission's merits? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a problem in a better way than previous work or advance the state of the art in a demonstrable way? In the case of CS or quantitative SSH/Law work, does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach? In the case of qualitative SSH/Law work, does it have a clear added value compared to existing literature?
  • Relevance: Does the submission have relevance to ACM FAT*? If you believe that a submission is out of scope for ACM FAT*, then please justify this judgment appropriately. In formulating this judgment, you may find it helpful to review the Topics of Interest found in the Call for Papers. Note that this list of topics was not meant to be all-inclusive. We welcome submissions that address other important problems surrounding the fairness, accountability and transparency of socio-technical systems.

Please comment on and take into account the strengths of the submission. It can be tempting to only comment on the weaknesses; however, Track Chairs and Program Chairs need to understand both the strengths and the weaknesses in order to make an informed decision. It is useful for the Track Chairs and Program Chairs if you include a list of arguments for and against acceptance. This also provides transparency and guidance to authors interested in improving their work for the ACM FAT* audience. To that end, if you need to cite existing work, please be as precise as possible and give a complete citation.

Your comments should be detailed, specific, and polite. Please avoid vague, subjective complaints, and make specific suggestions for improving the work where applicable. Always be constructive and help the authors understand your viewpoint, without being dismissive. Using inappropriate or derogatory language is not acceptable.

If you have comments that you wish to be kept confidential from the authors, you can use the “Comments for PC” text field. Such comments might include explicit comparisons of the submission to other submissions and criticisms that are more bluntly stated. If you accidentally find out the identities of the authors, please do not divulge the identities to anyone, but do tell your Track Chair that this has happened and make a note of this in the “Confidential Comments to Track Chairs and PC Chairs” text field.

2.2. Overall score‡

You will be asked to provide an “Overall Score” between 1 and 7 for each submission. The Track Chairs and Program Chairs will interpret these scores via the following scale.

  • 7: Would be a top 10% accepted paper at a top tier venue in my discipline. A “must accept”. I will fight for accepting this submission.
  • 6: Would be a top 50% accepted paper at a top tier venue in my discipline. A very good submission, clear accept. I vote and argue for accepting this submission.
  • 5: Would be an accepted paper at a top tier venue in my discipline. A good submission; an accept. I vote for accepting this submission, but would not be upset if it were rejected.
  • 4: Marginally above acceptance threshold; an accept. I tend to vote for accepting this submission, but rejecting it would not be that bad.
  • 3: Marginally below acceptance threshold. I tend to vote for rejecting this submission, but accepting it would not be that bad.
  • 2: A clear reject. I vote and argue for rejecting this submission.
  • 1: A “must reject”. The paper is fundamentally flawed (e.g., major results are trivial, wrong, or already known). I will fight for rejecting this submission.

“Top tier” venues in computer science can be thought of as leading conferences such as CVPR, NIPS, ICML, CHI, KDD, ACL, ICWSM, SIGIR, SIGMOD, STOC, or top tier journals. For LAW and SSH, this would mean acceptance in a top tier international peer-reviewed journal. Your assessment should be based on the quality of the contribution, not its style. ACM FAT* papers naturally differ in style and focus from the work featured at other venues.

You should NOT assume that you were assigned a representative sample of submissions, nor should you adjust your scores to match the overall conference acceptance rates. The “Overall Score” for each submission should reflect your assessment of the submission’s contributions.

Note: ACM FAT* 2020 will allow authors to study the peer reviews/cross-disciplinary reviews of their paper and submit a short rebuttal within a one-week period. Although you may have the opportunity to review this rebuttal and revise your recommendation, generally speaking, your overall assessment should reflect whether you believe the paper merits acceptance with at most minor revisions. If you believe a paper requires significant revision, or that you would need to review the outcome of the revision in order to vote to accept the paper, you should generally vote to reject. There is one key exception: in a very limited number of cases where an otherwise excellent paper requires a significant but actionable revision, the Program Chairs can select the submission for shepherding. A shepherd will be assigned to such submissions to oversee the revision process and confirm that the requested revisions are all carried out.

2.3. Confidence score‡

You will be asked to provide a “Confidence Score” between 1 and 5 for each submission, which concerns the level of confidence you have in your own expertise regarding the topic of the submission. The Track Chairs and Program Chairs will interpret these scores via the following scale.

  • 5: You are absolutely certain about your assessment. You are very familiar with the related work.
  • 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
  • 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
  • 2: You are willing to defend your assessment, but it is quite likely that you did not understand central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
  • 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.

Note: If you feel that your confidence rating is likely to be a 1 at the end of a review due to your lack of expertise in the given subject area, you should notify the Track Chair as early as possible in the process. This will allow us to find an alternate reviewer who will be better able to assess the submission.

2.4. Cross-Disciplinary Review and Discussion‡

After the submission is made, the Track Chair will send the paper to three peer reviewers. Once the peer reviews are in, the Track Chair decides, based on the initial peer review scores, whether or not the paper will receive a cross-disciplinary review. If so, one cross-disciplinary reviewer will be asked to review within the 14-working-day time frame set out above. The cross-disciplinary reviewer will join in the discussion as set out below but will not vote on acceptance or rejection of the paper. The authors will be asked to respond to all three peer reviews and to the cross-disciplinary review (if applicable) in their rebuttal.

The cross-disciplinary review is intended to provide insight from a different field about the potential broader impact of the work, context for the stated claims and related work (in areas or fields beyond the submission's own), and constructive feedback and potential concerns from a different perspective. The PC Chairs consider this to be an integral part of the reviewing process and a foundational step in creating impactful interdisciplinary work.

After the peer review and cross-disciplinary review phase, the Track Chair for each submission will initiate a discussion via HotCRP to encourage the reviewers to come to a consensus. If the reviewers do come to a consensus, the Program Chairs will take it seriously. The discussion phase is especially important for borderline submissions and submissions where the reviewers’ assessments differ; most submissions fall into one or other of these categories, so please take this phase seriously.

When discussing a submission, try to remember that different people have different backgrounds and different points of view. Ask yourself, “Do the other reviewers’ comments make sense?” and do consider changing your mind in light of their comments, if appropriate. That said, if you think the other reviewers are not correct, you are not required to change your mind. Reviewer consensus is valuable, but it is not mandatory.

3. Contact

If you have a question about a specific submission or about the evaluation criteria for a track, your primary point of contact should be the relevant Track Chairs, whom you can contact through the “Confidential Comments to Track Chairs and PC Chairs” text field. If necessary, this can be escalated to the PC Chairs.

If you have a question about the reviewing system, your primary point of contact should be the PC Chairs.