Schedule

9:00-9:00 Opening
FEVER Organizers
9:00-9:45 Human Values in Recommender Systems: a Multi-Disciplinary Discussion
Alon Halevy, Meta AI
9:45-10:30 Data collection, bias mitigation and hate speech detection in multiple languages
Alice Oh, KAIST
10:30-11:00 Coffee Break
11:00-11:45 Supporting professional fact-checking: how can NLP/AI help?
Carolina Scarton, University of Sheffield
11:45-12:30 Contributed Talks
Neural Machine Translation for Fact-Checking Temporal Claims
Marco Mori, Paolo Papotti, Luigi Bellomarini and Oliver Giudice
Automatic Fake News Detection: Are current models “fact-checking” or “gut-checking”?
Ian Kelk, Benjamin Basseri, Wee Yi Lee, Richard Qiu and Chris Tanner
Retrieval Data Augmentation Informed by Downstream Question Answering Performance
James Ferguson, Hannaneh Hajishirzi, Pradeep Dasigi and Tushar Khot
12:30-14:00 Lunch
14:00-14:30 In-person poster session
XInfoTabS: Evaluating Multilingual Tabular Natural Language Inference
Bhavnick Singh Minhas, Anant Shankhdhar, Vivek Gupta, Divyanshu Aggarwal and Shuo Zhang
PHEMEPlus: Enriching Social Media Rumour Verification with External Evidence
John Dougrez-Lewis, Elena Kochkina, Miguel Arana-Catania, Maria Liakata and Yulan He
A Semantics-Aware Approach to Automated Claim Verification
Blanca Calvo Figueras, Montse Cuadros Oller and Rodrigo Agerri
Graph and Attention Based Fact Verification and Heterogeneous COVID-19 Claims Dataset
Miguel Arana-Catania, Elena Kochkina, Arkaitz Zubiaga, Maria Liakata, Robert Procter and Yulan He
14:30-15:00 Online poster session
Heterogeneous-Graph Reasoning and Fine-Grained Aggregation for Fact Checking
Hongbin Lin and Xianghua Fu
Distilling Salient Reviews with Zero Labels
Chieh-Yang Huang, Jinfeng Li, Nikita Bhutani, Alexander Whedon, Estevam Hruschka and Yoshi Suhara
Synthetic Disinformation Attacks on Automated Fact Verification Systems
Yibing Du, Antoine Bosselut and Christopher D Manning
15:00-15:30 Coffee Break
15:30-16:15 Content moderation on encrypted platforms
Kiran Garimella, Rutgers University
16:15-17:00 Problematic Information on Social Media Platforms: Understanding and Countering
Tanu Mitra, University of Washington
17:00-17:00 Closing Remarks
FEVER Organizers

Invited Talks

Content moderation on encrypted platforms
Kiran Garimella

I will start the talk with our recent work on collecting and analyzing data from WhatsApp. I’ll summarize work on how WhatsApp is used by political parties in India to spread misinformation and hate speech. Next, I will delve into developing solutions for content moderation on WhatsApp, which is non-trivial due to the end-to-end encrypted nature of the platform. I will present two solutions: one based on an on-device approach to content moderation, applied before the content is encrypted, and the other based on a crowdsourced, bottom-up model for fact-checking. I will end with a discussion of potential future research directions in this space.



Human Values in Recommender Systems: a Multi-Disciplinary Discussion
Alon Halevy

In recent years, researchers and practitioners at companies have paid significant attention to mitigating harms that can arise in the online world, such as misinformation, hate speech and other integrity violations. The primary reason we want to limit these harms is that they violate human values we hold dear as individuals and as societies. This raises a set of broader questions: what are the human values we should be incorporating into our online recommender systems, and how can we shift some of our attention to ensuring that recommender systems provide benefits, rather than focusing only on their possible harms? In the first part of this talk, I will share some observations from a discussion on the topic of human values that included experts from the fields of AI, HCI, psychology, policy, law and journalism. In the second half of the talk, I’ll describe how addressing human values also raises new challenges at the intersection of multiple fields: natural language processing, information retrieval, and database and knowledge graph management.



Problematic Information on Social Media Platforms: Understanding and Countering
Tanu Mitra

Online social media platforms have brought numerous positive changes, including access to vast amounts of news and information. Yet, those very opportunities have created new challenges—our information ecosystem is now rife with problematic content, ranging from misinformation and conspiracy theories to hateful and incendiary propaganda. In this talk, I will focus on one aspect of problematic online information: conspiracy theories. Leveraging data spanning millions of conspiratorial posts on Reddit, 4chan, and 8chan, I will present scalable methods to unravel who participates in online conspiratorial discussions, and what causes users to join conspiratorial communities and then potentially abandon them. I will close by previewing important new opportunities to counter online misinformation, including conducting social audits to defend against algorithmically generated misinformation and designing socio-technical interventions to promote meaningful credibility assessment of information.



Data collection, bias mitigation and hate speech detection in multiple languages
Alice Oh

There are thousands of languages in the world, and they are fully understood only in their cultural context. What may be understood as an obvious fact in English may not be so obvious in another language, and social biases exacerbate this problem of miscommunication across languages. I will begin this talk by describing how we build datasets for low- and medium-resource languages. As part of that topic, I will present ongoing work on annotation with language learners. I will then present research on analyzing and mitigating bias in several different languages. In the last part of the talk, I will discuss hate speech and how it can vary across languages.



Supporting professional fact-checking: how can NLP/AI help?
Carolina Scarton

The task of professionally debunking disinformation narratives is not trivial, and the significant increase in information shared online has placed a huge strain on the work of journalists and fact-checkers. In particular, with the COVID-19 pandemic, the WHO coined the term "infodemic" to describe a scenario where the high volume of information (including disinformation) during the pandemic outbreak resulted in confusion and mistrust in reliable authorities. Disinformation is also known to spread quickly, highlighting the need for timely debunks and mitigation actions. In this talk, I will present Natural Language Processing (NLP) approaches for supporting the work of professional fact-checkers. Drawing on my research group's experience of working closely with journalists and fact-checkers, I will discuss the main challenges in developing such tools, bridging the gap between research and real-world applications. Finally, the importance of fairness and explainability in models' decisions will also be discussed.



Call For Papers

With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to reason about claims in a wide range of domains. However, in order to do so, we need to ensure that we trust the accuracy of the sources of information we use. Handling false information coming from unreliable sources has become the focus of much recent research and media coverage. In an effort to jointly address these problems, we are organizing the 5th instalment of the Fact Extraction and VERification (FEVER) workshop (http://fever.ai/) to promote research in this area. The workshop will be co-located with ACL 2022 and will be held either in Dublin or virtually, depending on circumstances.

We invite long and short papers on all topics related to fact extraction and verification, including:

  • Information Extraction
  • Semantic Parsing
  • Knowledge Base Population
  • Natural Language Inference
  • Textual Entailment Recognition
  • Argumentation Mining
  • Machine Reading and Comprehension
  • Claim Validation/Fact checking
  • Question Answering
  • Information Retrieval and Seeking
  • Theorem Proving
  • Stance detection
  • Adversarial learning
  • Computational journalism
  • Descriptions of systems for the FEVER, FEVER 2.0 and FEVEROUS Shared Tasks

Long/short papers should consist of eight/four pages of original content plus unlimited pages for bibliography. Submissions must be in PDF format, anonymized for review, and follow the ACL 2022 conference submission guidelines, using the LaTeX style files, Word templates or the Overleaf template from the official ACL website. The submission page is here: https://openreview.net/group?id=aclweb.org/ACL/2022/Workshop/FEVER

Each long paper submission consists of up to eight (8) pages of content, plus unlimited pages for references; final versions of long papers will be given one additional page (up to nine pages with unlimited pages for references) so that reviewers’ comments can be taken into account.

Each short paper submission consists of up to four (4) pages of content, plus unlimited pages for references; final versions of short papers will be given one additional page (up to five pages in the proceedings and unlimited pages for references) so that reviewers’ comments can be taken into account.

The review process will be double-blind (two-way anonymized review). Please do not include any self-identifying information in the submission. Papers can be submitted as non-archival, so that their content can be reused for other venues; please add a footnote stating "NON-ARCHIVAL submission" on the first page. Non-archival papers will follow the same submission guidelines and, if accepted, will be linked from the FEVER website but not included in the ACL proceedings. Previously published work can also be submitted in this manner, with the additional requirement of stating the original publication venue on the first page. In this case, the paper does not need to be anonymized.

ACL Rolling Review

We welcome submissions already reviewed via the ACL Rolling Review.

FEVER Shared Tasks

We encourage continued participation in the existing FEVER, FEVER 2.0 and FEVEROUS shared tasks. We will accept system description papers for all three previous shared tasks. For more information on the shared tasks please visit the following pages: FEVER, FEVER 2.0, and FEVEROUS.

Important dates

  • Submission deadline:
    • For papers submitted via OpenReview directly for review: 3rd of March 2022 (extended, was 28th of February)
    • For papers already reviewed via ARR: 25th of March 2022
  • Notification: 1st of April 2022
  • Camera-ready deadline: 10th of April 2022
  • Workshop: 26th of May 2022

All deadlines are 11:59 pm GMT.

Workshop Organising Committee

Rami Aly, University of Cambridge
Christos Christodoulopoulos, Amazon
Oana Cocarascu, King's College London
Zhijiang Guo, HKUST (GZ)
Arpit Mittal, Meta
Michael Schlichtkrull, Queen Mary University of London
James Thorne, KAIST AI
Andreas Vlachos, University of Cambridge