FEVER 3 Workshop

Virtual Workshop Format

Like the rest of ACL 2020, our workshop was held virtually at this address (registration required). All presentations were pre-recorded but were also included in the livestream. Each of the two live Q&A sessions involved all speakers of the talks preceding it in the schedule.

Schedule

04:00-04:15 / 12:00-12:15 Opening Remarks
FEVER Organizers
04:15-05:00 / 12:15-13:00 Project Debater [slides]
Noam Slonim
05:00-05:45 / 13:00-13:45 Towards explainable fact checking [slides]
Isabelle Augenstein
05:45-06:25 / 13:45-14:25 Oral presentations
05:45-06:05 / 13:45-14:05 Simple Compounded-Label Training for Fact Extraction and Verification [slides]
Yixin Nie, Lisa Bauer and Mohit Bansal
06:05-06:25 / 14:05-14:25 Stance Prediction and Claim Verification: An Arabic Perspective
Jude Khouja
06:25-07:25 / 14:25-15:25 Live Q&A Session
Q&A with invited talk and oral presentation speakers
07:25-09:00 / 15:25-17:00 Break
09:00-09:45 / 17:00-17:45 How to "inoculate" people against misinformation and online extremism [slides]
Jon Roozenbeek
09:45-10:30 / 17:45-18:30 Beyond Facts: The Problem of Framing in Assessing What is True [slides]
Philip Resnik
10:30-11:20 / 18:30-19:20 Poster Session
A Probabilistic Model with Commonsense Constraints for Pattern-based Temporal Fact Extraction
Yang Zhou, Tong Zhao and Meng Jiang
Developing a How-to Tip Machine Comprehension Dataset and its Evaluation in Machine Comprehension by BERT
Tengyang Chen, Hongyu Li, Miho Kasamatsu, Takehito Utsuro and Yasuhide Kawada
Language Models as Fact Checkers?
Nayeon Lee, Belinda Li, Sinong Wang, Wen-tau Yih, Hao Ma and Madian Khabsa
Maintaining Quality in FEVER Annotation
Leon Derczynski, Julie Binau and Henri Schulte
Distilling the Evidence to Augment Fact Verification Models
Beatrice Portelli, Jason Zhao, Tal Schuster, Giuseppe Serra and Enrico Santus
11:20-12:05 / 19:20-20:05 Integration of (Un)structured World Knowledge In Task Oriented Conversations
Dilek Hakkani-Tur
12:05-12:50 / 20:05-20:50 Fake Fake News and Real Fake News [slides]
Yejin Choi
12:50-13:50 / 20:50-21:50 Live Q&A Session

Invited Talks

Project Debater
Noam Slonim

Project Debater is the first AI system that can meaningfully debate a human opponent. The system, an IBM Grand Challenge, is designed to build coherent, convincing speeches on its own, as well as provide rebuttals to the opponent's main arguments. In February 2019, Project Debater competed against Harish Natarajan, who holds the world record for most debate victories, in an event held in San Francisco that was broadcast live worldwide. In this talk I will tell the story of Project Debater, from conception to a climactic final event, describe its underlying technology, and discuss how it can be leveraged for advancing decision making and critical thinking.



Towards explainable fact checking
Isabelle Augenstein

Automatic fact checking is one of the more involved NLP tasks currently researched: not only does it require sentence understanding, but also an understanding of how claims relate to evidence documents and world knowledge. Moreover, there is still no common understanding in the automatic fact checking community of how the subtasks of fact checking — claim check-worthiness detection, evidence retrieval, veracity prediction — should be framed. This is partly owing to the complexity of the task, despite efforts to formalise fact checking through the development of benchmark datasets. The first part of the talk will be on automatically generating textual explanations for fact checking, thereby exposing some of the reasoning processes these models follow. The second part of the talk will re-examine how claim check-worthiness is defined and how check-worthy claims can be detected, followed by how to automatically generate claims that are hard to fact-check.



How to "inoculate" people against misinformation and online extremism
Jon Roozenbeek

Our society is struggling with an unprecedented amount of falsehood, hyperbole, and half-truths. Politicians and organizations repeatedly make false claims that jeopardize the integrity of journalism. Disinformation now floods cyberspace and influences many events both online and offline. To fight false information, the need for automatic fact verification has never been more urgent. Existing studies primarily focus on free-form text as evidence, crawled from Wikipedia or news websites. The direction of using semi-structured knowledge, such as relational tables, as evidence has yet to be explored systematically. In this talk, we will mainly focus on introducing a new benchmark dataset called TabFact, which allows us to systematically study the fact verification problem with semi-structured tables as evidence.



Beyond Facts: The Problem of Framing in Assessing What is True
Philip Resnik

Significant progress has been made recently in using NLP techniques to identify facts and the relationships between them. In this talk, I will argue that in formulating problems related to the assessment of truth, it is important to take into account the human process of interpretation. This potentially motivates a shift in thinking about fact extraction, from a problem that is fundamentally about engineering and addressed by NLP and machine learning, to a richer combination of engineering and scientific inquiry that overlaps more significantly with questions in the social and cognitive sciences.



Integration of (Un)structured World Knowledge In Task Oriented Conversations
Dilek Hakkani-Tür

The majority of previous studies on task-oriented dialogue systems are restricted to a limited coverage of APIs related to the set of tasks considered in the application domain. However, users often have domain-related requests that are not covered by these APIs, even for their task-focused intents. To enable natural interactions with machines, we propose to expand the coverage of task-oriented dialogue systems by incorporating external, unstructured knowledge sources, such as web documents related to the task domain. We recently introduced an augmented version of the MultiWOZ 2.1 multi-domain task-oriented dialogue corpus, which includes sub-dialogues of out-of-API-coverage turns and responses grounded on external knowledge sources. In this talk, I'll review our work in this area and summarize our initial findings, focusing on challenges related to factual accuracy.



Fake Fake News and Real Fake News
Yejin Choi

Are fake fake news as bad as real fake news? Are all fake news bad and all real news good? Is fact-checking all we need? In the first part of the talk, I will present our recent study that investigates the extent to which state-of-the-art neural language models can generate fake news and learn to sort them out from real news. Next, I will share our recent efforts to reason about the malicious intents behind human-manipulated images. I will conclude the talk by returning to the three questions posed above in light of three levels of analysis of fake news: perceptual (i.e., distributional fingerprints), semantic (i.e., fact-checking), and pragmatic (i.e., malicious vs. benign intent), and open the floor for discussion.



Call for Papers

With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question.
However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to transform free-form text to structured knowledge.
There is, however, another problem that has become the focus of a lot of recent research and media coverage: false information coming from unreliable sources.

In an effort to jointly address both problems, we are organizing the third instalment of the Fact Extraction and VERification (FEVER) workshop (following the 2018 and 2019 workshops) to promote research in these areas.

Submissions

We invite long and short papers on all topics related to fact extraction and verification, including:

  • Information Extraction
  • Semantic Parsing
  • Knowledge Base Population
  • Natural Language Inference
  • Textual Entailment Recognition
  • Argumentation Mining
  • Machine Reading and Comprehension
  • Claim Validation/Fact Checking
  • Question Answering
  • Theorem Proving
  • Stance Detection
  • Adversarial Learning
  • Computational Journalism
  • System demonstrations on the FEVER and FEVER 2.0 Shared Tasks

Long papers may consist of up to eight pages of content and short papers up to four, plus unlimited pages for bibliography. Submissions must be in PDF format, anonymized for review, and follow the ACL 2020 two-column format, using the LaTeX style files, Word templates, or the Overleaf template from the official ACL website.

Each long paper submission consists of a paper of up to eight (8) pages of content, plus unlimited pages for references; final versions of long papers will be given one additional page (up to nine pages with unlimited pages for references) so that reviewers’ comments can be taken into account.

Each short paper submission consists of up to four (4) pages of content, plus unlimited pages for references; final versions of short papers will be given one additional page (up to five pages in the proceedings and unlimited pages for references) so that reviewers’ comments can be taken into account.

Papers can be submitted as non-archival, so that their content can be reused for other venues. Please select the "NON-ARCHIVAL" submission type in Softconf. Non-archival papers will be linked from this webpage.

Authors can also submit extended abstracts of up to eight pages of content. Add "(EXTENDED ABSTRACT)" to the title of an extended abstract submission. Extended abstracts will be presented as talks or posters if selected by the program committee, but not included in the proceedings. Thus, your work will retain the status of being unpublished and later submission at another venue is not precluded.

Previously published work can also be submitted as an extended abstract in the same way, with the additional requirement that the original publication be stated on the first page.

Softconf submission link: http://softconf.com/acl2020/FEVER

FEVER Shared task

We encourage continued participation in the existing FEVER and FEVER 2.0 shared tasks, and we will accept system description papers for both. For more information, please visit the shared task pages: FEVER and FEVER 2.0.

Important dates

  • First call for papers: 20 November 2019
  • Second call for papers: 20 January 2020
  • Third (final) call for papers: 20 March 2020
  • Submission deadline: 10 April 2020
  • Notification: 4 May 2020
  • Camera-ready deadline: 21 May 2020
  • Workshop: 9 July 2020 (at ACL 2020)

All deadlines are 11:59pm Pacific Daylight Time (UTC-7).

Organizers

James Thorne

University of Cambridge

Andreas Vlachos

University of Cambridge

Oana Cocarascu

Imperial College London

Christos Christodoulopoulos

Amazon Research Cambridge

Arpit Mittal

Amazon Research Cambridge