The Fourth Workshop on Fact Extraction and Verification

10th–11th November 2021 - Co-located with EMNLP 2021

About

With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.), so we are limited by our ability to transform free-form text into structured knowledge. There is, however, another problem that has become the focus of much recent research and media coverage: false information coming from unreliable sources. [1] [2]

The FEVER workshops provide a venue for work on verifiable knowledge extraction and aim to stimulate progress in this direction.

Key Dates

Research Track

To bring together researchers working on the various tasks related to fact extraction and verification, the workshop welcomes submissions on related topics such as recognizing textual entailment, question answering, and argumentation mining.

  • Submission deadline: 5th August 2021
  • Notification: 5th September 2021
  • Camera-ready: 15th September 2021
  • Workshop: 10th–11th November 2021 (EMNLP)

Shared Task Track

We will be hosting a new shared task with a new dataset for 2021. Details to be announced!

The approximate timeline is as follows:

  • Training data release: May 2021
  • Test data release: July 2021
  • System descriptions due: 5th August 2021

All deadlines are 11:59pm Anywhere on Earth (UTC-12).

How to Participate in the Challenge

1) Join the Slack Group

Join the Slack Group for chat and updates: https://fever2018.slack.com.

2) Register on our Codalab page

Data can be downloaded and submissions evaluated on the Codalab competition page https://competitions.codalab.org/competitions/18814.
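
As a rough illustration of getting started, the sketch below assumes the released training data follows the JSON Lines format used in previous FEVER shared tasks (one JSON object per line carrying a claim, a label, and evidence annotations); the exact field names for the 2021 dataset may differ, so treat them as placeholders.

```python
import json

def load_fever_jsonl(path):
    """Load a FEVER-style JSON Lines file (one JSON object per line).

    In earlier FEVER releases each object carried an "id", a "claim",
    a "label" (SUPPORTS / REFUTES / NOT ENOUGH INFO) and an "evidence"
    field; the 2021 dataset may use a different schema.
    """
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                examples.append(json.loads(line))
    return examples

if __name__ == "__main__":
    train = load_fever_jsonl("train.jsonl")  # hypothetical file name
    print(len(train), "claims loaded")
    print(train[0].get("claim"), "->", train[0].get("label"))
```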

3) Develop your Systems

The shared task guidelines are available on the task page.

A simple baseline (described in this NAACL 2018 paper preprint), the scorer code, and the annotation UI source code are available on our GitHub page.
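
The official scorer on GitHub is the authoritative implementation; purely to illustrate the idea behind the FEVER score, the unofficial sketch below counts a claim as correct only if its predicted label matches the gold label and, for verifiable claims, the predicted evidence covers at least one complete gold evidence set. The field names, the (page, sentence) evidence encoding, and the five-sentence evidence cap are assumptions carried over from the earlier shared task.

```python
def fever_score_sketch(predictions, gold, max_evidence=5):
    """Unofficial, simplified FEVER-score sketch (not the released scorer).

    predictions: dicts with "predicted_label" and "predicted_evidence",
        the latter a list of (page, sentence_id) pairs.
    gold: dicts with "label" and "evidence", the latter a list of gold
        evidence sets, each a list of (page, sentence_id) pairs.
    """
    correct = 0
    for pred, ref in zip(predictions, gold):
        if pred["predicted_label"] != ref["label"]:
            continue  # wrong label: no credit regardless of evidence
        if ref["label"] == "NOT ENOUGH INFO":
            correct += 1  # unverifiable claims need no evidence
            continue
        predicted = {tuple(e) for e in pred["predicted_evidence"][:max_evidence]}
        # Credit only if some gold evidence set is fully retrieved.
        if any(all(tuple(e) in predicted for e in gold_set)
               for gold_set in ref["evidence"]):
            correct += 1
    return correct / len(gold) if gold else 0.0
```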

4) (July) Evaluate on the Test Set and Submit Your System Descriptions

We will open the blind test set for scoring in July and accept system description papers on Softconf for the workshop at EMNLP 2021. More details to follow soon.

The Softconf submission page for Shared Task system descriptions and workshop papers is https://www.softconf.com/emnlp2018/FEVER.

Invited Speakers

Mohit Bansal

UNC Chapel Hill

Mirella Lapata

University of Edinburgh

Maria Liakata

Queen Mary University of London

Pasquale Minervini

University College London

Preslav Nakov

Qatar Computing Research Institute

Steven Novella

Yale University School of Medicine

Brendan Nyhan

Dartmouth College

Workshop Organising Committee

Rami Aly

University of Cambridge

Oana Cocarascu

King's College London

Christos Christodoulopoulos

Amazon

James Thorne

University of Cambridge

Zhijiang Guo

University of Cambridge

Michael Schlichtkrull

University of Cambridge

Arpit Mittal

Facebook

Andreas Vlachos

University of Cambridge