The First Workshop on Fact Extraction and Verification

Join us at EMNLP on 1 November 2018

Latest

  • The (preliminary) leaderboard of the Shared Task is out!

About

With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.), so we are limited by our ability to transform free-form text into structured knowledge. There is also a second problem that has become the focus of much recent research and media coverage: false information coming from unreliable sources. [1] [2]

To address both problems jointly, we propose a workshop promoting research in joint Fact Extraction and VERification (FEVER). We aim for FEVER to be a long-term venue for work in verifiable knowledge extraction. To stimulate progress in this direction, we will also host the FEVER Challenge, an information verification shared task on a dataset that we will release as part of the challenge.

The First Workshop on Fact Extraction and VERification will be held at EMNLP 2018 in Brussels. We are hosting two tracks and are seeking research papers on topics related to fact checking, as well as system descriptions of entries to the FEVER Shared Task. At the workshop, we will host invited talks and presentations of submitted papers, and announce the results and winners of the FEVER Shared Task.

Key Dates

Research Track

To bring together researchers working on the various tasks related to fact extraction and verification, the workshop welcomes submissions on topics such as recognizing textual entailment, question answering, and argumentation mining.

  • First call for papers: 24 May 2018
  • Second call for papers: 26 June 2018
  • Submission deadline: 10 August 2018
  • Notification: 27 August 2018
  • Camera-ready deadline: 2 September 2018
  • Workshop: 1 November (EMNLP)

Shared Task Track

Participants will be invited to develop systems to identify evidence and reason about the truthfulness of claims that we have generated. Our dataset currently contains 200,000 true and false claims; the true claims were written by human annotators extracting information from Wikipedia.
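For a rough sense of the task, one claim instance might look like the sketch below. This is an illustration only: the field names follow the publicly described FEVER JSONL format, the ids are placeholders, and the task page remains the authoritative reference for the schema.

```python
# A hedged sketch of a single FEVER claim instance. Ids are
# placeholders; consult the task page for the real schema.
example_claim = {
    "id": 12345,                      # placeholder instance id
    "label": "SUPPORTS",              # SUPPORTS / REFUTES / NOT ENOUGH INFO
    "claim": "Brussels is the capital of Belgium.",
    # Evidence is a list of evidence *sets*; each element names a
    # Wikipedia page and a sentence index within that page.
    "evidence": [[[None, None, "Brussels", 0]]],
}
```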

  • Challenge Launch: 3 April 2018
  • Testing Begins: 24 July 2018
  • Submission Closes: 27 July 2018
  • Results Announced: 30 July 2018
  • System Descriptions Due for Workshop: 10 August 2018
  • Winners Announced: 1 November (EMNLP)

All deadlines are at 11:59pm Pacific Daylight Time (UTC-7).

How to Participate in the Challenge

1) Join the Slack Group

Join the Slack Group for chat and updates: https://fever2018.slack.com.

2) Register on our CodaLab page

Data can be downloaded and submissions evaluated on the CodaLab competition page https://competitions.codalab.org/competitions/18814.
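The claims are distributed as JSON Lines (one JSON object per line), so they can be loaded with a few lines of Python. A minimal sketch, assuming the JSONL distribution format; the file name "train.jsonl" is an assumption for illustration:

```python
import json

def load_claims(path):
    """Load shared task claims from a JSONL file (one object per line)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

claims = load_claims("train.jsonl")  # assumed file name
print(len(claims), claims[0]["claim"])
```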

3) Develop your Systems

The shared task guidelines are available on the task page.

A simple baseline (described in this NAACL 2018 paper preprint), the scorer code, and the annotation UI source code are all available on our GitHub page.
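The official scorer on GitHub is authoritative; as a rough, unofficial illustration only, the core idea is that a prediction counts as correct when the predicted label matches the gold label and, for verifiable claims, at least one complete gold evidence set appears among the predicted evidence. A simplified sketch, assuming the instance format shown earlier:

```python
def fever_score_instance(pred_label, pred_evidence, gold_label, gold_evidence_sets):
    """Simplified, unofficial sketch of the FEVER scoring idea.

    pred_evidence: set of (page, sentence_index) pairs predicted by a system.
    gold_evidence_sets: list of gold evidence sets, each a set of such pairs.
    """
    if pred_label != gold_label:
        return False
    if gold_label == "NOT ENOUGH INFO":
        return True  # no evidence is required for unverifiable claims
    # Correct only if some complete gold evidence set was retrieved.
    return any(gold_set <= pred_evidence for gold_set in gold_evidence_sets)

# Example: the label is right and the single-sentence gold set is covered.
print(fever_score_instance(
    "SUPPORTS", {("Brussels", 0), ("Belgium", 3)},
    "SUPPORTS", [{("Brussels", 0)}],
))  # True
```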

4) (July) Evaluate on the Test Set and Submit your System Descriptions

We will open the blind test set for scoring in July and accept system description papers for the EMNLP 2018 workshop via Softconf. More details will follow soon.

The Softconf submission page for Shared Task system descriptions and workshop papers is https://www.softconf.com/emnlp2018/FEVER.

Workshop Invited Speakers

Delip Rao

Joostware AI Research, Johns Hopkins University, Fake News Challenge

Luna Dong

Amazon

Marie-Francine Moens

KU Leuven

Tim Rocktäschel

Facebook AI Research, University College London

Workshop Organising Committee

James Thorne

University of Sheffield

Andreas Vlachos

University of Sheffield

Oana Cocarascu

Imperial College London

Christos Christodoulopoulos

Amazon Research Cambridge

Arpit Mittal

Amazon Research Cambridge