With billions of pages on the web providing information on almost every conceivable topic, we should be able to collect facts that answer almost every conceivable question. However, only a small fraction of this information is available in structured sources (Wikidata, Freebase, etc.), so we are limited by our ability to transform free-form text into structured knowledge. There is, however, another problem that has become the focus of much recent research and media coverage: false information coming from unreliable sources.
Last year, in an effort to jointly address both problems, we organised the first workshop on Fact Extraction and VERification (FEVER) at EMNLP 2018. For the second workshop, in addition to extracting and verifying facts, we would also like to focus on adversarial learning: generating adversarial examples that fool these systems. The FEVER 2.0 Shared Task builds upon the first shared task in a Build it, Break it, Fix it setting.
The second workshop on Fact Extraction and VERification (FEVER 2.0) will be held at EMNLP-IJCNLP 2019 in Hong Kong. As in FEVER 1.0, we are hosting two tracks and seeking papers on topics relating to fact checking, as well as system descriptions of entries to the FEVER 2.0 Shared Task. At the workshop we will host invited talks and presentations of submitted papers, and announce the results and winners of the FEVER 2.0 Shared Task.
To bring together researchers working on the various tasks related to fact extraction and verification, we welcome submissions on related topics such as recognizing textual entailment, question answering, and argumentation mining.
Participants can take on any of three roles: Builders, Breakers, or Fixers. For details on each role and how to participate, see our Shared Task page.
All deadlines are 11:59pm Pacific Daylight Saving Time (UTC-7).
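For example, this minimal Python sketch (using a hypothetical July date purely for illustration, not an announced deadline) shows how a UTC-7 cutoff converts to UTC:

```python
from datetime import datetime, timedelta, timezone

# All deadlines fall at 11:59pm in UTC-7; the specific date below is
# a hypothetical placeholder, not an announced deadline.
PDT = timezone(timedelta(hours=-7))
deadline = datetime(2019, 7, 15, 23, 59, tzinfo=PDT)

# Convert to UTC (or substitute your own timezone) to avoid missing the cutoff.
print(deadline.astimezone(timezone.utc))  # 2019-07-16 06:59:00+00:00
```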
Join the Slack group for chat and updates: https://fever2018.slack.com.
Data can be downloaded and submissions evaluated on the CodaLab competition page: https://competitions.codalab.org/competitions/18814.
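For orientation, here is a minimal sketch of reading the training data, which is distributed as JSON Lines (one claim per line). The file name and the field names ("id", "claim", "label") follow the FEVER 1.0 release and should be checked against the files you actually download:

```python
import json

# Sketch: iterate over FEVER-style JSON Lines training data.
# "train.jsonl" is a placeholder path; field names follow the FEVER 1.0
# release and should be verified against the downloaded files.
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        # Labels are SUPPORTS, REFUTES, or NOT ENOUGH INFO.
        print(example["id"], example["label"], example["claim"])
```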
The shared task guidelines are available on the task page.
We will open the blind test set for scoring in July and accept system description papers via Softconf for the workshop at EMNLP 2018. More details will follow soon.
The Softconf submission page for shared task system descriptions and workshop papers is https://www.softconf.com/emnlp2018/FEVER.