Schedule

9:00 Opening
FEVER Organizers
9:00-9:45 Towards NLP for more realistic fact-checking
Iryna Gurevych, TU Darmstadt
9:45-10:30 Oral presentations
Hierarchical Representations in Dense Passage Retrieval for Question-Answering
Philipp Ennen, Federica Freddi, Chyi-Jiunn Lin, Po-Nien Kung, RenChu Wang, Chien-Yi Yang, Da-Shan Shiu and Alberto Bernacchia
Enhancing Information Retrieval in Fact Extraction and Verification
Daniel Guzman-Olivares, Lara Quijano-Sanchez and Federico Liberatore
BEVERS: A General, Simple, and Performant Framework for Automatic Fact Verification
Mitchell DeHaven and Stephen Scott
10:30-11:15 Coffee Break
11:15-12:00 Can scientific claim verification help us do better science?
Lucy Lu Wang, University of Washington
12:00-12:45 Poster session (online and in-person)
Rethinking the Event Coding Pipeline with Prompt Entailment
Clément Lefebvre and Niklas Stoehr
World Knowledge in Multiple Choice Reading Comprehension
Adian Liusie, Vatsal Raina and Mark Gales
An Entity-based Claim Extraction Pipeline for Real-world Biomedical Fact-checking
Amelie Wührl, Lara Grimminger and Roman Klinger
An Effective Approach for Informational and Lexical Bias Detection
Iffat Maab, Edison Marrese-Taylor and Yutaka Matsuo
12:45-14:15 Lunch
14:15-15:00 Whose Truth Is It Anyway?
Dirk Hovy, Bocconi University
15:00-15:45 Faith in Reason: Prospects for fact checking in a world of bias
Tom Stafford, University of Sheffield
15:45-16:30 Coffee Break
16:30-18:00 Panel Discussion on 6 years of FEVER workshops - how far have we come? Panellists: Isabelle Augenstein, Lucy Lu Wang, Christopher Guess, Preslav Nakov and Tom Stafford.
18:00 Closing Remarks
FEVER Organizers

Invited Talks

Towards NLP for more realistic fact-checking
Iryna Gurevych

Dealing with misinformation is a grand challenge of the information society, aimed at equipping computer users with effective tools for identifying and debunking it. Many machine learning-based methods for detecting harmful content exist, but they can be expensive or infeasible to train, to retrain under domain drift, and to deploy in practice. On top of this, current Natural Language Processing (NLP), including fact-checking research, falls short of the expectations of real-life scenarios. In this talk, we show why past work on fact-checking has not yet led to truly useful tools for managing misinformation by comparing the current NLP paradigm against what human fact-checkers do. NLP systems are expensive in terms of the financial cost, computation, and manpower needed to create data for the learning process. With that in mind, we are pursuing research on detecting emerging misinformation topics in order to focus human attention on the most harmful, novel examples. We further compare the capabilities of automatic, NLP-based approaches to those of human fact-checkers, uncovering critical research directions for the future.



Whose Truth Is It Anyway?
Dirk Hovy

NLP always deals with the notion of truth, not just in fact verification. But the notion we use is often restrictive, sometimes artificial, and many times unwarranted, because the processes we use introduce falsehoods. In this talk, I look at some of the roots of NLP's notion of truth, the ways falsehoods enter our systems, and what we can do about it.



Faith in Reason: Prospects for fact checking in a world of bias
Tom Stafford

Fact checking requires some optimism about human reasoning. We hope that, once checked, errors will be corrected, misinformation will be slowed and false beliefs will be diminished. Pessimists point to abundant signs of polarisation, biased evaluation and motivated reasoning. The truth does not lie somewhere in the middle. I will review evidence from our studies, and those of others, which tells us something about how we reason in the face of potentially mind-changing arguments and evidence. Only by properly understanding the nature of human bias can we have realistic expectations for fact-checking, and even keep some optimism about our capacity to change each other’s minds.



Can scientific claim verification help us do better science?
Lucy Lu Wang

Scientific disagreements in the public sphere are a well-known phenomenon, especially when it comes to contentious issues around health and the environment. But even within science itself, the body of work we have collectively created, the scientific literature, is not as consistent as one may think. I will briefly introduce a line of work that I have been involved in on defining and operationalizing the task of scientific claim verification. Given a scientific claim, this task asks a model to uncover all relevant evidence from the peer-reviewed literature and make veracity predictions on this evidence. Since we first introduced the SciFact scientific claim verification dataset back in 2020, we've demonstrated that the current generation of language models can be trained to perform this task quite well, even in an open-domain setting involving retrieval of evidence from large corpora. However, time and time again, we encounter the challenge of what to do with contradictory evidence and how to communicate it to the fact verification audience. In this talk, I'll discuss the presence of contradictory evidence in the scientific literature, and how scientific claim verification models can help us detect the existence of such evidence. Highlighting contradiction is also a useful step in performing literature review, helping to quantify the amount of disagreement or consensus within a specific field or research topic.
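To make the task formulation above concrete, the following is a minimal, purely illustrative Python sketch of the loop the abstract describes: retrieve candidate evidence for a claim, then assign a veracity label per piece of evidence. The toy corpus, the word-overlap retriever, and the rule-based judge are hypothetical stand-ins, not the SciFact data or models; a real system would use trained retrieval and entailment components.

# Illustrative sketch only: a toy version of the scientific claim verification
# task (claim -> evidence retrieval -> veracity label per abstract).
# The corpus, retriever, and judge below are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class EvidenceDecision:
    abstract_id: str
    label: str  # "SUPPORTS", "REFUTES", or "NOT_ENOUGH_INFO"

TOY_CORPUS = {
    "abs1": "Vitamin D supplementation reduces the risk of respiratory infection",
    "abs2": "Vitamin D supplementation showed no effect on respiratory infection risk",
}

def retrieve(claim, corpus, k=2):
    """Rank abstracts by naive word overlap with the claim (stand-in for a trained retriever)."""
    claim_words = set(claim.lower().split())
    ranked = sorted(corpus, key=lambda i: -len(claim_words & set(corpus[i].lower().split())))
    return ranked[:k]

def judge(claim, abstract):
    """Stand-in veracity classifier; a real system would use a trained entailment model."""
    return "REFUTES" if "no effect" in abstract.lower() else "SUPPORTS"

def verify(claim):
    """Full pipeline: retrieve evidence abstracts, then label each one against the claim."""
    return [EvidenceDecision(i, judge(claim, TOY_CORPUS[i])) for i in retrieve(claim, TOY_CORPUS)]

if __name__ == "__main__":
    for decision in verify("Vitamin D supplementation reduces respiratory infection risk"):
        print(decision)
    # Mixed SUPPORTS/REFUTES labels across retrieved abstracts correspond to the
    # "contradictory evidence" case the talk highlights.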



Workshop Organising Committee

Mubashara Akhtar, King's College London
Rami Aly, University of Cambridge
Christos Christodoulopoulos, Amazon
Oana Cocarascu, King's College London
Zhijiang Guo, HKUST (GZ)
Arpit Mittal, Meta
Michael Schlichtkrull, Queen Mary University of London
James Thorne, KAIST AI
Andreas Vlachos, University of Cambridge