Schedule

09:00-09:15 Welcome Talk
Organizers
09:15-10:00 Learning With Explanations
Tim Rocktäschel, Facebook AI
10:00-10:30 Research Talks 1
10:00-10:30 Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection
Lev Konstantinovskiy, Oliver Price, Mevan Babakar and Arkaitz Zubiaga
10:30-11:30 Research Posters + Coffee
Crowdsourcing Semantic Label Propagation in Relation Classification
Anca Dumitrache, Lora Aroyo and Chris Welty
Retrieve and Re-rank: A Simple and Effective IR Approach to Simple Question Answering over Knowledge Graphs
Vishal Gupta, Manoj Chinnakotla and Manish Shrivastava
Information Nutrition Labels: A Plugin for Online News Evaluation
Vincentius Kevin, Birte Högden, Claudia Schwenger, Ali Sahan, Neelu Madan, Piush Aggarwal, Anusha Bangaru, Farid Muradov and Ahmet Aker
Joint Modeling for Query Expansion and Information Extraction with Reinforcement Learning
Motoki Taniguchi, Yasuhide Miura and Tomoko Ohkuma
Towards Automatic Fake News Detection: Cross-Level Stance Detection in News Articles
Costanza Conforti, Mohammad Taher Pilehvar and Nigel Collier
Belittling the Source: Trustworthiness Indicators to Obfuscate Fake News on the Web
Diego Esteves, Aniketh Janardhan Reddy, Piyush Chawla and Jens Lehmann
Automated Fact-Checking of Claims in Argumentative Parliamentary Debates
Nona Naderi and Graeme Hirst
Stance Detection in Fake News: A Combined Feature Representation
Bilal Ghanem, Paolo Rosso and Francisco Rangel
Zero-shot Relation Classification as Textual Entailment
Abiola Obamuyide and Andreas Vlachos
Teaching Syntax by Adversarial Distraction
Juho Kim, Christopher Malon and Asim Kadav
Where is Your Evidence: Improving Fact-checking by Justification Modeling
Tariq Alhindi, Savvas Petridis and Smaranda Muresan
11:30-12:15 Argumentation Mining and Generation Supporting the Verification of Content
Marie-Francine Moens
12:15-12:30 Research Talks 2
12:15-12:30 Affordance Extraction and Inference based on Semantic Role Labeling
Daniel Loureiro and Alípio Jorge
14:00-14:45 Building a broad knowledge graph for products
Luna Dong
14:45-15:30 Shared Task Flash Talks
14:45-14:50 The Fact Extraction and VERification (FEVER) Shared Task
James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos and Arpit Mittal
14:50-15:00 Combining Fact Extraction and Claim Verification in an NLI Model
Yixin Nie, Haonan Chen and Mohit Bansal
15:00-15:10 UCL Machine Reading Group: Four Factor Framework For Fact Finding (HexaF)
Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pontus Stenetorp and Sebastian Riedel
15:10-15:20 Multi-Sentence Textual Entailment for Claim Verification
Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz and Iryna Gurevych
15:20-15:30 Team Papelo: Transformer Networks at FEVER
Christopher Malon
15:30-16:15 Shared Task Posters + Coffee
Uni-DUE Student Team: Tackling fact checking through decomposable attention neural network
Jan Kowollik and Ahmet Aker
SIRIUS-LTG: An Entity Linking Approach to Fact Extraction and Verification
Farhad Nooralahzadeh and Lilja Øvrelid
Integrating Entity Linking and Evidence Ranking for Fact Extraction and Verification
Motoki Taniguchi, Tomoki Taniguchi, Takumi Takahashi, Yasuhide Miura and Tomoko Ohkuma
Robust Document Retrieval and Individual Evidence Modeling for Fact Extraction and Verification
Tuhin Chakrabarty, Tariq Alhindi and Smaranda Muresan
DeFactoNLP: Fact Verification using Entity Recognition, TFIDF Vector Comparison and Decomposable Attention
Aniketh Janardhan Reddy, Gil Rocha and Diego Esteves
An End-to-End Multi-task Learning Model for Fact Checking
Sizhen Li, Shuai Zhao, Bo Cheng and Hao Yang
Team GESIS Cologne: An all in all sentence-based approach for FEVER
Wolfgang Otto
Joint Sentence Extraction and Fact Checking with Pointer Networks
Christopher Hidey and Mona Diab
QED: A fact verification system for the FEVER shared task
Jackson Luken, Nanjiang Jiang and Marie-Catherine de Marneffe
Team UMBC-FEVER: Claim verification using Semantic Lexical Resources
Ankur Padia, Francis Ferraro and Tim Finin
A mostly unlexicalized model for recognizing textual entailment
Mithun Paul, Rebecca Sharp and Mihai Surdeanu
16:15-16:30 Research Talks 3
16:15-16:30 The Data Challenge in Misinformation Detection: Source Reputation vs. Content Veracity
Fatemeh Torabi Asr and Maite Taboada
16:30-17:15 Call for Help: Putting Computation in Computational Fact Checking
Delip Rao
17:15-17:30 Closing Remarks
Organizers

Invited Talks

Learning With Explanations
Tim Rocktäschel

Despite the success of deep learning models in a wide range of applications, these methods suffer from low sample efficiency and opaqueness. Low sample efficiency limits the application of deep learning to domains for which abundant training data exists, whereas opaqueness prevents us from understanding how a model derived a particular output, let alone how to correct systematic errors, remove bias, or incorporate common sense and domain knowledge. To address these issues for knowledge base completion, we developed end-to-end differentiable provers that (i) learn neural representations of symbols in a knowledge base, (ii) use similarities between learned symbol representations to prove queries to the knowledge base, (iii) induce logical rules, and (iv) use provided and induced rules for multi-hop reasoning. I will present our recent efforts in applying differentiable provers to statements in natural language texts and large-scale knowledge bases. Furthermore, I will introduce two datasets for advancing the development of models capable of incorporating natural language explanations: e-SNLI, crowdsourced explanations for over half a million sentence pairs in the Stanford Natural Language Inference corpus, and ShARC, a conversational question answering dataset with natural language rules.



Argumentation Mining and Generation Supporting the Verification of Content
Marie-Francine Moens

Argumentation mining and generation are intelligent tasks mastered by humans, and both require understanding of human language in context. In this talk we give an overview of state-of-the-art argumentation mining and generation techniques and show their potential for the verification of content. We elaborate on how to extract facts or claims from text, along with their supporting and non-supporting arguments. To verify content, the resulting content representations are compared with those of facts and arguments extracted from other texts, possibly drawn from other sources, or with those available in knowledge repositories. Current research is beginning to explore the automated generation of arguments given a claim and its context, which might open new ways to facilitate the verification of claims.



Building a broad knowledge graph for products
Luna Dong

Knowledge graphs have been used to support a wide range of applications and to enhance search results for major search engines such as Google and Bing. At Amazon we are building a Product Graph, an authoritative knowledge graph for all products in the world. The thousands of product verticals we need to model, the vast number of data sources we need to extract knowledge from, the huge volume of new products we need to handle every day, and the various applications in Search, Discovery, Personalization, and Voice that we wish to support all present big challenges in constructing such a graph. In this talk we describe our efforts in building a broad product graph: a graph that starts shallow, with core entities and relationships, and allows verticals and relationships to be added easily in a pay-as-you-go fashion. We describe our work on knowledge extraction, linkage, and cleaning to significantly improve the coverage and quality of product knowledge. We also present our progress towards our moon-shot goals, including harvesting knowledge from the web, hands-off-the-wheel knowledge integration and cleaning, human-in-the-loop knowledge learning, and graph mining and graph-enhanced search.



Call for Help: Putting Computation in Computational Fact Checking
Delip Rao

Fact checking, a discipline at least as old as journalism, is undergoing a massive overhaul to meet the demands of rapidly emerging news, digital consumption and social media, and the rise of misinformation and disinformation attacks. In this talk, I will review some of the attack vectors, the major challenges faced by fact checkers and journalists, and what we as a community can offer to help maintain the integrity of our news sources. I will also review some current efforts, future directions, and open problems related to computational fact checking.



Call for Papers

With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.); we are therefore limited by our ability to transform free-form text into structured knowledge. A second problem has also become the focus of much recent research and media coverage: false information coming from unreliable sources.

In an effort to address both problems jointly, we are organizing a workshop promoting research in joint Fact Extraction and VERification (FEVER). We aim for FEVER to be a long-term venue for work in verifiable knowledge extraction. To stimulate progress in this direction, we will also host the FEVER shared task, an information verification task based on a recently released dataset of 220K claims verified against Wikipedia (Thorne et al., NAACL 2018).

The workshop will consist of oral and poster presentations of submitted papers (including papers from shared task participants), panel discussions, and presentations by the invited speakers listed above.

Instructions for Camera-Ready Papers

Camera-ready papers must be submitted by 2 September 2018.

The final versions of papers may include one additional page of content to address the reviewers' comments.

For shared task papers, the provisional rank and score may be included in the paper. Please state that this was the score prior to any human evaluation of the evidence.

For papers selected as oral presentations, a 15-minute slot will be provided, including Q&A. Presenters are welcome to use this time as they see fit. The format of the projector screen is TBD.

All papers (including those accepted as oral presentations) are welcome to present a portrait poster. The poster boards for the workshop will be the same as those for the main conference: 1m wide and 2.50m tall (approximately 3.28 feet wide and 8.20 feet tall). These boards will comfortably fit an A0 poster in portrait mode. Materials for affixing the posters to the boards will be provided.

Submissions

We invite long and short papers on all topics related to fact extraction and verification, including:

  • Information Extraction
  • Semantic Parsing
  • Knowledge Base Population
  • Natural Language Inference
  • Textual Entailment Recognition
  • Argumentation Mining
  • Machine Reading and Comprehension
  • Claim Validation/Fact Checking
  • Question Answering
  • Theorem Proving
  • Stance Detection

Long papers should consist of up to eight pages of content and short papers of up to four pages, plus unlimited pages for bibliography. Submissions must be in PDF format, anonymized for review, and follow the EMNLP 2018 two-column format, using the LaTeX style files or Word templates to be provided on the official EMNLP 2018 website.

Papers can be submitted as non-archival, so that their content can be reused for other venues. Add "(NON-ARCHIVAL)" to the title of the submission. Non-archival papers will be linked from this webpage.

Authors can also submit extended abstracts of up to eight pages of content. Add "(EXTENDED ABSTRACT)" to the title of an extended abstract submission. Extended abstracts will be presented as talks or posters if selected by the program committee, but they will not be included in the proceedings. Thus, your work will retain its unpublished status, and later submission at another venue is not precluded.

Previously published work can also be submitted as an extended abstract in the same way, with the additional requirement that the original publication be stated on the first page.

Softconf submission link: https://www.softconf.com/emnlp2018/FEVER

FEVER Shared Task

For more information on the shared task, please visit the Shared Task page.

Important Dates

  • First call for papers: 24 May 2018
  • Second call for papers: 26 June 2018
  • Submission deadline: 10 August 2018
  • Notification: 27 August 2018
  • Camera-ready deadline: 2 September 2018
  • Workshop: 1 November 2018 (at EMNLP)

All deadlines are at 11:59pm Pacific Daylight Time (UTC-7).

Workshop Organizing Committee

James Thorne, KAIST AI
Andreas Vlachos, University of Cambridge
Oana Cocarascu, King's College London
Christos Christodoulopoulos, Amazon
Arpit Mittal, Meta