|10:00–10:30||Research Talks 1|
|10:00–10:15||The Data Challenge in Misinformation Detection: Source Reputation vs. Content Veracity|
Fatemeh Torabi Asr and Maite Taboada
|10:15–10:30||Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection|
Lev Konstantinovskiy, Oliver Price, Mevan Babakar and Arkaitz Zubiaga
|Crowdsourcing Semantic Label Propagation in Relation Classification|
Anca Dumitrache, Lora Aroyo and Chris Welty
|Retrieve and Re-rank: A Simple and Effective IR Approach to Simple Question Answering over Knowledge Graphs|
Vishal Gupta, Manoj Chinnakotla and Manish Shrivastava
|Information Nutrition Labels: A Plugin for Online News Evaluation|
Vincentius Kevin, Birte Högden, Claudia Schwenger, Ali Sahan, Neelu Madan, Piush Aggarwal, Anusha Bangaru, Farid Muradov and Ahmet Aker
|Joint Modeling for Query Expansion and Information Extraction with Reinforcement Learning|
Motoki Taniguchi, Yasuhide Miura and Tomoko Ohkuma
|Towards Automatic Fake News Detection: Cross-Level Stance Detection in News Articles|
Costanza Conforti, Mohammad Taher Pilehvar and Nigel Collier
|Belittling the Source: Trustworthiness Indicators to Obfuscate Fake News on the Web|
Diego Esteves, Aniketh Janardhan Reddy, Piyush Chawla and Jens Lehmann
|Automated Fact-Checking of Claims in Argumentative Parliamentary Debates|
Nona Naderi and Graeme Hirst
|Stance Detection in Fake News: A Combined Feature Representation|
Bilal Ghanem, Paolo Rosso and Francisco Rangel
|Zero-shot Relation Classification as Textual Entailment|
Abiola Obamuyide and Andreas Vlachos
|Teaching Syntax by Adversarial Distraction|
Juho Kim, Christopher Malon and Asim Kadav
|Where is Your Evidence: Improving Fact-checking by Justification Modeling|
Tariq Alhindi, Savvas Petridis and Smaranda Muresan
|12:15–12:30||Research Talks 2|
|12:15–12:30||Affordance Extraction and Inference based on Semantic Role Labeling|
Daniel Loureiro and Alípio Jorge
|14:45–15:30||Shared Task Flash Talks|
|14:45–14:50||The Fact Extraction and VERification (FEVER) Shared Task|
James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos and Arpit Mittal
|14:50–15:00||Combining Fact Extraction and Claim Verification in an NLI Model|
Yixin Nie, Haonan Chen and Mohit Bansal
|15:00–15:10||UCL Machine Reading Group: Four Factor Framework For Fact Finding (HexaF)|
Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pontus Stenetorp and Sebastian Riedel
|15:10–15:20||Multi-Sentence Textual Entailment for Claim Verification|
Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz and Iryna Gurevych
|15:20–15:30||Team Papelo: Transformer Networks at FEVER|
|15:30–16:30||Shared Task Posters|
|Uni-DUE Student Team: Tackling fact checking through decomposable attention neural network|
Jan Kowollik and Ahmet Aker
|SIRIUS-LTG: An Entity Linking Approach to Fact Extraction and Verification|
Farhad Nooralahzadeh and Lilja Øvrelid
|Integrating Entity Linking and Evidence Ranking for Fact Extraction and Verification|
Motoki Taniguchi, Tomoki Taniguchi, Takumi Takahashi, Yasuhide Miura and Tomoko Ohkuma
|Robust Document Retrieval and Individual Evidence Modeling for Fact Extraction and Verification|
Tuhin Chakrabarty, Tariq Alhindi and Smaranda Muresan
|DeFactoNLP: Fact Verification using Entity Recognition, TFIDF Vector Comparison and Decomposable Attention|
Aniketh Janardhan Reddy, Gil Rocha and Diego Esteves
|An End-to-End Multi-task Learning Model for Fact Checking|
Sizhen Li, Shuai Zhao, Bo Cheng and Hao Yang
|Team GESIS Cologne: An all in all sentence-based approach for FEVER|
|Joint Sentence Extraction and Fact Checking with Pointer Networks|
Christopher Hidey and Mona Diab
|QED: A fact verification system for the FEVER shared task|
Jackson Luken, Nanjiang Jiang and Marie-Catherine de Marneffe
|Team UMBC-FEVER : Claim verification using Semantic Lexical Resources|
Ankur Padia, Francis Ferraro and Tim Finin
|A mostly unlexicalized model for recognizing textual entailment|
Mithun Paul, Rebecca Sharp and Mihai Surdeanu
|17:15–17:30||Prizes + Closing Remarks|
With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question.
However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to transform free-form text into structured knowledge.
There is, however, another problem that has become the focus of much recent research and media coverage: false information coming from unreliable sources.
In an effort to address both problems jointly, we are organizing a workshop promoting research in joint Fact Extraction and VERification (FEVER).
We aim for FEVER to be a long-term venue for work in verifiable knowledge extraction. To stimulate progress in this direction, we will also host the FEVER shared task, an information verification task based on a recently released dataset of 220K claims verified against Wikipedia (Thorne et al., NAACL 2018).
The workshop will consist of oral and poster presentations of submitted papers, including papers from the shared task participants, panel discussions, and presentations by invited speakers.
Camera-ready papers must be submitted by 2nd September 2018.
The final versions of papers may include one additional page of content to address the reviewers' comments.
For shared task papers, the provisional rank and score may be included in the paper. Please state that this was the score prior to any human evaluation of the evidence.
For papers selected as oral presentations, a 15-minute slot will be provided, including Q&A. Presenters are welcome to use this time as they see fit. The projector screen format is TBD.
All papers (including those accepted as oral presentations) are welcome to present a portrait poster. The poster boards for the workshops will be the same as those of the main conference: 1 m wide and 2.50 m tall (approximately 3.28 feet wide and 8.20 feet high). These boards will comfortably fit an A0 poster in portrait mode. Materials for affixing the posters to the boards will be provided.
We invite long and short papers on all topics related to fact extraction and verification.
Long papers should consist of up to eight pages of content and short papers of up to four, plus unlimited pages for bibliography. Submissions must be in PDF format, anonymized for review, and follow the EMNLP 2018 two-column format, using the LaTeX style files or Word templates provided on the official EMNLP 2018 website.
Papers can be submitted as non-archival, so that their content can be reused for other venues; add "(NON-ARCHIVAL)" to the title of the submission. Non-archival papers will be linked from this webpage.
Authors can also submit extended abstracts of up to eight pages of content; add "(EXTENDED ABSTRACT)" to the title of an extended abstract submission. Extended abstracts will be presented as talks or posters if selected by the program committee, but will not be included in the proceedings. Your work will thus retain unpublished status, and later submission to another venue is not precluded.
Previously published work can also be submitted as an extended abstract in the same way, with the additional requirement of stating the original publication venue on the first page.
Softconf submission link: https://www.softconf.com/emnlp2018/FEVER
For more information on the shared task please visit the following page: Shared Task
All deadlines are calculated at 11:59pm Pacific Daylight Time (UTC-7).