Computing Veracity – the Fourth Challenge of Big Data

#RDSM2020

3rd International Workshop on Rumours and Deception in Social Media (RDSM)

December 13, 2020 in Barcelona, Spain
Co-located with COLING 2020

Abstract

The 3rd edition of the RDSM workshop will particularly focus on online information disorder and its interplay with public opinion formation.

Social media is a valuable resource for mining all kinds of information, from opinions to factual content. However, it also harbours serious threats to society, chief among them online information disorder and its power to shape public opinion. Well-known manifestations include the spread of false rumours and fake news, as well as social attacks such as hate speech and other forms of harmful posts. This workshop aims to bring together researchers and practitioners interested in social media mining and analysis to address the emerging issues of information disorder and the manipulation of public opinion. The workshop will focus on themes such as the detection of fake news, the verification of rumours and the understanding of their impact on public opinion. Furthermore, we place particular emphasis on the usefulness and trustworthiness of automated solutions tackling these themes.

Workshop Theme and Topics

The aim of this workshop is to bring together researchers and practitioners interested in social media mining and analysis to address the emerging issues of veracity assessment, fake news detection and manipulation of public opinion. We invite researchers and practitioners to submit papers reporting results on these issues. Also welcome are qualitative user studies on the challenges encountered in the use of social media, such as assessing the veracity of information and detecting fake news, as well as papers introducing new data sets. Finally, we welcome studies on the usefulness and trustworthiness of social media tools that tackle the aforementioned problems.

Topics of interest include, but are not limited to:

  • Detection and tracking of rumours.
  • Rumour veracity classification.
  • Fact-checking social media.
  • Detection and analysis of disinformation, hoaxes and fake news.
  • Stance detection in social media.
  • Qualitative user studies assessing the use of social media.
  • Bot detection in social media.
  • Measuring public opinion through social media.
  • Assessing the impact of social media on public opinion.
  • Political analyses of social media.
  • Real-time social media mining.
  • NLP for social media analysis.
  • Network analysis and diffusion of dis/misinformation.
  • Usefulness and trust analysis of social media tools.
  • Benchmarking disinformation detection systems.
  • Open disinformation knowledge bases and datasets.

Workshop Schedule/Important Dates

  • Submission deadline: 21st September, 2020
  • Notification of Acceptance: 9th October, 2020
  • Camera-Ready Versions Due: 1st November, 2020
  • Workshop date: 13th December, 2020

Submission Format

We will have 8-10 presentations of peer-reviewed submissions, organised into three sessions by subject: the first two sessions on online information disorder and public opinion, and the third on usefulness and trust. After the sessions we also plan a group activity (groups of 4-5 attendees) in which each group will sketch a social media tool for tackling, e.g., rumour verification or fake news detection; the emphasis of the sketch should be on aspects such as usefulness and trust. This should take no longer than 120 minutes, including sketching, presentation and discussion. We will close the workshop with a summary and take-home messages (max. 15 minutes). Attendance will be open to all interested participants.

We invite submissions of up to nine (9) pages, plus bibliography, for long papers, and up to four (4) pages, plus bibliography, for short papers. The COLING 2020 templates, provided in both LaTeX and Microsoft Word format, must be used; deviations from the provided templates will result in rejection without review. Submissions will only be accepted in PDF format. Submit papers by the end of the deadline day (timezone UTC-12) via our Softconf submission site: http://softconf.com/coling2020/RDSM/

Download the MS Word and LaTeX templates here: https://coling2020.org/coling2020.zip

Selected papers will be invited to submit extended versions to the following special issue: https://www.mdpi.com/journal/information/special_issues/tackling_misinformation_online

Workshop Organizers

  • Ahmet Aker, University of Duisburg-Essen, Germany
  • Arkaitz Zubiaga, Queen Mary University of London, UK

Programme Committee

  • Pepa Atanasova, University of Copenhagen, Denmark
  • Giannis Bekoulis, Ghent University, Belgium
  • Costanza Conforti, University of Cambridge, UK
  • Thierry Declerck, DFKI GmbH, Germany
  • Leon Derczynski, IT University Copenhagen, Denmark
  • Samhaa R. El-Beltagy, Newgiza University, Egypt
  • Genevieve Gorrell, University of Sheffield, UK
  • Elena Kochkina, University of Warwick, UK
  • Dominik Kowald, Graz University of Technology, Austria
  • Chengkai Li, The University of Texas at Arlington, USA
  • Diana Maynard, University of Sheffield, UK
  • Preslav Nakov, QCRI, Qatar
  • Viviana Patti, University of Turin, Italy
  • Georg Rehm, DFKI GmbH, Germany
  • Paolo Rosso, Technical University of Valencia, Spain
  • Carolina Scarton, University of Sheffield, UK
  • Ravi Shekhar, Queen Mary University of London, UK
  • Panayiotis Smeros, EPFL, Switzerland
  • Antonela Tommasel, UNICEN, Argentina
  • Adam Tsakalidis, Alan Turing Institute, UK
  • Onur Varol, Northeastern University, USA
  • Svitlana Volkova, Pacific Northwest National Laboratory, USA


Workshop Programme

(Online, live via Underline)

Time Slot (CET) | Paper/Talk Title | Authors/Speaker
13:00-13:05 | Introduction | Ahmet Aker, Arkaitz Zubiaga
13:05-14:00 | Invited Talk: Understanding and Mitigating Bias in the Media | Henning Wachsmuth

Paper Oral Session 1

14:10-14:25 | Towards Trustworthy Deception Detection: Benchmarking Model Robustness across Domains, Modalities, and Languages | Maria Glenski, Ellyn Ayton, Robin Cosbey, Dustin Arendt and Svitlana Volkova
14:25-14:40 | A Language-Based Approach to Fake News Detection Through Interpretable Features and BRNN | Yu Qiao, Daniel Wiechmann and Elma Kerz
14:40-14:55 | Covid or not Covid? Topic Shift in Information Cascades on Twitter | Liana Ermakova, Diana Nurbakova and Irina Ovchinnikova
14:55-15:10 | Break

Paper Oral Session 2

15:10-16:05 | Invited Talk: Matching similar claims: From research to practice | Scott A. Hale
16:10-16:25 | Revisiting Rumour Stance Classification: Dealing with Imbalanced Data | Yue Li and Carolina Scarton
16:25-16:40 | Fake news detection for the Russian language | Gleb Kuzmin, Daniil Larionov, Dina Pisarevskaya and Ivan Smirnov
16:40-16:55 | Automatic Detection of Hungarian Clickbait and Entertaining Fake News | Veronika Vincze and Martina Katalin Szabó
16:55-17:10 | Fake or Real? A Study of Arabic Satirical Fake News | Hadeel Saadany, Constantin Orasan and Emad Mohamed
17:10-17:30 | Closing | Ahmet Aker, Arkaitz Zubiaga

Invited Speakers

Invited Talk: Understanding and Mitigating Bias in the Media

Media plays an important role in shaping public opinion. What is said and how it is said in news articles and the like, however, may be biased with respect to political orientation and may range from original fact reporting to pure propaganda. Biased media can influence people in undesirable directions and hence should at least be unmasked as such. This talk will look into recent research on the computational detection and analysis of bias in news articles, aiming for a better understanding of the manifestation of media bias in language. I will also give insights into how to mitigate media bias using neural style transfer techniques. On this basis, I will discuss how technology may support self-determined opinion formation in the biased, post-factual age we live in.

By Henning Wachsmuth

Henning Wachsmuth is a junior professor at Paderborn University in Paderborn, Germany. He finished his PhD in Paderborn in 2015 and then worked as a postdoc at Bauhaus-Universität Weimar. Since 2018, Henning has led the Computational Social Science Group at the Department of Computer Science in Paderborn.

Henning’s group studies how intentions and views of people are reflected in their language and how machines can understand and imitate this with natural language processing methods. A main focus is on the computational analysis and synthesis of argumentation. The goal underlying their research is to support self-determined opinion formation in times of fake news and alternative facts through technology.
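
As a toy illustration of the kind of supervised setup that research on bias detection in news articles commonly builds on, the sketch below trains a simple bag-of-words classifier on a few labelled snippets. The data, labels and model choice are illustrative assumptions for this sketch, not the speaker's actual approach or datasets.

```python
# A minimal, hypothetical sketch of supervised media-bias detection:
# the snippets and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: article snippets labelled "biased" / "neutral".
articles = [
    "The corrupt elites are once again betraying hard-working citizens.",
    "The committee published its quarterly report on Tuesday.",
    "Only a fool would believe the government's so-called statistics.",
    "Unemployment fell by 0.2 percentage points last month, the agency said.",
]
labels = ["biased", "neutral", "biased", "neutral"]

# TF-IDF features plus a linear classifier: a common, simple baseline for
# studying how bias manifests in word choice.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

print(model.predict(["Officials announced the new budget figures today."]))
```

Such a baseline only captures lexical cues of bias; the talk's focus on understanding how bias manifests in language, and on mitigating it via style transfer, goes well beyond this.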

Invited Talk: Matching similar claims: From research to practice

Misinformation is often remixed and reshared online resulting in many distinct yet similar pieces of content. Grouping similar claims is useful to highlight existing fact-checks and to prioritize content for human fact-checking, which can be a time-consuming process. This talk will present research evaluating the performance of multilingual text similarity methods for grouping social media content making similar claims and discuss the development of that research into a practical solution for use by fact-checking organizations operating misinformation tiplines on WhatsApp and other platforms.

By Scott A. Hale

Dr Scott A. Hale is a Director of Research at Meedan, a non-profit building digital tools for global journalism and translation. He sets strategy and oversees research on widening access to quality information online and seeks to foster greater academic–industry collaboration through chairing multistakeholder groups, developing and releasing real-world datasets, and connecting academic and industry organizations. Scott is also a member of the Credibility Coalition, a Senior Research Fellow at the Oxford Internet Institute, University of Oxford, and a Fellow at the Alan Turing Institute.
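
To make the claim-matching idea concrete, here is a minimal sketch using multilingual sentence embeddings. It assumes the sentence-transformers library; the model name, the 0.8 similarity threshold and the greedy grouping strategy are illustrative choices, not the methods evaluated in the talk.

```python
# A minimal sketch of grouping similar claims with multilingual embeddings.
# Model choice and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

claims = [
    "Drinking hot water cures the virus.",
    "Beber agua caliente cura el virus.",  # Spanish restatement of the claim
    "The new 5G towers spread the disease.",
    "Hot water kills the virus if you drink it.",
]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(claims, convert_to_tensor=True)

# Greedy grouping: add each claim to the first group whose representative
# is similar enough; otherwise start a new group.
groups = []  # list of lists of claim indices
for i in range(len(claims)):
    for group in groups:
        if util.cos_sim(embeddings[i], embeddings[group[0]]).item() > 0.8:
            group.append(i)
            break
    else:
        groups.append([i])

for group in groups:
    print([claims[i] for i in group])
```

Because the embedding model is multilingual, restatements of the same claim in different languages can land in the same group, which is what makes this style of matching useful for tiplines operating across languages.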

Sponsor

This workshop is supported by the European Union under grant agreement No. 654024, SoBigData.

