Computing Veracity – the Fourth Challenge of Big Data

News

Mental Health and Social Media

PHEME partner King’s College London reports on their work in mental health and social media, one of PHEME’s two use cases.

Do they think it’s true? Automatic Rumour Stance Classification.

This post describes the task and the winning system of SemEval-2017 Task 8 (RumourEval), Subtask A: stance classification, where teams had to build systems that classify tweets discussing a rumour as supporting it, denying it, questioning its truthfulness, or merely commenting on it. Teams had access to the complete Twitter conversations and were […]
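The four stance labels can be illustrated with a minimal sketch. This is a toy keyword baseline, not the winning system described in the post; the function name, keywords, and example thread are all hypothetical, and real entries used learned models over the full conversation tree.

```python
# Minimal sketch (hypothetical baseline, NOT the winning RumourEval system):
# the four stance labels from Subtask A, assigned to replies in a thread
# by a toy keyword rule.
STANCES = ["support", "deny", "query", "comment"]

def toy_stance(reply_text: str) -> str:
    """Assign a stance label to one reply using simple keyword cues."""
    text = reply_text.lower()
    if any(w in text for w in ("true", "confirmed", "agree")):
        return "support"
    if any(w in text for w in ("false", "fake", "hoax", "not true")):
        return "deny"
    if "?" in text:
        return "query"
    return "comment"

# A made-up reply thread discussing some rumour.
thread = [
    "This is confirmed by the police.",
    "Is there a source for this?",
    "Total hoax, don't share.",
    "Wow, unbelievable.",
]
print([toy_stance(t) for t in thread])
# → ['support', 'query', 'deny', 'comment']
```

In the actual task, systems also saw the tree structure of the conversation, which later PHEME work (see the paper on tree-structured conversations below) exploited as a sequential signal.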

RumourEval: bringing veracity checking to the community

While Pheme has been about advancing our own technology and understanding around veracity and fake news, there's a much broader opportunity to make progress if these advances are shared. One way we can do this is by presenting problems from Pheme to the research community, using our data and our framing of the rumour detection problem. SemEval is […]

Work on automatic detection of online claims far from over

Before wrapping up the project, Pheme partner swissinfo.ch caught up with project coordinator Kalina Bontcheva from the University of Sheffield to get her take on how far technologies designed to identify questionable claims have come since Pheme was launched. The answer: far, but not far enough given the current context of fake news and other […]


Country-level geolocation of tweets

A new paper in collaboration between the University of Warwick and the University of St. Andrews, exploring the ability to determine the country that a tweet has been posted from, has been accepted for publication in IEEE Transactions on Knowledge and Data Engineering. This study explores how tweets from an unfiltered stream can be effectively […]

How does a fake news story develop?

Tracing rumours to their origin shows us how they develop, and how they are spread. Often, they’re seeded in many places, with a grain of truth, and described in very specific and predictable ways. These seeds are then picked up and spread (by what Facebook calls “amplifiers”) until they become somewhat credible stories with a huge audience. […]

Workshop on Noisy, Unstructured Text

We’re holding a workshop on dealing with noisy text, like the social media content that Pheme relies upon. The workshop will be held in Copenhagen, Denmark on September 7, 2017, in conjunction with the top-tier EMNLP conference. Here’s the official call for papers. Call for Papers We seek submissions of regular papers on original and […]

Supporting the Use of User Generated Content in Journalistic Practice

A paper written jointly by researchers from the University of Warwick, swissinfo.ch, Lancaster University and the University of Siegen will be presented at the flagship Human-Computer Interaction conference, CHI, in Denver in May this year. The paper is entitled ‘Supporting the Use of User Generated Content in Journalistic Practice’ and documents how initial studies of […]

Exploiting Tree-Structured Conversations for Rumour Stance Classification

A new paper led by the University of Warwick, in collaboration with the University of Sheffield, has explored the ability to classify rumour stance by leveraging the discursive nature of Twitter conversations. The paper is titled “Stance Classification in Rumours as a Sequential Task Exploiting the Tree Structure of Social Media Conversations”, published at the Natural […]

Limiting the spread of fake news on social media

As talk of fake news online reached fever pitch following the United States presidential election, Pheme partner SWI swissinfo.ch took a closer look at work already underway to address the spread of misinformation. Naturally, the first person swissinfo.ch turned to was project coordinator Kalina Bontcheva, who explained some of the contributions Pheme is making in […]

