Workshop
Sat Dec 09 08:00 AM -- 06:30 PM (PST) @ 103 C
Workshop on Prioritising Online Content
John Shawe-Taylor · Massimiliano Pontil · Nicolò Cesa-Bianchi · Emine Yilmaz · Chris Watkins · Sebastian Riedel · Marko Grobelnik

Workshop Home Page

Social media and other online sources play a critical role in distributing news and informing public opinion. Initially it seemed that democratising the dissemination of information and news through online media might be wholly good, but during the last year we have witnessed other, perhaps less positive, effects.
The algorithms that prioritise content for users aim to serve information that each user will be ‘liked’ by, in order to retain their attention and interest. These algorithms are now well tuned and are indeed able to match content to different users’ preferences. This has meant that users increasingly see content that aligns with their world view, confirms their beliefs, and supports their opinions; in short, content that maintains their ‘information bubble’, creating so-called echo chambers. As a result, views have often become more polarised rather than less, with people expressing genuine disbelief that fellow citizens could possibly countenance alternative opinions, be they pro- or anti-Brexit, pro- or anti-Trump. Perhaps the most extreme example is fake news, in which stories are fabricated precisely to satisfy and reinforce certain beliefs.
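As a toy illustration of this feedback loop (a hypothetical sketch, not any platform’s actual system), the following Python snippet ranks items by predicted affinity to an inferred preference vector and feeds the displayed items back into that estimate. The embeddings, feed size, and the 0.9/0.1 update rule are all illustrative assumptions; the point is that the feed quickly locks onto a narrow, stable set of items.

```python
import numpy as np

# Hypothetical engagement-driven prioritisation: rank by predicted 'like'
# score, then let what was shown reinforce the preference estimate.
rng = np.random.default_rng(0)
items = rng.normal(size=(500, 8))                 # topic embeddings of candidate posts
items /= np.linalg.norm(items, axis=1, keepdims=True)
user = rng.normal(size=8)                         # inferred user preference vector
user /= np.linalg.norm(user)

prev = set()
for step in range(30):
    feed = set(np.argsort(items @ user)[-10:])    # top-10 by predicted engagement
    if step % 10 == 9:
        # The feed typically stabilises to the same narrow set of items.
        print(f"step {step}: overlap with previous feed = {len(feed & prev)}/10")
    shown = items[list(feed)]
    user = 0.9 * user + 0.1 * shown.mean(axis=0)  # engagement reinforces the estimate
    user /= np.linalg.norm(user)
    prev = feed
```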
This polarisation of views cannot be beneficial for society. As the success of computer science, and more specifically machine learning, has led to this undesirable situation, it is natural that we should now ask how online content might be prioritised so that users remain satisfied with an outlet but are not led towards more extreme and polarised opinions.
What is the effect of content prioritisation, and more generally of the affordances of the social network, on the nature of discussion and debate? Social networks could potentially enable society-wide debate and collective intelligence. On the other hand, they could also encourage communal reinforcement by enforcing conformity within friendship groups: it takes a daring person to post an opinion at odds with the majority of their friends. Each design of content prioritisation may nudge users towards particular styles of viewing content, posting it, and discussing it. What is the nature of the interaction between content presentation and users’ viewing and debate?
At one extreme, content may be prioritised ‘transparently’, according to users’ explicit choices of what they want to see, combined with open community voting and moderators whose decisions can be questioned (e.g. Reddit). At the other extreme, content may be prioritised by proprietary algorithms that model each user’s preferences and then predict what they want to see. What is the range of possible designs, and what are their effects? Could one design intelligent power-tools for moderators?
The online portal Reddit is a rare exception to the general rule, in that it has proven popular despite employing a more nuanced content-prioritisation algorithm. That approach, however, was apparently designed to manage traffic flows rather than to create a better balance of opinions. Even here, then, the algorithm’s effect on prioritisation appears only partially understood or intended.
If we view a social network as implementing a large-scale message-passing algorithm that attempts to perform inference about the state of the world and about possible interventions and/or improvements, the current prioritisation algorithms create many (typically short) cycles. It is well known that inference based on message passing can fail to converge to an optimal solution when the underlying graph contains cycles, because information circulating around a loop is counted repeatedly and hence incorrectly weighted. Perhaps a similar situation is occurring with the use of social media. Is it possible to model this phenomenon as an approximate inference task?
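To make the analogy concrete, here is a minimal sketch assuming a toy model of three binary ‘opinion’ variables on a cycle with agreement-favouring potentials (all potentials and parameters are illustrative, not from the workshop). Loopy sum-product belief propagation converges on this graph, but its marginals come out more extreme than the exact ones, because the cycle lets the same evidence circulate and be counted more than once.

```python
import numpy as np
from itertools import product

# Toy model: three binary variables on a cycle; potentials favour agreement.
psi = np.array([[2.0, 1.0],
                [1.0, 2.0]])          # pairwise potential, agreement weighted 2:1
phi = np.array([0.6, 0.4])            # identical unary bias on every node
edges = [(0, 1), (1, 2), (2, 0)]      # the cycle that creates the feedback

# Exact marginal of node 0 by brute-force enumeration.
exact = np.zeros(2)
for x in product([0, 1], repeat=3):
    w = np.prod([phi[xi] for xi in x]) * np.prod([psi[x[i], x[j]] for i, j in edges])
    exact[x[0]] += w
exact /= exact.sum()

# Loopy sum-product BP: m[(i, j)] is the message from node i to node j.
m = {e: np.ones(2) for e in edges + [(j, i) for i, j in edges]}
for _ in range(50):
    new = {}
    for (i, j) in m:
        # Product of messages into i from neighbours other than j.
        inc = np.prod([m[(k, t)] for (k, t) in m if t == i and k != j], axis=0)
        msg = psi.T @ (phi * inc)     # sum over x_i of psi(x_i, x_j) phi(x_i) inc(x_i)
        new[(i, j)] = msg / msg.sum()
    m = new

belief = phi * np.prod([m[(k, t)] for (k, t) in m if t == 0], axis=0)
belief /= belief.sum()

print("exact marginal of node 0:", exact)   # approx [0.678, 0.322]
print("loopy BP belief:         ", belief)  # approx [0.689, 0.311] -- overconfident
```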
The workshop will provide a forum for the presentation and discussion of analyses of online content prioritisation, with emphasis on the biases that such prioritisation introduces and reinforces. Particular interest will be given to presentations of alternative ways of prioritising content that can be argued to reduce the negative side-effects of current methods while maintaining user loyalty.

Call for contributions - see conference web page via link above.

We will issue a call for contributions highlighting, but not restricted to, the following themes:
(*) predicting future global events from media
(*) detecting and predicting new major trends in the scientific literature
(*) enhancing content with information from fact checkers
(*) detection of fake news
(*) detecting and mitigating tribalism among online personas
(*) adapted and improved mechanisms of information spreading
(*) algorithmic fairness in machine learning

Automating textual claim verification (Talk)
Reducing controversy by connecting opposing views (Presentation)
Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation (Presentation)
A Framework for Automated Fact-Checking for Real-Time Validation of Emerging Claims on the Web (Presentation)
Equality of Opportunity in Rankings (Spotlight)
The Unfair Externalities of Exploration (Spotlight)
Developing an Information Source Lexicon (Spotlight)
Mitigating the spread of fake news by identifying and disrupting echo chambers (Spotlight)
An Efficient Method to Impose Fairness in Linear Models (Spotlight)
Poster session (Posters)
Lunch Break (Lunch)
Philosophy and ethics of defining, identifying, and tackling fake news and inappropriate content (Debate)
Political echo chambers in social media (Talk)
Coffee Break (Break)
Reality around fake news (Debate)