Coping with the Infodemic: Three Challenges and a (Possible) Solution, Co-created with TED

October 5, 2021

Lorena Moscovich, Head of Experimentation, UNDP Argentina (@lmoscovich, lorena.moscovich@undp.org)

Olivia Sohr, Director of Impact and New Initiatives, Chequeado, Argentina (@olisohr)

With the Internet and social media, we can acquire new knowledge, build communities, chat with people around the world, and learn about thousands of local and global news stories and trends just seconds after they happen. Crucial social concerns once kept in the shadows are also made visible to public opinion. However, the Internet is also a place where disinformation spreads faster and faster, and where thousands of people fall victim to harassment or hate speech.

The Dangers Seem Clear. The Solution? Not So Much

Possible solutions to counteract harmful content spreading on the Internet face at least three challenges: avoiding the amplification of content while trying to moderate or debunk it; getting people to be receptive to corrections, even when they come from others with different opinions; and using tools to control content without infringing on free speech.

What would be the most effective alternative to stop the infodemic? Starting with the last challenge: no effort by governments or civil society will be enough unless Internet users and audiences get involved in checking the content being spread on the web. This is partly because the infodemic is a massive phenomenon, and partly because the task requires safeguarding an essential plurality of voices.

Second, sharing corrections to disinformation often ends up spreading the disinformation even further. This happens when corrections circulate far less, and among different audiences, than the content they are meant to correct. Hence, fact-checkers keep developing new strategies so that their checks reach those who were exposed to the disinformation. Moreover, fact checks have been shown to reduce the number of people who share disinformation, curbing its spread. Even so, we need tools that complement the usual checks.

Finally, the hypothesis put forward is that, with more information about the kinds of content spreading on the Internet, multiplying the checks could counteract audience biases without asking anyone to rely on a single speaker's objectivity. This task would be impossible for one person, or even a team, but it is entirely feasible for the collective intelligence of hundreds of people.

To meet the three requirements described, we would need a tool that gives anyone, anywhere in the world, the ability to monitor the Internet while safeguarding the plurality of opinions. Such a tool would make it possible to categorize content as valuable or harmful and to generate aggregate, anonymous information on the scale of the disinformation in circulation. As a result, more harmful content could be identified and reported.

Fortunately, this tool already exists! Developed by the international organization TED, it is a browser extension that lets users flag content as valuable or harmful with only a few clicks and, when content is harmful, distinguish between false content, hate speech, and harassment.
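
To make the idea concrete, the sketch below shows one way such a flagging flow could be modeled. It is only an illustration: the category names mirror the ones reported later in this post, while the `Flag` type, the `submitFlag` function, and the `https://example.org/flags` endpoint are our own assumptions, not the actual HIP implementation.

```typescript
// Illustrative sketch only: the real HIP extension's internals are not
// public, so every name and endpoint below is a hypothetical stand-in.

type FlagCategory =
  | "valuable"             // ideas worth amplifying
  | "lies_or_manipulation" // false or misleading content
  | "division_or_fear"     // content sowing division or fear
  | "harassment_or_abuse"; // attacks on a person or group

interface Flag {
  url: string;            // the page being flagged
  category: FlagCategory; // the user's categorization
  flaggedAt: string;      // ISO timestamp; no user identity is attached
}

// Submit a flag anonymously: the payload carries only the URL, the
// category, and a timestamp, so reports cannot be traced back to users.
async function submitFlag(url: string, category: FlagCategory): Promise<void> {
  const flag: Flag = { url, category, flaggedAt: new Date().toISOString() };
  const response = await fetch("https://example.org/flags", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(flag),
  });
  if (!response.ok) {
    throw new Error(`Flag submission failed with status ${response.status}`);
  }
}

// Tallying anonymous flags per category yields the kind of collective
// signal described above, without revealing who flagged what.
function tallyByCategory(flags: Flag[]): Map<FlagCategory, number> {
  const counts = new Map<FlagCategory, number>();
  for (const flag of flags) {
    counts.set(flag.category, (counts.get(flag.category) ?? 0) + 1);
  }
  return counts;
}
```

In a setup like this, a content script might call `submitFlag(window.location.href, "lies_or_manipulation")` when a user clicks one of the extension's buttons, and the tally is the sort of aggregate view the pilot results below describe.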

But developments of this nature need testing. To this end, an agreement was made with the global network of Accelerator Labs of the United Nations Development Programme (UNDP) to carry out pilot tests and adaptations in different countries. The infodemic is a matter of great concern for UNDP at the global level, as well as for other agencies such as UNICEF and the World Health Organization (WHO) and, of course, for the UN Secretariat, which launched the Verified campaign.

Argentina's lab, Co_Lab, was responsible for adapting the tool to a Spanish-speaking audience and prototyping it. Together with Chequeado and TEDxRiodelaPlata, we conducted one of the pilot tests. To give the initiative a clear framework, a strategy was defined for the pilot, also supported by UNICEF and the United Nations Information Centres, focusing on the disinformation circulating about COVID-19 vaccines. We chose vaccines because of how prominent the topic was in the public debate during the testing period (May–July 2021) and because of the amount of vaccine disinformation in circulation, since we know from previous experience that disinformation can discourage people from getting vaccinated.

This small-scale pilot allowed us to adapt the tool to a Spanish-speaking audience and, with the help of two seminars, to encourage public debate about the most common biases at play when consuming Internet content that may be disinformation, and about tools to counterbalance them. In the process, we learned a lot about how to think about collective systems for monitoring content spread on the web. When the tool launched, dozens of people joined as active agents to help improve the web and use their knowledge to enrich collective debates.

HEALTHY INTERNET PROJECT: a toolkit to avoid misinformation and recognize hate speech and cases of harassment

Through social media campaigns, we invited people to download the extension and flag content. Over the testing period, 150 people downloaded the extension, though not all of them used it to flag content. About seventy flags were collected: most identified lies or manipulation, followed by valuable ideas, then content sowing division or fear, and finally a few cases of harassment or abuse.

Among the lessons learned from this pilot: the extension itself is a simple mechanism, but access is a challenge, since not everyone is used to installing extensions in their web browser. Once installed, however, the HIP (Healthy Internet Project) extension proved easy to use and appealing, allowing for extremely fast flagging. One idea that came up strongly was adapting the tool for mobile, to include the hundreds of thousands of users who browse the Internet on their phones.

Anonymity has great potential for drawing attention to disinformation, as it avoids biased interpretations of the reports and protects personal data. However, it limits communication with participants and community building. It also makes it hard to give participants feedback on the biases that can influence their categorizations, or to share outputs of the project that might motivate people to keep using the tool. Another challenge is encouraging users to keep flagging once the extension is installed, since participation tended to decline over the course of the test.

Beyond these drawbacks, there is huge potential in the collective processing of content spread on the Internet, because it multiplies both the number of flaggers and the number of posts reviewed while keeping participants anonymous. Given the amount of information circulating on the Internet, and the interest large groups of people have shown in playing an active role in improving the public debate, collective work of this kind can help promote the best of the web and mitigate its harmful aspects without affecting freedom of speech.

This first pilot allowed us to adapt a collective fact-checking tool to a local audience. Despite the low level of engagement, it shed light on how this kind of collaborative effort works and provided lessons for implementing future projects.