Social media are platforms where personal freedom of expression is virtually unlimited, where users are kept engaged by a myriad of new content every day and are, in turn, encouraged to generate news and share information themselves.
This is the beauty of today's Internet paradigm: the user becomes the protagonist of a continuously updated, living flow of communication, free to express whatever crosses their mind.
But perhaps all this freedom should not be left entirely in users' hands. The content being shared should be checked, or flagged, so that readers can recognize when what they are reading is not entirely true, even if the line between moderation and censorship is genuinely blurred. But what happens if platforms decide to take a heavier hand in limiting content? And what if these flags reach none other than the President of the United States of America?
Just a short time ago, Twitter flagged a POTUS tweet as "glorifying violence" in connection with the protests following the death of George Floyd. Trump's reaction, predictably, was not long in coming: he soon signed an executive order aimed at reducing the legal protections enjoyed by social networks and online platforms, which, precisely because they are platforms and not newspapers, are under no obligation or responsibility to control the content published on them.
This move sparked comments and reflections from many sides. Several platform owners expressed concern about the dispute; Mark Zuckerberg, Facebook's CEO, went further, criticizing Twitter's decision to intervene on content, even though the social giants' long-standing commitment to countering fake news remains important.
Thanks to artificial intelligence and deep learning algorithms, millions upon millions of pieces of textual and visual data are scanned and analyzed for clues of false or malicious content, leading to that content being flagged or removed. However, detection does not always work: on the one hand, the photo of a breastfeeding mother can be mistakenly flagged as pornographic; on the other, artificial intelligence fails to recognize new "threats," allowing fresh false news to spread, as happened recently with the hoaxes about Covid-19.
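To give a sense of how automated flagging works at its simplest, the toy sketch below trains a tiny naive Bayes text classifier on a handful of hand-labelled examples and uses it to label a new post. Everything here (the labels "suspect"/"ok", the sample texts, the function names) is illustrative only; real platforms use deep learning models trained on vastly larger datasets.

```python
# Toy sketch of automated content flagging via naive Bayes.
# All names and sample data are hypothetical, for illustration only.
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Lowercase and split on whitespace; real systems use richer NLP.
    return text.lower().split()

def train(samples):
    """samples: list of (text, label) pairs. Returns a simple model."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in samples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return label_counts, word_counts

def classify(model, text):
    label_counts, word_counts = model
    total = sum(label_counts.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Log prior plus Laplace-smoothed log likelihood of each word.
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

samples = [
    ("miracle cure doctors hate this secret", "suspect"),
    ("shocking hoax they will never tell you", "suspect"),
    ("city council approves new budget today", "ok"),
    ("weather forecast rain expected tomorrow", "ok"),
]
model = train(samples)
print(classify(model, "shocking secret cure they hate"))  # prints "suspect"
```

A model this small also illustrates the failure modes described above: it can only recognize word patterns it has already seen, so genuinely new hoaxes slip through, while innocuous posts that happen to share vocabulary with bad ones get flagged.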
There is also the work of human reviewers, who carry on the fact-checking of information, but despite this, the war on fake news is still hard-fought: even when the content itself is eventually forgotten (insofar as anything online is, since "Internet data manent"), the authors are difficult to punish, the news has already spawned a chain of derivative content that perpetuates its harmful influence, and in general there is simply too much content, good and bad, to check.
But on the other side of the debate, new questions arise: who decides what is right and what is wrong? Who determines how harmful a piece of news is? Will social media platforms set up "news validity" committees? How will the people dedicated to this activity be selected? Won't these limitations and "precautions" lead to a form of censorship? And what kind of information will it then be possible to convey, creating a new form of agenda setting?
So, for social platforms, the needle of the balance swings between users' freedom to write whatever they like and the commitment to counter fake-news phenomena as much as possible. While the management and verification of these information flows is difficult and complex on the platforms' side, users themselves can be the main "self-reviewers": checking sources, looking for further evidence to support a thesis, sharing official content and, at times, even staying silent when too little is known can be the first steps towards a more aware Internet.
E-Business Consulting, active since 2003, is a leading company in digital communication and Internet advertising and can help you create your online presence thanks to its experience. Call us and request a consultation!