EARLIER this month, Sri Lanka restricted access to the internet and blocked Facebook in the wake of communal violence in the multi-ethnic Kandy district of the country. A number of Facebook pages and profiles were reported as instigating violence and spreading hate speech in the district.
The Sunday Times newspaper in Sri Lanka quoted Harendra B Dassanayake, Director, Research and International Media at the President’s office, as saying that there was one particular individual who posted a video on YouTube of himself walking around Kandy, urging people to gather around mosques as Muslims were plotting to attack Buddhist temples. This person called on them to destroy all the mosques if even a single temple was attacked.
The clip was circulated via social media. Hatred and incitement to violence were also spread via messaging apps like WhatsApp, owned by Facebook, and Viber.
“There was a WhatsApp group called ‘Muslim Media’. It circulated voice clips with step-by-step instructions on how to manufacture petrol bombs, how to mix caustic soda, battery acid and flour to make an instant explosion,” Dassanayake said.
In its response to the restrictions, a spokesperson for Facebook based in India was reported as saying the US firm is in contact with the government as well as non-government organisations in Sri Lanka to support efforts to identify and remove hate speech and incitement to violence.
“The safety of our community is critical to us. We have clear rules against hate speech or incitement to violence and work hard to keep it off our platform,” said the spokesperson, in reply to email queries sent by The Sunday Times.
The individual who did the inciting in Kandy soon found himself in police custody. His actions played on majority Buddhist sentiment. Kandy is famed for sacred Buddhist sites, including the Sri Dalada Maligawa, or Temple of the Tooth.
Aggressively Pursuing A Course Of Action
What this individual did and said comes squarely under the notion of “deliberate online falsehoods”. This has been in the news in Singapore due to the proceedings of the Select Committee on Deliberate Online Falsehoods, chaired by veteran Member of Parliament Charles Chong. Clips of the Select Committee’s exchanges with representatives of US firms like Facebook and Google have been circulating rampantly on social media.
To prevent communal violence like that seen in Kandy from unfolding elsewhere, there is little doubt that social media platforms like Facebook should do more to identify inciters of hatred and violence in real time, before their messages can spread. This is a big, ongoing challenge for social media companies.
If a government vehicle like the Select Committee can push social media giants to hasten the process, then it is doing its job in attempting to quell the spreading of deliberate online falsehoods.
However, the demarcation line on deliberate online falsehoods should be clear.
Trying to stop violence and racial hatred should certainly be a top priority.
In contrast, it is arguable whether criticisms or parodies of a government should come under the umbrella of deliberate online falsehoods. If a person is unhappy with the way a government is treating him, he does not have many channels apart from social media through which to express his reservations or distaste.
Such actions should not be legislated against, because closing this door could cause wounds to fester and become dangerous to the individual.
Indeed, there are other legal channels that can be pursued if an individual or an organisation or a government is made to look bad by social media.
So, instead of contemplating new laws to censor such activity, perhaps better education programmes are needed to make readers more perceptive and discerning about what they see and read. This would put them in a better position to discern what is a deliberate online falsehood or fake news, and what is not.
There are examples of this on the popular WhatsApp messaging system. If a gullible member of a WhatsApp chat group passes around an erroneous post from another source, he is soon condemned and corrected. He learns from his mistake and becomes more discerning next time around about passing on posts. This likely happens on WhatsApp on a daily basis. The point is that there is a lot of self-regulation going on.
Some sense of editorial integrity should be actively promoted on social media. Perhaps that is what social media firms should really focus on.
It would help people realise that macabre pictures, such as those of dead bodies, of someone being stabbed repeatedly, or of a car being run over by a truck, should be posted with more consideration for the sensitivities of those receiving the message, as well as for the victims and their families.
A better understanding of this by users of social media will narrow the gap available for inciters of violence and hatred to exploit. That last bit of the gap likely needs legislation.
At the end of the day, social media hasn’t been around for long and is still on its learning curve. Education rather than the arbitrary implementation of new laws might be a better way to handle the turbulence, both local and foreign, that sometimes gathers around it.
Thus It Was Unboxed, by One-Five-Four Analytics, presents alternative angles to current events. Reach us at firstname.lastname@example.org