Over the last week, two stories have dominated the media: the Independent journalist Guy Adams had his Twitter account suspended at NBC's request after he complained of their Olympics coverage and tweeted the work email of NBC Olympics President Gary Zenkel; and the Olympic diver Tom Daley and Blue Peter presenter Helen Skelton were victims of trolling (cyber bullying), which resulted in the arrest of 17-year-old Reece Messer and in Skelton removing herself from Twitter entirely. The people involved in these cases are neither the first nor the last to encounter online harassment, or the heavy-handedness of social media policing, but their stories raise an important question: how should we move forward in setting tighter rules and regulations for the social media networks responsible for handling both those who report abuse and those accused of it?
We all know social media is a great form of communication, but these recent and perhaps somewhat unnecessary actions, widely picked up by the press, have affected both the victims and the accused in significant ways. Did a menacing but thoughtless tweet really warrant an arrest? When does a provocative tweet become defamatory enough for local law enforcement to come into play, and how do we ensure we maintain our right to freedom of speech? The Malicious Communications Act 1988, designed originally to combat poison-pen letters and since updated by the Communications Act 2003, states:
“anyone who sends to another person a message which is indecent, grossly offensive, a threat, which contains information which is false (or believed to be false) by the sender or is in some way “indecent or grossly offensive” and which is intended to cause distress or anxiety to the recipient is punishable by a fine or a six month custodial sentence.”
We can all agree there is a line that should never be crossed, but does a person have the right to report a tweet simply because it says something they don't want to hear? In an interview on BBC Radio 4's Today programme, Chief Constable Stuart Hyde, who leads on e-Crime for the Association of Chief Police Officers (Acpo), admitted the legislation "gave them a lot of power to do stuff but wasn't particularly created for this [social media], but it works reasonably well most of the time." He went on to say police were right to intervene in Tom Daley's case, where individuals were being targeted by Twitter trolls, but added that a "common sense" approach is needed. Some experts have called for initiatives to educate the public on the legal risks of social media usage, as it is clear this "common sense" needs a clear remit, especially while the web is redefining the way we communicate.
What counts as acceptable behaviour online is straightforward: what you say online is just as punishable by law as what you say offline, and people need to be aware that their tweets have consequences and can be used against them in a court of law. Traditional copyright laws were adapted for the world wide web on the principle that the internet publishes content much like editorial. Yet the endless generation of information and content online now means that rights over audio and visual content have become almost impossible to enforce – just ask the music industry how it's doing. Web 2.0 is changing the functionality of the internet yet again, as we begin to rebuild the web around people and relationships as well as content. So how do we regulate this new age of conversation to give us the same level of protection online as we have offline?
Social media has transformed freedom of speech, enabling us to spread opinion and messages to the far corners of the globe within minutes; just think back to how influential Twitter was in the Egyptian Revolution of 2011. Yet without these social media networks protecting us with effective regulation and improved reporting systems that treat both sides fairly, we are left relying on local laws to govern how we can communicate. Hyde supports this, adding: "I think there is a case that if you are going to run it as a commercial organisation, then you have got to allow people to use it safely and securely, and have the processes in place where people are acting in a strange way." Abuse can be found in the comments of YouTube, the feeds of Twitter and the Walls and Pages of Facebook every day, yet many find it difficult or near impossible to get these companies to police their own networks effectively – the man-hours required and the sheer volume of content being the most likely reasons why.
Paul Chambers of the #TwitterJokeTrial was finally acquitted this month by the High Court after a two-and-a-half-year legal battle, so perhaps we are finally moving towards reforming our own UK laws. But until social networks step up to the mark, here are a few points to help prevent such situations from happening to us:
- Twitter is a public forum of communication, so be prepared to take the rough with the smooth and rise above it
- Know there is a difference between someone being negative and being defamatory or threatening
- Consider the bigger picture and the context of any online conversation; don't provoke unnecessary attention just because your name has been mentioned
- Think twice before engaging with any harassment, dispute or argument, as it can encourage more negative behaviour
- You can always block and ignore; abusive users often lose interest once they realise they can't get a reaction out of you
- Refer to Twitter's Help Centre regarding any abusive behaviour you might encounter
- Think twice before involving local law enforcement; the publicity and public perception from such involvement could backfire on you
- Seek specialist advice from your social media manager, who should have relationships with online policing experts