Dishonesty online is being passed off as satire and humour by the political parties and activists responsible for fake videos and websites, says Alastair Reid of First Draft News in an article published by journalism.co.uk. He was responding to a video doctored by the Conservative Party and published on its official Twitter account, which falsely showed Labour’s shadow Brexit secretary unable to answer a question on TV. The BBC’s Andrew Neil had to apologise for sharing a manipulated video of an SNP politician during the 2019 general election. Facebook subsequently said it would ban some deepfake political videos – but not all. This is a hazard for journalists, who need to be wary of being taken in, as happened with a fake website for US presidential candidate Joe Biden. It is also of interest because satire can be used as a defence in a defamation case.
Emily Bell, one of the world’s most highly respected commentators on the media, has written a post for the Columbia Journalism Review that draws together strands of important thinking on the dilemmas of terrorism fed by social media, and the emerging wisdom on the ethics for mainstream media covering it. Messages include: “Do not report facts until they are verified, do not focus on the perpetrator over the victims, do not use sensational language that might glamorize the terrorist.” Read it here (and find many more internationally focused articles on the media in the same place).
Lots of angles for any 2006MAPA students looking at ethical fallout from the New Zealand terror attack in the excellent Columbia Journalism Review newsletter: “For Australia’s ABC, Rashna Farrukh, a Muslim woman who worked in a junior role at Sky News Australia, explains that last week’s mosque massacre in New Zealand drove her to quit the channel, which she characterizes as a platform for incendiary right-wing rhetoric. Sky is owned by Rupert Murdoch. In the aftermath of the attack, Sky New Zealand said it removed Sky News Australia from its platform while the latter was still showing clips from the mosque shooter’s video. New Zealand’s chief censor subsequently made it illegal to view, possess, or share that video.”
Google and YouTube were working with governments to confront violent extremism online, wrote Google lawyer Kent Walker in 2017. Thousands of people were being enlisted as trusted flaggers of offensive content. But “a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user.” Read his op-ed piece here.
Days after the New Zealand mosque shootings, copies of the killer’s live web footage were circulating online, despite attempts to remove it. Wired magazine said detecting such footage using artificial intelligence was “a lot harder than it sounds”, hence the use of human moderators trained to look for warning signs in Live videos, like “crying, pleading, begging” and the “display or sound of guns”. Facebook was tagging all removed footage to prevent it from being reposted, but Google said it would not take down extracts deemed to have news value, putting it, said Wired, “in the tricky position of having to decide which videos are, in fact, newsworthy”. The piece goes on to look at the ethics of YouTube and Facebook policies that mean offensive footage may be removed, unless posted by a news organisation. YouTube has been criticised for removing videos of atrocities that were valued by researchers. The article points to the lack of regulation, or “big stick” incentives for social media companies to solve the problem. Read the piece here.
As of the middle of 2018, there was no regulator for social media in the UK, noted the Independent Press Standards Organisation (IPSO) in its blog on 13 July 2018. A tricky topic: is it really feasible to regulate the content of individual users? Can platforms realistically keep on top of all the content?
Across the European Union, TV and on-demand video is regulated under the Audiovisual Media Services Directive (known as the AVMS Directive). In Britain the actual work of regulating broadcast and on-demand services is carried out by Ofcom.
At the start of 2019, video on newspaper and magazine websites was not regulated by Ofcom: it fell under the ambit of the print and online regulators – IPSO for most publishers. The same was true for online-only publications signed up to IPSO.
The EU could change that.
The proliferation of multimedia content meant “newspaper websites could, in theory, start to resemble video-sharing platforms,” said the IPSO blog.
But this website notes that the ethical codes for newspaper sites are nowhere near as demanding as for broadcast media.
Read the full IPSO blog post here.
The Mail Online agreed to take down a video showing a bullying attack on a schoolgirl after the mother of one of the alleged bullies said it breached her daughter’s right to privacy. The Editors’ Code section on photographing children was also considered. The decision avoided the need for an IPSO ruling. Read more.