Why terror video keeps reappearing online

Days after the New Zealand mosque shootings, copies of the killer’s livestreamed footage were still circulating online despite attempts to remove it. Wired magazine said detecting such footage with artificial intelligence was “a lot harder than it sounds”, hence the use of human moderators trained to look for warning signs in Facebook Live videos, such as “crying, pleading, begging” and the “display or sound of guns”. Facebook was fingerprinting each copy it removed so that re-uploads could be blocked automatically, but Google said it would not take down extracts deemed to have news value, putting it, said Wired, “in the tricky position of having to decide which videos are, in fact, newsworthy”. The piece goes on to examine the ethics of YouTube and Facebook policies under which offensive footage may be removed unless it is posted by a news organisation; YouTube has been criticised for removing videos of atrocities that were valued by researchers. The article also points to the lack of regulation, or “big stick” incentives, pushing social media companies to solve the problem. Read the piece here.
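The “fingerprinting” Wired describes works, in broad terms, by reducing each removed video to a compact perceptual hash and comparing new uploads against a blocklist of those hashes. Below is a minimal sketch of the idea in Python, assuming frames have already been extracted to image files; the 8x8 average-hash used here is an illustrative stand-in, not the platforms’ actual systems.

```python
# Minimal sketch of perceptual fingerprinting for re-upload detection.
# Assumes video frames have already been extracted to image files; the
# average-hash below is an illustrative stand-in for whatever
# fingerprinting the platforms actually run.
from PIL import Image  # pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Reduce a frame to a 64-bit fingerprint: shrink it, convert to
    grayscale, then set a bit for each pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")


def matches_blocklist(frame_path: str, blocklist: set[int],
                      threshold: int = 10) -> bool:
    """A frame 'matches' if its hash is within `threshold` bits of any
    blocked hash. Small edits (re-encoding, cropping, added logos)
    flip a few bits, so exact matching alone is not enough."""
    h = average_hash(frame_path)
    return any(hamming(h, blocked) <= threshold for blocked in blocklist)
```

The Hamming-distance threshold is the crux: uploaders who re-encode, mirror, or crop the footage change the fingerprint just enough to slip past exact matching, which is part of why copies kept reappearing despite the blocking effort.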

“A video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user.” – Google lawyer Kent Walker, writing in 2017. Read his op-ed here.