No answers here, but a tweet from the Financial Times includes a facial-recognition image and a line about the need to regulate the use of artificial intelligence. Newsrooms are already (in 2020) making advances in the use of AI in news coverage: for instance, to identify guests arriving at Prince Harry’s wedding. As far as privacy goes, the GDPR, the Data Protection Act and the Human Rights Act already make clear what the law is. However, AI takes away some of the human decision-making, and may involve taking images from remote cameras, so that it is not obvious to subjects that they have been photographed. Does this mean there needs to be special consideration of how to meet the obligations of the data protection laws – taking into account the exemptions for public-interest journalism, and the “legitimate purpose” under the GDPR of exercising the right to freedom of expression? (The link is to a pay-to-view story; anyone interested in exploring further will have to investigate for themselves.)
Police rebuked national media for using Facebook pictures of a boy who had been mauled to death by a dog, but a local site baulked at doing so – and won the family’s trust. Read more.
Photographers are using bots such as Picscout to find out when people have used their online images without paying – and sending large bills. People have been caught out even when using Creative Commons images, if they failed to comply with licence conditions such as full attribution, says media law consultant David Banks on his blog.
David Mascord, media law lecturer at Bournemouth University, shares tips on protecting your copyright – including putting a watermark on your social media images – on the journalism.co.uk website, here.
Days after the New Zealand mosque shootings, copies of the killer’s live web footage were circulating online, despite attempts to remove it. Wired magazine said detecting such footage using artificial intelligence was “a lot harder than it sounds”, hence the use of human moderators trained to look for warning signs in Live videos, such as “crying, pleading, begging” and the “display or sound of guns”. Facebook was tagging all removed footage to prevent it from being reposted, but Google said it would not take down extracts deemed to have news value, putting it, said Wired, “in the tricky position of having to decide which videos are, in fact, newsworthy”. The piece goes on to look at the ethics of YouTube and Facebook policies under which offensive footage may be removed unless posted by a news organisation. YouTube has been criticised for removing videos of atrocities that were valued by researchers. The article points to the lack of regulation, or “big stick” incentives, for social media companies to solve the problem. Read the piece here.
“A video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user.” – Google lawyer Kent Walker, writing in 2017. Read his op-ed here.
A former MP complained of breach of privacy after The Sun published photographs of him allegedly “nuzzling” his face in a friend’s breasts. The Independent Press Standards Organisation found in favour of the MP on privacy grounds, saying there was no public interest justification. A further complaint on grounds of accuracy was not upheld. Press Gazette’s account of the saga shows the reasoning behind the ruling. Read it here.
Scraping pictures and other information from Facebook and other social media could be unethical, the former editor Chris Frost argues in a book chapter (2018). He cites a publisher’s claim that one woman’s picture was “publicly accessible”, even though her account’s privacy setting was “family and friends”. Another picture, of a possible Manchester bombing victim, was taken from a hoax account. Both resulted in IPSO rulings. Read the full chapter here.