NSPCC: Social media networks failing to tackle online child abuse
Bosses at the charity have set out a three-step “rulebook”, which they want enforced by independent regulators.
Social media firms have failed to tackle online child abuse, grooming and bullying, a leading children’s charity has said.
The NSPCC has called on the Government to create new laws forcing internet giants such as Facebook and Twitter to do more to stop the rising problem, with those that fail to meet the standards sanctioned and fined.
The calls come weeks after Prime Minister Theresa May put pressure on social media networks to clamp down on terrorist or extremist content on their sites in the wake of the Manchester bombing.
NSPCC bosses have set out a three-step “rulebook”, which they want enforced by independent regulators. It would mean “safe accounts” with the highest privacy settings for under-18s, grooming and bullying notifications for youngsters being targeted and child safety moderators employed by all networks.
The charity’s chief executive Peter Wanless said leaving sites to make up their own rules was unacceptable, adding: “Enough is enough.”
“We need legally enforceable universal safety standards that are built in from the start,” he said.
“We’ve seen time and time again social media sites allowing violent, abusive or illegal content to appear unchecked on their sites, and in the very worst cases children have died after being targeted by predators or seeing self-harm films posted online.
“It makes no sense that in the real world we protect children from going to night clubs or seeing over-18 films, but in the online world, where they spend so much of their time, we have no equivalent safeguards to protect them from harmful strangers or violent content.
“Government must urgently bring in safe accounts, groomer alerts and specially trained child safety moderators as a bare minimum to protect our children. And websites who fail to meet these standards should be sanctioned and fined.”
The safe accounts would have location settings switched off, would not be searchable by email or phone number, and would require new followers to be approved.
Grooming alerts would flag harmful behaviour, such as adults with a high rejection rate of friend requests to under-18s. It could also use artificial intelligence to pick up on “sinister messages”.
Research by the NSPCC and O2 found that four out of five children feel social media sites are not doing enough to protect them from pornography, self-harm, bullying and hatred.
Last year, Childline gave 12,248 counselling sessions about online safety and abuse. Of those, 5,103 mentioned cyber-bullying – up 12% from the previous year.
Over the same period, there were 2,123 counselling sessions about online child sexual exploitation – up 44%.