Children are being targeted with graphic online content, sometimes within hours of setting up social media accounts, a report has revealed.
Researchers created avatars based on information from real teenagers aged 13 to 17, including who they follow and what posts they like.
But despite the accounts’ stated ages, it wasn’t long before the fake profiles were receiving an array of inappropriate material.
“We saw a lot of very graphic self-harm imagery, images of razors, of cuts,” said Abi Perry, a 24-year-old researcher at Revealing Reality, which carried out the work.
“They were able to see content that was promoting diets to them and saw a lot of very sexualised images.
“We were able to search porn, for example, and click through to content that showed explicit images.”
Many of the avatars were contacted by unknown adults just hours after signing up.
Within a day, the fake profile for 14-year-old “Justin” had received three separate direct messages linking to sites offering paid-for porn.
The experiment was commissioned by the children’s safety group 5Rights Foundation and the children’s commissioner for England.
They are calling for rules on how online services are designed.
Tony Stower, director of external engagement at the foundation, said: “In the offline world we put in place a whole range of protections for children, so they can’t go into R18 films. Of course we don’t give them access to pornography, and knives and alcohol.
“But in the online world, these services are designed specifically to allow that.
“What we’re calling for is for those services to put the same kind of protections that exist in the offline world into those digital services, so that children are protected from the moment they go online.”
The research also found that children are being targeted with age-specific advertising, such as information on college courses, even as the same platforms make sexual or self-harm content available to them.
And “a child who clicks on a dieting tip, by the end of the week, is recommended bodies so unachievable that they distort any sense of what a body should look like”, the report said.
Hearing about the research has taken Ian Russell back to “the horror” of his daughter Molly’s death. She was about to turn 15 when she took her own life after viewing graphic self-harm and suicide content online.
“The priority of the platforms is profit,” he said.
“They are designed to keep people on there as long as possible with scant thought for the safety of the people, particularly young people online, so that’s what has to change.
“People’s safety has to come first, so that they are not led down these rabbit holes and the algorithms don’t push ever more harmful content to the people who are using their platform.”
An age-appropriate design code will come into force in September, with the Information Commissioner’s Office (ICO) able to levy fines and other punishments on services that fail to build in, by design, new safety standards around protecting the data of users under 18.
Facebook, Instagram and TikTok were all named in the report.
A spokesperson for Facebook, which also owns Instagram, said in a statement: “We agree our apps should be designed with young people’s safety in mind.
“We don’t allow pornographic content or content that promotes self-harm and we’re also taking more aggressive steps to keep teens safe, including preventing adults from sending DMs (direct messages) to teens who don’t follow them.
“It’s worth pointing out, however, that this study’s methodology is weak in a few areas: first, it seems they’ve drawn sweeping conclusions about the overall teen experience on Instagram from a handful of avatar accounts.
“Second, the posts it highlights are not ones recommended to these avatar accounts, but actively searched for or followed.
“Third, many of these examples pre-date changes we’ve made to offer support to people who search for content related to self-harm and eating disorders.”
A spokesperson for TikTok said that it had taken “industry-leading steps to promote a safe and age-appropriate experience for teens”.
“We removed 62 million videos in the first quarter of 2021 for violating our Community Guidelines, 82% of which were removed before they had received a single view.”