CHILDREN COULD GET ONLINE GROOMING 'ALERTS' - NSPCC
29 January 2018
Children at risk of online grooming should be sent automatic alerts as part of the government's internet safety strategy, the NSPCC has said.
The children's charity said existing algorithms could be used to flag suspected groomers to moderators.
A "staggering" 1,316 offences were recorded in the first six months after a new child grooming law was introduced last year in England and Wales.
Minister Matt Hancock said he would be robust with social media companies.
The minister for Digital, Culture, Media and Sport said the government was working on making the UK the safest place in the world to go online, and that this could and "must" include grooming alerts.
He told BBC Breakfast that, as a father of three young children, it was something that "really mattered" to him.
Before the new offence of sexual communication with a child was introduced in April, police could not intervene until groomers attempted to meet their targets face-to-face.
Of the cases recorded, the youngest victim was a seven-year-old girl, although girls aged between 12 and 15 were the most likely to be targeted by predators.
Facebook, Instagram and Snapchat were the most common sites used by offenders, making up 63% of all incidents.
The NSPCC, which campaigned to bring in the new legislation, has criticised social media companies for not making the most of the technology they already use to enforce the law.
Algorithms - the calculations that tell computers what to do - are currently used by social media companies to flag up images of child abuse, hate speech and extremist material.
The charity said the same techniques should be used to pick up "grooming language" and then send an automatic alert to both the child and moderators.
Automatically identifying malicious or illegal content is something that social networks already do, in some measure.
For example, give a machine learning system thousands of nude pictures and, much of the time, it can go on to pick out new examples of nude images.
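To give a sense of the general technique being described, below is a minimal sketch of a text classifier that flags messages for moderator review. It uses Python's scikit-learn library; the example messages, labels and decision threshold are invented placeholders for illustration, not any network's real system, which would be trained on far larger datasets and combined with other signals.

```python
# Minimal sketch of ML-based message flagging, assuming scikit-learn.
# The training messages and labels below are invented placeholders; a real
# system would be trained on large, carefully curated datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = flag for a moderator, 0 = benign.
messages = [
    "don't tell your parents about our chats",   # 1
    "this is our little secret, ok?",            # 1
    "send me a photo, just between us",          # 1
    "what time is football practice tomorrow?",  # 0
    "did you finish the maths homework?",        # 0
    "see you at school on monday",               # 0
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF converts each message into word-frequency features; logistic
# regression then learns which words and phrases correlate with the
# flagged class.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

def flag_message(text: str, threshold: float = 0.5) -> bool:
    """Return True if the message should be sent to a moderator.

    The threshold is an assumed operating point; real systems tune it
    to trade off false positives against missed cases.
    """
    score = model.predict_proba([text])[0][1]  # probability of flagged class
    return score >= threshold

print(flag_message("keep this a secret from your mum"))
```

A raw text score alone would produce many false positives; a production system would also weigh account metadata, conversation history and age signals, and route flagged conversations to human reviewers rather than acting automatically.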
However, the specifics of these algorithms are closely guarded secrets - companies like Facebook do not want competitors to know too much about their content filtering, nor to be seen to have settled on a particular way of doing things that they may later decide to change.
But concerns over how much these sites are doing to tackle problematic material are not new. While Facebook already does some work in identifying grooming behaviour, social networks in general may be reluctant to take on too much responsibility in this area.
Should any new anti-child-predator system be shown to have failings or loopholes, the networks could face even greater criticism.
Tony Stower, head of child safety online at the NSPCC, said that despite the "staggering number of offences", government and social networks are not properly working together to stop this crime from happening.
"Government's
Internet Safety Strategy must require social networks to build in technology to
keep their young users safe, rather than relying on police to step in once harm
has already been done," he said.
The NSPCC said an existing voluntary code of practice does not go far enough and has called for a mandatory code to be put in place.
Meanwhile, Facebook said it was already using technology to identify grooming behaviour.
Vera Baird, victim affairs lead at the Association of Police and Crime Commissioners, said she expected the number of cases to be higher, given the "endemic" scale of online grooming.
She said alerts are "imperative" to prevention, but should be accompanied by sex and relationships education so that children know how to respond to such a warning.
The Home Office said £20m was spent pursuing grooming offenders in 2017.