A predictive tool of a kind that sparked controversy when it was explored in New Zealand has been shown to identify US children who are also at heightened risk of being hospitalised.
The Allegheny Family Screening Tool (AFST), whose development has been led by New Zealand data expert Professor Rhema Vaithianathan, uses machine learning to predict the risk that a child will be removed from home for safety concerns within two years.
It has been used for the past four years to support screening decisions about children referred for alleged abuse or neglect in Allegheny County, Pennsylvania.
While the model has been shown to be effective at predicting which children were at risk of being removed from home, until now, there was no evidence those who were flagged were also at higher risk of becoming hospital cases.
Vaithianathan and her colleagues used a one-off link between the child protection system and Pittsburgh Children's Hospital data to show the flagged children were at considerably higher risk of being hospitalised with injury, abuse and self-harm.
She said the study, just published in the journal JAMA Pediatrics, confirmed that such models could find children at real risk of harm, and she felt it should settle a long-standing debate over using child removal as a "proxy" for future harm.
"Secondly, we can report that 'hospitalisation risk' is sharply heightened for children receiving the highest AFST risk scores," said Vaithianathan, who heads Auckland University of Technology's Centre for Social Data Analytics.
"For example, children with high AFST scores are nine times as likely to experience an abusive injury hospital encounter and 10 times as likely to experience suicide or self-inflicted harm as children with low scores.
"This finding suggests that increasing child protection interventions for children at high risk of placement may assist child-protection systems striving to reduce harm to children."
Her team also found the model was particularly sensitive to the risk facing white American children, which means authorities could potentially be "under-protecting" those children.
"For example, white children who are flagged as high risk by the AFST are 13 times as likely to suffer abusive injury hospitalisation and 15 times as likely to suffer self-harm or suicide hospitalisation as low-risk white children - which is larger than for the overall population."
She told the Herald the same methods used in the study could be adapted to a New Zealand setting, with the potential to build a call-screening decision support tool using nationwide child welfare data.
"We could then access nationwide healthcare data to validate the tool. This could all be done as a theoretical exercise or as a deployed tool," she said.
"In a New Zealand study, we would 'see' all children - whereas in the county-based study, children enter and exit the county databases as they migrate across county lines."
That has not yet been done in New Zealand, although officials from the Allegheny Department of Human Services have been invited to share their experiences with Government agencies.
In 2012, the Ministry of Social Development commissioned Vaithianathan to develop a new predictive risk modelling tool that attempted to identify those at risk of physical, sexual or emotional abuse before the age of 2.
Three years later, it emerged that ethical approval had been sought for another observational study, in which a group of 60,000 newborns would have been assessed for risk using the tool, with researchers then tracking whether those deemed high-risk went on to suffer abuse.
That move was angrily halted by then Social Development Minister Anne Tolley, who wrote in the margins of a document outlining the proposal: "not on my watch, these are children not lab rats".
Labour's children's spokesperson at the time, Jacinda Ardern, now Prime Minister, said her party had long warned the model was "playing with kids' lives".
The issue also prompted a series of editorials from experts raising concerns about privacy and data ethics, and questioning whether big data could reliably be used to counter child abuse.
Vaithianathan said she felt these issues could be overcome with careful design and transparency.
"As with most of these tools, once we've shown their power in protecting children and potentially reducing racial disparities in investigation, we would support the Government in starting a national conversation to consider what guardrails would be required for such a tool to have social licence."