Kiwi companies should be upfront with customers about what their data-harvesting artificial intelligence programmes do, a new report finds.
The review, just published by Chartered Accountants Australia and New Zealand (CAANZ), said these guidelines should be shared with consumers so they could better decide which businesses they chose to deal with.
The report also suggested data-trawling AI algorithms should be designed so they could be reviewed by a third party.
From smartphones to smart cars, AI has invaded every aspect of our lives, but the hottest field of AI right now is machine learning – using statistical techniques to help systems learn from data.
There was now heightened concern around how our personal information was being collected and used – not just by global giants like Google and Facebook, but also by Government ministries and agencies.
"In our world of fake news and privacy concerns, we are currently at an ethical crossroad where we need to determine the right direction for the development of machine learning and AI," said CAANZ's business reform leader, Karen McWilliams.
"By setting the right ethical framework now, we have an opportunity to design a new AI-enabled world, which could create a more inclusive global society and sustainable economy than exists today."
The report explored the upsides of AI, such as more powerful learning and research tools that could lead to medical breakthroughs or better predict customer behaviour.
But there were worrying concerns around privacy, data security and the potential for social re-engineering.
There was also the "strong possibility that vast numbers of the current workforce, and current graduates, may find themselves made obsolete because AI applications can do their work faster and more accurately".
All this, the report noted, was happening while there were no commonly agreed policies or plans.
It was only in the past few months that the AI Forum of New Zealand launched a paper that set out the first steps towards building "a cohesive national strategy" around AI.
The Australian Government has meanwhile committed millions of dollars in its budget to AI development, including building an ethical framework.
"The absence of transparency and a full understanding of how [AI] algorithms work creates significant ethical issues," the new report found.
However, it warned that over-regulating AI would be a "simplistic reaction".
"AI projects and further investments in AI will simply be moved to more relaxed regulatory regimes."
The report comes after a group of University of Otago experts called for a new watchdog to regulate how government-run AI sifted through Kiwi data.
That call was in response to increasing use of sophisticated new data tools and predictive analytics by state agencies - sometimes without the knowledge of those whose information is being used.
Immigration NZ had been piloting a profiling system to find overstayers, and public insurer ACC had been using a model to predict how long clients would be on its books.
The approach also came under the spotlight when former social development minister Anne Tolley angrily blocked a Ministry of Social Development trial that proposed to test its model by risk-rating tens of thousands of newborns - and waiting for two years to see if its predictions proved accurate.
The experts, working under the Law Foundation-funded Artificial Intelligence and Law in New Zealand Project, agreed there was a place for such AI-based programmes, but argued they needed to be watched and that the public should not be kept in the dark.
The Government is holding discussions on the issue across different departments, and is also leading a new working group on digital rights following February's global D7 summit.