It seems nowadays artificial intelligence (AI) is everywhere. Dubbed the "new electricity", AI is transforming the ways we work, learn, and play. But it has a dark side.
No, I am not talking about the existential threat it poses to the human species, which is a whole topic in itself. As a lawyer, I am concerned with the legal issues that arise when a company uses AI to make decisions that affect individuals like you and me.
Take Uber. Unlike traditional taxi fares, Uber fares are set by AI or, more accurately, machine learning algorithms.
For each ride, the fare takes into account not only the travel time and distance but also the demand at the relevant time and area.
For instance, if you are travelling from a wealthy neighbourhood, your fare is likely to be higher than that of another person travelling from a poorer part of the city, because the computer "knows" you can afford it.
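The general idea can be sketched in a few lines of code. This is a purely hypothetical formula of my own devising, not Uber's actual model, which is proprietary and far more complex; the function name, rates, and multiplier values are all invented for illustration.

```python
# A hypothetical, simplified ride-fare formula: base charge plus time and
# distance, scaled by a demand-based multiplier. Illustration only.

def estimate_fare(base_fare, per_minute, per_km, minutes, km, demand_multiplier):
    """Combine time, distance, and a demand (surge) multiplier into a fare."""
    raw = base_fare + per_minute * minutes + per_km * km
    return round(raw * demand_multiplier, 2)

# Two identical trips, priced differently only because the algorithm
# assigns one neighbourhood a higher multiplier.
quiet = estimate_fare(2.50, 0.40, 1.20, minutes=15, km=8, demand_multiplier=1.0)
busy = estimate_fare(2.50, 0.40, 1.20, minutes=15, km=8, demand_multiplier=1.8)
print(quiet, busy)  # the second rider pays substantially more for the same trip
```

The point is not the arithmetic but the multiplier: the rider never sees how it was set, or what data about them fed into it.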
Paying a few extra dollars for a ride is one thing. But AI is also being used to make decisions in other areas which have serious impacts on people's lives, such as credit scores, recruiting and promotion, medical care, crime prevention and even criminal sentencing.
While the benefits of such AI systems cannot be denied, automated decision-making suffers from two serious problems.
The first problem is non-transparency. Just as Google will not tell you how it ranks search results, AI system designers do not disclose what input data their systems rely on or which learning algorithms they use.
The reason is simple: these are trade secrets and companies do not want their competitors to know.
In the United States, a 2016 study showed that "risk scores" – scores given by a computer programme to predict the likelihood of a defendant committing a future crime – were systematically biased against black people.
However, the programme designer would not publicly disclose the calculations which it said were proprietary. As a result, it is impossible for defendants or the public to challenge the risk scores.
The second problem with automated decision-making goes deeper into how AI works. Today, many advanced AI applications use "neural networks", a type of machine learning algorithm modelled on the structure of the human brain.
While a neural network can produce accurate results, the way it does so is often impractical or impossible to explain in human logic. This is commonly referred to as the "black box" problem.
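A toy example shows why explanation is so hard. The network below is tiny and hand-built (real networks learn millions of weights from data), yet even here its "reasoning" is nothing but arithmetic over numeric weights rather than any human-readable rule.

```python
# A tiny hand-wired neural network computing XOR (exclusive-or).
# The weights and thresholds below were chosen by hand for illustration;
# in a real system they are learned, and there are millions of them.

def step(x):
    """A simple threshold activation: fire (1) if the input is positive."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: two neurons, each a weighted sum passed through a threshold.
    h1 = step(1 * a + 1 * b - 0.5)   # fires if a OR b
    h2 = step(1 * a + 1 * b - 1.5)   # fires if a AND b
    # Output neuron: combines the hidden neurons to produce XOR.
    return step(1 * h1 - 2 * h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The network is perfectly accurate, but its "decision" for any input is just a cascade of weighted sums. Asking it *why* it answered as it did yields only numbers – which is precisely the black box problem, writ small.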
Overseas regulators have started to regulate automated decision-making. Recently, the European Union passed the General Data Protection Regulation (GDPR), which will come into force in May 2018. One of its key features is the right to explanation.
In short, if a person is being subjected to automated decision-making, that person has a right to request "meaningful information about the logic involved".
And individuals have the right to opt out of automated decision-making in a wide range of situations.
The GDPR will have important implications globally. To the extent that a New Zealand company controls or processes the personal information of EU residents, it will need to comply with the GDPR, even if it has no physical presence in the EU.
Back home, the use of AI for decision-making is still rare but expanding. For instance, a credit score company now allows anyone to check their credit scores for free.
Very soon, robots will be advising you on which KiwiSaver or mortgage is most suitable. The potential legal issues cannot be overlooked.
Without proper oversight, an AI can be as manipulative and biased as a human. Policy makers, lawyers, and market participants need to start thinking about a regulatory framework for AI decision-making.
Should we set up an AI watchdog to ensure that AI applications are being used in a fair way? Should each person have a right to explanation?
The answer to this last question seems, to me at least, pretty clear. After all, if my rights are affected by an AI I want to know why. Don't you?
• Benjamin Liu is a lecturer in commercial law at the University of Auckland business school.