• Gehan Gunasekara is an associate professor in commercial law at the University of Auckland business school. The Asian Privacy Scholars Network 5th International Conference takes place today and tomorrow, followed by a privacy research symposium on Thursday.
President-elect Donald Trump gaining control of the US National Security Agency is not the scariest development confronting those concerned about privacy, although that has been exercising some minds since his election. Rather, it is that the relentless intrusions wrought by technology are moving us towards - if we are not already living in one - a surveillance society run by corporations.
For example, we know what happened to WhatsApp's promise not to share users' data with Facebook after Facebook bought it: it was broken.
Information of all sorts is gathered daily by our myriad devices. It may no longer be possible to stop the collection of personal data or even to know when it happens. Facial recognition, location apps, Big Data and the Internet of Things are all challenging traditional ideas of privacy.
Some trade-off of privacy rights may be inevitable in our daily lives, but should there be limits to this trade-off? Personal data, after all, is about human beings. How our data is managed has real effects on people's lives.
These are just some of the questions being explored at a two-day conference of international privacy researchers at the University of Auckland Business School this week. Among the speakers are Privacy Commissioner John Edwards and the Hon Michael Kirby, a former Judge of Australia's highest court.
Kirby chaired the group of OECD experts who, in 1980, drew up the guidelines that formed the basis of our Privacy Act and indeed the privacy codes of most other countries. These laws are rapidly becoming outdated.
One challenge comes from the demands of Artificial Intelligence (AI). Google, for example, is an AI business.
Momentous advances in AI have been enabled only through access to vast amounts of data on what humans actually do. This allows computers to mimic those actions by predicting how a human would respond to any given situation. There is a sinister aspect to this technology.
Harvard Professor Shoshana Zuboff has written of the dangers of "surveillance capitalism". This turns the famous "invisible hand" of the market, identified by economist Adam Smith, on its head. Instead of millions of daily consumer choices steering a market that cannot see them, the market now knows our every move.
We are already familiar with behavioural advertising, and Facebook has famously shown it can manipulate news feeds. Our behaviour clearly can be manipulated, and almost certainly will be.
New rules addressing these problems have been drawn up in some countries. One is the idea of privacy by design and by default in new technology - meaning genuine privacy safeguards must be built in from the outset.
To keep up, when New Zealand reforms its Privacy Act it should seriously consider requiring new technologies to be certified as "privacy-safe" in the same way that electrical devices and cars are certified.
Another idea is data portability - so a person can transfer their online profile between companies. Finally, there is the "right to be forgotten".
To work, however, any regulation needs to be global, as no individual country can rein in the likes of Facebook or Google. The OECD recognised this when drawing up its rules in 1980, but of course those rules did not anticipate the internet.
The architects of these rules, led by Kirby, were influenced by fundamental human values such as autonomy and free will. And we still hold these values dear: the right to make choices, to determine how we live - as long as we do not hurt others - and to make mistakes. Take these things away and we become just cogs in the machine.
It is up to privacy scholars around the world to come up with solutions that allow technological progress to take place, but not at the expense of our humanity.