However, coupled with these opportunities is a series of complex ethical and legal issues.
Dr Robert McGrath is the chief medical officer at private health and medical insurer nib.
OPINION
Healthcare in the future will be heavily reliant on artificial intelligence (AI). Tools like ChatGPT have introduced AI algorithms into our daily lives, signalling a profound shift in how we interact with information and technology.
Healthcare is no different and stands to benefit from these advances, as long as we put guardrails in place and are aware of our responsibilities as medical practitioners and healthcare companies.
AI in organisations must remain human-centred. Proper use of AI demands curiosity, but also judgment and an awareness of the biases in the data feeding into it. It requires rigorous oversight at every level, from IT professionals to company directors.
The introduction of AI and Machine Learning (ML) into healthcare can transform how we diagnose, treat, and manage health conditions, but decisions in healthcare can have serious implications for individuals.
Health professionals are a scarce and increasingly limited resource. In fact, New Zealand is currently almost 500 GPs short, and this gap is expected to grow to a shortfall of 753-1043 doctors within the next decade.
The integration of AI in healthcare is not about replacing valuable health professionals; it is about optimising their work.
AI can guide patients on when and where to seek medical attention, streamline clinical workflows, support clinical decision-making, and help health service providers predict need, ensuring valuable health resources are used effectively.
There are hundreds, if not thousands, of AI-supported health applications available today, built on varying sources of information and each using different AI technologies.
Aidoc and Google’s DeepMind can enhance medical imaging and highlight critical findings. Applications like Dragon Medical One use voice recognition and transcription, generative AI and machine learning to draft medical letters. These tools have incredible potential, but we must understand the source of their knowledge and how they interpret it to have confidence in the outputs.
It must be flagged that currently available general-purpose large language models, such as ChatGPT-4, should not be used for medical purposes without clinical supervision, since they are built on general information and can generate incorrect results. Microsoft's Peter Lee recently warned against using ChatGPT for initial diagnoses.
Last week, nib launched an AI-supported symptom checker in New Zealand, powered by technology company Infermedica.
Available in the nib member app, the tool uses an inference engine built on medical literature and input from physicians to help members understand what steps to take when they feel unwell. The tool is available in 32 countries and 24 languages and a version of it is used by the Australian Government’s free health advice service, Healthdirect.
The symptom checker was launched in Australia in February and has been used more than 11,000 times by nib members. The checker "triages" patients, using a questionnaire to guide them to a GP, to self-care, or to an emergency department. In New Zealand, the tool also identifies possible conditions.
The complexity and rapid development of AI and ML present unique challenges for regulators, with new laws potentially out of date soon after they are enacted.
Globally, regulatory bodies such as the Food and Drug Administration (United States) and the Therapeutic Goods Administration (Australia) have been pioneers in setting standards for novel healthcare technologies. In New Zealand, the Therapeutic Products Bill will regulate software as a medical device, helping ensure AI products meet safety and performance standards.
The foundational inputs for these AI tools must be rooted in sound clinical governance, supported by medical literature, and guided by health professionals.
We must clearly understand how the AI is applied – for prediction, triage, diagnosis support, advice, or treatment – and who the end-users are. Health professionals, patients, administrators, and analysts will interact with AI in different ways, and so will need tailored functionalities and safeguards.
Continual testing and clinical validation are vital to ensure the ongoing accuracy and reliability of AI applications. For instance, Infermedica relies on the expertise of more than 30 physicians who continuously review cases to validate and improve the tool’s performance. However, with anything that lacks real-time health professional involvement, we must always question the outputs and seek health professional advice if something does not seem right.
The aim of AI in healthcare should be to improve the efficiency and effectiveness of healthcare delivery, enabling health professionals to provide the best possible care and ensuring our scarce health resources keep pace with ever-increasing demand.
As we stand on the cusp of this healthcare revolution, let us embrace the promise of AI with both excitement and caution, ensuring it serves as a valuable ally in the pursuit of better accessibility and health outcomes for all.