Do artificial intelligence and its regulation constitute an opportunity or a threat to the future of medicine? Dr Grace Hatton, MPharm MBChB, FRSA asks what value, after all, one can place on our health, and how AI does (and will) gain access to it.
Health, data & algorithms
In the past month on the Twitterverse, a thread from British journalist Talia Shadwell went ironically viral, documenting her investigation into the abrupt change in the sponsored advertisements appearing on her Facebook feed. Because she is female and newly into her fourth decade, social media algorithms appear to have cleverly pieced together that a missed period equated to her being newly pregnant (she had, in fact, forgotten to log her cycle on her dedicated contraceptive tracking app), and so proceeded to bombard her feed with ads for baby clothes and maternity wear. Once she correctly plugged in her cycle dates, Talia noted that the ads promptly disappeared.
This bizarre online intrusion struck a distinct chord with me; I am both of the same demographic as Talia and a physician. It seems the tech she was using to monitor her menstrual cycle had clocked that she was potentially pregnant before she had, and subtly sold the information on to third-party distributors of babygrows and prams without actually alerting her to the fact. Social media giants selling on our readily-disclosed personal data for financial gain is nothing new. But Talia’s discovery highlights a topical issue that pervades not only the information we consciously share, but the growing prevalence of artificial intelligence (AI) algorithms in healthcare, where information-sharing has traditionally been (and, for the time being, still remains) confidential.
Increasingly, AI – a term broadly applied across many differing fields, from engineering to law, art, sociology, economics and medicine, to name but a few – has worked its way into the hearts and minds of the global population. Often, this infiltration has been insidious; more and more of us engage with our Alexa or Google Home devices in the same way we would with another human.
Technology in healthcare
AI is not simply stereotyped robotics, which conjures up images of humanoid figures handing out drugs, or specialised machines performing surgery. On the contrary, AI is arguably a fluid entity, with the potential to revolutionise digital information about our health, both current and prospective. Predictive technology indicating risk of disease development, including commercially available DNA analysis ‘kits’ which utilise aspects of AI, has allowed for leaps and bounds across predictive medicine. Thanks to AI, we now have personalised and targeted treatments for a range of conditions, including certain cancers. Likewise, ‘smart’ pill trackers, which dispense and store medication; fitness trackers, which monitor your heart rate and sleep patterns; and surgical robots are all now commonplace. Name a subsidiary field in healthcare, and there is an AI out there ‘for you’; AI is even being used in the screening and diagnosis of stroke, before the onset of cardinal symptoms.
And yet AI is not entirely new to health. It is in fact eerily familiar; perhaps this is one of the many reasons it has been so successful in working its way into day-to-day life. After all, laparoscopic surgical techniques using cybernetic cameras and clips have been around for decades. The difference now is the evolution of more independently-thinking technology – Amazon Alexa is a pertinent example of this – that was once the stuff of sci-fi fantasy. For Talia, this exposed an unnerving reality. Indeed, her experience speaks to a deep ambiguity surrounding how this information is processed and used, which calls the credibility, safety and appropriate use of AI into question.
Policy considerations
Of course, there is real potential for AI to make a difference to patient care and outcomes. But care must be taken over how the data produced and used by these systems is managed. This caution needs to be informed both by the ethical questions AI raises and by the legal issues involved, including compliance with data protection laws. The potential threats of AI relate largely to a lack of regulation and control over its development and uses. It may be that processing data for certain medical purposes will ultimately be justified; after all, medical teams routinely process other biometric and health-related information. There are, however, many traps for the unwary from a legal point of view, and it would be wise for anyone considering introducing AI into healthcare systems to have a carefully considered policy in place.
While AI and its myriad applications – in automation, imaging, data processing and the storage of medical data – may inform proper clinical judgement, it should never be a substitute for it. For now and in the foreseeable future, wanting to retain autonomy over the discovery of any future child I may be carrying before my app finds out, I will continue to track my menstrual cycle offline.
Grace is a UK-qualified physician and holds honours degrees in both pharmacy and medicine. She has worked as a research scientist in the fields of drug delivery, gastroenterology and hepatology; runs two organisations pertaining to sustainability in healthcare; and holds fellowships with both the NHS Clinical Entrepreneur Programme and the RSA. With thanks to John Goss for his legal advice and input in the writing of this article.
Comments
Thanks for the article Grace. In this example my sense is that the AI element is slightly overplayed. There's quite a big difference between simple algorithms which respond to changes in circumstances (if this, then that...) and a self-learning capability, which is more what AI relates to. A nice characterisation of AI is “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”.
As was noted in this week's Economist tech focus on China, such machine-learning mechanisms not only require large amounts of data but have to be trained much as a human would be. At present this requires a huge human effort. Training AI systems to spot tumours on medical imaging results is a good example.
Agree with much of your article though – particularly the data governance points. Digital trust is a whole other debate, though.
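To make that rule-vs-learning distinction concrete, here is a minimal sketch in Python using scikit-learn. Everything in it – the feature, the threshold and the training labels – is invented purely for illustration; it is not how any real cycle-tracking app works. The point is only that a hard-coded rule never changes its behaviour, whereas a learned model derives its behaviour from example data.

```python
# Purely illustrative sketch; all data, thresholds and labels are invented.

# "If this, then that": a fixed rule whose behaviour is hand-written.
def rule_based_flag(days_since_logged_period: int) -> bool:
    return days_since_logged_period > 35  # hard-coded threshold, never adapts

# Machine learning: behaviour is fitted to labelled examples instead.
from sklearn.linear_model import LogisticRegression

X = [[20], [28], [30], [41], [45], [60]]  # days since last logged period
y = [0, 0, 0, 1, 1, 1]                    # invented labels: 1 = flagged

model = LogisticRegression()
model.fit(X, y)

# The model's answer is derived from the data, not from a written rule,
# and would shift if the training examples changed.
print(rule_based_flag(38), model.predict([[38]]))
```

Building the second kind of system at scale – gathering and labelling the training data – is exactly the large human effort the Economist piece describes.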
A critical factor in the secure use of AI lies in the ways it is and is not connected to the Internet. A local network enabling it to team up with other medical units to improve its diagnostic capabilities, or perhaps to recognise a patient who is moving around, must surely be a good thing. So too must software updates from the supplier. But granting the supplier access to the patient data is far more dangerous – witness the sharp sales practice around menstrual cycles. An AI is like a scalpel or an aeroplane: you can't just get hold of one and let anybody play with it. The security and privacy environment for advanced data systems is quite complicated. You absolutely have to have a skilled and experienced team of data security specialists looking after it (used to be my day job).
And don't go selling patient data to the Big Data dot com giants - even if you think it is anonymised, they are THE leading experts in linking it to known people (exactly when and where did this somebody log on to their cycle check? Oh, we know exactly which social network accounts THAT access habit is associated with) and thus de-anonymising it. This data aggregation is their core business process enabling them to sell on the resulting advertising opportunities and thus to become some of the richest companies in the world. Whatever the NHS does it should never, ever do that. Oh, wait...
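As a toy illustration of the linkage attack described above – every table, name and timestamp here is invented – a pseudonymised app log can be re-identified simply by joining it to separately held, identified activity on a shared behavioural signal such as login times:

```python
# Toy illustration of de-anonymisation by linkage; all values are invented.
import pandas as pd

# "Anonymised" health-app log: no names, only a pseudonym and login times.
app_log = pd.DataFrame({
    "pseudo_id": ["u1", "u1", "u2"],
    "login_time": pd.to_datetime(
        ["2019-07-01 07:02", "2019-07-29 07:04", "2019-07-01 23:45"]),
})

# Separately held, identified activity (e.g. a social network's own logs).
social_log = pd.DataFrame({
    "account": ["jane_doe", "someone_else"],
    "active_time": pd.to_datetime(
        ["2019-07-01 07:02", "2019-07-01 23:45"]),
})

# Joining on the shared behavioural signal re-attaches real identities
# to the supposedly anonymous pseudonyms.
linked = app_log.merge(social_log, left_on="login_time",
                       right_on="active_time")
print(linked[["pseudo_id", "account"]])
```

Real linkage attacks combine many weak signals at once, which is why stripping the names out of a dataset is not at all the same thing as anonymity.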
I was delighted to read this.
I was working with the Wellcome Foundation (Foundation NOT Trust) when I attended a conference for the UK technical press in 1981 at which Walter Bodmer, arguably the first person to use computers for genetic research, summarised the then state of the art in using the computer analysis of knowledge bases to improve diagnosis (alias Medical AI).
My role at the event was to help describe the likely impact of "Machine Learning" and "Robotics" on life and work (alias the need to move from education as a rite of passage for the young to lifelong learning for all).
I also had to report back to Wellcome on what I learned from the other speakers – to inform our strategy for using the technology, and on whether we should seek to become a player in the field of "medical informatics", including its use in caring for an ageing population.
Over the subsequent 45 years I have watched what has changed as algorithmic programming has been quietly embedded into ever more systems. I have also watched what has not changed - as we have failed to address the governance issues of which Walter Bodmer, Ed Feigenbaum, Donald Michie (who organised the event) and others were already acutely aware in the early 1980s.
Angela Rumbold (Education Minister at the time of the launch of the first Women into IT Campaign, 1989 – 94) got the true use of the technology about right when she described her personal computer as an extension of her mind. Her officials would not allow her to use departmental facilities to say so; she had to be filmed by the BBC using her own constituency computer.
I am married into a medical clan. My in-laws were and are all enthusiastic users of technology to help them improve patient diagnosis and care with fewer resources. But they are also unanimous in believing that abdicating control over their patient records, let alone their clinical judgements, to the amoral young techies of the social media companies and their unaudited algorithms (alias artificial intelligence) is genuine stupidity.