Computers will never be sentient, mortal or able to grasp abstract constructs such as good, evil or morality, so should we let AI software make life-or-death decisions regarding our health?

AI (Artificial Intelligence) software is used to detect anomalies in X-rays and MRI scans which radiographers might otherwise miss. It is now suggested the same technology could aid clinical decisioning: software such as the NHS Palantir Foundry ‘making better use of data to improve patients’ lives.’ This is controversial, in part because a programmer at Google suggested an AI chatbot was exhibiting signs it might be ‘sentient.’ Even so, information is already being collected before and after medical procedures, going far beyond the obligatory ‘can our students play with any bits left over’ to include family history, the impact of the procedure on mental health, and improvements in quality of life and life expectancy. This data, along with medical records and genetic profiles, will be available to AI software should it be used as a decisioning aid.
A concern is that some of this data might be biased in favour of certain social groups. Basically, if a patient is receiving substandard care from a human clinician, that care will not improve if AI systems inherit human bias. Unfortunately, this issue has become politicised and, on occasion, used by campaigners to draw attention to inequalities within both the NHS and wider society. If you suspect your AI system might automate unconscious, and conscious, bias then prudence dictates a postponed launch. Rather than tinker with code each time a social group is negatively impacted by your software, revisit decisions made when designing it and, while doing so, think long and hard about the intelligence you intend to automate.
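What such a pre-launch check might look like in practice is sketched below (in Python, with invented column names and data): compare the rate at which the model recommends treatment across social groups in your historical records. A large gap is not proof of bias, but it is the prompt to revisit the design rather than patch the code after launch.

```python
# A minimal sketch of the pre-launch audit suggested above: before deploying a
# clinical decisioning model, compare its recommendations across social groups
# in historical data. Column names, group labels and data are hypothetical.
import pandas as pd

def audit_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Rate of positive recommendations (e.g. 'offer treatment') per group."""
    summary = df.groupby(group_col)[outcome_col].agg(["mean", "count"])
    summary["gap_vs_overall"] = summary["mean"] - df[outcome_col].mean()
    return summary.sort_values("gap_vs_overall")

# Made-up example: a large gap for any group is a signal to revisit the
# design decisions, not to tinker with the code each time a complaint lands.
records = pd.DataFrame({
    "group":   ["A", "A", "B", "B", "B", "C"],
    "offered": [1,    1,   0,   1,   0,   1],
})
print(audit_by_group(records, "group", "offered"))
```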
45,000 Years of ‘Artificial’ Intelligence
Where should we draw the line in the grey area separating human and artificial intelligence? AI, after all, existed millennia before Alan Turing’s electromechanical experiments. In fact, an artificial form of intelligence (more accurately, ‘abstracted intelligence’) has been with us since humans began painting pictures on cave walls. Up to that point, for humans, as with other social animals, bonding with other members of the species was instinctive. Once humans began communicating with pictures – which we still do, as text is merely a complex stream of images – that instinctive bonding was translated into a collection of abstract constructs such as good, evil and morality, eventually expressed as rules, dictates and commandments. The evolution of intelligence now followed two separate paths. The consequences have been both far-reaching and profound.
Abstracted intelligence is constantly updated: numerous times during one human lifetime. Consequently, it evolves faster than human intelligence which, by comparison, advances at a glacial pace. Instances of pure genius can only be considered ‘pure’ if the genius never attended a school or university, read books or networked with peers. If tomorrow we woke to discover text and symbols no longer existed, few of us would score well on IQ tests. Human intelligence remains largely unchanged by individual endeavour. Abstracted intelligence, on the other hand, benefits from ‘geniuses’ creating links between previously unconnected ideas. It is tempting to believe there is a form of neural processing within abstracted intelligence itself.
Abstracted intelligence constrains the development of individual intelligence by suppressing instinctive behaviour. Humans are compelled to conform to rules they may instinctively feel are unjust, and to avoid behaviour betraying unconscious bias for fear of exclusion from their social group.
Failure of the Human Computer
Which intelligence – human or abstracted – should we automate? At one point human intelligence seemed the obvious choice and programmers set to work creating ‘genetic algorithms,’ ‘neural networks,’ and other pseudo-biological software. These set the high-tech industry on the wrong path and weighty tomes on AI, such as Margaret Boden’s ‘Artificial Intelligence and Natural Man,’ hurried it along the way. But, as Nils Nilsson pointed out in his book, ‘Principles of Artificial Intelligence,’ a computer capable of understanding natural language needs both contextual knowledge and a process for making inferences. As problems go this would prove a particularly tough nut to crack, and the high-tech industry discovered this was best done with a sledgehammer.
In the mid-1980s my company, Digithurst, supplied scientific imaging systems used to detect anomalies in brain scans. It was hoped that, with the help of AI, computer software might one day ‘understand’ the nature of tumours. Five years later, when we used computer software to help author a book, the approach to AI had changed. Rather than modelling the real world, AI software simply identified patterns within it. Show such software hundreds of pictures of cats and it will learn to identify a cat – but with no understanding of what a cat is. This dumbed-down interpretation of AI was used by Amazon to suggest books a reader might find interesting based on the preferences of hundreds of other customers. Today AI software can detect the presence of cancer without constructing a complex model of a tumour, by identifying patterns of pixels unique to radiographs of patients diagnosed with the condition. This has not, however, prevented AI being massively oversold.
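The Amazon example is worth making concrete, because it shows how little ‘intelligence’ is involved. The sketch below (Python, with invented purchase data and titles) recommends a book purely from co-occurrence patterns across customers – no model of what any book is about, just correlation.

```python
# A sketch of the 'dumbed-down' pattern matching described above, in the
# spirit of Amazon's book suggestions: no understanding of what a book (or a
# cat, or a tumour) is, just shared patterns across many customers.
# Data and titles are invented for illustration.
import numpy as np

# Rows = customers, columns = books; 1 means the customer bought the book.
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
])
titles = ["Vonnegut", "Heller", "Adams", "Boden"]

# Cosine similarity between book columns: which books share buyers?
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)
np.fill_diagonal(similarity, 0)  # a book is not a recommendation for itself

# Recommend the book most often bought alongside a given one.
for i, title in enumerate(titles):
    print(title, "->", titles[int(np.argmax(similarity[i]))])
```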
Tech companies continue to pass off their AI products as magic rather than admitting all they have is sophisticated search software. Much of this hype is generated by medtech and biotech companies desperate to raise money in the desert which is present-day private equity finance. Rumours of a sentient Google chatbot help maintain AI’s mystique, despite there being nothing mystical about either sentience or the consciousness it assumes.
I Compute Therefore I Am
Consciousness is a biological fluke: a transformation that occurred after evolution fixed a problem familiar to network engineers. Latency causes small, but nevertheless significant, differences in the time it takes sensory information gathered at various nodes of the body to reach the brain. Nerve impulses travel at approximately 50 metres per second; 0.8 milliseconds to pass along the optic nerve and 12 milliseconds to traverse the radial nerve. Sight, sound and touch are registered by the brain over a period of 20 milliseconds. This disparate data must be assembled into a workable model before it can be acted upon. Deciding how to react can take a further 100 milliseconds, and the resulting 120 milliseconds of data processing becomes, in effect, a memory of what is happening now. A human’s perception of consciousness, an awareness of self, is merely an 8-frame-per-second movie continually playing in the brain.
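For anyone wanting to check the arithmetic, the figures above hang together as follows (a back-of-envelope sketch; the nerve path lengths are implied by the quoted speed and delays rather than measured values).

```python
# The arithmetic behind the figures quoted above. Nerve path lengths are
# back-calculated from the quoted conduction speed and delays.
conduction_speed_m_per_s = 50
optic_nerve_delay_ms = 0.8       # implies a path of roughly 0.04 m
radial_nerve_delay_ms = 12       # implies a path of roughly 0.6 m
print(conduction_speed_m_per_s * optic_nerve_delay_ms / 1000)   # 0.04 m
print(conduction_speed_m_per_s * radial_nerve_delay_ms / 1000)  # 0.6 m

# The '8-frame-per-second movie': 20 ms to assemble the senses plus
# 100 ms to decide how to react gives one 'frame of now' every 120 ms.
frame_ms = 20 + 100
print(1000 / frame_ms)  # ~8.3 frames per second
```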
Approximately 30% of sensory data routed into the brain contributes to that ‘memory of now’. Approximately, because the proportion is continually changing. Learning to drive involves consciously shifting the car’s gear stick, while a proficient driver does this reflexively. An action performed repeatedly becomes hardwired into the subconscious. Communication via voice and facial expression has been an important part of everyday life for humans for millennia, and often key to survival. It makes sense then that, when decoding visual information, the brain first compares the captured image with a memory of the human face. This is one reason humans frequently, and mistakenly, identify faces amongst trees, clouds – even on burnt toast. Conversely, our brain assumes, again based on millennia of experience, that any human-like communication indicates the presence of another human. As thinking machines are a recent innovation, our brain perceives them in much the same way it does a talking parrot, imbuing them with human qualities. It is easy to see why someone would believe an AI chatbot was sentient – and why social media companies employ psychologists to help them mercilessly exploit these neural quirks.
Open Your Mind To Social Media
When Kurt Vonnegut wrote his novel ‘Slaughterhouse-Five’, buried in his account of the bombing of Dresden was a description of people escaping the firestorm by diving into the river Elbe. The significance of this image, and any impact it had on the author, was unclear simply from reading the book. But parse this novel, and a later one, ‘Hocus Pocus,’ using AI software and the escape-into-water scenario is found again, this time as prisoners running across a frozen lake. There is enough data within both scenarios to create a link between the two and infer a significance to the author. Given so many of us pour our every thought into posts on social media, it is clear how companies such as Meta and Twitter appear to know more about our state of mind than we do.
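As an illustration of the kind of parsing involved, the sketch below (Python, using off-the-shelf text similarity from scikit-learn and paraphrased stand-ins, not Vonnegut’s actual text) flags that the two escape-into-water scenes resemble each other more than either resembles an unrelated passage – again, pattern matching with no understanding of either novel.

```python
# A crude stand-in for the AI parsing described above: with no understanding
# of either novel, software can still flag that two passages share an unusual
# pattern of words. The passages are paraphrased placeholders, not quotations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "people escape the firestorm by diving into the river",    # Dresden scene, paraphrased
    "prisoners run across the frozen lake to escape",           # 'Hocus Pocus' scene, paraphrased
    "the narrator describes a quiet afternoon in the library",  # unrelated control passage
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(passages)
print(cosine_similarity(vectors))  # the first two passages score closest to each other
```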
By automating key components of human-to-human interaction, social media changes the delicate balance between conscious and subconscious communication. Overuse of Google search as a prosthetic renders certain conscious thought processes redundant. For two decades there has been a steady shift away from automating human intelligence towards an automation of abstracted intelligence over which the individual has limited control. While some fear AI software will enslave us by transforming computers into conscious, autonomous machines, it is more likely AI gradually erodes human consciousness and Twitter-like software reduces us to swarms of social insects. The former scenario brings us back to the idea that AI software might become sentient: thankfully, there are reasons why this will never happen.
Digital Existentialism
‘Life, don’t talk to me about life. Brain the size of a planet and I spend all day mining bitcoins.’
Apologies to Douglas Adams (and Marvin)
Humans are, perhaps uniquely, aware of their own mortality. All animals fear death but only humans have a fair grasp of what it is and speculate about what lies beyond it. However, at no point in the history of human evolution has, for obvious reasons, the human brain been forced to adapt to situations in which it did not exist. This points to the human perception of mortality being a product of abstracted, rather than human, intelligence. Certainly, speculation about what follows death would be beyond any human unable to grasp abstract constructs.
Like the human brain, a computer will never, regardless of how aware it is of its environment, experience oblivion. It may capture a stream of images from the electronic device bin at a recycling centre, it might scan the works of Jean-Paul Sartre, but the chances of it discovering ‘5 volts good, 0 volts bad’, or declaring it ‘computes therefore it exists’, are remote. As well, within a computer network the latency indirectly responsible for human consciousness can be fixed with basic electronics and parallel processing. These devices may trick us into perceiving them as sentient, aware of their own existence. But logic tells us computers are not mortal and certainly do not think in the same way as a human. So, how likely is it patients will be comfortable having a computer, rather than a flesh-and-blood clinician, decide how their medical condition is treated?
Wee Medical Problems and AI Software
As male life expectancy in the UK has risen, in part due to a decline in smoking, the incidence of prostate cancer, which typically afflicts men aged over 60, has increased. As well, screening (the PSA test) now identifies patients displaying no obvious symptoms. The PSA test is unreliable and some symptoms, such as poor urine flow, may result from enlargement of the prostate rather than cancer. The presence of cancerous cells can be confirmed via a biopsy: this too is, quite literally, hit and miss, although more accurate if preceded by an MRI scan. The biopsy can result in infection. The tumour itself may be either benign or an aggressive cancer. Treatment options, of which there are many, include active surveillance, wait and see, surgery, radiation therapy (external or internal), cryotherapy, chemotherapy, biological therapy, high-intensity focused ultrasound and hormone therapy. Side effects of some of these include impotence, urinary incontinence and erectile dysfunction. A challenging candidate for AI-based clinical decisioning, with numerous opportunities to automate unconscious bias. On the other hand, as with AI-assisted radiography, software may spot something otherwise missed or overlooked by clinicians.
When Doing Nothing is an Option
Two men, one aged 59, the other 65, have elevated levels of PSA. Demographic data indicates no family history of cancer in either case, but the parents of the older man, and his siblings, already suffer from dementia. Neither man has relatives who survived beyond the age of 78. Assuming prostate cancer is in its early stages, both men could live with the disease for another 8 years before the cancer spreads to other parts of the body. Treatment sees 80% of patients survive for 10 years or more. Full diagnosis (biopsy) and treatment will reduce the quality of life for both men for up to a year. It may be that side effects, such as urinary incontinence, can only be fixed with another operation. Impotence will be permanent and so too might erectile dysfunction. The 65-year-old will be sacrificing at least a year of relatively good health merely to gain ten years, most of these spent suffering from dementia. Currently, when a clinician discusses treatment and likely outcomes with a patient, taking no action once cancer has been diagnosed is rarely put forward as an option. Would a patient accept an AI computer program prioritising quality over length of life and deciding treatment was not appropriate? Most likely not; instead, being human, the patient holds on to the slim hope of becoming the first male in his family to grow old with his brain intact. Given how controversial ending life support for brain-dead patients has become, with relatives refusing to accept a clinician’s diagnosis, healthcare providers will remain under pressure to do everything possible to extend a patient’s life.
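To make the dilemma concrete, here is a rough sketch of the sums a quality-of-life-driven decision aid might run for the 65-year-old, using only the figures in the scenario above. The function and its simplifications are invented for illustration; this is not a clinical model.

```python
# A deliberately crude sketch of the calculation a quality-of-life-driven
# decision aid might make for the 65-year-old above. Inputs come from the
# scenario in the text; the model itself is an invented simplification.
def quality_years(treat: bool,
                  years_to_spread: float = 8,            # untreated early-stage window (from text)
                  ten_year_survival: float = 0.8,        # with treatment (from text)
                  treatment_recovery_years: float = 1,   # reduced quality of life (from text)
                  years_to_dementia: float = 6) -> float:  # assumed onset in the text's scenario
    if not treat:
        # Live with the disease; good-quality years capped by dementia onset.
        return min(years_to_spread, years_to_dementia)
    # Treated: lose a year to recovery, survival beyond that is probabilistic,
    # and dementia still caps the years of good-quality life gained.
    expected_years = ten_year_survival * 10
    return max(0.0, min(expected_years, years_to_dementia) - treatment_recovery_years)

print("no treatment:", quality_years(False))  # 6 good-quality years
print("treatment:   ", quality_years(True))   # 5 good-quality years
```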
Those of Other Tribes
The patient sitting in front of you is obese and badly dressed: your tribe (in this case the state) does not value this person highly enough to feed and clothe them properly, so why go the extra mile when treating them? Unconscious bias is older than cave painting, a human attribute sometimes at odds with abstract constructs such as morality. Obvious, and easy to guard against, but what about the patients who catch clinicians off guard? The Googlers who research every aspect of their condition, not to educate themselves but hoping, by sounding knowledgeable in all things medical, to pass themselves off as a member of the clinician’s tribe, realising a person naturally feels more empathetic towards fellow members of their social group. Less sophisticated in their approach are patients claiming delayed treatment impacts on their mental health. We have the media to thank for that: rewarding anyone – now that bursting into tears has less impact – with 15 seconds of fame for claiming an injustice, however minor, is causing an unbearable level of stress.
I Robot
There is unconscious, and sometimes not so unconscious, bias within a health service itself, especially when a condition has multiple treatment options. Using a robot to assist with prostate cancer surgery is relatively new and regarded as cutting edge (pardon the pun). It reduces the length of time a patient stays in hospital; some are discharged the day after surgery. The da Vinci robot, commonly used for radical prostatectomy, is not so new. It is cumbersome and inflexible. Once set up for prostate removal it is difficult to repurpose and relocate to another department, gynaecology for example. Some hospitals are looking for alternatives – the CMR Versius for example. However, justifying a second robot often requires demonstrating the first is working at full capacity. More committed to radical prostatectomy are predominantly US-based medical centres promoting themselves as specialists in prostate cancer and robotic surgery. If the only tool you have is a hammer, then every problem looks like a nail.
The very human desire to become well known in a particular medical field, mostly through publication of research involving trials, contributes to a subconscious and, sometimes, conscious bias which can see patients persuaded to have innovative surgery rather than alternatives such as radiation therapy. Men are not being press-ganged in pubs after leaning too long over a urinal, but it is sometimes a struggle to find sufficient patients for medical trials.
Let us skip the far-fetched dystopian nightmare which sees AI software and surgical robots forming an alliance which ends all interaction between patient and clinician, from diagnosis to the operation itself. AI-driven clinical decisioning itself is enough to keep healthcare professionals awake at night. There is some comfort in knowing humans will design the software’s algorithms, and in knowing AI-assisted clinical decisioning will be less prone to subconscious bias. The software will take no account of a patient’s knowledge of their condition, nor will it be fooled into instinctively bonding with the patient. Personal data gleaned from social media sites will give the clinical decisioning software a better insight into a patient’s state of mind than a clinician would gain from a fifteen-minute interview. The downside is the far less human approach the software will take to other aspects of the patient’s treatment.
Using quality-of-life data in clinical decisioning, regardless of how accurate predictions based on it prove, would be highly controversial. How would a patient’s relatives respond on discovering that, despite a prostate cancer diagnosis eight years earlier, no action was taken on the assumption, correct as it turned out, that their father or brother was only six years from the onset of dementia? Would they express relief or outrage? Perhaps both, the former privately, the latter on social media. Again, it is data, whether based on past outcomes, patient records or genetic profiles, which will prove problematic.
The Office for National Statistics has data on the number of Covid deaths in care homes. Politically sensitive data, as some deaths resulted from a decision to transfer elderly patients from hospitals to care homes without adequate testing. We would like to believe no one in the Department of Health would ever sit down with a calculator and determine how much these 40,000 deaths saved in state pension payments and state-funded care. After all, that would be immoral. Unfortunately, as already pointed out, morality is an abstract concept, an abstraction which is beyond even the most advanced AI software. Matt Hancock’s decision to create a firewall behind which 40,000 people died was made, reflexively, by a human. The lives of young, productive people were prioritised over the elderly, net takers from the economy, many already in poor health. Theoretically, every decision made by AI software will be made reflexively, with no reference to any abstract construct, including morality, good or evil. Challenging these decisions could provide some real-life ‘I’m sorry Dave, I can’t do that’ moments.
It will be almost impossible to prevent AI clinical software concluding from historic data that, in some cases, the only way to limit human suffering and optimise the performance of the NHS is to refuse a patient a lifesaving operation. This, more than the prospect that AI software may ‘learn’ gender and racial bias or ageism, is something the NHS, and the companies supplying its technology, will need to fix.