Robots and mental illness

Science fiction writers and bioethicists certainly have a shared interest in what happens when an AI (‘robot’) is able to predict human behaviour. Some of the best depictions in popular media have raised questions that we still don’t have answers to. Minority Report, GATTACA, The Matrix: each explores, in its own way, what makes us human and how we strive to interpret that. I recently started thinking about how that intersects with psychiatry, mostly as a distraction from venting about my exams.

Haven’t seen GATTACA? The robots insist you watch it now.

One of the most emotive areas to make sense of is how this applies to an otherwise ‘normal’ person. For those playing along at home, this might be the ‘de-gene-rates’ from GATTACA or those convicted via the ‘precogs’ in Minority Report. What happens when technology and humanity come into conflict?

I don’t have any particularly good answers for that. I do have some interesting new (old) questions, inspired by some academic research I’ve been trawling through recently.

Machine learning has been aggressively reshaping a lot of fields and it seems psychiatry is next. The basic premise of ‘machine learning’ is the autonomous self improvement of a computer algorithm. A program makes a prediction based on data, is given feedback about its accuracy and thus improves its future performance. This is meant to leverage the strengths of computers, such as efficiency and pattern recognition, to solve problems beyond human capabilities.
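That predict–feedback–improve loop can be sketched in a few lines of code. This is a toy perceptron learning a linear rule from labelled examples, nothing like the models in the studies discussed below; the data and every name in it are invented for the illustration:

```python
# Minimal illustration of the machine-learning loop described above:
# predict -> receive feedback -> adjust. A toy perceptron, not the
# method used by any study mentioned in this post.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear classifier from labelled feedback."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # 1. Make a prediction from the data
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # 2. Feedback: compare the prediction to the true label
            error = y - pred
            # 3. Improve future performance by nudging the weights
            if error != 0:
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
    return w, b

# Invented, linearly separable toy data: label 1 roughly when x0 + x1 > 1
data = [(0.2, 0.1), (0.9, 0.8), (0.1, 0.3), (0.7, 0.9), (0.4, 0.2), (0.8, 0.6)]
labels = [0, 1, 0, 1, 0, 1]
w, b = train_perceptron(data, labels)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in data]
```

After a handful of passes the algorithm separates the two groups perfectly on this toy data; the real neuroimaging models differ in scale and sophistication, not in this basic loop.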

For those wondering if this resembles Skynet a little too closely, we’d really rather you didn’t ask that sort of question, thanks.

He just wants a hug!

The inexorable grail quest of biological psychiatry for neurobiological correlates has generated some extremely interesting research. In 2017, a summary of available studies looked at whether we can work backwards from brain scans to diagnosis.

The authors commented that the true power of this approach was that changes in many brain regions could be simultaneously considered and compared. An algorithm can track every subtle shade of grey and compare them across hundreds (thousands! millions!) of different scans, looking for diagnostic clues.

They quote a sensitivity and specificity of up to 90% with some modalities, not seeming to vary by severity of depressive symptoms. I doubt I have a sensitivity or specificity anywhere close to that, especially in less severe presentations.

We present meta-analyses of a total of 33 studies with a total of 912 patients diagnosed with MDD and 894 HCs. Across all studies, neuroimaging-based diagnostic models were able to differentiate patients from HCs with 77% sensitivity and 78% specificity.
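To make those percentages concrete: sensitivity is the fraction of true MDD patients the models flagged, and specificity is the fraction of healthy controls they correctly cleared. The counts below are my own back-of-envelope rounding of the quoted 77%/78% against the pooled sample sizes (912 patients, 894 controls), purely for illustration:

```python
# Sensitivity = TP / (TP + FN): proportion of actual patients detected.
# Specificity = TN / (TN + FP): proportion of healthy controls cleared.
# Counts are rounded from the quoted 77% / 78% applied to the pooled
# sample sizes in the meta-analysis, for illustration only.

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

patients, controls = 912, 894
tp = round(0.77 * patients)   # ~702 patients correctly identified
fn = patients - tp            # ~210 patients missed
tn = round(0.78 * controls)   # ~697 controls correctly cleared
fp = controls - tn            # ~197 controls wrongly flagged

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.77
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.78
```

In other words, at the pooled 77%/78% figures, roughly one in five people on each side of the diagnosis would be misclassified.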

It’s important to acknowledge that this data isn’t predictive. These findings are entirely retrospective and show commonalities, rather than any specific link or causative diagnostic factor. I will leave the thorough critical appraisal of the computer science to those adequately trained and instead reflect on some of the points it does raise.


To set the scene for that, some researchers have begun utilising algorithms as diagnosticians, with results of varying quality and significance:

This approach shows the potential to develop non-intrusive, low-cost methods for monitoring individuals’ mental health in real time.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6530855/

There is a growing interest in using behavioural cues to automate depression diagnosis and stage prediction.

This paper proposes a multi-level attention based early fusion network which fuses audio, video and text modalities to predict severity of depression

https://dl.acm.org/doi/pdf/10.1145/3347320.3357697

As has been routinely outlined in popular culture, the authority of the machine is hard to question. Is it possible to challenge a diagnosis made by a refined algorithm that examines every aspect of our behaviour? Can a (human) medical practitioner consider themselves better suited to identifying features of depression? I can barely recall overall body language between two appointments, let alone fractional changes in facial expression and tone! I certainly wonder what the implications are if I disagree with such a prediction – when do decision support tools become a means to their own end?

The other unanswerable question is whether the public will accept the widespread use of digital diagnosis. For issues so fundamentally entwined with the vagaries of human emotion, I wonder how the artificial nature of artificial intelligence will be seen. What will it take for us to acknowledge that an algorithm can diagnose sadness or provide structured therapy? Chatbots are already in common use, and as we grapple with their implications I suspect we’re not far away from something broader.

My final thought is about stigma, or lack thereof. It’s much easier to tell your phone app or computer screen the confronting secrets you’d rather not acknowledge with others. Perhaps they’re about socially sensitive or taboo topics that you cannot initially (or ever!) raise with a therapist. As technology becomes integrated into every other aspect of human interaction, it may become so familiar as to ultimately facilitate conversations that cannot happen with another person.

The line between reality, possibility (and conspiracy theory) certainly seems to blur! I have no particular ideological objection to these novel diagnostic methods; indeed, I’m a strong proponent of letting the robots loose in medicine. An interesting final reflection is this graphic from a survey I was sent last year, capturing psychiatrist perspectives on these issues.

For the record, I suspect every task is ‘likely’ within my lifetime.
