
How AI could help manage mental health – and its limitations


AI has promising potential to improve mental health support within the NHS, but realising it is complex and carries significant ethical implications.

As human-centred designers, we often find ourselves torn between excitement about new technology, like generative AI, and our commitment to doing the right thing for users.

From data privacy and clinical ethics to patient trust and the regulatory landscape, we need to navigate the path to using AI in mental health support very carefully.

At Sparck, we’ve started to find safe spaces to explore these issues. Big ideas are welcome. It’s fine to ask ‘silly’ questions. And we’re encouraged to disagree, in productive, thought-provoking ways.

For example, in one conversation on Slack, inspired by this story from the BBC, some Sparckies admitted to having tested ChatGPT’s abilities to act as a coach, mentor or (small 't') therapist, while others were appalled at the very idea!

Recently, we ran a 6-week project involving multiple product designers, asking ourselves: what are the possibilities and challenges surrounding AI-powered tools in the provision of mental health services?

Why we need to be cautious about AI and mental health

Implementing AI in the NHS could bring significant benefits, but it also presents challenges and raises serious concerns.

Data privacy and security: AI systems often rely on large datasets, and tend to work better with access to real, current, task-specific data. How do we maintain the confidentiality of sensitive medical information in that context?

Bias and fairness: AI systems can inherit biases present in training data, potentially leading to unequal healthcare outcomes for different demographic groups. Ensuring AI algorithms are not discriminatory and that decision-making processes are transparent and explainable is essential.

Patient trust: Gaining the trust and acceptance of patients for AI-based healthcare solutions is essential. Some people will be understandably sceptical about relying on technology for mental health care decisions, while others might be insufficiently critical. That means we need clear communication and education about both the benefits and limitations of AI.

Integration with NHS systems: Integrating AI into existing healthcare infrastructure is challenging: the NHS technology landscape is already fragmented and complex, with multiple providers.

Regulatory compliance: The healthcare sector is highly regulated. Any AI applications that are developed need to comply with healthcare regulations and standards to ensure patient safety. For example, apps must carry a UKCA mark if they give medical advice, and any NHS app must have a named 'legal manufacturer'.

Potential benefits of AI in mental health services

That’s a long list of concerns, but we didn’t want it to stop us exploring the benefits that an AI-powered, personalised, NHS-branded mental health app might bring.

Efficiency: AI-driven functionality could reduce the administrative burden on healthcare professionals.

Personalised assistance: AI could tailor mental health interventions based on individual user data, providing personalised recommendations and coping strategies that align with specific needs and preferences.

Integration with traditional services: Integrating the app with traditional healthcare services could allow users to transition between digital and in-person support as needed.

Early intervention: By analysing user interactions and behaviour patterns, the app could potentially identify early signs of mental health problems, allowing for timely intervention and preventive measures.

24/7 support: AI-powered apps could offer continuous support, enabling users to access resources and assistance at any time, which is particularly crucial in mental health emergencies.

Our AI in mental health project

For 6 weeks, a group of product designers at Sparck explored the challenges and opportunities that AI is introducing to product design.

We split into two groups and, following a brainstorm, decided to dig into the NHS and AI - and specifically a product focusing on mental health.

We were interested in exploring how AI could help reduce the burden on the NHS.

As part of the research phase of the project, we invited our colleague Mark Branigan to share his experience of working with AI on the NHS COVID-19 app in 2020.

He talked through the importance of people having visibility over what is being generated by AI and being able to opt out.

In that app, users were given an opt-out at 4 stages of processing their COVID-19 test results. Though Mark felt that asking people 4 times whether they wanted to opt out wasn't the best user experience, there’s no doubt that it was transparent and ethical.

We followed the AI Meets Design Toolkit, which helps designers turn AI into user and business value by guiding them through creating human-centred applications and meaningful user experiences.

We created a flowchart to visually represent the steps in data input and provide a clear and easy-to-follow overview of the entire process.

Our final idea was presented in the form of storyboards. This allowed us to create narratives that focus on the end users and tell the story of their interactions with the product in an engaging way.

What we learned

Through exploring AI applications for mental health within the NHS, our working group gained some valuable insights.

Communication and trust building

Addressing patient scepticism and building trust in AI-based healthcare solutions requires clear communication.

Transparent dialogue about the benefits, limitations, and ethical safeguards is important to gain user acceptance.

It is important to empower the user to oversee their safety and data settings.

Echoing the point Mark Branigan raised at the research stage, we wanted to make sure the user had the option to opt out at various stages of the experience, so we added an opt-out at three points in our user journey.
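To make that a little more concrete, here is a minimal sketch of how per-stage opt-outs might be modelled. The three stage names, their defaults, and the helper method are our own illustrative assumptions for this prototype, not part of any NHS specification.

```python
# Illustrative sketch only: the three opt-out stages and their names are
# assumptions based on our prototype's user journey, not an NHS specification.
from dataclasses import dataclass


@dataclass
class ConsentSettings:
    """The user's opt-in/opt-out choices, asked at three points in the journey."""
    personalise_from_interactions: bool = False   # stage 1: use interaction history to tailor suggestions
    share_anonymised_data_with_nhs: bool = False  # stage 2: feed anonymised usage data back to the NHS
    allow_gp_contact: bool = False                # stage 3: let the app contact the user's GP

    def has_opted_out_of_everything(self) -> bool:
        # True when the user has declined every form of sharing.
        return not any([
            self.personalise_from_interactions,
            self.share_anonymised_data_with_nhs,
            self.allow_gp_contact,
        ])
```

Defaulting every setting to opted-out keeps the user in control from the first interaction: sharing only happens if they explicitly switch it on.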

Ethical considerations in healthcare AI

Exploring the integration of AI in healthcare highlighted the utmost importance of ethical considerations. Finding the right balance between innovation and ethical use is essential.

This was highlighted in particular at the later stage of the flowchart, where we explored the different levels of severity of mental health problems.

It brought a few questions to the surface. For example, if the user had opted out of feeding any data back to the NHS, what would be the ethical path to ensuring they still got help?

We wanted to make sure that the user was always in control but also had options to get support in different ways.

A low-fidelity prototype

In response to the considerations above, the AI-driven mental health app we sketched out was designed to provide nuanced and sensitive suggestions to users based on their interactions.

For instance, if the app detected signs of a mental health emergency, it might nudge the user to reach out to their emergency contact for immediate support.

Additionally, for users who expressed openness to seeking professional help, the app could suggest linking up with an online therapist for more personalised assistance.

Understanding that some users may opt out of sharing data with the NHS, we incorporated respectful nudges into the system. For example, a user who has opted out but exhibits signs suggesting a need for professional intervention might receive a prompt like this:

“We noticed you opted out of sharing data with the NHS, but based on your interactions, it seems you would benefit from discussing your mental health with your GP. Would you be open to us contacting your GP on your behalf?”

These nudges were crafted with a commitment to both user autonomy and responsible mental health support.
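As an illustration of how that decision-making might be expressed, here is a minimal sketch of the nudge-selection logic described above. The severity levels, field names, and message wording are hypothetical assumptions for this prototype and would need clinical and safeguarding input before anything like it reached patients.

```python
# Hypothetical sketch of the nudge-selection logic described above. Severity
# levels, field names, and message wording are illustrative assumptions only.
from dataclasses import dataclass
from enum import Enum, auto


class Severity(Enum):
    LOW = auto()
    MODERATE = auto()
    EMERGENCY = auto()


@dataclass
class UserContext:
    severity: Severity               # inferred from recent interactions
    shares_data_with_nhs: bool       # has the user opted in to NHS data sharing?
    open_to_professional_help: bool  # has the user expressed openness to therapy?


def choose_nudge(user: UserContext) -> str:
    """Pick a supportive nudge that respects the user's opt-out choices."""
    if user.severity is Severity.EMERGENCY:
        # Emergencies always point the user towards immediate human support.
        return ("It looks like you may need urgent support. "
                "Would you like us to contact your emergency contact?")
    if user.severity is Severity.MODERATE:
        if not user.shares_data_with_nhs:
            # Respect the opt-out: ask permission before involving the GP.
            return ("We noticed you opted out of sharing data with the NHS, but based on "
                    "your interactions, it seems you would benefit from discussing your "
                    "mental health with your GP. Would you be open to us contacting "
                    "your GP on your behalf?")
        if user.open_to_professional_help:
            return "Would you like us to link you up with an online therapist?"
    # Default: gentle, low-pressure self-help suggestions.
    return "Here are some self-help resources you might find useful."
```

Even in this toy version, the opted-out branch only ever asks permission; it never contacts the GP automatically, which reflects the balance between autonomy and support we were aiming for.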

Acknowledging concerns around patient trust, our vision positions the application as a supportive complement to traditional healthcare services.

Integrating the app with existing services such as Mind plan could allow users to transition between digital and in-person support when needed.

Serving as an add-on, it could facilitate a symbiotic relationship with the NHS, suggesting follow-ups and alleviating the burden on healthcare resources.

Our conclusions

We were excited by our prototype but putting something like this in front of patients is a big step.

The moment a product or service becomes patient facing, the stakes increase.

It’s good to experiment but, for now, there are a lot of problems to solve before we can feel totally comfortable introducing AI into clinical pathways around mental health.

Even if 99.9% of users find an application like this useful, what happens when somebody gets bad advice that leads them to harm? It’s a terrible thought, and probably where most of our effort should be focused.

Equally, at present, the door is open for less careful, less ethical providers to meet a clear demand for quick, instant, easy-to-access mental health support. That’s arguably even more worrying.

I’d like to thank my partners on the 6-week discovery, who were John Cowen, Cosmina Gherghe, Tony Charalambous and Ashley Brown.