Machine minds: can AI play a role in mental health therapy? – Irish Times, August 23 2018

A welcome conversation surrounding mental health has arisen, but as more people make the decision to reach out, too few find a supportive hand.

Not a week passes without a report on Ireland’s mental health system, where lengthy waiting lists, staff shortages and inadequate facilities are the rule rather than the exception. Minister of State with special responsibility for mental health Jim Daly recently announced plans to pilot mental health “web therapy”, signalling a growing recognition of the need for novel approaches.

Technology’s capabilities in the mental health sphere continue to grow, and therapeutic applications built on systems driven by artificial intelligence (AI), particularly chatbots, are one rapidly expanding arena. Yet, if you needed to open up, would you reach out to a robot?

Bot benefits

While not specifically focused on AI, a study from the Applied Research for Connected Health (Arch) centre at UCD shows 94 per cent of Irish adults surveyed would be willing to engage with connected mental health technology.

Study co-author Dr Louise Rooney, a postdoctoral research fellow at Arch, says AI-based systems with a research- and patient-centred focus could be beneficial.

“I don’t think AI is the answer to everything or that it could fully replace therapy intervention but I think there’s a place for it at every juncture through the process of treatment; from checking in, to remote monitoring, even down to people getting information,” she says.

The latest Mental Health Commission report shows waiting times for child and adolescent mental health services can reach 15 months. Rooney believes AI-based therapy could be particularly useful for young people who “respond very well to connected mental health technology”. The anonymity of such platforms could also break down barriers for men, who are less likely to seek help than women.

Prof Paul Walsh from Cork Institute of Technology’s department of computer science feels that AI-driven tools can “improve the accessibility to mental health services” but won’t fully replace human therapy.

“For those who are vulnerable and need help late at night, there’s evidence to show [therapy chatbots using AI and NLP] can be an effective way of calming people,” says Walsh, who is currently researching how to build software and machine learning systems for people with cognitive disorders. “If someone’s worried or stressed and needs immediate feedback, it’s possible to give real-time response and support without visiting a therapist.”

Professor of psychiatry at Trinity College Dr Brendan Kelly says AI-based platforms such as chatbots can help people to take control of their wellbeing in a positive manner.

“They can help people to take the first step into an arena that may be scary for them but I feel there will come a point that this is combined with, or replaced by, a real therapist,” adds the consultant psychiatrist based at Tallaght Hospital.

Privacy concerns

Using AI-driven mental health therapy doesn’t come without concerns, one being privacy.

“Clearly it’s a very important issue and people shouldn’t use something that compromises their privacy but it’s not a deal-breaker,” says Kelly. “There are ways to ensure privacy which must be done but [fears and challenges] shouldn’t sink the boat.”

Being completely transparent with users about data collection and storage is key, Rooney adds.

Whether AI can determine someone’s ability to consent to therapy is another concern raised by Rooney. However, she feels that forming “watertight legislation” for this technology and ensuring it’s backed by research can help to overcome this and other potential pitfalls.

While most current tools in this field focus on mental wellbeing rather than severe problems, Walsh raises the potential for false negatives should AI be left to decide whether somebody has a chronic illness. To avoid this, it’s important to keep a human in the loop.

“Many machine-learning systems are really hard to analyse to see how they make these judgements,” he adds. “We’re working on ways to try to make it more amenable to inspection.”

As potentially anybody can engineer a system, Walsh recommends avoiding anything without a “vast paper trail” of evidence.

“These will have to go through rigorous clinical trials,” he says. “We need policing and enforcement for anything making medical claims.”

Humans could become attached to a therapy chatbot, as was the case with Eliza, a chatbot developed at Massachusetts Institute of Technology in the 1960s. However, Walsh doubts they will ever be as addictive or as great a threat as things like online gambling.

While the sentiment that AI-based therapy will assist rather than replace human therapy is near universal, so too is the view that it can have a great impact.

“Achieving optimum mental health involves being open to all different ingredients, mixing it up and making a cake. AI can be part of that,” says Rooney.

If well regulated, Walsh says AI can augment humans in terms of treating people.

“I’m hopeful that benefits would be accentuated and the negatives or risks could be managed,” says Kelly. “The fact that it’s difficult and complex doesn’t mean we should shy away, just that we must think how best to capture the benefits of this technology.”

Brains behind the bots

Stanford psychologist and UCD PhD graduate Dr Alison Darcy is the brains behind Woebot: a chatbot combining artificial intelligence and cognitive behavioural therapy for mental health management.

“The goal is to make mental health radically accessible. Accessibility goes beyond the regular logistical things like trying to get an appointment,” explains the Dublin native, who conducted a randomised controlled trial of Woebot before launching. “It also includes things like whether it can be meaningfully integrated into everyday life.”

Darcy is clear that Woebot isn’t a replacement for human therapy, nor will he attempt to diagnose. In the interest of privacy, all data collected is treated as if users are in a clinical study.

Not intended for severe mental illness, Woebot is clear about what he can do. If he detects someone in crisis, Woebot declares the situation is beyond his reach and provides helplines and a link to a clinically-proven suicide-prevention app.

Originally from Wexford, Máirín Reid has also harnessed the capabilities of AI in the mental health sphere through Cogniant. Founded in Singapore with business partner Neeraj Kothari, it links existing clinicians and patients to allow for non-intrusive patient monitoring between sessions.

It’s currently being utilised by public health providers in Singapore with the aim of preventing relapses and aiding efficiency for human therapists. As Cogniant is recommended to users by human therapists, decisions on consent capabilities are formed by humans.

“Our on-boarding process is very clinically-driven,” says Reid. “We’re not there to replace, but to complement.”

While not intended for high-risk patients, Cogniant has an escalation process that connects any highly distressed users to their therapist and provides supports. There’s also a great emphasis on privacy and being transparent from the outset.

“Clinicians are saying it drives efficiency and they can treat patients more effectively. Patients find it’s non-intrusive and not judgmental in any form.”

(First published by the Irish Times on August 23 2018. Available online at: https://www.irishtimes.com/news/science/machine-minds-can-ai-play-a-role-in-mental-health-therapy-1.3598546)


Would you trust a robot with your mind? – Asian Scientist, August 3 2018

When it comes to allowing others inside our heads, most of us only crack open the door for a select few, likely close family members or trusted psychologists. But if you were really struggling, would you consider sharing your innermost thoughts with a robot?

Robot therapists aren’t as far-fetched as you might think. In the 1960s, Joseph Weizenbaum of the Massachusetts Institute of Technology’s Artificial Intelligence Laboratory developed ELIZA, an early chatbot that could emulate the conversations of a psychotherapist. Since then, many increasingly sophisticated applications bringing artificial intelligence (AI) into the mental health realm have emerged.

The brainchild of Stanford psychologists and AI experts, Woebot combines machine learning and natural language processing (NLP) to assess users’ moods and offer them appropriate cognitive behavioral therapy. Emotionally intelligent chatbot Wysa, developed by Indian entrepreneurs Jo Aggarwal and Ramakant Vempati, uses AI and NLP techniques to track users’ emotions and act as their virtual mental wellness coach. Singapore-born Cogniant integrates AI technology with face-to-face therapy and aims to prevent mental illness relapses by monitoring existing patients and assisting them with therapy goals.

AI and mental health: what are the risks?

In 2018, an estimated 340 million people in Asia will require mental health services. With professional help shortages, rural isolation, high costs and stigma being the main obstacles to treatment, AI-centered mental health innovations could be particularly pertinent in the region. Yet, could involving AI in something as potentially delicate as mental health pose any threat?

Given that AI is currently used for mental health diagnostics and wellness coaching rather than treatment, Professor Pascale Fung of the Hong Kong University of Science and Technology says privacy is the main concern.

“For AI to do a good job, it needs access to patient records, past history and family medical knowledge. Security and safety of this data is very important. There are concerns about AI being hacked or data being stolen for other purposes,” she says. “On the other hand, that’s something we should be worried about when dealing with patient records anyway.”

Indeed, researchers have noted that the misuse of sensitive information shared between a patient and AI can have significant consequences, for both the user and the profession’s integrity. To avoid distrust, it’s important for developers to fully disclose data policies to users from the beginning, says Mr. Neeraj Kothari, co-founder of Cogniant. He says that users can then make an informed decision about what they share.

“We have signed an agreement to say we won’t sell data to a third party,” he adds. “The best way we can progress is to demonstrate through action that we are here to help, not to harm.”

Another risk is that humans could potentially become attached to a therapy chatbot, as was the case for many of ELIZA’s ‘patients,’ who believed that they were truly conversing with a human. This led to the coining of the phrase ‘the ELIZA effect,’ which describes the tendency of people to assume computer behaviors are equivalent to human behaviors. However, while acknowledging the need for more research in this area, Fung doesn’t believe this problem is unique to AI—people also become attached to devices such as mobile phones and television sets, she says.

“In every generation, there have always been concerns with new technology. When it becomes obsessive, people go to professionals for help.”

People may also joke with or lie to AI, but such instances can be minimized through the use of deception detection techniques like facial recognition, says Kothari.

“In general, if AI is designed for the benefit of patients and is non-judgmental and non-intrusive, there will be no reason to lie to the AI.”

The possibility of deception has more to do with human responsibility than technological downfalls, adds Fung.

“[AI] is a tool and what people decide to do with it is up to humans.”

Complementing, not replacing

Most researchers in the field acknowledge that AI is not a therapist replacement, but believe that it can be a supporting tool. Professor Zhu Tingshao of the Chinese Academy of Sciences and his colleagues, for example, developed an AI-based system currently integrated into Weibo that recognizes people who express suicidal thoughts; subsequently, it sends them hotline numbers and messages of support. While the researchers can’t determine if people subsequently seek help, Zhu says the technology is still a proactive step towards suicide prevention.

“Right now when it comes to suicide intervention, we need the [suicidal] people to do the contacting themselves. [But] few people with a problem want to actively ask for help,” says Zhu, who adds that the tool has received positive feedback so far. “We cannot take the place of psychologists or counselling professionals, but we can help people know their mental health status and, if needed, provide some help in time [to prevent suicide].”

Ms. Bhairavi Prakash, founder of The Mithra Trust, an Indian organization that runs wellbeing initiatives combining technology and community engagement, believes that AI can be useful for promoting wellness. However, she too doesn’t think that it can provide complete treatment, especially for severe mental illness. Attempting to apply it to such cases could be dangerous, and should not be attempted until the technology is more sophisticated, she says.

“You don’t know if AI is triggering the person, you can’t see their facial reactions,” says the work and organizational psychologist. “If someone is delusional or hallucinating and talking to the AI, it won’t know how much is real.”

Legislation also needs to catch up before AI is assigned more tasks, adds Prakash. For example, human psychologists may be required by law to notify the authorities if a patient shows signs of wanting to harm others, but it’s unclear how such cases should be handled by AI.

Fung echoes the need for clearer legislation, saying that humans must still remain in the loop for important decisions such as medication prescriptions.

“Machines aren’t perfect. Humans aren’t either, but we do have laws or regulations [to deal with] human error or medical accidents. We don’t really have very good regulations for machine error.”

In the future, Fung envisions AI helping to create better personalized treatment plans, while Zhu says it will make mental health services more efficient. Prakash feels AI-based tools will encourage people to make the initial step in seeking help.

“In conversations people have with [AI], they are so open because there is zero judgment. They can talk about anything. That is extremely liberating and great for mental wellness.”

(First published by Asian Scientist on August 3 2018. Available online at: https://www.asianscientist.com/2018/08/features/artificial-intelligence-mental-health/)

How microbes may influence our behaviour – The Scientist, September 2017

Stress, anxiety, and depression are emotions we all feel at some point in our lives, some people to a greater degree than others. Part of the human experience, right?

“It may seem odd that my research focuses on the gut if I’m interested in the brain,” says John Cryan, a researcher at the APC Microbiome Institute at University College Cork in Ireland. “But when we think of how we express emotion in language, through sayings like ‘butterflies in your tummy’ and ‘gut feeling,’ it isn’t surprising that they’re connected.”

In a recent study, Cryan and his colleagues reported a link between the microbiome and fear. By examining mice with and without gut bacteria, they discovered that the germ-free mice had blunted fear responses (Mol Psychiatr, doi:10.1038/mp.2017.100, 2017). Their findings may pave the way for the development of novel treatments for anxiety-related illnesses, including posttraumatic stress disorder.

Researchers at Kyushu University in Japan were the first to show, in 2004, that bacteria in the gut can influence stress responses, prompting many subsequent investigations. Yet despite mounting research, scientists remain uncertain about exactly how the gut microbiome affects the brain. While some bacteria influence the brain through the vagus nerve, other strains seem to use different pathways. It is known, however, that the population of the gut microbiome begins in early life, and recent research suggests that disruptions to its normal development may influence future physical and mental health (Nat Commun, 6:7735, 2015).

Researchers are finding that this gut-brain connection could have clinical implications, as influencing the gut microbiome through diet may serve to ameliorate some psychiatric disorders. Together with University College Cork colleague Ted Dinan, Cryan coined the term “psychobiotics” in 2013 to describe live organisms that, when ingested, produce health benefits in patients with psychiatric illness. These include foods containing probiotics, live strains of gut-friendly bacteria.

While there are many rodent studies linking probiotics and mental health, UCLA biologist Emeran Mayer and his colleagues were the first to test them in humans, using functional magnetic resonance imaging (fMRI) scans to assess the results. After administering probiotic yogurt to a group of healthy women twice a day for four weeks, the researchers found that the women had a reduced brain response to negative images (Gastroenterology, 144:1394-401, 2013).

“We reanalysed the data several times and convinced ourselves that it’s real,” Mayer says. “You can almost say it was a career-changer for me.”

Having conducted this study on healthy participants, Mayer is reluctant to conclude that probiotics can cure mental illnesses such as anxiety. “It’s a complex emotion, not just a reflex behavior like in the mouse,” he says. However, Mayer says he’s very supportive of the potential of prebiotics—fiber-rich foods that promote the growth of beneficial bacteria in the gut.

Researchers at Deakin University in Australia recently trialed a Mediterranean-style diet, which is predominately plant-based and fiber-rich, in a group of adults with major depression. They found that one-third of the participants reported a significant improvement in symptoms after 12 weeks on the diet (BMC Medicine, 15:23, 2017). One of them was Sarah Keeble from Melbourne. “I’ve suffered from depression for 17 years. At the start of this study, I was right at the bottom of the barrel,” she recalls. “After a few weeks, that sinking feeling slowly lifted, and my motivation and enthusiasm improved.”

Just as activity in the gut seems to affect the brain, mental stress can lead to intestinal problems. Scientists have demonstrated this in research on irritable bowel syndrome. For example, a study by Mayer and colleagues linked early-life emotional trauma to an increased risk of developing the bowel disorder (Clin Gastroenterol Hepatol, 10:385-90, 2012).

As data on the brain-gut axis accumulates, many scientists are taking notice. Trinity College Dublin researcher Shane O’Mara says that there is “great potential” in this area, but cautions that it’s too early to say whether targeting the microbiome will play a role in psychiatric treatment. University of Manitoba gastroenterologist Charles Bernstein also feels the research is promising but believes we are “far from manipulating the microbiome to treat mental health disorders.”

Those spearheading this research are equally aware of the need for more studies, particularly in human subjects, but they are hopeful that change lies ahead. “I’m almost certain that in several years, diet will be considered one branch of therapy for many mental illnesses, alongside medication and psychiatric treatments,” says Mayer.

“People with severe mental illness will still need something very strong, but this is a useful adjunctive,” agrees Cryan. “I think when we go to our GP in future, we will not only have blood tests, we will have the microbiome tested.”

“Within five years, I hope to see more clinical trials that demonstrate the efficacy of prebiotics and probiotics on mental health disorders,” says University of Chicago microbial ecologist Jack Gilbert. “There needs to be a revolution in how we deal with mental illness in our society.”

(First published in The Scientist magazine September 2017. Also available online at: http://www.the-scientist.com/?articles.view/articleNo/50146/title/How-Microbes-May-Influence-Our-Behavior/)

Mayor of Wexford Cllr Frank Staples speaks out about mental health in light of suicide figures – Wexford People, November 19 2016

Mayor of Wexford Cllr Frank Staples says that speaking out about his mental illness was like ‘taking away a mask’ and is encouraging people to open up.

Cllr Staples, who has been open about his own battle with depression in recent years, said that suffering in silence is like wearing a mask – an ordeal that can become very exhausting for a person over time.

‘It’s so exhausting to be covering up mental illness day in and day out,’ he said. ‘When you talk about it, it can only be described as taking away a mask. You feel instantly better. I encourage people to let down the mask and talk about it.’

‘For me as mayor to speak out about my own battle with depression, it has made a huge difference to me,’ he continued. ‘It feels really good to be open. I know now that if I’m not feeling well, I can talk about it. I don’t feel like I am making excuses.’

Cllr Staples acknowledged that it can be difficult for people to seek help themselves when they are suffering from a mental illness. With this in mind, he said it is important for everyone in the community to play their part in tackling the issue.

‘I have said before that I feel that anyone with a mental illness is strong. They have to face daily battles but they can’t keep fighting forever,’ he said.

‘We expect people to ask for help when they are struggling but not everyone is able to do that. We all have a huge part to play. It’s important to ask those close to you how they are and even though they might not speak out the first time, it might encourage them to eventually open up,’ he said.

At a higher level, Cllr Staples said that establishing a 24/7 mental health unit in Wexford is vital, not only for those suffering from mental illness, but for those close to them.

‘If somebody is suffering from depression for example, it’s good for their family to know that there is 24/7 support available. It gives them reassurance that there is somewhere that they can go if their loved one is in difficulty,’ he said. ‘Mental illness doesn’t only affect those suffering.’

Cllr Staples reiterated earlier reports that the County Council is currently in talks with the government on the possibility of an alternative use for St Senan’s Hospital. He said he would like to see a 24/7 unit on the grounds of Wexford General Hospital.

‘A 24/7 unit is definitely needed in Wexford and I don’t think anyone is going to argue that. There has been a lot of speculation about where it should be but I would like to see it on the grounds of Wexford General Hospital as you have access to other services there,’ he said.

‘I would love to see it soon but I am under no illusions; it takes a lot of time and money. Finding a building is no problem but hiring staff costs a lot of money,’ he continued. ‘But if we don’t at least talk about it, it will never happen.’

Improving education on mental health is also necessary, according to Cllr Staples, who said that many young people may not know if they are suffering with depression.

‘More could be done for teens and young people. They might be suffering from depression but don’t know the symptoms. There are so many symptoms of depression. They could be going around feeling terrible and not knowing why,’ he said.

Cllr Staples made his comments following the release of figures from the CSO, which showed that 405 people lost their lives to suicide in Wexford between 1995 and 2015. Commenting on the figures, he said they were shocking and very high, but added that they are likely even higher in reality. He said it is difficult to know for sure why Wexford has one of the highest rates of suicide in the country.

‘We have a high rate of unemployment and I am sure that isn’t helping,’ he said. ‘But that’s just speculation. I imagine it is linked in some way or another as unemployment puts more financial pressure on people.’

(First published in the Wexford People newspaper: print edition. Also available online at: http://www.independent.ie/regionals/newrossstandard/news/mayor-of-wexford-cllr-frank-staples-speaks-out-about-mental-health-in-light-of-suicide-figures-35215238.html)