In recent years, there has been growing interest in, and exploration of, the use of machine learning (ML) to assist in the diagnosis of mental health problems, and to improve access to, engagement with, and the outcomes of therapeutic treatment. Amongst the wide range of diagnosable mental disorders, affective disorders, such as depression, bipolar disorder and anxiety, are the most common. In these disorders, distortions and inconsistencies in a person’s emotional state (their mood) are the primary cause of disruptions to their life. Here, ML promises to offer new routes for improving the identification of health risk factors; the prediction of disease progression; and the development of personalised health interventions. Research to date has started to explore the identification of mental health problems through inferences about people’s behaviours on social media, in online searches, or in mobile phone app use; as well as varied approaches to assess, or continuously monitor, a person’s mental health and related symptoms by measuring sleep, mood, stress or physical activity via audio, visual or physiological signal processing.
Despite this great potential, the realization of effective ML-enabled applications for mental health remains a hugely challenging area for research and development. Among the many challenges in this domain are the need for a stronger focus on real-world applications and user-centred design processes to aid the identification of real healthcare needs that can sensibly be supported through ML; and, accordingly, careful choices in data collection and in the design of reliable and fair algorithmic models. Especially in mental health, where data and ML-supported decisions can have far-reaching personal, social and economic impact, we need to be very critical of what reasonable inferences can be drawn from specific data; design interfaces that help people appropriately interpret system inferences; and ensure that, ultimately, humans remain in control of, and accountable for, important ML-informed decisions.
This workshop seeks to bring together an interdisciplinary group of researchers and practitioners from academia and industry to discuss the unique opportunities and challenges in developing effective, ethical and trustworthy ML approaches and interventions for the diagnosis and treatment of affective disorders. The specific focus will be on (but is not limited to):
We invite submissions of short papers (2-4 pages) in ACII paper format. Submissions will be reviewed by members of the organising and program committees based on relevance to the workshop and potential for contributing to the discussions. Workshop proceedings will be published in IEEE Xplore.
Please submit your paper to the "Machine Learning for the Diagnosis and Treatment of Affective Disorders" track via the EasyChair submission system.
HCI Researcher
Microsoft Research Cambridge
Main contact: anthie [at] microsoft.com
Researcher in Healthcare ML
Microsoft Research Cambridge
Associate Professor in Computer Science and Statistics at Trinity College Dublin
UX Director SilverCloud Health
Professor of Art + Design
Northeastern University
Assistant Professor in Interactive Computing
Georgia Institute of Technology
Research Manager of Visualisation and Interaction
Microsoft Research Redmond
Assistant Professor in Electrical and Computer Engineering and Computer Science
Rice University
Saeed Abdullah, Penn State University, US
Talayeh Aledavood, University of Helsinki, FI
Angel Enrique, SilverCloud Health, IRL
Nadia Bianchi-Berthouze, University College London, UK
Rafael Calvo, Imperial College London, UK
Prerna Chikersal, Carnegie Mellon University, US
Afsaneh Doryab, Carnegie Mellon University, US
Jean Marcel Dos Reis Costa, Cornell University, US
Marzyeh Ghassemi, University of Toronto, CA
Martin Gjoreski, Jožef Stefan Institute, SVN
Mark Matthews, HealthRhythms, US
Tristan Naumann, Microsoft Research Redmond, US
Temitayo Olugbade, University College London, UK
Pablo E. Paredes, Stanford University, US
Koustuv Saha, Georgia Institute of Technology, US
Björn W. Schuller, University of Augsburg, GER
Greg Wadley, University of Melbourne, AUS
Steffen Walter, University of Ulm, GER
Assistant Professor of Psychological Science
University of California, Irvine
Despite advances in the development of evidence-based mental health interventions, we have seen little movement in the prevalence of mental health disorders. Many have proposed that technology can help address problems in mental health services by making them more accessible, available, and affordable. However, these proposals have not addressed what might be an unfortunate reality: that mental health services may not be as good as we think. Beyond a lack of engagement, mental health services suffer from a lack of quality and measurement. Advances in machine learning offer the potential to create “smarter” mental health services by improving how we assess mental health issues and by building personalized and more efficacious services. In this talk I will present research focused on both the assessment and treatment of mental health issues through technology, including the use of sensor-based technologies to identify mental health issues and the IntelliCare platform, a suite of mini-apps capable of creating personalized sequences of technology-enabled care. I will discuss implications for the application of machine learning to mental health services, as well as challenges in the design and implementation of these tools.
PRESENTATION SLIDES
Reader
University College London
Today's mobile phones are far from the mere communication devices they were just ten years ago. Equipped with sophisticated sensors and advanced computing hardware, phones can be used to infer users' location, activity, social setting and more. As devices become increasingly intelligent, their capabilities evolve beyond inferring context to predicting it, and then to reasoning and acting upon the predicted context. Information about users' behaviour can also be gathered by means of wearables and IoT devices, as well as by sensors embedded in the fabric of our cities. Inference is not limited to physical context and activities: in recent years, mobile phones have increasingly been used to infer users' emotional states. The applications of these techniques are numerous, from positive behavioural interventions to more natural and effective human-mobile device interaction. In this talk I will discuss the work of my lab in the area of mobile sensing for modelling and predicting human behaviour and emotional states. I will present our ongoing projects in the area of mobile systems for mood monitoring and mental health. In particular, I will show how mobile phones can be used to collect and analyse mobility patterns of individuals in order to quantitatively understand how mental health problems affect their daily routines and behaviour, and how potential changes in mood can be automatically detected from sensor data in a passive way. Finally, I will discuss our research directions in the broader area of anticipatory mobile computing, outlining the open challenges and opportunities.
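To make the kind of passive mobility analysis described above a little more concrete, here is a minimal sketch assuming a hypothetical stream of (timestamp, latitude, longitude) fixes. It illustrates two commonly used mobility features (location entropy and distance travelled); it is a generic illustration, not the speaker's actual pipeline, and the grid size is an arbitrary assumption.

```python
# Sketch: simple mobility features from passively sensed GPS fixes.
# Data format (timestamp, lat, lon) and cell_size are illustrative assumptions.
import math
from collections import Counter

def location_entropy(fixes, cell_size=0.005):
    """Shannon entropy over time spent in discretised lat/lon grid cells.

    Low entropy means time is concentrated in few places; shifts in this
    feature over days or weeks are one candidate marker of routine change.
    """
    cells = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for _ts, lat, lon in fixes
    )
    total = sum(cells.values())
    return -sum((n / total) * math.log(n / total) for n in cells.values())

def distance_travelled(fixes):
    """Total great-circle distance (km) between consecutive fixes."""
    def haversine(p, q):
        lat1, lon1, lat2, lon2 = map(math.radians, (p[1], p[2], q[1], q[2]))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(a))
    return sum(haversine(p, q) for p, q in zip(fixes, fixes[1:]))

# Toy example: three fixes over one day.
day = [(0, 51.5246, -0.1340), (3600, 51.5247, -0.1341), (7200, 51.5074, -0.1278)]
print(location_entropy(day), distance_travelled(day))
```

Features of this kind, aggregated per day or week, are what such systems would then relate to mood self-reports or clinical assessments.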
Nonverbal behavior is multimodal and interpersonal. In several studies, I have addressed the dynamics of facial expression, head, and body movement for emotion communication, social interaction, and clinical applications. Leveraging recent advances in AI and machine learning, my work focuses on developing computational models to automatically analyze, recognize, and interpret multimodal social and communicative behavior. By modeling multimodal and interpersonal communication, my work seeks to inform affective computing, social interaction, and behavioral health informatics. In this talk, I will focus on our recent work on computational methods for the multimodal assessment of depression severity in participants undergoing treatment for depression.
Professor at Dyson School of Design Engineering
Imperial College London
Substantial investment from government and commercial organisations in health technology reflects the opportunity to promote uniquely tailored, data-rich and autonomy-supportive tools at scale. Still, many digital technologies for health and wellbeing go un-evaluated, lack an evidence base, or fall short of achieving their intended impact. Among the challenges to success is achieving the deep interdisciplinarity involved, which often requires continuous collaboration among medical professionals, psychologists, HCI researchers, user experience designers, software developers and end-users. While some projects lack theoretical grounding or an evidence base, others fail to involve users effectively in order to understand their needs, perceptions and contexts, resulting in technologies that go unused. Working together, researchers in HCI, health and the social sciences can improve the processes by which digital technologies are developed and distributed for the benefit of population-wide health and wellbeing. In this presentation, I will share some of the multidisciplinary, evidence-based approaches to the development of health technologies formerly taken at the Wellbeing Technologies Lab in Sydney, Australia, and now starting at Imperial College London. I will describe some ideas on the contributions HCI can make to this field, and share a number of case studies in the domains of chronic illness, mental health and doctor-patient communication.
LOCATION: William Gates Building
ROOM: FW26 (First Floor, West, Room 26)
LINK TO MAP: https://www.cl.cam.ac.uk/maps/
9:00 - 9:15 Welcome & Introductions
9:15 - 10:00 Keynote: Stephen Schueller (University of California, Irvine)
10:00 - 10:30 Workshop Talks: Depression & Suicide
Analysis of Online Suicide Risk with Document Embeddings and Latent Dirichlet Allocation
N. Jones, N. Jaques, P. Pataranutaporn, A. Ghandeharioun, R. Picard
Gram Matrix Trajectories of Body Shape Motion: An Application for Depression Severity Assessment -- PRESENTATION SLIDES
M. Daoudi, Z. Hammal, A. Kacem, J.F. Cohn
10:30 - 11:00 Coffee Break
11:00 - 12:00 Workshop Talks: Stress Recognition
An Effectiveness Comparison between the Use of Activity State Data and That of Activity Magnitude Data in Chronic Stress Recognition
Y. Nakashima, M. Tsujikawa, Y. Onishi, T. Umematsu
A Stress Recognition System Using HRV Parameters and Machine Learning Techniques -- PRESENTATION SLIDES
G. Giannakakis, K. Marias, M. Tsiknakis
Synthesizing Physiological and Motion Data for Stress and Meditation Detection
Md Taufeeq Uddin, S. Canavan
A Novel Multi-Kernel 1D Convolutional Neural Network for Stress Recognition from ECG (DWNet1D) -- PRESENTATION SLIDES
G. Giannakakis, E. Trivizakis, M. Tsiknakis, K. Marias
12:00 - 12:15 Workshop Talk: Diagnosing ADHD
Machine Learning Stop Signal Test (ML-SST): ML-based Mouse Tracking Enhances Adult ADHD Diagnosis
A. Leontyev, T. Yamauchi, M. Razavi
12:15 - 12:30 Workshop Talk: Predicting Future Wellbeing
Toward End-to-end Prediction of Future Wellbeing using Deep Sensor Representation Learning -- PRESENTATION SLIDES
B. Li, H. Yu, A. Sano
12:30 - 14:00 Lunch Break
14:00 - 14:20 Invited Talk: Mirco Musolesi (University College London)
14:20 - 14:40 Invited Talk: Zakia Hammal (Carnegie Mellon University)
14:40 - 15:00 Invited Talk: Rafael Calvo (Imperial College London)
15:00 - 15:30 Coffee Break
15:30 - 15:45 Defining Key Topics for Discussion
15:45 - 17:00 Group Discussions: formation of 3-4 discussion groups; 45 minutes of discussion on a key topic + 10 minutes of reporting per group
17:00 - 17:30 Summary
The workshop formed part of the ACII 2019 conference in Cambridge (UK) and was well attended. It had an extensive program that started with a keynote on digital mental health services in the US; followed by presentations of the accepted workshop papers; invited talks by established researchers in the field; group discussions; and conversations about future steps to continue experience sharing within, and growth of, the wider research community on ML4AD.
Topics and application areas ranged from detecting depression from body movements (Daoudi et al. 2019) and predicting online suicide risk on Reddit (Jones et al. 2019), to various approaches to stress recognition (Giannakakis et al. 2019; Nakashima et al. 2019; Taufeeq Uddin & Canavan 2019), e.g., through physiological signals such as heart rate variability (HRV) (Giannakakis et al. 2019); as well as a study of an impulse-suppression task to aid the diagnosis of adult ADHD (Leontyev et al. 2019); and strategies for generating better ‘wellbeing features’ for end-to-end predictions of future wellbeing (Li et al. 2019).
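To give a flavour of the HRV-based stress-recognition setting mentioned above, the sketch below computes standard time-domain HRV features (SDNN, RMSSD, pNN50) from windows of inter-beat (RR) intervals and trains an off-the-shelf classifier. The data are synthetic and the pipeline is a generic illustration, not a re-implementation of any of the workshop papers.

```python
# Generic sketch of HRV-feature stress classification; synthetic data,
# hypothetical labels, and an arbitrary choice of classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def hrv_features(rr):
    """Standard time-domain HRV features from a window of RR intervals (ms)."""
    rr = np.asarray(rr, dtype=float)
    diffs = np.diff(rr)
    return [
        rr.mean(),                     # mean RR interval
        rr.std(ddof=1),                # SDNN
        np.sqrt(np.mean(diffs ** 2)),  # RMSSD
        np.mean(np.abs(diffs) > 50),   # pNN50 (proportion of diffs > 50 ms)
    ]

# Synthetic dataset: windows of RR intervals with invented stress labels.
rng = np.random.default_rng(0)
calm = [rng.normal(850, 60, 120) for _ in range(40)]    # slower, more variable HR
stress = [rng.normal(650, 25, 120) for _ in range(40)]  # faster, less variable HR
X = np.array([hrv_features(w) for w in calm + stress])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # toy accuracy estimate
```

On real data, the hard parts discussed at the workshop (labelling, ground truth, person-dependent baselines) dominate; the classifier itself is the easy step.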
Discussions of the workshop papers, keynote and invited speaker presentations foregrounded many important challenges in the field. Amongst others, these related to: (i) international differences (e.g., in how mental health services are compensated, and in cross-cultural values); (ii) the importance of supporting the interpretability of ML systems; (iii) defining, selecting or developing good-quality mental health measures; (iv) ethical issues when developing or deploying ML applications; and (v) challenges in data processing and feature extraction.
Towards the end of the workshop, attendees formed discussion groups to dive deeper into the topics of ‘measurement’ and ‘ethics’.
The topic of measurement and its associated challenges recurred frequently throughout the day. These ranged from difficulties in labelling data and establishing ‘ground truth’; to limitations in defining, selecting or developing specific mental health measures and targets; and related ethical considerations as to what may be considered ‘safe’ measures to administer to study participants, or to people who are perhaps self-managing a condition as part of everyday life. Two additional areas of debate received particular attention:
In terms of research foci and the types of ML tools being developed, there appears to be a trend towards detection and diagnosis, which may partly be explained by the availability of data and measurement tools in this space, and which shapes how research targets are framed. At the moment, the tendency is to try to match data about a person to a diagnosis (e.g., the broad category of depression), whereas the more important goal might be to map the person to a relevant treatment. Furthermore, if we treat a mental illness such as depression as one broad category, we may fail to account for its many variations and contributing factors, meaning that the task of building a model for ‘monitoring depression’ is only vaguely defined. This suggests perhaps targeting simpler, more precise definitions of the problem(s) we are trying to solve. Related key questions included: What is the health/medical problem that we are trying to address? Are we asking the right questions? How can we ensure that the (often complex) models and solutions we develop in computing science really meet a clinical need? What are ‘the right’ use cases for ML? How do we define, select or develop good-quality measures?
Excitement about data captured, often passively and continuously, about people’s behaviours through multimodal sensors (including physiological signals), or through the content they create online, has started to shape perceptions of these new data approaches as providing ‘more objective’ insights, especially compared to more subjective methods such as self-report questionnaires. Discussions pointed out that we cannot strictly define what is subjective or objective. Thus, instead of treating these approaches as competitors, a more promising route may be to look at the interesting relations that emerge from combining and triangulating different methods, and at what each may say about the person (e.g., questionnaires about stress say one thing while the physiological data says something else: how can this be explained?).
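As a small, invented illustration of such triangulation: given weekly self-reported stress scores and a weekly physiological summary for one participant (all numbers below are hypothetical), one can check how far the two methods agree, and flag weeks where they diverge for closer, possibly qualitative, follow-up rather than discarding either measure.

```python
# Hypothetical sketch: triangulating self-report with a physiological proxy.
# 'pss' = weekly Perceived Stress Scale scores; 'rmssd' = weekly mean HRV
# (RMSSD, ms); both are invented example values for one participant.
import numpy as np
from scipy.stats import spearmanr

pss = np.array([12, 18, 25, 22, 30, 14])    # higher = more reported stress
rmssd = np.array([48, 41, 30, 33, 26, 45])  # lower HRV often accompanies stress

rho, p = spearmanr(pss, rmssd)
print(f"Spearman rho={rho:.2f} (p={p:.3f})")
# A strong negative correlation is consistent across the two methods; weeks
# where they diverge (e.g., high PSS but unchanged HRV) are candidates for
# follow-up, not evidence that either measure is simply 'wrong'.
```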
Related to these discussions of measurement and data were conversations about the role of clinicians and how ML systems can empower them. For example, while we can detect indicators of schizophrenia from mobile phone data, and clinicians may judge such data as potentially useful, it can be difficult for them to make sense of large amounts of data that patients may only share when presenting to them once a month. For these kinds of scenarios it is really important that the data is presented in ways that are helpful to clinicians and complement their work practices, rather than overwhelming them. Related questions included: How can we empower clinicians through data tools (what is helpful, relevant, actionable)? How can we help clinicians to trust us? How can the results of ML translate into concrete actions or interventions for clinicians and patients?
Inevitably, when discussing the role of ML and the possibilities of ML-enabled interventions in real-world mental health services, the conversation turned to ethical issues. Two themes were most prevalent:
A key conversation topic revolved around communicating ML-detected mental health disorders or risks (e.g., of suicide on social media) to people who may not have been aware of a mental health problem or risk. On the one hand, being able to detect problems (early) can help raise awareness, validate the person’s experience, and invite help-seeking and better management of a condition. On the other hand, people may not be aware that their data traces are being analysed for the purposes of mental health screening, or may not want to be ‘diagnosed’ with a psychiatric condition due to the associated stigma and the potential negative effects on their personal or work life. For example, a diagnosis of a mental disorder can have severe implications for people in professions such as the police force or firefighting. Thus, how do we balance people’s “right to be left alone” with their “right to be helped”?
Related to these discussion points were key questions such as:
How to sensibly communicate the detection/ diagnosis of a mental health problem or disorder?
Should passive data only be collected and used for self-reflection and self-care of the person?
How to show risk factors to people in ways that are actionable (e.g., a diagnosis alone may not be helpful unless the person knows what they can do about it)?
What kinds of behavioural interventions should perhaps not be developed or tested with people in-the-wild (e.g., without appropriate safeguards in place)?
It is hard to predict what unanticipated consequences a new ML-enabled mental health intervention might have on a person, their life, or society at large. This is partly because we tend to study well-defined problems whose solutions may not transfer to contexts outside of those they were designed, or trained, for. For example, in the context of developing an emotion recognizer based on a person’s facial expressions, attendees discussed what the implications might be if someone re-purposed this technology, e.g., to identify children who are not working enough at school, or employees who appear to be less productive at work.
In terms of misuse, further examples discussed included: the difficulty of preventing the (mis)use of tools with low clinical accuracy in clinical practice; and challenges related to user consent and data control, where it may not be possible or transparent for users to know who has (and owns) information about them, how their data is used or re-used, or how they could take their data back. Key questions thus included: How do we responsibly design and develop ML systems? How can we reduce the risk of the technologies we develop being misused? How do we re-think consent processes and support users’ control over their data?
Responding to the question of how we can, in practice, reduce the risks of our tools being misused or having negative impact, Rafael Calvo also pointed to a psychological framework that suggests four sensitivities and how they can be addressed:
Anticipatory: interdisciplinary work
Reflective: slow down and think
Deliberative: participatory work
Responsive: notice and adapt