In this blog, Verena Hinze, a postdoctoral research fellow at the University of Oxford, shares her takeaways from each of the talks during the MQ Data Science Meeting in September.
In his talk, Greg Farber highlighted two problems that impede scientific progress:
First, he highlighted the need for good theoretical frameworks to drive future research. This point resonated with my own experience, as a lot of (particularly exploratory) research is published without an underlying understanding of the mechanisms of change. Moving forward, appropriate theories that take the full complexity of mental health into account (ideally informed by both existing evidence and key stakeholder views) should be at the heart of future research. This also aligns with the updated Medical Research Council guidance on developing and evaluating complex interventions (see http://dx.doi.org/10.1136/bmj.n2061).
Second, he emphasised that a critical problem for data science is the heterogeneity in how data are collected: many different scales are typically used to measure the same underlying construct. As data harmonisation efforts can be challenging and require immense expertise, he advocated using one agreed set of outcome measures in all new studies. This way, data can easily be combined across studies, providing the required sample sizes and ensuring that findings are reproducible. Researchers who still wish to evaluate new measures can add them to a study alongside the agreed measures to explore their psychometrics.
In the discussion, questions were raised about the appropriateness of using this agreed set of measures in different cultural contexts. It was suggested that any noise related to a specific cultural context would likely be cancelled out given a large enough sample size, but I would like to challenge this idea. Although I believe that a standard set of measures is crucial moving forward, a short list of measures cannot fully capture the importance of the context in which mental health problems occur.
For example, in some cultural or clinical contexts (e.g., patients living with a disability), item endorsement may stem from causes other than an underlying depression. If we don't account for this using a more tailored, personalised approach, we might lose important information that could be key to informing the provision of appropriate future interventions.
Another challenge is the emergence of ecological momentary assessments, which allow us to capture a person's experiences in the moment their thoughts, feelings or behaviours occur. I believe this technology has considerable potential for driving future research forward, yet I wonder how these methods (which primarily rely on single items) can fit in with an agreed set of outcome scales. An essential start has been made in identifying the problem and suggesting possible solutions, but much more work is needed to translate the approach to the diversity of settings researchers find themselves in.
I particularly appreciated the reading chaired by Prof. Ann John. As a suicide researcher myself, working mainly with data to better understand risk and resilience in relation to suicide in young people, I was deeply touched by this reading. It reminded me why this work is so important, and why it is even more critical not to forget that behind each number in a dataset there is a tragic story affecting not only the individual but their whole family and broader society.
Listening to the earlier talks from the research funders, I realised even more why it is so important to involve people with lived experience at every stage of the research process, so that we identify the questions that really matter to patients and their wider social context.
Moving forward, I feel we should give more people with lived experience the chance to speak at conferences and engage in the conversation around the topics that affect them so profoundly. Only then will we be able to really move forward and produce research that has a meaningful impact on the lives of those affected.
I was impressed by the efforts taken in the MindKind project to involve young people at every stage of the process, giving them a genuine voice in the research.
It is excellent to learn that young people are generally happy for their data to be shared for research if it is in the public interest. This matches my own experience of working with young people and their parents.
As this study included adolescents aged 16 and over, I wonder whether these findings would also translate to younger people (<16 years). For adolescents younger than 16, parents are required to give informed consent on their child's behalf; I wonder whether parental views align with these findings, or whether additional challenges might arise when including participants under 16. This group is of particular interest for early intervention and prevention initiatives, as we know that many mental health disorders have their first onset before the age of 16.
In this talk, I was particularly impressed by the finding of a reversed trend when comparing short-term and long-term risk trajectories.
Longitudinal research is often restricted to relatively short timeframes (e.g., a maximum of 1-2 years). This talk highlighted for me how crucial it is to have studies that capture more extended periods, so that risk trajectories can be understood in detail. This matters in the context of Covid, but it is similarly essential when considering young people's mental health, where we know that some processes might take much longer to unfold and would therefore not be captured if we considered only short time frames.