About the author: Matthew Fenech is medical safety lead at Ada Health, a global digital health company making personalised health accessible for all. His main interest is developing frameworks that ensure health tech tools are safe, effective, and acceptable to patients and healthcare providers alike.
During the COVID-19 pandemic, many digital health companies felt duty-bound to do something, and to do it fast, given how rapidly the crisis was accelerating. Companies knew they could make a real difference, but only if solutions were deployed rapidly: the virus would not wait.
This need for speed was felt across many areas of healthcare: consider the emergency approval of hydroxychloroquine as a treatment for COVID-19, which was later revoked.
It’s clear that there’s a balance to be struck between the impulse to act quickly and the need to ‘first do no harm’. Whilst it’s fantastic to see widespread recognition of the potential for digital health, we must also learn from what we’ve seen so far and take a moment to reflect on the bigger picture.
Digital health is facing a maturity challenge
Alongside such accelerated growth, it is crucial that the digital health sector properly matures in order to achieve its potential in the long-term.
How do we ensure we are building tools that are safe, both now and in the long-term? How do we protect patients and users in every way possible, from their health to their data security? How do we build trust among patients, clinicians and policymakers?
As with any safety-critical industry, proper regulation is vital. We have to ensure that we are holding each other to the highest medical and ethical standards, so that individuals can have absolute confidence in digital health solutions.
These tools fall under the purview of medical device regulation, but much of that regulation was designed with physical devices such as pacemakers in mind. Even though the European Union’s Medical Device Regulation (MDR) has been heralded as a major step forward in ensuring patient safety, concerns were raised about the industry’s readiness for its full application by May this year, in part because of a lack of guidance and resources provided by the regulators themselves. Moreover, standards and requirements for healthcare IT systems were written for more traditional software. These regulations, standards, and requirements largely fail to account for the novel challenges posed by emerging technologies such as AI.
Modernising these regulations will take time, and applying existing regulations to the current realities of the digital health space is no easy feat. Therefore, companies have an even greater responsibility to emphasise the ‘health’ in health tech, to ensure clinical best practice and get proactive about assessment and evaluation.
Everyone has a role to play in tackling this challenge
As a field, we are only as good as our weakest link. We need to level up together, and we need companies to commit to continuous and rigorous assessment and evaluation at every stage.
First and foremost, prioritising safety starts in your own backyard. We cannot simply wait for external regulation. The EU’s MDR was set to be a real milestone this year, and its delay makes it all the more vital that all players strive for rigorous evaluation internally as well as externally. At the very least, companies must involve healthcare professionals at every level, conduct rigorous risk assessments, and constantly review user feedback.
Beyond this, we have a responsibility to work together to ensure that the external methods used for evaluation are held to the same high standards and expectations of rigour as the solutions they are designed to test. We need as many players as possible to participate in research, evaluation studies and industry-wide initiatives in order to drive positive change.
Projects such as the World Economic Forum’s Chatbots RESET mark important steps in this process, bringing together stakeholders from multiple areas in tech, medicine, academia and policy to design frameworks for governing chatbots used in healthcare.
Transparency about failure is key to success
Given the recent drive to release products rapidly, the post-market aspects of regulation, such as post-market surveillance and clinical follow-up, could not be more important.
Not only are robust, responsive and transparent feedback loops vital to this, but companies need to commit to the rapid and effective investigation of reported issues. These processes are fundamental to increasing trust in digital health as a whole, and encourage open lines of communication between developers and the populations they are trying to serve.
As we look to the future, we must also take the time to review the impact that the swathe of digital health solutions released in the last few months has actually had.
We need to be honest, transparent, and self-critical in this review process, so that we learn what worked, what didn’t, and why. Silver linings are hard to find in a pandemic that continues to devastate millions, which is exactly why we cannot afford to miss this opportunity to learn from the experience.
An open dialogue, and a lasting future for digital health
What’s clear is that we must champion a culture of quality and safety, and encourage an open dialogue around these topics. As MDR comes into effect next year, it will be vital that players across the industry welcome it as a positive step, while remaining transparent with both regulators and other stakeholders about the subsequent challenges and opportunities.
These are complex issues and there are no easy fixes, so it’s crucial that we address them in collaboration with the populations we are aiming to serve, allowing us to really maximise the potential of digital health and enabling the safe and widespread adoption of digital solutions around the world.
By establishing safe and effective regulation on a global scale, we will be able to truly leverage emerging technologies as a lasting solution to global health issues.