Why patients and the public should be part of the conversation around AI in healthcare


As a physician, I learned that patients are the experts. They live with their illnesses and know what their needs are. Since healthtech companies serve care providers, it makes sense for them to adopt the same attitude: recognise and involve patients.

Having been an NHS doctor and a patient myself, I have now taken on the role of healthtech innovator at an artificial intelligence (AI) company. I find it valuable to use my experience and position to help break down barriers to patient (and medical professional) engagement. Above all, this is a way to contribute to healthcare improvements.

In my view, healthtech should be accessible and comprehensible to all, without requiring prior experience or technical knowledge. Why exactly? And how to set that up? With this piece, I’d like to open the conversation around engaging patients in the design and development of AI medical solutions.

Why engagement matters

Why is patient engagement important? For starters, it is our responsibility as a healthtech company to help patients understand how the technology we build plays a role in the care they receive. Beyond that, it can answer two needs:

Public awareness

There are noticeable knowledge gaps and reticent attitudes about new technologies. Research on public attitudes (e.g. surveys conducted in the UK) suggests low public awareness of how these technologies operate. If the way in which intelligent devices process personal data is unclear, the public may reject them.

I realised when talking to patient advocacy groups that technology seems alien to patients because they don’t hear about it. I also notice that the public tends to get its information from media stories rather than from AI companies or doctors. Some articles in the media add to the concerns, with titles such as “An invisible hand: Patients aren’t being told about the AI systems advising their care”. In clinical practice, radiologists have been using Computer-Aided Diagnosis (CAD)/AI tools to support decision-making for years. Perhaps this is why mentioning the use of technology to patients does not seem a priority.

Workforce support

Let’s look at it differently. By informing patients about the tools used in their care (AI-based, or even older CAD systems), we will start to normalise the use of healthtech. And we should, because the increasing presence of technology in healthcare is a positive development. It allows us to maintain and support healthcare systems facing numerous challenges.

For example, the radiology workforce is under pressure. It plays an essential role in almost all care pathways – including assessing covid-19-induced lung damage – so the additional strain on healthcare providers translates into high demand for medical images. At the same time, there are not enough radiologists to report on these scans. Intelligent solutions are needed to help the existing workforce achieve timely and consistent reporting of radiological examinations.

It is worth mentioning here that this concerns AI tools used as a second or concurrent reader, aiding diagnostic decisions and operational efficiency. These are not autonomous tools replacing a human healthcare professional; any prediction that this will happen remains unproven.


What patient engagement can bring

There is a lot to gain for healthcare innovators engaging with patients during the development of their solutions. I can think of at least three things:

A sense of purpose

Tech-focused teams working in the medical device industry have most likely not worked in a hospital before. Although, as I see with my colleagues, there is a strong drive to help caregivers and patients, the direct connection with the care pathway is harder to establish. Meeting the public would help AI developers see why their work makes a difference. Adding to their sense of purpose can only benefit workplace satisfaction and quality of work.

Better access to data

Healthcare data is generated daily, across modalities and systems, to inform clinical decision-making. This data is invaluable to research and development. By using it appropriately, we can develop new technologies and improve existing ones. It is how we drive progress in healthcare.

AI development relies on access to large, diverse, and high-quality datasets. Companies using these datasets to develop medical solutions have the legal and ethical responsibility to protect patient data and preserve its integrity and confidentiality. Our recent article explains the regulatory frameworks and standards around cybersecurity. It also reviews ways of addressing security risks from both a medical device regulation and a GDPR perspective.

I may be overly ambitious here, but by involving patients and the public in the design and development of an AI medical device, we may build trust and wider acceptance of data access. For one, we would have the opportunity to communicate a clear, GDPR-based definition of what patient data is used for and how. We are aware of outcomes from different publications highlighting the key points that the public finds relevant about the use of data: value, privacy, risk minimisation, data security, transparency, control, information, trust, responsibility and accountability. These are the perfect starting points.

Patient perspective on design

At Aidence, we understand the importance of user-centric design. We therefore work closely with physicians and value their input during the development and improvement of our AI solutions.

With input from patients, we can do even better. Understanding patient perspectives on AI solutions would help companies identify improvements and unmet needs. Or it might help prioritise specific features. These are just a few examples of relevant insights to collect from patients or patient groups:

  • Patient priorities in disease management;
  • Effects of clinical decision-making on quality of life;
  • The patient’s view on the use of clinical data in product development.

As a result, we will optimise the usability of the AI tool for both healthcare providers and patients.


Ways of engagement

There is no guidebook on how to approach patient engagement today. Nonetheless, I’ve been laying the foundation with efforts in the past and ideas for the future.

Accessible information

The patient advocacy group Lung Cancer Europe (LuCE) works tirelessly to provide accessible information for the public on lung cancer and to lobby for equality of care across Europe. In November last year, I supported the creation of their 4th report and joined them in presenting the findings to the European Parliament, lobbying for the introduction of lung cancer screening programmes. (Screening for lung cancer – effectively and sustainably – is a topic I deeply care about and one I focused on in my previous articles.)

Opportunities like this are fantastic for meeting patients/the public and explaining the role I have as an innovator in healthtech. Whilst I am not surprised that technology’s role in screening is unfamiliar, I am usually met with interest and enthusiasm once I explain what I do.

Open dialogue

Another point on our agenda is office visits. I have invited patients to talk to the team about their diagnoses and to show how our solution Veye Lung Nodules helps detect and segment early-stage lung cancer. Unfortunately, the visits had to be postponed due to covid-19. I acknowledge that this initiative is not scalable. Still, communicating directly with patients delivers all the benefits I listed above.

A more scalable initiative was writing a guide for patients, shared among LuCE’s network, in which I explained the role of AI in screening. I see an opportunity for a dialogue with patients on the use of AI in healthcare pathways, including the measures taken to secure their data and comply with regulations.

Collaboration

These past activities have paved the way for an engagement project focused on lung cancer. By engaging with members of the public, lung cancer patients, and healthcare professionals who specialise in lung cancer, I hope to achieve three goals: uncover current perceptions of healthcare technology and data, encourage open conversations, and work together to improve healthcare.

Patients know best

This quote from Matthew Gould, National Director for Digital Transformation in NHS England, echoes the basic principle in this piece:

“The whole point of improving technology in the NHS and social care is to make life better for the people who use these systems. So it has to make sense to involve them in the design of digital services. Because no one knows how people will respond to an appointment booking service, or a diabetes management tool, or a care plan better than they do themselves.”

Our patient engagement initiative is currently taking shape, and a lot of work is still pending. Please get in touch if you’re interested in hearing more about it.

 

About Lizzie

Lizzie Barclay was Medical Director at Aidence. Lizzie trained as a doctor in the NHS and became interested in healthtech during her Radiology specialty training. She joined Aidence in 2019 and, as Medical Director, worked closely with NHS Trusts to support the roll-out of lung screening (Targeted Lung Health Checks) programmes. She also led our successful NHSX AI Award application. Lizzie is driven by the potential technology has to improve patient outcomes and reduce health inequalities. She hopes to help nurture a responsible, trustworthy culture in AI for healthcare.


Aidence is now DeepHealth. Learn more about our AI-powered health informatics portfolio on deephealth.com
