The UK National Standards Body, BSI, has published guidance designed to build greater digital trust in the AI products used to diagnose or treat patients, ranging from medical devices to smartphone chatbots and in-home monitoring tools.
With the advent of more innovative AI tools, clinicians and health providers will have the opportunity to make informed diagnostic decisions efficiently, to intervene in, prevent and treat diseases, ultimately improving patients’ quality of life and benefiting society.
Scott Steedman, Director General, Standards, BSI, said: “The new guidance can help build digital trust in cutting-edge tools that represent enormous potential benefit to patients and the professionals diagnosing and treating them.”
As debate continues globally regarding the appropriate use of AI, Validation framework for the use of AI within healthcare – Specification (BS 30440) is being published to help increase confidence among clinicians, healthcare professionals and clinical providers that these tools have been developed in a safe, effective and ethical way. The auditable standard, which can be used globally, is targeted explicitly at products whose key function is to enable or provide treatment or diagnosis, or to support the management of health conditions.
Forecasts suggest the global healthcare AI market could exceed $187.95 billion by 2030. Healthcare providers and clinicians may face time and budgetary constraints, or lack the in-house capability and capacity to conduct assessments of AI products, so the specification can support decision-making around which tools to use. It can help clinicians and patients evaluate healthcare AI products against criteria including clinical benefit, standards of performance, successful and safe integration into the clinical work environment, ethical considerations and socially equitable outcomes.
It covers healthcare AI products used in a range of settings, including regulated medical devices (such as software as a medical device), user-facing products (such as imaging software) and patient-facing products (such as smartphone chatbots using AI). It also encompasses AI products used in the home (such as monitoring products) or in community, primary, secondary or tertiary care settings. The specification applies to products, models, systems or technologies that use elements of AI, including machine learning, and is also relevant to AI system suppliers and product auditors.
The document has been developed by a panel of experts, including clinicians, software engineers, AI specialists, ethicists and healthcare leaders, who partnered to identify best practice. It draws together existing guidance literature and good practice, then translates the assessment of complex functionality into an auditable framework against which an AI system can be assessed for conformity. Healthcare organisations can mandate BS 30440 certification as a requirement in their procurement processes, ensuring these systems have met a known standard.
Jeanne Greathouse, Global Healthcare Director, BSI, said: “This standard is highly relevant to organisations in the healthcare sector and those interacting with it. As AI becomes the norm, it can potentially be transformative for healthcare. With more innovative AI tools, and AI algorithms’ ability to digest and accurately analyse copious amounts of data, clinicians and health providers can efficiently make informed diagnostic decisions to intervene in, prevent and treat diseases, ultimately improving patients’ quality of life.”
The specification answers a need for an agreed validation framework for AI development and clinical evaluation in healthcare. It builds on a framework first trialled by experts at the Guy’s and St Thomas’ Cancer Centre and revised through subsequent discussions with stakeholders engaged in the field of AI and machine learning. In parallel, BSI has been working in partnership with regulators, healthcare organisations and other bodies to consider the role of standards in supporting the regulation and governance of AI in healthcare.