Black box medicine and transparency


Like other complex algorithms, machine learning can give rise to 'black box medicine' - where conclusions that may influence decisions about care are reached without users understanding why.

Though such systems can help medical research and healthcare in many ways, the volume of data they use and their complexity may mean that patients and health professionals cannot explicitly understand how they reach their conclusions. This raises a number of legal and ethical issues.

To further understanding of the black box medicine problem, the PHG Foundation was awarded seed funding from the Wellcome Trust to examine interpretability in the context of healthcare and relevant regulation.

The Foundation has now published a set of reports for legal, regulatory and health policy audiences and created a dedicated Interpretability by design framework for developers of machine learning models for healthcare.

Read the report: Black box medicine and transparency


The PHG Foundation is a health policy think tank with a special focus on how genomics and other emerging health technologies can provide more effective, personalised healthcare and deliver improvements in population health.
