Black box medicine and transparency

Like other complex algorithms, machine learning can amount to 'black box medicine', where conclusions that may influence decisions about care are reached without users understanding why.


Though such systems can benefit medical research and healthcare in many ways, the volume of data they use and their complexity may mean that how they reach decisions cannot be explicitly understood by patients or health professionals. This raises a number of legal and ethical issues.

To further understanding of the black box medicine problem, the PHG Foundation was awarded seed funding from the Wellcome Trust to examine interpretability in the context of healthcare and relevant regulation.

The Foundation has now published a set of reports for legal, regulatory and health policy audiences and created a dedicated Interpretability by design framework for developers of machine learning models for healthcare.

Read Black box medicine and transparency
