Algorithmic bias in medical devices could be hurting us

The more we rely on medical devices to treat us, the greater the risk that tiny flaws in their underlying algorithms could lead to misdiagnosis – or worse. Many of these algorithms are trained on insufficiently representative patient data and consequently fail to account for the ways in which physiological symptoms differ across diverse patient populations. As a result, the diagnoses they produce are sometimes ineffective and potentially harmful.

These biases – often hidden deep inside the complex layers of machine learning and biometric processing that lie at the heart of these devices – can be especially insidious for patients from underrepresented demographics.

Take the pulse oximeter, for example. It became a common household appliance post-Covid thanks to its ability to easily measure oxygen saturation – an important early warning indicator of the silent onset of disease. A recent study found that these devices are less accurate in patients with darker skin – to the extent that hypoxemia is roughly three times more likely to go undetected in African-American patients than in Caucasian patients. It is hard to measure how this affected the death rate during the pandemic.

The fact that these tools have been tested extensively on lighter-skinned people means their algorithms are tuned to the light absorption and reflection characteristics of lighter skin tones. As a result, they perform poorly on darker skin. The same was found to be the case with melanoma detection algorithms, which performed poorly on darker skin tones for the same reason – resulting in significantly delayed detection or, in some cases, missing the cancer altogether.
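To make the mechanism concrete, here is a deliberately simplified sketch in Python of how a calibration bias of this kind can arise. Pulse oximeters infer oxygen saturation from the ratio of red to infrared light absorbed by pulsing blood and map that ratio to an SpO2 value through an empirically fitted calibration curve; the linear relation, the melanin_offset parameter and every number below are invented purely for illustration and are not drawn from any real device or study.

```python
# Toy illustration (not a real oximeter model): how a calibration curve fitted
# on one skin-tone group can systematically overestimate SpO2 for another.
import numpy as np

rng = np.random.default_rng(0)

def ratio_of_ratios(true_spo2, melanin_offset=0.0):
    """Map true SpO2 to a red/infrared 'ratio of ratios' R.
    The linear relation and melanin_offset are invented for illustration."""
    r = (110.0 - true_spo2) / 25.0            # textbook-style approximation
    r = r - melanin_offset                    # hypothetical unmodelled skin-tone effect
    return r + rng.normal(0.0, 0.02, size=np.shape(true_spo2))

# Calibrate only on a cohort with no unmodelled offset (the "well-represented" group).
train_spo2 = rng.uniform(70, 100, 500)
train_r = ratio_of_ratios(train_spo2)
slope, intercept = np.polyfit(train_r, train_spo2, 1)    # fitted calibration curve

# Apply that calibration to a cohort whose readings carry the unmodelled offset.
test_spo2 = rng.uniform(70, 100, 500)
test_r = ratio_of_ratios(test_spo2, melanin_offset=0.08)
estimated = slope * test_r + intercept

hypoxemic = test_spo2 < 88
missed = np.mean(estimated[hypoxemic] >= 88)              # hypoxemia the reading fails to flag
print(f"mean overestimation: {np.mean(estimated - test_spo2):.1f} percentage points")
print(f"hypoxemic readings not flagged in this toy cohort: {missed:.1%}")
```

Run on this toy data, the calibration fitted on the first cohort systematically overestimates saturation in the second, so a share of genuinely hypoxemic readings is never flagged – the shape of the failure the study above describes.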

Then there is the electrocardiogram (ECG) machine, an essential part of every modern healthcare facility. While it may appear that these machines simply transcribe the human heartbeat into a line on a graph, in reality they use quite complex algorithms that filter and interpret a series of physiological signals to produce the traces they display.
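As a rough illustration of what such signal processing involves, the sketch below implements a toy R-peak (heartbeat) detector in Python. Nothing in it describes how any real ECG machine works: the synthetic waveform, the fixed detection threshold and the attenuation factor are all invented. The point is simply that a threshold tuned on recordings with one typical signal amplitude can miss beats entirely when the same waveform arrives attenuated, one simplified way a single set of baked-in assumptions can fail on a different body type.

```python
# Toy sketch (not a clinical algorithm): a fixed-threshold R-peak detector and
# how a threshold tuned on one population can miss beats when the recorded
# signal is attenuated, e.g. by extra tissue between the heart and electrodes.
import numpy as np
from scipy.signal import find_peaks

fs = 250                                   # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)               # ten seconds of signal

def synthetic_ecg(amplitude, heart_rate_bpm=72):
    """Crude stand-in for an ECG: narrow Gaussian 'R waves' plus baseline noise.
    Waveform and amplitudes are invented purely for illustration."""
    beat_times = np.arange(0.3, t[-1], 60.0 / heart_rate_bpm)
    signal = sum(amplitude * np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) for bt in beat_times)
    return signal + np.random.default_rng(1).normal(0, 0.03, t.size)

THRESHOLD = 0.5                            # hypothetically tuned on high-amplitude recordings

for label, amp in [("typical amplitude", 1.0), ("attenuated amplitude", 0.4)]:
    ecg = synthetic_ecg(amp)
    peaks, _ = find_peaks(ecg, height=THRESHOLD, distance=int(0.4 * fs))
    bpm = 60 * len(peaks) / (t[-1] - t[0])
    print(f"{label}: detected {len(peaks)} beats, estimated rate ~{bpm:.0f} bpm")
```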

There is a growing body of evidence suggesting that these ECG algorithms are significantly less effective on obese patients – generating inaccurate readings that lead to misdiagnosis and ineffective treatment. Since obese patients are already more prone to heart disease, a delay in treatment can prove fatal in many cases.

Similar biases exist because of gender differences. Women experience heart disease differently than men, and so their symptoms are often wrongly attributed to non-cardiac causes. This results in delayed diagnosis and incorrect treatment of heart conditions – a problem often exacerbated by medical algorithms trained primarily on male data. The under-representation of women during the preclinical stages of drug development routinely fails to capture how women respond to new drugs, leading to misconceptions about both the side effects these drugs cause in women and their overall effectiveness for them. The implications of such bias are far-reaching indeed – potentially extending to half the human population.

These examples are just the tip of the iceberg.

While most of the problems discussed here stem from historical circumstances – the lack of demographically disaggregated training data at the time these devices were being developed – they are exacerbated by the proprietary nature of medical devices and their underlying algorithms. A lack of transparency – both about how these algorithms work and about the data on which they were originally trained – hinders the root-cause analysis that would allow us to improve them. As a result, even if they wanted to, physicians and medical practitioners do not know how to compensate for what their equipment is telling them.

An obvious solution would be to make these algorithms more transparent, so that device manufacturers can better understand the machines they are building and modify them appropriately to cover the diverse demographics they are meant to serve. Where possible, this should include releasing the underlying algorithms as open source, so that researchers, software engineers and medical practitioners alike can analyse them, detect the biases that exist and adapt them to work better for the patient populations they are used on. This would not only foster deeper collaboration between the various disciplines required for the development of these devices, but also spur innovation by providing wider access to the tools needed to build better, more inclusive medical devices.

But this is easier said than done. Proprietary algorithms are often the result of significant investments in research and development – money that companies understandably want to recoup. As a result, any effort to make these algorithms more accessible will need to balance the wider research community's legitimate need for access with the need to protect the intellectual property inherent in their creation.

We stand at the beginning of a new era in healthcare – one that will be powered by artificial intelligence and smart devices. We have to ensure that the whole of humanity benefits from these new technologies. To do this, we need to balance competing interests so that everyone benefits, regardless of skin colour, gender or geographic location. The choices we make today will determine the care we receive tomorrow.

Rahul Matthan is a partner at Trilegal and hosts a podcast called Ex Machina. His Twitter handle is @matthan.
