US healthcare algorithm used to decide care for 200 million patients each year is accused of being racially biased against black people
- Algorithm disproportionately advised medics to give more care to white people
- Less than half of black patients who needed care were put forward, study found
- Software helps medics decide future of roughly 200m patients in US each year
An algorithm used by hospitals in the US to identify patients with chronic diseases has a significant bias against black people, researchers have claimed.
The artificial intelligence, sold by health firm Optum, disproportionately advised medics to give more care to white people even when black patients were sicker.
It means black patients made up just 18 per cent of those flagged for a continued care programme, when an unbiased algorithm would have put the figure at 47 per cent.
The software helps medics decide the future of roughly 200 million patients across the US each year.
Scientists from universities in Chicago, Boston and Berkeley flagged the error in their study, published in the journal Science, and are working with Optum on a fix.
They said the algorithm – designed to help patients stay on medications or out of the hospital – was not intentionally racist because it specifically excluded ethnicity from its decision-making.
Rather than using illness or biological data, the tech uses cost and insurance claim information to predict how healthy a person is.
The computer system was programmed to assume that the more money spent on a patient, the sicker that patient is.
But the data it worked with showed less was being spent on black patients because they received less care.
Black patients incurred roughly $1,800 (£1,400) less in medical costs annually than white patients with the same level of sickness.
HOW DOES ARTIFICIAL INTELLIGENCE LEARN?
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.
ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.
Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.
Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.
The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge.
A newer breed of ANN, the generative adversarial network (GAN), pits two AI models against each other, which allows them to learn from one another.
This approach is designed to speed up the process of learning, as well as refining the output created by AI systems.
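The training loop the box describes can be shown in a few lines. The sketch below is a generic textbook illustration in Python of a tiny two-layer network learning a simple pattern (XOR) by gradient descent; it is not code from Optum or any of the products named above, and every number in it is arbitrary.
```python
import numpy as np

rng = np.random.default_rng(1)

# The four XOR examples: the output is 1 only when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: nudge both weight layers downhill on the error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ grad_out
    W1 -= X.T @ grad_h

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```
The same feed-data-in, adjust-weights loop, scaled up enormously, underlies the translation, facial recognition and image filters mentioned above.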
Researchers say this is down to a myriad of factors, including lack of insurance, poorer access to care and even unconscious bias among doctors.
Therefore, the machine ranked white patients as being at the same risk of future health problems as black patients who were actually much sicker.
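To make that mechanism concrete, here is a short, hypothetical Python simulation of cost-as-a-proxy bias. The $1,800 spending gap is the study's figure; everything else (the sickness distribution, the $2,000-per-condition cost rate and the 10 per cent enrolment cutoff) is made up for illustration, and the sketch is not a reconstruction of Optum's algorithm.
```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True sickness (e.g. number of chronic conditions); both groups are
# drawn from the same distribution, so neither is actually healthier.
sickness = rng.poisson(3, size=n).astype(float)
is_black = rng.random(n) < 0.5

# Annual cost rises with sickness, but roughly $1,800 less is spent on
# black patients at the same sickness level (the study's figure),
# because they receive less care. The cost rate and noise are invented.
cost = 2_000 * sickness - 1_800 * is_black + rng.normal(0, 500, n)

# The proxy at the heart of the problem: treat cost as the risk score
# and flag the top 10 per cent for the continued care programme.
flagged = cost >= np.quantile(cost, 0.90)

print("black share of flagged patients:", round(is_black[flagged].mean(), 3))
print("mean sickness, flagged black: ", round(sickness[flagged & is_black].mean(), 2))
print("mean sickness, flagged white: ", round(sickness[flagged & ~is_black].mean(), 2))
```
Because the two groups are equally sick by construction, the gap in who gets flagged comes entirely from the spending gap baked into the cost label – exactly the pattern the researchers describe.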
Correcting these biases in the algorithm would more than double the number of black patients flagged for additional care, the study showed.
Following the findings, the analysis was replicated on a different dataset of 3.7 million patients.
It found black patients collectively suffered from 48,772 more chronic diseases than white patients with the same algorithm risk scores.
Although this study was conducted on just one healthcare algorithm, the researchers say similar biases probably exist across a number of industries.
Lead researcher Sendhil Mullainathan, a professor of computation and behavioral science at the University of Chicago, said: ‘It’s truly inconceivable to me that anyone else’s algorithm doesn’t suffer from this.
‘I’m hopeful that this causes the entire industry to say, “Oh, my, we’ve got to fix this.”’
Optum said it welcomed the research and claimed it would be useful to creators of other healthcare algorithms, many of which use similar systems.
Spokesman Tyler Mason said: ‘Predictive algorithms that power these tools should be continually reviewed and refined, and supplemented by information such as socio-economic data, to help clinicians make the best-informed care decisions for each patient.
‘As we advise our customers, these tools should never be viewed as a substitute for a doctor’s expertise and knowledge of their patients’ individual needs.’