DWP trialling machine learning algorithm to decide whether people should receive benefits

The government is trialling a machine learning algorithm to predict whether universal credit claimants should receive benefits, based on their perceived likelihood of committing fraud in the future.

Campaigners have warned that the algorithm, which the Department for Work and Pensions (DWP) has been trialling over the past year, risks unfairly penalising marginalised or vulnerable groups, who could have their benefits stopped before they are even paid out.

The department’s 2021-22 accounts, published last Thursday, revealed that it had trialled a “risk model” to “detect fraud” in universal credit advances claims by analysing information from historical fraud cases to predict which cases are likely to be fraudulent in the future.

The document states that this analysis was performed by a “machine learning algorithm”, which “builds a model based on historic fraud and error data in order to make predictions, without being explicitly programmed by a human being”.
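In broad terms, the accounts describe a standard supervised-learning setup: a classifier is fitted to labelled historical cases and then used to score new claims. The sketch below illustrates that general pattern only; the features, data, threshold and choice of library are illustrative assumptions, not details the DWP has published.

```python
# Minimal sketch of a supervised fraud "risk model" of the kind the
# accounts describe. All inputs here are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature matrix for past advance claims (e.g. claim
# amount, account age, number of prior claims), with fraud labels
# derived from historical case outcomes.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the model on historical cases, then score unseen claims.
model = LogisticRegression().fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]

# Claims above an (assumed) risk threshold would be flagged for
# review rather than automatically refused.
flagged = risk_scores > 0.8
print(f"{flagged.sum()} of {len(flagged)} claims flagged for review")
```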

In 2021-22 the model was run to detect fraud in advances claims already in payment, and the department expects to trial it early in 2022-23 on claims before any payment has been made.

“If successful this could improve its ability to prevent fraud before these benefits are paid out, avoiding the need to seek recovery,” the accounts state.

A separate report by the National Audit Office on the DWP’s accounts, also published last Thursday, revealed that the DWP was aware of the potential for such a model to generate “biased outcomes” that could have an “adverse impact on certain claimants”.

“For instance, it is unavoidable that some cases flagged as potentially fraudulent will turn out to be legitimate claims. If the model were to disproportionately identify a group with a protected characteristic as more likely to commit fraud, the model could inadvertently obstruct fair access to benefits,” the report states.
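One way to make the NAO's concern concrete is to compare the rate at which a model flags claims across groups defined by a protected characteristic. The check below is a generic demographic-parity comparison, not a method the DWP or the NAO has said it uses; the group labels, flag rates and any notion of an acceptable ratio are illustrative assumptions.

```python
# Sketch of a disparate-impact check: compare flag rates across a
# (hypothetical) protected characteristic. Data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
flagged = rng.random(1000) > 0.8          # stand-in for model flags
group = rng.integers(0, 2, size=1000)     # 0/1 protected characteristic

rate_a = flagged[group == 0].mean()
rate_b = flagged[group == 1].mean()

# Demographic-parity ratio: values far from 1 suggest one group is
# disproportionately flagged, the risk the NAO report describes.
ratio = rate_b / rate_a
print(f"flag rate A: {rate_a:.3f}, B: {rate_b:.3f}, ratio: {ratio:.2f}")
```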

It also pointed to potential legal risks if the department were found to be in breach of its obligations regarding transparency or data protection.

Ariane Adam, legal director of the Public Law Project, said: “Departments across government need to commit to a great deal more than just being ‘aware’ of the risks. We need a clear commitment that all government departments will be transparent about how they use algorithms.”

She said the lack of transparency around the new algorithm was “very problematic”.

“Despite many requests under the Freedom of Information Act, the DWP has previously refused to provide details about its use of automation to assess universal credit applications,” she said.

“Without transparency there can be no evaluation, and without evaluation it is not possible to tell if a system works reliably, lawfully or fairly.”

Ms Adam added that there was a “massive risk” that the policy would have a discriminatory impact.

“Using algorithms fed by historic big data to make decisions on welfare benefit claims carries a danger of unfairly penalising and discriminating against marginalised or vulnerable groups,” she said.

“In the midst of a cost-of-living crisis, people could have benefits stopped before they are even paid out because a computer algorithm said ‘no’.”

A DWP spokesperson said: “We do not use artificial intelligence to make decisions on how a universal credit claim should progress and continue to work hard to be as transparent as possible about our claims process without compromising our ability to identify fraud.

“It is right that we keep up with fraud in today’s digital age so we can prevent, detect and deter those who would try to cheat the system and more importantly, improve our support for genuine claimants.”