Uber Eats courier’s fight against AI bias shows justice under UK law is hard won

On Tuesday, the BBC reported that Uber Eats courier Pa Edrissa Manjang, who is Black, had received a payout from Uber after “racially discriminatory” facial recognition checks prevented him from accessing the app, which he had been using since November 2019 to pick up jobs delivering food on Uber’s platform.

The news raises questions about how fit UK law is to deal with the rising use of AI systems. In particular, it highlights the lack of transparency around automated systems rushed to market with promises of boosting user safety and/or service efficiency; such systems risk blitz-scaling individual harms, even as achieving redress for those affected by AI-driven bias can take years.

The lawsuit followed a number of complaints about failed facial recognition checks since Uber implemented its Real Time ID Check system in the U.K. in April 2020. The system — based on Microsoft’s facial recognition technology — requires the account holder to submit a live selfie, which is checked against a photo of them held on file to verify their identity.
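
Neither Uber nor Microsoft has published how the check decides a match, but face verification systems of this kind typically compare numerical "embeddings" of the two images against a similarity threshold. The following is a minimal sketch of that general technique; the function name, embedding size and threshold are illustrative assumptions, not Uber's or Microsoft's actual parameters.

```python
import numpy as np

def verify_identity(selfie_vec: np.ndarray, reference_vec: np.ndarray,
                    threshold: float = 0.6) -> bool:
    """Declare a match when two face embeddings are similar enough.

    The embeddings stand in for the output of a trained face-recognition
    model applied to the live selfie and to the photo held on file; the
    threshold is a made-up value that a real deployer would tune.
    """
    cos_sim = float(np.dot(selfie_vec, reference_vec)
                    / (np.linalg.norm(selfie_vec) * np.linalg.norm(reference_vec)))
    # If the underlying model embeds some demographic groups less accurately,
    # genuine selfies from those groups score lower and are wrongly rejected
    # more often -- the "continued mismatches" failure mode alleged here.
    return cos_sim >= threshold

# Toy usage with random stand-in vectors (not real model output):
rng = np.random.default_rng(0)
reference = rng.normal(size=128)
selfie = reference + rng.normal(scale=0.1, size=128)  # genuine, slightly noisy
print(verify_identity(selfie, reference))  # True: similarity is close to 1.0
```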

Failed ID checks

Per Manjang’s complaint, Uber suspended and then terminated his account following a failed ID check and a subsequent automated process that claimed to find “continued mismatches” in the photos of his face he had taken for the purpose of accessing the platform. Manjang filed legal claims against Uber in October 2021, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU).

Years of litigation followed, with Uber failing either to have Manjang’s claim struck out or to have a deposit ordered as a condition of continuing with the case. These tactics appear to have strung out the litigation, with the EHRC describing the case as still in “preliminary stages” in fall 2023, and noting that it shows “the complexity of a claim dealing with AI technology”. A final hearing had been scheduled for 17 days in November 2024.

That hearing won’t now take place after Uber offered — and Manjang accepted — a payment to settle, meaning fuller details of what exactly went wrong and why won’t be made public. Terms of the financial settlement have not been disclosed, either. Uber did not provide details when we asked, nor did it offer comment on exactly what went wrong.

We also contacted Microsoft for a response to the case outcome, but the company declined comment.

Despite settling with Manjang, Uber is not publicly accepting that its systems or processes were at fault. Its statement about the settlement denies that courier accounts can be terminated as a result of AI assessments alone, claiming facial recognition checks are backstopped with “robust human review.”

“Our Real Time ID check is designed to help keep everyone who uses our app safe, and includes robust human review to make sure that we’re not making decisions about someone’s livelihood in a vacuum, without oversight,” the company said in a statement. “Automated facial verification was not the reason for Mr Manjang’s temporary loss of access to his courier account.”

Clearly, though, something went very wrong with Uber’s ID checks in Manjang’s case.

Worker Info Exchange (WIE), a platform workers’ digital rights advocacy organization that also supported Manjang’s complaint, obtained all his selfies from Uber via a Subject Access Request under UK data protection law and was able to show that every photo he had submitted to the facial recognition check was indeed a photo of himself.

“Following his dismissal, Pa sent numerous messages to Uber to rectify the problem, specifically asking for a human to review his submissions. Each time Pa was told ‘we were not able to confirm that the provided photos were actually of you and because of continued mismatches, we have made the final decision on ending our partnership with you’,” WIE recounts in its discussion of his case in a wider report looking at “data-driven exploitation in the gig economy”.

Based on details of Manjang’s complaint that have been made public, it looks clear that both Uber’s facial recognition checks and the system of human review it had set up as a claimed safety net for automated decisions failed in this case.
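
Uber has not disclosed how its human review interacts with the automated check. But the pattern WIE describes, repeated automated "mismatches" ending in termination despite genuine selfies, is what you would expect if the review step simply defers to the machine. Below is a hypothetical sketch of such a pipeline; the account structure, mismatch limit and reviewer behavior are all our assumptions, not Uber's actual process.

```python
from dataclasses import dataclass
from typing import Callable

MISMATCH_LIMIT = 3  # hypothetical policy threshold, not a figure from Uber

@dataclass
class CourierAccount:
    consecutive_mismatches: int = 0
    status: str = "active"

def handle_id_check(account: CourierAccount, auto_match: bool,
                    human_confirms: Callable[[CourierAccount], bool]) -> None:
    """Apply one ID-check result, escalating automated failures to a human."""
    if auto_match or human_confirms(account):
        account.consecutive_mismatches = 0  # verified by machine or reviewer
        return
    account.consecutive_mismatches += 1
    if account.consecutive_mismatches >= MISMATCH_LIMIT:
        account.status = "terminated"  # the "continued mismatches" outcome

# A reviewer who never overrides the matcher makes the backstop vacuous:
acct = CourierAccount()
rubber_stamp = lambda account: False  # always defers to the automated result
for _ in range(MISMATCH_LIMIT):
    handle_id_check(acct, auto_match=False, human_confirms=rubber_stamp)
print(acct.status)  # "terminated", even if every selfie was genuine
```

The design point: a backstop only works if the human path can overturn the automated result. A reviewer who never overrides the matcher leaves the automated decision effectively unchecked.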

Equality law plus data protection

The case calls into question how fit for purpose UK law is when it comes to governing the use of AI.

Manjang was finally able to secure a settlement from Uber via a legal process based on equality law — specifically, a discrimination claim under the UK’s Equality Act 2010, which lists race as a protected characteristic.

Baroness Kishwer Falkner, chairwoman of the EHRC, was critical of the fact that the Uber Eats courier had to bring a legal claim “in order to understand the opaque processes that affected his work,” as she put it in a statement.

“AI is complex, and presents unique challenges for employers, lawyers and regulators. It is important to understand that as AI usage increases, the technology can lead to discrimination and human rights abuses,” she wrote. “We are particularly concerned that Mr Manjang was not made aware that his account was in the process of deactivation, nor provided any clear and effective route to challenge the technology. More needs to be done to ensure employers are transparent and open with their workforces about when and how they use AI.”

UK data protection law is the other relevant piece of legislation here. On paper, it should provide powerful protections against opaque AI processes.

The selfie data relevant to Manjang’s claim was obtained using data access rights contained in the UK GDPR. Had he not been able to obtain such clear evidence that Uber’s ID checks had failed, the company might not have opted to settle at all. Having to prove a proprietary system is flawed without access to the relevant personal data would stack the odds still further in favor of the much better-resourced platforms.

Enforcement gaps

Beyond data access rights, other powers in the UK GDPR are supposed to provide individuals with additional safeguards, including against automated decisions with a legal or similarly significant effect (under Article 22). The law also demands a lawful basis for processing personal data, and encourages system deployers to be proactive in assessing potential harms by conducting a data protection impact assessment (DPIA). That should force further checks against harmful AI systems.

However, enforcement is needed for these protections to have effect — including a deterrent effect against the rollout of biased AIs.

In the UK’s case, the relevant enforcer, the Information Commissioner’s Office (ICO), failed to step in and investigate, despite complaints about Uber’s misfiring ID checks dating back to 2021.

Jon Baines, a senior data protection specialist at the law firm Mishcon de Reya, suggests “a lack of proper enforcement” by the ICO has undermined legal protections for individuals.

“We shouldn’t assume that existing legal and regulatory frameworks are incapable of dealing with some of the potential harms from AI systems,” he tells TechCrunch. “In this example, it strikes me…that the Information Commissioner would certainly have jurisdiction to consider both in the individual case, but also more broadly, whether the processing being undertaken was lawful under the UK GDPR.

“Things like — is the processing fair? Is there a lawful basis? Is there an Article 9 condition (given that special categories of personal data are being processed)? But also, and crucially, was there a solid Data Protection Impact Assessment prior to the implementation of the verification app?”

“So, yes, the ICO should absolutely be more proactive,” he adds, querying the lack of intervention by the regulator.

We contacted the ICO about Manjang’s case, asking it to confirm whether or not it’s looking into Uber’s use of AI for ID checks in light of complaints. A spokesperson for the watchdog did not directly respond to our questions but sent a general statement emphasizing the need for organizations to “know how to use biometric technology in a way that doesn’t interfere with people’s rights”.

“Our latest biometric guidance is clear that organisations must mitigate risks that come with using biometric data, such as errors identifying people accurately and bias within the system,” its statement also said, adding: “If anyone has concerns about how their data has been handled, they can report these concerns to the ICO.”

Meanwhile, the government is in the process of diluting data protection law via a post-Brexit data reform bill.

The government also confirmed earlier this year that it will not introduce dedicated AI safety legislation at this time, despite Prime Minister Rishi Sunak making eye-catching claims about AI safety being a priority for his administration.

Instead, it affirmed a proposal — set out in its March 2023 whitepaper on AI — to rely on existing laws and regulatory bodies extending their oversight activity to cover AI risks that might arise on their patch. One tweak to the approach, announced in February, was a tiny amount of extra funding (£10 million) for regulators, which the government suggested could be used to research AI risks and to develop tools for examining AI systems.

No timeline was provided for disbursing this small pot of extra funds. Multiple regulators are in the frame: last month the UK secretary of state wrote to 13 regulators and departments, among them the ICO, the EHRC and the Medicines and Healthcare products Regulatory Agency, asking them to publish an update on their “strategic approach to AI”. Split equally between those bodies, the cash would give each less than £1M to top up budgets for tackling fast-scaling AI risks.
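
A quick back-of-envelope check of that figure, assuming an even split across the 13 bodies (our assumption; no allocation formula has been published):

```python
# Even-split assumption: the government has not published how the pot divides.
total_funding = 10_000_000  # £10M in extra funding announced in February
bodies = 13                 # regulators and departments written to last month
print(f"£{total_funding / bodies:,.0f} per body")  # £769,231, well under £1M
```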

Frankly, it looks like an incredibly low level of additional resource for already overstretched regulators if AI safety is actually a government priority. It also means there’s still zero cash or active oversight for AI harms that fall between the cracks of the UK’s existing regulatory patchwork, as critics of the government’s approach have pointed out before.

A new AI safety law might send a stronger signal of priority — akin to the EU’s risk-based AI harms framework that’s speeding towards being adopted as hard law by the bloc. But there would also need to be a will to actually enforce it. And that signal must come from the top.


