
Gig platform report calls for transparency to fix abuse – ProWellTech


A not-for-profit set up by a former Uber driver who successfully challenged the ride-hailing giant’s misclassification of drivers’ employment status in the UK has published a timely report pressing the case for proper oversight of the algorithms and data used to remotely surveil and control the labor of platform workers.

Timely — given the European Union has just proposed legislation to enhance algorithm transparency on digital labor platforms as a lever to tackle problematic working conditions.

At the same time, lawmakers in the UK — which now sits outside the bloc — are consulting on lowering domestic data protection standards. That could include stripping out existing rights associated with automated decision-making, and removing the requirement to carry out a data protection impact assessment prior to processing sensitive personal data — which the not-for-profit warns would amount to “a hammer-blow for precarious workers who already have long been denied basic employment rights who could now be robbed of the means to hold rogue employers to proper account”.

The report, co-authored by researcher Cansu Safak and former Uber driver James Farrar, who founded the Worker Info Exchange (WIE), is entitled Managed by Bots: Data-Driven Exploitation in the Gig Economy.

It contains a number of case studies which illustrate the obstacles and obfuscation faced by regional gig workers seeking to obtain data access rights to try to assess the fairness of platforms’ data-driven decisions about them and their labor (up to and including termination of their ability to work on the platform).

Examples discussed include ‘antifraud’ facial recognition checks that appear racially biased; drivers notified of fraud strikes on their accounts without being given clear, immediate information about what is triggering warnings that can also lead to account termination; and numerous instances of drivers not being provided with all their requested data while platforms work hard to frustrate their requests.

WIE’s report goes on to argue that “woefully inadequate levels of transparency about the extent of algorithmic management and automated decision making” are enabling exploitation of workers in the gig economy.

Litigation is one (extant) route for regional gig workers to try to obtain their rights — including employment protections and data rights — and we’ve seen plenty of both already in Europe.

Most notably, Farrar’s own employment classification litigation against Uber, which forced the platform to finally recognize UK drivers as workers earlier this year.

However, Uber’s self-serving interpretation still avoids paying drivers for the time they spend waiting for the next trip. (Aka: “Failure to pay for waiting time as working time enables platforms to take advantage of the immediacy of availability to drive up customer response time while driving down worker earnings,” as the report neatly summarizes it.)

Meanwhile, even in the UK, such workers are not protected against instant dismissal by an (unfair) algorithm.

So the report makes the point that worker status is not itself a panacea against opaque algorithmic management — with the co-authors warning that “the recent gains in the courts do not fully protect workers against its harms”.

WIE is also supporting a number of challenges to gig platforms’ algorithmic control of workers, as we’ve reported before — but that’s also an expensive and time-consuming process for precarious workers, who typically lack the resources to fight platform giants through the courts.

So its overarching point is that current protections for individuals subject to algorithmic decision-making, such as those contained in Europe’s General Data Protection Regulation, do not go far enough — allowing platforms to concoct convoluted justifications for withholding the workings of algorithms from those subject to opaque management and control by AI.

Here the report calls out platforms’ conflation of fraud management with performance management, as one example.

“The fact that such ‘fraud’ indicators are used as variables for work allocation and that the behaviours generating them are allowed to continue on the platform demonstrates that these are not instances of criminal fraud, but mechanisms of control, which assess how well workers are performing against the opaque metrics set by companies,” the report notes in a section subtitled “Surveillance Arms Race” — which discusses a variety of systems used by ride-hailing platforms Uber, Bolt and Free Now.

“We suggest that any ‘fraud’ terminology used in these contexts also function as part of the misclassification game, designed to conceal the employment relationship,” it adds, further arguing there has been “widespread proliferation and a disproportionate use of worker surveillance in the name of fraud prevention”.

The report makes the case for increased digital rights protections to steer the industry in a better direction — allowing gig workers to gain “equality in digitally mediated work” vs today’s abuse-enabling power imbalance where platforms pull all the strings, and deploy denial and/or dark patterns to frustrate workers’ attempts to leverage existing (weak) legal protections.

Categories of data that platforms process about workers — which WIE’s report notes are typically made explicit in platform guidance documents and privacy policies — are often not shared with drivers when they download their data or make subject access requests under the GDPR, underlining the gap between the extent of platforms’ data processing and the transparency they actually provide.

“In our experience, when workers seek out this information, gig platforms aim to make the process difficult and burdensome by engaging in a variety of non-compliant behaviour,” it writes. “Workers seeking comprehensive data have to navigate exceedingly complex and obstructive website architectures and need to circumvent further frustration efforts by support agents, who unnecessarily prolong simple administrative processes or provide automated responses that fail to adequately answer queries.

“These procedures can be described as ‘dark patterns’ designed to guide workers away from exercising their rights as data subjects. On the occasions where workers are able to obtain their data, it is often either missing considerable segments or presented in inconsistent and non-machine readable formats, making analysis effectively impossible. These acts of obstruction force workers to make repeated requests which companies ultimately use as a reason for discrediting them.”

“In all of the DSAR [data subject access request] returns we have seen, no employer has given a full and proper account of automated personal data processing,” the report adds. “This is particularly important in areas that can determine security of employment such as work allocation, performance management, safety and security, as discussed through this report.”

Again, platforms have sought to shield their algorithms from litigation seeking data on the AIs’ logic, inputs and outputs by claiming the “safety and security” of their services could be compromised if such information was disclosed to workers.

(And — in London at least — platforms such as Uber appear to have been pushed toward even tighter algorithmic surveillance of drivers (and use of flawed facial recognition technology) as a result of a safety-focused intervention by the transport regulator, TfL, which has, since 2017, denied Uber a full licence to operate, citing safety concerns…)

But the report argues it’s the opposite that’s true, writing: “In our view, safety and security can only be enhanced when platforms transparently set rules and performance standards rather than relying on covert surveillance and summary dismissals, which are some of the key motivators of DSARs.”

Commenting in a statement, the report’s lead author, Safak, added: “The many worker cases we document in this report make it undeniably clear that the harms of algorithmic management are real and affect the most vulnerable. Gig platforms are collecting an unprecedented amount of data from workers through invasive surveillance technologies. Every day, companies make allegations of ‘algorithmic wrongdoing’ which they do not offer any evidence for. They block and frustrate workers’ efforts to obtain their personal data when they try to defend themselves. This is how gig platforms maintain exploitative power.”

In another supporting statement, Farrar said: “As gig economy platforms mature and regulatory pressure builds, we are seeing employers roll out intensive surveillance and opaque automated management decision making systems to exercise ever more hidden forms of control over workers. This report shows how the latest wave of employment misclassification tactics involves employers telling workers they are truly independent in their jobs while at the same time management control is wielded as forcefully as ever but from behind the digital curtain.”

WIE, along with the digital rights campaign group Privacy International and the App Drivers & Couriers Union, is seeking to raise public awareness over the issue as regional lawmakers consider their next steps — launching a public campaign and petition that calls for greater algorithmic transparency and accountability from platform employers.

As part of the petition the groups say they will be writing to a number of gig platforms — including Uber, Just Eat, Amazon Flex, Free Now, Bolt, Ola and Deliveroo — to “demand answers” and “ensure that the unprecedented surveillance that gig-economy workers are facing from their employers ends”.
