(Image: “Penny” by Matthias Shapiro, CC BY)

The question of whether paid crowd work violates U.S. employment and minimum wage laws may finally make it into court thanks to Christopher Otey, an Oregon resident who is suing CrowdFlower Inc. for wages he claims the company owes him as an “employee.”

You can (and should) read the full text of Otey’s complaint or coverage of the story on Crowdsourcing.org or MissionLocal.

I have a few preliminary, and mostly mixed, feelings about this. However, I should preface everything by saying that (1) I have known one of the defendants named in the suit, CrowdFlower CEO Lukas Biewald, for many years through mutual acquaintances at Stanford, where we were both enrolled at the same time; and (2) I worked as a paid, independent consultant with CrowdFlower on several projects between 2008 and 2011. That said, I have never held, nor do I currently hold, any material interest, financial or otherwise, in the company.

My initial reaction is that I can’t believe it’s taken this long for someone, somewhere in the United States to sue one of the companies engaged in distributing paid crowdsourcing work for violation of the Fair Labor Standards Act (FLSA). Smart lawyers like Alek Felstiner and Jonathan Zittrain have been making some form of the argument that this is a major issue for crowdsourcing for at least three years now. Felstiner even made the case in a series of posts on CrowdFlower’s blog here, here, and here in 2010. I am hardly the only person who finds it remarkable that a whole venture-funded industry has sprung up around a set of activities that, on the surface, resemble a massive minimum wage violation scheme.

At the same time, there are a lot of reasons to believe that crowdsourcing represents a fundamentally different sort of phenomenon than the varieties of “work” and workplace abuses the U.S. Congress sought to regulate with the FLSA back in 1938. For starters, crowd work is radically flexible – in terms of time and location – as well as minimal in terms of the commitment, skill, and obligations required of workers. As a result, it’s not clear that the relationships established between requesters and providers of work in this context are really anything like the relational contracts that exist between traditional employers and employees. Crowd workers do what they do for a variety of reasons, in a variety of ways, and under a variety of conditions, making it pretty hard to determine whether they ought to be considered employees of the organizations that may play a role in compensating them for their efforts. This is a potentially important point, since CrowdFlower plays something of a middleman role between the individuals and companies that post tasks to its site and the workers who complete those tasks and receive compensation in exchange for their labor.

One particular challenge is posed by the fact that Otey and his attorneys have chosen to seek compensation under U.S. minimum wage laws ($7.25 per hour at the federal level). A ruling against CrowdFlower could therefore make paid crowd work as it exists today financially impractical within the United States. While such a ruling might represent a crucial step in enforcing legal, ethical, and financial standards of fairness in online environments, it might also undermine the growth of a valuable source of future innovation, employment, research, and creativity. Crowd-based systems (whether paid or unpaid) of distributed information creation, processing, and distribution have accounted for some of the most incredible accomplishments in the short history of the Internet, including Wikipedia, reCAPTCHA, Flickr, Threadless, InnoCentive, Kiva, Kickstarter, YouTube, Twitter, and the Google search engine.

As some colleagues and I have argued in a forthcoming paper, The Future of Crowd Work, there are many ways in which paid crowd work as it exists today does not look like the kind of job you would necessarily want your child to take on as a career. And yet, while crowd work is very, very far from ideal by almost any standard, I would be disappointed if the impact of this case somehow resulted in the destruction of the industry and the stifling of the innovative research and applications that have developed around it. The outcome will boil down to the ways in which paid labor – even flexible, remote, and relatively straightforward tasks that pay as little as $0.01 – is regulated as compared with volunteer labor.
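To put the stakes in concrete terms, a quick back-of-the-envelope sketch helps: whether a $0.01 task clears the federal minimum depends entirely on how fast it can be completed. The pay rates and completion times below are hypothetical illustrations, not figures from the Otey complaint or from any actual CrowdFlower task.

```python
# Back-of-the-envelope comparison of per-task pay against the federal minimum
# wage. The pay rates and completion times are hypothetical illustrations,
# not figures from the Otey complaint or from any real crowdsourcing job.

FEDERAL_MINIMUM_WAGE = 7.25  # USD per hour

def effective_hourly_wage(pay_per_task, seconds_per_task):
    """Hourly earnings implied by a per-task price and a completion time."""
    tasks_per_hour = 3600 / seconds_per_task
    return pay_per_task * tasks_per_hour

for pay, seconds in [(0.01, 30), (0.01, 5), (0.05, 20), (0.10, 20)]:
    wage = effective_hourly_wage(pay, seconds)
    status = "above" if wage >= FEDERAL_MINIMUM_WAGE else "below"
    print(f"${pay:.2f}/task at {seconds:2d}s each -> ${wage:5.2f}/hr ({status} the federal minimum)")
```

Under these invented assumptions, even a one-cent task completed every five seconds lands just under $7.25 per hour, which is the sense in which a minimum wage ruling could make much of today’s paid crowd work financially impractical.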

I recently had a pilot version of a crowdsourcing task fail pretty spectacularly, but after discussing the failure with Mako I’ve concluded that my experience helps illustrate some interesting comparisons between labor relations in a distributed online market and more traditional sorts of employment and jobs.

The failure in this case started early: I did a mediocre job designing the task. It’s not really worth going into any details except to say that (out of laziness) I made it really easy for workers to either (a) purposefully respond with spammy results; (b) slack off and not provide responses; (c) try to complete the task but unintentionally do a bad job and therefore provide poor quality results; or (d) try to complete the task and do so successfully. I also did not incorporate any effective means of determining whether the workers who did not provide accurate results were spamming, shirking, or simply failing.

So why does this experience have anything to do with the nature of employment relations?

First, think about it from the employer’s (or the work “requester’s”) point of view. A major part of creating an effective crowdsourcing job consists of minimizing the likelihood or impact of (a)-(c), whether through algorithmic estimation, clever task design, or both. It’s not necessary that every worker provide you with perfect results or even perfect effort, but ideally you find some way to identify and/or remove work and workers that introduce unpredictable sources of bias into your results. Once you know what kind of results you’ve got, it’s possible to make appropriate corrections in the event that some worker has been feeding you terrible data or unintentionally sabotaging your task by doing a bad job.
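As a concrete illustration of the “algorithmic estimation” side of this, here is a minimal sketch of one common approach: seed the job with gold-standard questions whose answers are already known, score each worker’s accuracy on those items, and keep only the results of workers who clear some threshold. The data structures, names, and the 0.7 cutoff are my own illustrative assumptions, not CrowdFlower’s (or any platform’s) actual implementation.

```python
# Minimal sketch of gold-standard quality estimation for a crowdsourcing job.
# Responses are assumed to arrive as {worker_id: {task_id: answer}}; GOLD maps
# a subset of task ids to known-correct answers. All names and the 0.7 cutoff
# are illustrative assumptions.

GOLD = {"t1": "cat", "t7": "dog", "t19": "bird"}  # tasks with known answers

def worker_accuracy(responses, gold=GOLD):
    """Estimate each worker's accuracy from the gold tasks they answered."""
    scores = {}
    for worker, answers in responses.items():
        graded = [answers[t] == correct for t, correct in gold.items() if t in answers]
        scores[worker] = sum(graded) / len(graded) if graded else None  # None = no evidence
    return scores

def filter_results(responses, min_accuracy=0.7):
    """Keep only the responses of workers whose gold accuracy clears the cutoff."""
    scores = worker_accuracy(responses)
    return {w: answers for w, answers in responses.items()
            if scores.get(w) is not None and scores[w] >= min_accuracy}

responses = {
    "worker_a": {"t1": "cat", "t7": "dog", "t19": "bird", "t2": "red"},   # accurate
    "worker_b": {"t1": "dog", "t7": "dog", "t19": "fish", "t2": "blue"},  # mostly wrong
}
print(worker_accuracy(responses))  # worker_a: 1.0, worker_b: ~0.33
print(filter_results(responses))   # only worker_a's answers survive
```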

In other words, low quality results can provide employer-requesters with useful information if (and only if) the employer-requester finds a way to identify those results and put them to use. This means that a poorly designed task is not just one that fails to elicit optimal performance from workers, but also one that doesn’t help an employer-requester differentiate between spammers, slackers, passive saboteurs, and those workers who really are trying and (at least most of the time) completing a given task successfully.
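Continuing the same hypothetical sketch, that distinction can be made explicit by labeling workers rather than simply discarding their output. The signals and thresholds below are invented heuristics: roughly, a spammer answers quickly and inaccurately, a slacker leaves responses blank, and an honest-but-failing worker is slow, complete, and still wrong.

```python
# Hypothetical heuristic for labeling workers once their gold accuracy and
# behavior are known. The categories mirror (a)-(d) above; the thresholds
# are invented for illustration, not drawn from any real platform.

def classify_worker(gold_accuracy, blank_rate, median_seconds_per_task):
    if blank_rate > 0.5:
        return "slacker"      # mostly empty responses: not really trying
    if gold_accuracy < 0.4 and median_seconds_per_task < 3:
        return "spammer"      # fast and wrong: likely random clicking
    if gold_accuracy < 0.7:
        return "failing"      # apparently trying, but not succeeding
    return "good_faith"       # accurate enough to trust

print(classify_worker(gold_accuracy=0.2, blank_rate=0.0, median_seconds_per_task=2))   # spammer
print(classify_worker(gold_accuracy=0.6, blank_rate=0.1, median_seconds_per_task=25))  # failing
```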

When I design a job I always assume that a relatively high proportion of the workers are trying to complete the task in good faith (sure, there are some spammers and slackers out there, but somehow they don’t seem to make up the majority of the labor pool when there’s a clear, well-designed, reasonably compensated task to be done). As a result, if I get predominantly crap responses back from the workers, I assume that they are (maybe somewhat less directly than I might like) providing me with negative feedback on my task design.
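One rough way to operationalize that “negative feedback” reading, again with invented numbers rather than anything from my actual pilot: if workers who have historically done reliable work are also doing badly on this particular job, the design is probably the problem, not the workers.

```python
# Toy diagnostic: if even historically reliable workers perform poorly on a
# new job, treat that as feedback about the task design itself.
# The worker ids, accuracies, and 0.6 cutoff are hypothetical.

def task_design_suspect(job_accuracy_by_worker, reliable_workers, cutoff=0.6):
    """Return True when trusted workers, on average, do badly on this job."""
    scores = [acc for w, acc in job_accuracy_by_worker.items() if w in reliable_workers]
    if not scores:
        return False  # no evidence either way
    return sum(scores) / len(scores) < cutoff

accuracies = {"w1": 0.45, "w2": 0.50, "w3": 0.90}
print(task_design_suspect(accuracies, reliable_workers={"w1", "w2"}))  # True: rethink the task
```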

Now from the workers’ point of view, I suspect the situation looks a bit different. They have fewer options for dealing with employer-requesters who are trying to scam them. Most distributed labor markets lack features that would support anything resembling collective bargaining or collective action on the part of workers. Communications by workers to employer-requesters are limited and, consequently, there usually aren’t robust mechanisms for offering or coordinating feedback or complaints.

As a result, the most effective communications tool the workers possess is their work itself. Not surprisingly, some of them seem to use their work to engage in acts of casual slacking and sabotage that resemble online versions of the “weapons of the weak” described by James C. Scott in his book on everyday resistance tactics among rural peasants.

The ease with which crowdsourcing workers can pursue these relatively passive forms of resistance and tacit feedback relates to a broader, more theoretically important point: in most situations, a member of an online crowd should have a much easier time quitting or resisting than workers in (for example) a factory when they decide they’re unhappy with an employment relationship for any reason. Why? First, crowdsourcing workers usually don’t have personal ties to a company, brand, co-workers, managers, etc. Second, the structure of online labor markets makes the cost of leaving any one job extraordinarily low. An office worker who (upon being confronted by, e.g., an unpleasant or unethical task) leaves her position risks giving up not only valuable resources like future wages or benefits, but also physical stability in her life, contact with friends and colleagues, and the respect or professional support of her superiors. In contrast, a worker in an online crowd who decides to leave her job loses almost nothing. While there is some risk associated with actively spamming or slacking (in some crowdsourcing markets, workers with low quality ratings can be banned or prevented from working on certain jobs), it’s still substantially easier to just walk away and find another task to do.

These are just some of the reasons why theoretical predictions from classical wage and employment economics – for example, that a $0.01 decrease in wages will lead some proportion of employees to leave their jobs – don’t hold up in either traditional or crowdsourcing labor markets. The interesting point is that the reasons these theories fail in crowdsourcing systems have little to do with the complications introduced by social relations, since social relations (between workers and employers as well as among workers themselves) are severely constrained in most online labor markets.
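For readers who want the classical prediction spelled out, the textbook version is an elasticity claim (standard labor-economics notation, nothing specific to crowd work): a positive labor-supply elasticity implies that any wage cut, however small, should reduce the quantity of labor supplied in proportion.

```latex
% Textbook labor-supply elasticity: the classical prediction referred to above.
% A positive \varepsilon_{LS} implies that a wage cut (\Delta w < 0) reduces the
% quantity of labor supplied (\Delta L < 0) proportionally.
\varepsilon_{LS} = \frac{\Delta L / L}{\Delta w / w}
\quad\Longrightarrow\quad
\frac{\Delta L}{L} = \varepsilon_{LS}\,\frac{\Delta w}{w}
```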

 

(Note: The first version of this post was written pretty late at night, so I didn’t include many links to sources. I’ll be trying to add them over the next few days.)