Crowd labor relations revisited

April 23, 2012

I recently had a pilot version of a crowdsourcing task fail pretty spectacularly, but after discussing the failure with Mako I’ve concluded that my experience illustrates some interesting contrasts between labor relations in a distributed online market and more traditional sorts of employment.

The failure in this case started early: I did a mediocre job designing the task. It’s not really worth going into the details, except to say that (out of laziness) I made it really easy for workers to (a) purposefully respond with spammy results; (b) slack off and not provide responses; (c) try to complete the task but unintentionally do a bad job and therefore provide poor quality results; or (d) try to complete the task and succeed. I also did not build in any effective means of differentiating whether the workers who failed to provide accurate results were spamming, shirking, or simply failing.

So why does this experience have anything to do with the nature of employment relations?

First, think about it from the employer’s (or the work “requester’s”) point of view. A major part of creating an effective crowdsourcing job consists in minimizing the likelihood or impact of (a)-(c), whether through algorithmic estimation, clever task design, or both. It’s not necessary that every worker provide you with perfect results or even perfect effort, but ideally you find some way to identify and, where necessary, remove the work and workers that introduce unpredictable sources of bias into your results. Once you know what kind of results you’ve got, you can make appropriate corrections when some worker has been feeding you terrible data or unintentionally sabotaging your task by doing a bad job.
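To make the “algorithmic estimation” part concrete, here’s a minimal sketch in Python of one common approach (the worker and item names are made up, and nothing here reflects any particular platform’s API): seed the task with a handful of items whose answers you already know, score each worker against them, and drop low-scoring workers before aggregating the remaining answers by majority vote.

```python
# A sketch of "algorithmic estimation": score each worker against a few
# items with known answers ("gold" items), drop workers who score below a
# cutoff, and aggregate the remaining answers by majority vote.
from collections import Counter, defaultdict

# Hypothetical data: (worker_id, item_id, answer) tuples.
responses = [
    ("w1", "q1", "cat"), ("w1", "q2", "dog"), ("w1", "q3", "cat"),
    ("w2", "q1", "dog"), ("w2", "q2", "dog"), ("w2", "q3", "dog"),
    ("w3", "q1", "cat"), ("w3", "q2", "dog"), ("w3", "q3", "cat"),
]
gold = {"q1": "cat", "q2": "dog"}  # items whose answers we already know


def worker_accuracy(responses, gold):
    """Fraction of gold items each worker answered correctly."""
    hits, tries = defaultdict(int), defaultdict(int)
    for worker, item, answer in responses:
        if item in gold:
            tries[worker] += 1
            hits[worker] += int(answer == gold[item])
    return {w: hits[w] / tries[w] for w in tries}


def filtered_majority(responses, gold, cutoff=0.7):
    """Majority vote per item, ignoring workers below the accuracy cutoff."""
    accuracy = worker_accuracy(responses, gold)
    votes = defaultdict(Counter)
    for worker, item, answer in responses:
        if accuracy.get(worker, 0.0) >= cutoff:
            votes[item][answer] += 1
    return {item: counts.most_common(1)[0][0] for item, counts in votes.items()}


print(worker_accuracy(responses, gold))    # {'w1': 1.0, 'w2': 0.5, 'w3': 1.0}
print(filtered_majority(responses, gold))  # {'q1': 'cat', 'q2': 'dog', 'q3': 'cat'}
```

In this toy example, w2 misses half the gold items and gets excluded, so the noise w2 introduced on q3 never reaches the aggregated results.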

In other words, low quality results can provide employer-requesters with useful information, but only if the employer-requester finds a way to identify them and use them to their advantage. This means that a poorly designed task is not just one that fails to elicit good performance from workers; it is also one that doesn’t help an employer-requester differentiate between spammers, slackers, passive saboteurs, and the workers who really are trying and (at least most of the time) completing the task successfully.
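The differentiation I have in mind can be fairly crude. Here’s a rough, purely hypothetical heuristic (the thresholds and labels are mine, not anything standard) that sorts workers into the four buckets above using three signals a requester can usually compute: how often a worker answered at all, how the worker did on items with known answers, and how often the worker agreed with the majority of other workers.

```python
# A rough, purely illustrative heuristic for sorting workers into the four
# buckets described above. The three signals are ones a requester can usually
# compute; the thresholds are made up, not empirically calibrated.

def label_worker(answer_rate, gold_accuracy, majority_agreement,
                 chance_accuracy=0.5):
    if answer_rate < 0.5:
        return "slacker"                 # mostly skipped or blank work
    if gold_accuracy <= chance_accuracy and majority_agreement < 0.5:
        return "likely spammer"          # answering, but no better than noise
    if gold_accuracy < 0.8:
        return "trying but struggling"   # engaged, yet frequently wrong
    return "reliable"


# Hypothetical per-worker signals: (answer_rate, gold_accuracy, majority_agreement).
workers = {
    "w1": (0.95, 0.90, 0.85),
    "w2": (0.30, 0.60, 0.55),
    "w3": (1.00, 0.45, 0.30),
    "w4": (0.90, 0.70, 0.75),
}
for name, signals in workers.items():
    print(name, "->", label_worker(*signals))
# w1 -> reliable, w2 -> slacker, w3 -> likely spammer, w4 -> trying but struggling
```

A task design that doesn’t surface at least these kinds of signals leaves the requester guessing about which of the four things is actually going on.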

When I design a job I always assume that a relatively high proportion of the workers are trying to complete the task in good faith (sure, there are some spammers and slackers out there, but somehow they don’t seem to make up the majority of the labor pool when there’s a clear, well-designed, reasonably compensated task to be done). As a result, if I get predominantly crap responses back from the workers, I assume that they are (maybe somewhat less directly than I might like) providing me with negative feedback on my task design.

Now from the workers’ point of view, I suspect the situation looks a bit different. They have fewer options for dealing with employer-requesters who are trying to scam them. Most distributed labor markets lack features that would support anything resembling collective bargaining or collective action on the part of workers. Communications by workers to employer-requesters are limited and, consequently, there usually aren’t robust mechanisms for offering or coordinating feedback or complaints.

As a result, the most effective communications tool the workers possess is their work itself. Not surprisingly, some of them seem to use their work to engage in acts of casual slacking and sabotage that resemble online versions of the “weapons of the weak” described by James C. Scott in his book on everyday resistance tactics among rural peasants.

The ease with which crowdsourcing workers can pursue these relatively passive forms of resistance and tacit feedback relates to a broader, more theoretically important point: in most situations, a member of an online crowd should have a much easier time quitting or resisting than workers in (for example) a factory when they decide they’re unhappy with an employment relationship for any reason. Why?  First off, crowdsourcing workers usually don’t have personal ties to a company, brand, co-workers, managers, etc. Second of all, the structure of online labor markets makes the cost of leaving any one job extraordinarily low. An office worker who (upon being confronted by, e.g., an unpleasant or unethical task) leaves her position risks giving up not only valuable resources like future wages or benefits, but also loses physical stability in her life, contact with friends and colleagues, and the respect or professional support of her superiors. In contrast, a worker in an online crowd who decides to leave her job loses almost nothing. While there is some risk associated with actively spamming or slacking (in some crowdsourcing markets, workers with low quality ratings can be banned or prevented from working on certain jobs), it’s still substantially easier to just walk away and find another task to do.

These are just some of the reasons why theoretical predictions from classical wage and employment economics – for example, that a $0.01 decrease in wages will cause some proportion of employees to leave their jobs – don’t hold up well in either traditional or crowdsourcing labor markets. The interesting point is that the reasons these classical theories fail in crowdsourcing systems have little to do with the complications introduced by social relations, since social relations (between workers and employers as well as among workers) are severely constrained in most online labor markets.
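For anyone who wants the textbook version spelled out, one stylized way to write that classical prediction (my notation, assuming a firm-level labor supply curve with constant elasticity ε) is:

```latex
L(w) = L_0 \left( \frac{w}{w_0} \right)^{\varepsilon}
\qquad \Rightarrow \qquad
\frac{\Delta L}{L} \approx \varepsilon \, \frac{\Delta w}{w}
```

That is, even a one-cent wage cut (a small negative Δw) should cost an employer some predictable fraction of its workforce, which is roughly the prediction that doesn’t seem to hold up in practice.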


(Note: The first version of this post was written pretty late at night, so I didn’t include many links to sources. I’ll be trying to add them over the next few days.)
