Think big! What would it take to make crowdsourcing and crowdwork a more sustainable, fulfilling, and efficient sector of economic and social production? (photo by John McNabb, cc-by-nc-nd)

This weekend, Andrés and I attended the CrowdCamp Workshop at CHI in Austin, Texas. The workshop was structured a lot like a hackathon, with the objective of working in teams to produce projects, papers, or research.

The group I worked with coalesced around a proposal made by Niki Kittur, who suggested that we envision how crowdsourcing and distributed work could contribute to solving grand challenges, such as economic inequality and the ongoing impact of the 2008 financial crisis.

We then spent the better part of the weekend outlining an ambitious set of scenarios and goals for the future of crowdwork.

While many moments of our conversation were energizing, the most compelling aspects derived from the group’s shared desire to imagine crowdwork and distributed online collaboration as something more than the specter of alienated, dehumanized piece-work that it is frequently depicted to be.

To spur our efforts, we used a provocative thought experiment: what would it take for crowdwork to facilitate fulfilling, creative, and sustainable livelihoods for us or our (hypothetical or real) children?

Despite the limits of this framing, I think it opened up a discussion that goes beyond the established positions in debates about the ethics and efficiencies of paid crowdsourcing, distributed work, and voluntary labor online (all of which are, to some extent, encompassed under the concept of crowdwork in this case). It also helped us start imagining how we, as designers and researchers of crowdwork platforms and experiences, would go about constructing an ambitious research agenda on the scale of a massive project like the Large Hadron Collider.

If everything goes according to plan, this effort will result in at least a paper within the coming few weeks. Assuming that’s the case, our group will be sharing more details about the workshop and our vision of the future of crowdwork soon.

In doing some reading about collective action, cooperation, and exchange theory, I encountered the figures below (gated link):

If you happen to be the kind of person who spends a lot of time around research combining social dilemmas, evolutionary models of cooperation, and econometric production functions, these may seem completely intuitive and you probably do not even need to read the paper to get the gist of Professor Heckathorn’s argument.

Otherwise, the images may feel a bit more like conceptual art. The plot labeled “C” at the bottom right is my runaway favorite. I am also a big fan of the mysterious “arch” shape and the large “X” that appear in the first figure.

n.b., Professor Heckathorn does an admirable job explaining these images in the paper, and my point here is not to provide a Tuftean critique of some rather ornate visualizations. Instead, I wanted to try to communicate the sensation I felt when I encountered these images in the context of an extraordinarily sophisticated and abstract simulation-based analysis of social dilemmas, aimed at identifying the theoretical conditions under which people may be more likely to cooperate and contribute to public goods.

That’s right, these figures are part of a model modeling models. Given that I am singling out this particular model for attention, they are also, you might say, part of a model model. Given that Professor Heckathorn’s work in this area is highly sophisticated and compelling, you might even say that these figures are part of a model model model (modeling models).
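For anyone who wants a feel for the kind of model lurking behind those figures, here is a toy sketch of an N-person public goods dilemma. To be clear, the parameters and payoff structure below are my own illustrative construction, not Professor Heckathorn’s specification:

```python
# Toy N-person public goods game -- my own illustrative construction,
# not Professor Heckathorn's actual model or parameters.
N = 20          # group size
COST = 1.0      # cost of making a contribution
MULTIPLIER = 3  # contributions are scaled up by this factor, then shared equally

def payoffs(contributions):
    """Each agent's payoff: an equal share of the scaled-up pot,
    minus the cost of their own contribution (if they made one)."""
    pot = MULTIPLIER * sum(contributions)
    return [pot / N - (COST if c else 0) for c in contributions]

# The dilemma: as long as MULTIPLIER / N < 1, each individual does better by
# free-riding no matter what the others do, yet everyone is better off when
# everyone contributes.
all_contribute = payoffs([COST] * N)
one_free_rider = payoffs([0] + [COST] * (N - 1))
print("payoff when everyone contributes:    %.2f" % all_contribute[0])
print("payoff to the lone free-rider:       %.2f" % one_free_rider[0])
print("payoff to a contributor facing them: %.2f" % one_free_rider[1])
```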

Academic peer review tends to be slow, imprecise, labor-intensive, and opaque.

A number of intriguing reform proposals and alternative models exist, and hopefully some of these ideas will lead to improvements. However, whether they do or not, I suspect that some form of peer review will continue to exist (at least for the duration of my career) and that many reviewers (myself included) will continue to find the process time-consuming and something of a hassle.

The most radical solution is to shred the whole process T-Rex style.

Gideon Burton, 2009, cc-by-sa

This is sort of what has already happened in disciplines that use arXiv or similar open repositories, where working papers can be posted and made available for immediate critique and citation. Such systems have their pros and cons too, but if nothing else they decrease the amount of time, money, and labor that go into reviewing for journals and conferences while increasing transparency. As a result, they provide at least a useful complement to existing systems.

During a conversation at CrowdConf in November, some colleagues and I came up with a related, but slightly less radical, proposal: maybe you could keep some form of academic peer review, but do it without the academic peers?

Such a proposition calls into question one of the core assumptions underlying the whole process – that reviewers’ years of training and experience (and credentials!) have endowed them with special powers to distinguish intellectual wheat from chaff.

Presumably, nobody would claim that the experts make the right judgment 100% of the time, but everybody who believes in peer review agrees (at least implicitly) that they probably do better than non-experts would (at least most of the time).

And yet, I can’t think of anybody who’s ever tested this assumption in a direct way. Indeed, in all the proposals for reform I’ve ever heard, “the peers” have remained the one untouchable, un-removable piece of the equation.

That’s what got us thinking: what if you could reproduce academic peer review without any expertise, experience or credentials? What if all it took were a reasonably well-designed system for aggregating and parsing evaluations from non-experts?

The way to test the idea would be to try to replicate the outcomes of some existing review process using non-expert reviewers. In an ideal world, you would take a set of papers that had been submitted for review and had received a range of scores along some continuous scale (say, 1 to 5 – like papers reviewed for ACM conferences). Then you would develop a protocol to distribute the review process across a pool of non-expert reviewers (say, using CrowdFlower or some similar crowdsourcing platform). Once you had review scores from the non-experts, you could aggregate them in some way and/or compare them directly against the ratings from the experts.
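To make the aggregation and comparison step concrete, here is a minimal sketch of what I have in mind. The simple averaging and the rank correlation are my own assumptions about how the analysis might work, and the scores are invented placeholders:

```python
# Minimal sketch of aggregating non-expert review scores and comparing them
# against expert scores. The simple mean and the rank correlation are my own
# assumptions about how aggregation might work, not an established protocol.
from statistics import mean
from scipy.stats import spearmanr

# Hypothetical data: each paper gets one expert score (1-5) and several
# non-expert scores collected through a crowdsourcing platform.
expert_scores = {"paper_a": 4.0, "paper_b": 2.5, "paper_c": 3.5}
crowd_scores = {
    "paper_a": [5, 4, 4, 3],
    "paper_b": [2, 3, 2, 2],
    "paper_c": [4, 3, 4, 3],
}

papers = sorted(expert_scores)
experts = [expert_scores[p] for p in papers]
crowd = [mean(crowd_scores[p]) for p in papers]  # aggregate by simple averaging

rho, p_value = spearmanr(experts, crowd)  # rank agreement between the two sets
print("Aggregated crowd scores:", dict(zip(papers, crowd)))
print("Spearman rank correlation with experts: rho=%.2f (p=%.2f)" % (rho, p_value))
```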

2007 diylibrarian cc-by-nc-sa

Would it work? That depends on what you would consider success. I’m not totally confident that distributed peer review would improve existing systems in terms of precision (selecting better papers), but it might not make them any worse, and it could potentially increase the speed. If it worked along any of these dimensions, implementing it would definitely reduce the burden on reviewers. In my mind, that possibility – together with the fact that it would be interesting to compare the judgments of us professional experts against a bunch of amateurs – more than justifies the experiment.

This is kind of a shot in the dark, but the project I work on at the Berkman Center for Internet and Society at Harvard University is hiring RAs.

The lead researcher on the project is Professor Yochai Benkler.

Read the full job posting and contact me if you have questions.

You don’t need to be a Harvard affiliate to get the position, but you do need to be able to attend regular meetings in Cambridge, MA between now and mid-December. Details are in the posting.

Ethan Zuckerman’s post on Google “Insights for Search” demonstrates how a few graphs can lead to hours of geeky fun.

By allowing you to visualize the Google search history for a given term across a few geographic and temporal dimensions, the tool lends itself to some wonderful applications – including this one – a glance at social networking around the world by a Swedish firm named Pingdom – which inspired Ethan’s post in the first place.

As Ethan points out, though, the search insights data also provokes a question about what it means to search in the first place (my emphasis):

The Insight data isn’t measuring traffic to those sites, or their number of active members, just the number of folks searching for those sites via Google. That may or may not be an effective proxy for interest in those networks. I’m a Facebook user, and I have the site bookmarked, so I rarely would find myself searching for the site – it’s possible that the search data is a more effective proxy for the strength of a brand in a particular market, or the level of interest from non-participants in a specific site

I had a good time playing with these theories by using the comparative graphing features to consider where different political blogs attracted a greater relative volume of searches. Sorry the maps display a bit small, but you should be able to download the files (or simply re-create the search) to get a closer look.

Here’s a fun, obvious one:

great orange satan reigns supreme?


(note: the data is also normalized in relation to the highest value occurring within the query range).
Check out how they both spike after the 2004 election, but then the relative volume of searches for Kos stays consistently higher (peaking after the 2006 mid-term elections) than the volume for Insty.
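To make that normalization concrete, here is a minimal sketch of the rescaling Google applies. The weekly counts are invented for illustration, since Google only exposes the normalized values, never the raw search volumes:

```python
# Sketch of "normalized in relation to the highest value occurring within the
# query range": every point is rescaled against the single largest value across
# all terms in the comparison. The weekly counts below are invented; Google
# never exposes raw search volumes.
kos_raw   = [120, 340, 900, 610, 480]   # hypothetical weekly searches for "daily kos"
insty_raw = [200, 310, 450, 300, 220]   # hypothetical weekly searches for "instapundit"

peak = max(kos_raw + insty_raw)         # highest value anywhere in the query range
kos   = [round(100 * v / peak) for v in kos_raw]
insty = [round(100 * v / peak) for v in insty_raw]

print("kos:  ", kos)    # hits 100 at its peak
print("insty:", insty)  # stays below, since its peak is lower than kos's
```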

Make of that what you will. I wouldn’t even hazard a guess without knowing more about the reasons people turn to the Internet (and Google) to find political information.

I think the story gets even more interesting when you compare the high traffic states for each blog.

Here’s Insty:

And here’s Kos:

If you ignore the ratios for a second and just focus on the states in each list:

  • Instapundit: DC, TN, NH, VA, MD, KS, NC, NY, WA, CT
  • Kos: VT, DC, OR, WA, NM, MT, ME, NY, WY, CA

Once again, extrapolate at your own risk. All I know is that it does not track perfectly with voting patterns and that the overlaps (DC, NY, WA) are at least as interesting as the extreme mismatches (WY, VT, MT).
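For what it’s worth, the overlap and mismatch observation is just set arithmetic over the two lists above:

```python
# Set arithmetic behind the overlap/mismatch observation above.
insty = {"DC", "TN", "NH", "VA", "MD", "KS", "NC", "NY", "WA", "CT"}
kos   = {"VT", "DC", "OR", "WA", "NM", "MT", "ME", "NY", "WY", "CA"}

print("overlap:   ", sorted(insty & kos))   # states on both top-ten lists
print("insty only:", sorted(insty - kos))
print("kos only:  ", sorted(kos - insty))
```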

Catching up on my RSS feeds, I followed one of Eszter Hargittai’s links to this thought-provoking chart from Dave Eaves at the SEO Company that looks at the inlink/outlink ratio for major traditional news media sites.

The creators of the chart suggest that the deep inequality in inlinks/outlinks among the oft-vilified “MSM” reflects some sort of scandalous refusal to play by the rules of the blogosphere. They have a point, but I want to think this through a bit more.

As Yochai Benkler, Matthew Hindman, and others have discussed in their writings, citation links (in-text links to other sites – contrasted with “static” blogroll-type links) function as key structural determinants of popularity and visibility on the Internet. Even though Hindman’s notion of a strict “googlearchy” (whereby citation links create search engine rankings, which create power) is overstated for my taste, the fact remains that the structure of the net drives large masses of eyeballs in predictable directions along the pathways set by hyperlinks. For my money, Benkler does a more effective job of not overstating the case by situating his argument about the structure of discourse on the Internet in relation to the structure of discourse in the era of traditional broadcast media (see ch. 6 and ch. 7 of The Wealth of Networks).

Similarly, as the recent attempt by the Associated Press to squelch Fair Use for bloggers makes clear, many traditional news organizations do not want to play by the rules of the Net. Hell, in some cases, it seems like they and their shareholders would be happier if the Internet had just never happened.

For some organizations, the dearth of outlinks reflects the standard aversion of traditional journalistic writing style to hyperlinked citations in stories. This is consistent with the widespread perception that the “MSM” has a willful disregard for Netiquette. It also makes Eaves’s conclusion (based on a Pearson product-moment correlation) that out-linking behavior predicts in-links in return that much more suggestive. If out-linking truly predicts in-linking, these news organizations risk slipping into the dustbin of information history as the rest of the Internet slowly ceases to pay attention to them.
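For reference, here is a quick sketch of the kind of correlation Eaves computes. The link counts below are invented placeholders, not his data:

```python
# Quick sketch of the Pearson product-moment correlation Eaves applies to
# out-links vs. in-links. The counts below are invented placeholders.
import numpy as np

outlinks = np.array([12, 45, 3, 80, 20, 55])          # out-links per news site (hypothetical)
inlinks  = np.array([300, 900, 90, 2100, 450, 1300])  # in-links per site (hypothetical)

r = np.corrcoef(outlinks, inlinks)[0, 1]  # Pearson r between the two counts
print("Pearson r between out-links and in-links: %.2f" % r)
```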

The truth, I suspect, is more complicated. While traditional news outlets may not yet take advantage of the practical benefits of out-linking, they enjoy a comparative advantage in terms of social status and network centrality (among politicians, news organizations, businesses, intellectuals, etc.). This social status and network centrality should (I predict) translate into a steady stream of hits and in-links from other sites no matter what standard practices predominate across the rest of the networked public sphere.

To put that in less abstract terms: even if CNN and the Washington Post continue to refuse to use out-links in their primary coverage, their corresponding level of in-links is unlikely to decline to zero simply because they are still CNN and the Washington Post.

Whether this is the case or not, the fact remains that the traditional media are all scrambling to figure out why they can’t seem to stay afloat on the Internet. By identifying another potential factor in the equation, Eaves’s study makes a useful contribution to the debate.

Ned Gulley (Mathworks) and Karim Lakhani (Harvard Business School) presented some forthcoming work on Collaborative Innovation today at Harvard’s Berkman Center for Internet and Society.

The paper builds on Ned’s work at Mathworks developing collaborative programming competitions for the MATLAB community. Adopting “the perspective of the code,” it analyzes what happens when you set a horde of geeks loose on a fun, challenging programming problem in a networked collaborative environment.

To sum up my reactions really briefly, I thought the paper was an exciting step in the process of looking under the hood of collaborative knowledge production. Gulley and Lakhani argue that as programmers improved the performance of code on a discrete problem, they did so through “tweaks” and “leaps.”

“Tweaks” represent small refinements that improve the performance of existing code; “Leaps” represent more sudden and large-scale advances in performance (usually driven by introducing a more substantive or extensive change in the code).
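As a rough illustration of the distinction (my own toy operationalization, not Gulley and Lakhani’s actual definitions), you could classify each new leading entry by how much it improves on the previous best score:

```python
# Toy illustration of "tweaks" vs. "leaps": classify each new leading entry by
# how much it improves on the previous best score. The 10% threshold and the
# scores themselves are my own invented placeholders, not the paper's method.
LEAP_THRESHOLD = 0.10  # relative improvement above which we call it a leap

best_scores = [1000, 990, 985, 850, 848, 600, 598, 597]  # lower is better

for prev, new in zip(best_scores, best_scores[1:]):
    improvement = (prev - new) / prev
    kind = "leap" if improvement >= LEAP_THRESHOLD else "tweak"
    print(f"{prev} -> {new}: {improvement:.1%} improvement ({kind})")
```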

Tweakers and Leapers benefit from each other’s work, but the biggest beneficiary of their combined interactions was the code itself. Within one week of the competitions, thousands of eyeballs had produced startling solutions to complex algorithmic problems.

There’s a lot more to be learned from this kind of work – especially from the sort of experimental data created in these large-scale collaborative games. In particular, I’m interested in thinking about how programmers (whether as individuals or communities) adapted to the challenges over time. It seems like it might be possible to design a game that could test whether efficient collaborative problem-solving techniques “evolved” over the course of the game(s). In addition, it would be fascinating to test the results of this kind of collaboration against those produced by more hierarchical or individuated models of innovative work.

Look for links to the soon-to-be-published version of the paper on the “publications” section of Karim’s HBS faculty page.

In the meantime, I’m told that video and audio of today’s presentation should be available on the Berkman Center’s “interactive section” by tomorrow afternoon at the latest.

An interesting article at OpenDemocracy – a new and improved attempt to define the journalistic component of what Yochai Benkler has called “the networked public sphere.”

Does anyone have any evidence that enforcing intellectual property rights actually works?

Obviously, there are a slew of industry-sponsored studies that tell us how much profit is lost through trademark and copyright infringement (all of them employing questionable methods and, at best, unrigorous theories).

There are also numerous press-releases like this one, demonstrating that IPR police can bring down those evil counterfeiters.

But what about a peer-reviewed empirical study that actually supports the hypothesis that punishment is the best way to deal with unauthorized reproduction and use of intangible assets?

I can’t think of any.

Meanwhile, there are a number of studies demonstrating the benefits of opening up knowledge-based stuff in order to enable innovation and profits (PDF).

Seems like the US Congress, the courts, private sector firms, and trade officials ought to test their enforcement hypothesis sometime.

UC Berkeley just announced that it will host yet another public-private research venture. This time, the “partners” are Intel and Microsoft, who have agreed to fund a $20 million parallel computing lab at UCB. This is small potatoes compared to the $500 million deal with BP that Berkeley landed in the fall, but the same problems and questions apply. The UC regents and the individual campus units continue the trend of depending on the private sector for a larger and larger portion of their annual operating expenses without engaging in a serious public debate about the issues this raises.

The biggest concerns – which the website and all the happy press releases say nothing about – are (1) the governance arrangement of the center, and (2) the status of the intellectual property the new center will create. It’s all fine and good that the University gets to say it has a new building and does cutting-edge stuff, but the concrete impact of these centers on the campus depends more on how they fit (or don’t) within the campus’s existing governance structure. Will the lab get to hire and fire new faculty? Who will make decisions about their tenure? How much teaching will they do? Who will pay their salaries? The IP-related questions only make matters more complicated. The revenue from any patents and products that emerge from the lab is likely to exceed the value of the lab itself several times over. Who gets to keep that? Also, irrespective of the answer to that question, should a supposedly public university contribute to the enclosure of scientific knowledge?