This weekend, Andrés and I attended the CrowdCamp Workshop at CHI in Austin, Texas. The workshop was structured a lot like a hackathon, with the objective being to work in teams to produce projects, papers, or research.
The group I worked with coalesced around a proposal made by Niki Kittur, who suggested that we envision how crowdsourcing and distributed work contribute to solving grand challenges, such as economic inequality and the ongoing impact of the 2008 financial crisis.
We then spent the better part of the weekend outlining an ambitious set of scenarios and goals for the future of crowdwork.
While many moments of our conversation were energizing, the most compelling aspects derived from the group’s shared desire to imagine crowdwork and distributed online collaboration as potentially something more than the specter of alienated, de-humanized piece-work that it is frequently depicted to be.
To spur our efforts, we used a provocative thought experiment: what would it take for crowdwork to facilitate fulfilling, creative, and sustainable livelihoods for us or our (hypothetical or real) children?
Despite the limits of this framing, I think it opened up a discussion that goes beyond the established positions in debates about the ethics and efficiencies of paid crowdsourcing, distributed work, and voluntary labor online (all of which are, to some extent, encompassed under the concept of crowdwork in this case). It also helped us start imagining how we, as designers and researchers of crowdwork platforms and experiences, would go about constructing an ambitious research agenda on the scale of a massive project like the Large Hadron Collider.
If everything goes according to plan, this effort will result in at least a paper within the coming few weeks. Assuming that’s the case, our group will be sharing more details about the workshop and our vision of the future of crowdwork soon.
April 13, 2012
Zombie trade agreements: According to some documents acquired by the organization European Digital Rights (EDRi), it appears the G8 has decided to do a Dr. Frankenstein impression and reanimate some of the most thoughtless portions of ACTA’s Internet provisions. This latest instantiation of the ACTA agreement wants control over intellectual property, technology devices, network infrastructure, and YOUR BRAINS.
An awesome experiment on awards (published in PLoS ONE) by Michael Restivo and Arnout van de Rijt – both in the Sociology department at SUNY Stony Brook – shows that receiving an informal award (a barnstar) from a peer may have a positive effect on highly active Wikipedians’ contributions. The paper is only three pages long, but if you want to you can also read the Science Daily coverage of it.
Mako’s extensive account of his workflow tools is finally up on Uses This. The post is remarkable for many reasons. First of all, Mako puts more care and thought into his technology than anybody I know, so it’s great to see the logic behind his setup explained more or less in full. Secondly, I found it extra remarkable because I have been collaborating (and even living!) closely with Mako for a while now and I still learned a ton from reading the post. My favorite detail is unquestionably the bit about his typing eliciting a noise complaint while he was in college. As a rather loud typist myself, I have been subject to snark and snubbery from various quarters over the years, but I’ve never had anybody call the cops on me!
The Soviet Union lives on! But maybe not quite where you’d expect it. My friends and former Oakland neighbors Daniel Gallegos and Zhanara Nauruzbayeva have recently moved themselves and their incredible Artpologist project to New York. Upon arrival, they found themselves surrounded by a post-Soviet reality that most New Yorkers or Americans simply do not know exists at all, much less in the epicenter of finance capital. Their latest project, My American New York, chronicles this “post-Soviet America” through photos, stories, Daniel’s beautiful sketches, drawings, and paintings (e.g. the image at the top of this post), all wrapped up in a series of urban travelogues.
Philosophy Quantified: Kieran Healy has done a series of elegant and thoughtful guest posts on Leiter Reports in which he explores data from the 2004 and 2006 Philosophical Gourmet Report (PGR) surveys in an effort to generate some preliminary insights about the relationships between department status and areas of specialization.
Matt Salganik and Karen Levy (both of the Princeton Sociology Department) recently released a working paper about what they call “Wiki Surveys” that raises several important points regarding the limitations of traditional survey research and the potential of participatory online information aggregation systems to transform the way we think about public opinion research more broadly.
Their core insight stems from the idea that traditional survey research based on probability sampling leaves a ton of potentially valuable information on the table. This graph summarizes that idea in an extraordinarily elegant (I would say brilliant) way:
Think of the plot as existing within the space of all possible opinion data on a particular issue (or set of issues). No method exists for collecting all the data from all of the people whose opinions are represented by that space, so the best you – or any researcher – can do is find a way to collect a meaningful subset of that data that will allow you to estimate some characteristics of the space.
The area under the curve thus represents the total amount of information that you could possibly collect with a hypothetical survey instrument distributed to a hypothetical population (or sample) of respondents.
Traditional surveys based on probability sampling techniques restrict their analysis to the subset of data from respondents for whom they can collect complete answers to a pre-defined subset of closed-ended questions (represented here by the small white rectangle in the bottom left corner of the plot). This approach loses at least two kinds of information:
- the additional data that some respondents would be happy to provide if researchers asked them additional questions or left questions open-ended (the fat “head” under the upper part of the curve above the white rectangle);
- the partial data that some respondents would provide if researchers had a meaningful way of utilizing incomplete responses, which are usually thrown out or, at best, used to assess whether attrition from the study was random or not (this is the long “tail” under the part of the curve to the right of the white rectangle).
Salganik and Levy go on to argue that many wiki-like systems and other sorts of “open” online aggregation platforms that do not filter contributions before incorporating them into some larger information pool illustrate ways in which researchers could capture a larger proportion of the data under the curve. They then elaborate some statistical techniques for estimating public opinion from the subset of information under the curve and detail their experiences applying these techniques in collaboration with two organizations (the New York City Mayor’s Office and the Organization for Economic Cooperation and Development, or OECD).
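To make the lost-information intuition concrete, here is a toy simulation of my own (it is not from the paper, and it ignores the pairwise-comparison machinery and Bayesian model that Salganik and Levy actually use). It compares an estimate of opinion that discards partial responses with one that uses every answer collected; the question battery, dropout process, and opinion scale are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 1,000 respondents, a 20-question battery, answers on a rough 1-5 scale.
n_respondents, n_questions = 1000, 20
true_means = rng.uniform(2, 4, size=n_questions)

# Everyone's answers are noisy readings of the true population means...
answers = rng.normal(true_means, 1.0, size=(n_respondents, n_questions))

# ...but most people quit partway through (the long "tail" of partial responses).
dropout_point = rng.geometric(p=0.15, size=n_respondents)
for i, k in enumerate(dropout_point):
    answers[i, k:] = np.nan  # questions they never answered

complete = ~np.isnan(answers).any(axis=1)
print(f"complete responses: {complete.sum()} of {n_respondents}")

# Complete-case estimate: the small white rectangle (finished surveys only).
complete_case_est = answers[complete].mean(axis=0)

# Use-everything estimate: every answer anyone gave, partial or not.
all_data_est = np.nanmean(answers, axis=0)

print("mean abs. error, complete cases only:", round(np.abs(complete_case_est - true_means).mean(), 3))
print("mean abs. error, all available data: ", round(np.abs(all_data_est - true_means).mean(), 3))
```

In this toy setup the use-everything estimate typically tracks the true values more closely, simply because it wastes less of the information under the curve; the actual wiki-survey machinery is far more sophisticated than this.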
If you’re not familiar with matrix algebra and Bayesian inference, the statistical part of the paper probably won’t make much sense, but I encourage anyone interested in collective intelligence, surveys, public opinion, online information systems, or social science research methods to read the paper anyway.
Overall, I think Salganik and Levy have taken an incredibly creative approach to a very deeply entrenched set of analytical problems that most social scientists studying public opinion would simply prefer to ignore! As a result, I hope their work finds a wide and receptive audience.
February 19, 2012
Lin-sanity notwithstanding, this is a time of year when I always find myself wanting more as a sports fan in America. The memories of the Super Bowl and BCS Championship game have already started to fade; March madness remains a long way off; pitchers and catchers have yet to report for Spring Training; and both the NBA and NHL have just passed the midpoint of their respective regular seasons. Add that it’s the middle of Winter (even an historically mild one), and these factors combine to make mid-February a less than thrilling few weeks.
Lately, I’ve partially satisfied my urge for non-stop sports entertainment by turning to leagues that are far less popular and have almost no visibility in mainstream U.S. media coverage.
First, during a brief trip to Brazil for a conference, I enjoyed watching some early round action in the Paulistão, or the elite soccer league of São Paulo state. With historically dominant teams like Corinthians, Santos, and Palmeiras, São Paulo boasts one of the most competitive state-level championships within Brazil, and the league usually includes several young players who will become international superstars and household names within a few years (e.g. if you haven’t heard of Neymar yet, just be patient; the teenage phenom will likely figure prominently in the Brazilian national team’s efforts when the country hosts the World Cup in 2014).
Then, the week after I returned from Brazil, I spent a few afternoons watching the final games of the Serie del Caribe, an international tournament that wraps up the Winter leagues in the Dominican Republic, Mexico, Puerto Rico, and Venezuela. The games were tight, competitive and included a number of Major League players who seemed either to have chosen to return home as triumphant stars or to hone their skills among Latin America’s most competitive leagues.
Despite the fact that you’ll never see your local ESPN network cover either of these events, both have a ton of history behind them and tremendous fan-bases (ESPN’s Brazilian and regional Latin American affiliates cover both). They are also extraordinarily competitive and played at a very high skill level.
Latin American soccer and baseball are not the only options. There is also a whole range of winter sports that never show up on U.S. television schedules until the Olympics. In other words, the only thing preventing you from watching terrific, exciting sporting events in the middle of the annual mid-Winter lull is the fact that you would probably either need to pay an inordinate sum for satellite coverage or seek out unauthorized streams on websites that serve sketchy advertisements and malware along with the game.
At the risk of making a very Ethan Zuckerman-esque point, the Internet makes it theoretically trivial to solve this problem, but that theoretical triviality only underscores a much bigger problem in the way our attention is distributed and canalized by a combination of cultural habits and incumbent media networks. In other words, maybe you’d be more likely to watch Neymar and Santos take on Palmeiras if your local television network would broadcast it or if you could easily find a high-quality stream with English-language commentary (I also enjoy watching these things online because I get to listen to Portuguese and Spanish language announcers). Indeed, as long as somebody is streaming a broadcast of any of these games anywhere around the world, there’s no practical reason that it isn’t possible to watch that stream anywhere else. But for a whole variety of reasons that I don’t fully understand, that just doesn’t happen yet.
My point is that American sports fans live in a media ecosystem that has not yet figured out what to do with its (long) tail. There has to be a better, less monopolistic solution than satellite and cable providers charging high rates for access to particular sports packages or leagues. This model ensures that only existing fans who are willing to pay to watch teams they already like will ever subscribe to such services, condemning these sports and teams to continued obscurity. Instead, it would be great to see some affordable way for fans to take advantage of existing Internet streams to experiment with new sports, new leagues, and new cultures by tuning into otherwise less popular or less well-known events when their hometown favorites are not in season.
January 15, 2012
Jeremy Freese (who I met last week during a brief trip to Evanston and who turns out to be as awesome in person as he is online and in print!) and the scatterplotters revealed this week (gasp!) that nobody who’s anybody pays attention to the page limit guidelines for ASA submissions.
This page limit absorbed way too much of a close friend’s time this week, but the fact that many ASA submitters do not pay any attention to it is not a shocker.
Indeed, many ASA attendees treat the conference like you might treat an annoying relative: fundamentally flawed in ways that are both too numerous to mention and too deep to repair, but nonetheless sufficiently unavoidable once a year that you reconcile your differences and do what you need to do in order to visit.
Having also spent a little bit of time at conferences that are not sociology conferences, I can say that ASA is not extraordinarily bad. Aspects of ICA, CHI, and CSCW are equally broken, and all the brokenness serves as a vivid reminder that institution-building remains a difficult process – even for people who study institutions, collaboration, and human behavior.
That said, there are some pieces of ASA that work quite well and maybe, as olderwoman and Jeremy note in the comments, if we want to inform future policy decisions around these issues, it’s worth distinguishing between what’s broken and what’s not a little more clearly.
So, with that in mind, here are a few things that I like about ASA:
- Socializing with colleagues and peers (In particular, I recommend the Berkeley Sociology department’s annual party).
- One-stop-shop access to colleagues and friends who you never see in one place otherwise.
- Cross-generational dialogues with scholars and students of all ages.
- The occasional great presentation or conversation about research.
And here are some negatives (beyond the page limit):
- Socializing with colleagues and peers (has its dark side too).
- A bizarrely large program that is painful to read and navigate.
- Soul-crushingly boring & nearly uniform format of panels and presentations.
- An arbitrary, unblinded, single-review process for submissions.
- The horrible tools and information made available to conference attendees for searching presentations and panels.
I’d be curious what pieces of other people’s positive and negative ASA experiences I’m missing. Other thoughts? Feedback? See you in the comments…
December 11, 2011
Academic peer review tends to be slow, imprecise, labor-intensive, and opaque.
A number of intriguing reform proposals and alternative models exist, and hopefully some of these ideas will lead to improvements. However, whether they do or not, I suspect that some form of peer review will continue to exist (at least for the duration of my career) and that many reviewers (myself included) will continue to find the process of doing the reviews to be time-consuming and something of a hassle.
The most radical solution is to shred the whole process T-Rex style.
This is sort of what has already happened in disciplines that use arXiv or similar open repositories where working papers can be posted and made available for immediate critique and citation. Such systems have their pros and cons too, but if nothing else they decrease the amount of time, money and labor that go into reviewing for journals and conferences, while increasing the transparency. As a result, they provide at least a useful complement to existing systems.
Over a conversation at CrowdConf in November, some colleagues and I came up with a related, but slightly less radical proposal: maybe you could keep some form of academic peer review, but do it without the academic peers?
Such a proposition calls into question one of the core assumptions underlying the whole process – that reviewers’ years of training and experience (and credentials!) have endowed them with special powers to distinguish intellectual wheat from chaff.
Presumably, nobody would claim that the experts make the right judgment 100% of the time, but everybody who believes in peer review agrees (at least implicitly) that they probably do better than non-experts would (at least most of the time).
And yet, I can’t think of anybody who’s ever tested this assumption in a direct way. Indeed, in all the proposals for reform I’ve ever heard, “the peers” have remained the one untouchable, un-removable piece of the equation.
That’s what got us thinking: what if you could reproduce academic peer review without any expertise, experience or credentials? What if all it took were a reasonably well-designed system for aggregating and parsing evaluations from non-experts?
The way to test the idea would be to try to replicate the outcomes of some existing review process using non-expert reviewers. In an ideal world, you would take a set of papers that had been submitted for review and had received a range of scores along some continuous scale (say, 1 to 5 – like papers reviewed for ACM Conferences). Then you would develop a protocol to distribute the review process across a pool of non-expert reviewers (say, using CrowdFlower or some similar crowdsourcing platform). Once you had review scores from the non-experts, you could aggregate them in some way and/or compare them directly against the ratings from the experts.
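To give a rough sense of what the comparison step might look like, here is a minimal sketch (my own, not a protocol we actually built): it simulates expert scores on a 1-to-5 scale, simulates several non-expert ratings per paper as noisy versions of those scores, averages the crowd ratings, and then checks rank agreement and overlap in the top of the ranking. The paper counts, noise level, and the choice of Spearman correlation are all my assumptions; in a real experiment the crowd ratings would come from actual workers on CrowdFlower or a similar platform rather than being simulated.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

n_papers, raters_per_paper = 50, 5
expert_scores = rng.uniform(1, 5, size=n_papers)  # hypothetical 1-5 expert review scores

# Simulate non-expert raters as noisy readings of the same underlying paper quality.
noise = rng.normal(0, 1.0, size=(n_papers, raters_per_paper))
crowd_ratings = np.clip(expert_scores[:, None] + noise, 1, 5)

# Aggregate each paper's crowd ratings (a plain mean; medians or models are other options).
crowd_scores = crowd_ratings.mean(axis=1)

# Compare the two sets of judgments: rank correlation and agreement on the "accept" set.
rho, _ = spearmanr(expert_scores, crowd_scores)
accept_k = 10
expert_top = set(np.argsort(-expert_scores)[:accept_k])
crowd_top = set(np.argsort(-crowd_scores)[:accept_k])

print(f"Spearman rho between expert and crowd scores: {rho:.2f}")
print(f"Overlap in top-{accept_k} papers: {len(expert_top & crowd_top)} / {accept_k}")
```

The interesting empirical question, of course, is whether real non-expert ratings behave anything like "expert score plus noise" in the first place.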
Would it work? That depends on what you would consider success. I’m not totally confident that distributed peer review would improve existing systems in terms of precision (selecting better papers), but it might not make the precision of existing peer review systems any worse and could potentially increase the speed. If it worked at all along any of these dimensions, implementing it would definitely reduce the burden on reviewers. In my mind, that possibility – together with the fact that it would be interesting to compare the judgments of us professional experts against a bunch of amateurs – more than justifies the experiment.
November 13, 2011
I just read this short piece by Richard Van Noorden in Nature about the rising number of retractions in medical journals over the past five years and it got me thinking about the different ways in which researchers fail to deal with failure (the visualizations that accompany the story are striking).
The article specifies two potential causes behind the retraction boom: (1) increased access to data and results via the Internet, which facilitates error discovery; and (2) the creation of oversight organizations charged with identifying scientific fraud (Van Noorden points to the US Office of Research Integrity in the DHHS as an example). It occurred to me in reading this that a third, complementary cause could be the political pressure exerted on universities and funding agencies as a result of the growing hostility towards publicly funded research. In the face of such pressure, self-policing would seem more likely.
Apparently, the pattern goes further and deeper than Van Noorden is able to discuss within the confines of such a short piece. This Medill Reports story by Daniel Peake from last year has a graph of retractions that goes all the way back to 1990, showing that the upturn has been quite sudden.
All of these claims about the causes of retractions are empirical and should/could be tested to some extent. The bigger question, of course, remains: what to do about the reality of failure in scientific research? As numerous people have already pointed out, in an environment where publication serves as the principal metric of production, the institutions, organizations & individuals that create research – universities, funding agencies, peer reviewed journals, academics & publishers – have few (if any) reasons to identify and eliminate flawed work. The big money at stake in medical research probably compounds these issues, but that doesn’t mean the social sciences are immune. In fields like Sociology or Communication where the stakes are sufficiently low (how many lives were lost in FDA trials because of the conclusions drawn by that recent AJS article on structural inequality?), the social cost of falsification, plagiarism, and fraud remain insufficient to spur either public outrage or formal oversight. Most flawed social scientific research probably remains undiscovered simply because, in the grand scheme of policy and social welfare, this research does not have a clear impact.
Presumably, stronger norms around transparency can continue to provide enhanced opportunities for error discovery in quantitative work (and I should have underscored earlier that these debates are pretty much exclusively about quantitative work). In addition, however, I wonder if it might be worth coming up with some other early-detection and response mechanisms. Here are some ideas I started playing with after reading the article:
Adopt standardized practices for data collection on research failure and retractions. I understand that many researchers, editors, funders, and universities don’t want the word to get out that they produced/published/supported anything less than the highest quality work, but it really doesn’t seem like too much to ask that *somebody* collect some additional data about this stuff and that such data adhere to a set of standards. For example, it would be great to know if my loose allegations about the social sciences having higher rates of research failure and lower rates of error discovery are actually true. The only way that could happen would be through data collection and comparison across disciplines.
Warning labels based on automated meta-analyses. Imagine if you read the following in the header of a journal article: “Caution! The findings in this study contradict 75% of published articles on similar topics.” In the case of medical studies in particular, a little bit of meta-data applied to each article could facilitate automated meta-analyses and simulations that could generate population statistics and distributions of results. This is probably only feasible for experimental work, where study designs are repeated with greater frequency than in observational data collection. (A toy sketch of how such a label might be computed appears after these ideas.)
Create The Journal of Error Discovery (JEDi). If publications are the currency of academic exchange, why not create a sort of bounty for error discovery and meta-analyses by dedicating whole journals to them? At the moment, blogs like Retraction Watch are filling this gap, but there’s no reason the authors of the site shouldn’t get more formal recognition and credit for their work. Plus, the first discipline to have a journal that goes by the abbreviation JEDi clearly deserves some serious geek street cred. Existing journals could also treat error discoveries and meta-analyses as a separate category of submission and establish clear guidelines around the standards of evidence and evaluation that apply to such work. Maybe these sorts of practices already happen in the medical sciences, but they haven’t made it into my neighborhood of the social sciences yet.
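To make the warning-label idea a bit more concrete, here is a hypothetical sketch (no such metadata standard exists, and the record structure, field names, and contradiction rule are all invented): if each published result on a question carried a signed effect estimate as structured metadata, a short script could compute what share of prior findings a new result contradicts and generate the label automatically.

```python
# Hypothetical metadata records for prior studies on the same question.
# (Invented structure; no journal actually exposes results this way.)
prior_studies = [
    {"doi": "10.1000/ex1", "effect": 0.42},
    {"doi": "10.1000/ex2", "effect": 0.31},
    {"doi": "10.1000/ex3", "effect": -0.05},
    {"doi": "10.1000/ex4", "effect": 0.27},
]

def warning_label(new_effect: float, prior: list, threshold: float = 0.5) -> str:
    """Flag a new finding if it disagrees in sign with most prior published effects."""
    contradicted = [s for s in prior if (s["effect"] > 0) != (new_effect > 0)]
    share = len(contradicted) / len(prior)
    if share >= threshold:
        return (f"Caution! The findings in this study contradict "
                f"{share:.0%} of published articles on similar topics.")
    return f"Consistent with {1 - share:.0%} of published articles on similar topics."

# A new study reporting a negative effect against three positive priors gets flagged.
print(warning_label(new_effect=-0.20, prior=prior_studies))
```

A real version would obviously need effect sizes on a common scale, uncertainty estimates, and some notion of what counts as a "similar topic" – which is exactly the kind of standardized metadata the first idea above calls for.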
April 16, 2009
The crazy-productive folks at Pew’s Internet and American Life project have published a new survey looking at The Internet’s Role in Campaign 2008.
There are a lot of fun results to mine for anybody interested in political news consumption, participation, and engagement via the Internet. I still need to read it more closely, but here are some of my favorite sound-bites so far:
- ~20% of those surveyed posted political commentary or content online
- ~20% of those surveyed reported seeking news sources that challenged their point of view
- A handy chart comparing where self-identified democrats and republicans get their online news. Statistically significant differences are marked with a “^” (Hint: look at CNN, Fox, Radio, and the Internet). Caveat: see my methodological comments below before interpreting this too deeply.
- This staggering time-series graph illustrating the decline of newspapers as a primary source of political news over the past 10 years or so (respondents were only allowed to mention their top two sources of news)
On a methodological note, it’s interesting that the surveyors chose to conduct the survey via land-line telephones only.
Some of you might recall that Pew also published some really interesting data in the middle of the campaign season suggesting that cell-only voters are disproportionately young, democratic, and Internet users.
Despite the fact that the surveyors weighted their results to try to reflect the demographics of U.S. telephone users, I take that to imply that the numbers in this latest survey should provide a conservative estimate of total Internet use in the population as a whole. At the same time, I think it undermines some of the comparisons between democratic and republican voters based on the land-line-only data.
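For readers unfamiliar with the weighting step mentioned above, here is a minimal sketch of post-stratification weighting (my illustration only; the age groups, population shares, and sample counts are invented and are not Pew’s actual procedure): each group gets a weight equal to its population share divided by its share of the achieved sample, which up-weights groups that a land-line frame under-represents but cannot recover groups the frame excludes entirely.

```python
# Toy post-stratification example (invented numbers, not Pew's actual procedure).
# Weight = population share of a group / its share of the achieved sample.
population_share = {"18-29": 0.22, "30-49": 0.36, "50-64": 0.25, "65+": 0.17}
sample_counts    = {"18-29": 120,  "30-49": 320,  "50-64": 310,  "65+": 250}

n = sum(sample_counts.values())
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")

# Younger respondents are scarce in a land-line-only sample, so they get weights > 1;
# but weighting only adjusts for people who are in the sample at all -- it cannot
# recover the views of cell-only adults who were never reachable, which is why the
# Internet-use estimates may still run conservative.
```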