July 21, 2013
In a new paper, recently published in the open access journal PLOS ONE, Benjamin Mako Hill and I build on new research in survey methodology to describe a method for estimating bias in opt-in surveys of contributors to online communities. We use the technique to re-evaluate the most widely cited estimate of the gender gap in Wikipedia.
A series of studies has shown that Wikipedia’s editor base is overwhelmingly male. This extreme gender imbalance threatens to undermine Wikipedia’s capacity to produce high quality information from a full range of perspectives. For example, many articles on topics of particular interest to women tend to be under-produced or of poor quality.
Given the open and often anonymous nature of online communities, measuring contributor demographics is a challenge. Most demographic data on Wikipedia editors come from “opt-in” surveys where people respond to open, public invitations. Unfortunately, very few people answer these invitations. Results from opt-in surveys are unreliable because respondents are rarely representative of the community as a whole. The most widely cited estimate, from a large 2008 survey by the Wikimedia Foundation (WMF) and UN University in Maastricht (UNU-MERIT), suggested that only 13% of contributors were female. However, the very same survey suggested that less than 40% of Wikipedia’s readers were female. We know, from several reliable sources, that Wikipedia’s readership is evenly split by gender — a sign of bias in the WMF/UNU-MERIT survey.
In our paper, we combine data from a nationally representative survey of the US by the Pew Internet and American Life Project with the opt-in data from the 2008 WMF/UNU-MERIT survey to come up with revised estimates of the Wikipedia gender gap. The details of the estimation technique are in the paper, but the core steps are:
- We use the Pew dataset to provide baseline information about Wikipedia readers.
- We apply a statistical technique called “propensity scoring” to estimate the likelihood that a US adult Wikipedia reader would have volunteered to participate in the WMF/UNU-MERIT survey.
- We follow a process originally developed by Valliant and Dever to weight the WMF/UNU-MERIT survey to “correct” for estimated bias.
- We extend this weighting technique to Wikipedia editors in the WMF/UNU data to produce adjusted estimates of the demographics of their sample.
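The steps above can be illustrated with a toy sketch of propensity-score weighting. The numbers below are made up for illustration — they are not the paper's data, and the paper estimates propensities with a statistical model rather than simple stratum counts — but the logic is the same: readers who were less likely to opt in get larger weights, pulling the weighted estimate back toward the true population mix.

```python
# Toy sketch of inverse-propensity weighting (hypothetical numbers).
# Suppose a representative baseline (e.g. Pew) says readers are evenly
# split by gender, but women were far less likely to answer the opt-in
# survey, so only 200 of 1000 respondents are female.

# Representative reader population by gender (baseline source).
population = {"female": 5000, "male": 5000}

# Opt-in survey respondents by gender (biased toward men).
respondents = {"female": 200, "male": 800}

# Estimated propensity of a reader in each stratum to opt in.
propensity = {g: respondents[g] / population[g] for g in population}

# Weight each respondent by the inverse of that propensity.
weights = {g: 1.0 / propensity[g] for g in population}

# Weighted estimate of the female share of respondents.
weighted_total = sum(respondents[g] * weights[g] for g in respondents)
share_female = respondents["female"] * weights["female"] / weighted_total
print(round(share_female, 3))  # recovers 0.5, the population share
```

The unweighted opt-in estimate here would be 20% female; reweighting by response propensity recovers the 50% known from the baseline.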
Using this method, we estimate that the proportion of female US adult editors was 27.5% higher than the original study reported (22.7%, versus 17.8%), and that the total proportion of female editors was 26.8% higher (16.1%, versus 12.7%). These findings are consistent with other work showing that opt-in surveys tend to undercount women.
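For readers checking the arithmetic, the quoted percentage increases are relative, not absolute — a few lines confirm they follow from the point estimates:

```python
# Relative increase of the adjusted estimates over the original ones.
us_orig, us_adj = 17.8, 22.7      # US adult female editors, %
all_orig, all_adj = 12.7, 16.1    # all female editors, %

print(round(100 * (us_adj - us_orig) / us_orig, 1))     # 27.5
print(round(100 * (all_adj - all_orig) / all_orig, 1))  # 26.8
```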
Overall, these results reinforce the basic substantive finding that women are vastly under-represented among Wikipedia editors.
Beyond Wikipedia, our paper describes a method that online communities can use to estimate contributor demographics from opt-in surveys more credibly than by relying on opt-in data alone. Advertising-intelligence firms like comScore and Quantcast provide demographic data on the readership of an enormous proportion of websites. With these sources, almost any community can use our method (and source code) to replicate a similar analysis by: (1) surveying a community’s readers (or a random subset) with the same instrument used to survey contributors; (2) combining results for readers with reliable demographic data about the readership population from a credible source; (3) reweighting survey results using the method we describe.
Although our new estimates will not help us close the gender gap in Wikipedia or address its troubling implications, they give us a better picture of the problem. Additionally, our method offers an improved tool for building a clearer demographic picture of other online communities in general.
October 14, 2008
Keeping with the Open Access theme of the day, Steve Schultze is talking about Open Access to Govt docs and the law today at the Berkman Center luncheon series.
At the moment, Steve’s giving us a guided tour of the ironically named PACER (Public Access to Court Electronic Records) system that federal and appellate courts use to archive their documents. The site has a search engine that charges you per query (!) and per page.
It’s an absurdity given the fact that these documents are not copyrightable and that the intention of the folks who set this up was to provide legitimate public and open access to court documents. A classic case of high-minded goals and well-meaning government programs that need an overhaul.
As Steve points out, the courts face a serious challenge of cost recovery (all of this digitization and database archiving costs money), but in trying to resolve that issue with PACER, the judiciary has introduced obtuse barriers to entry and facilitated the rise of de facto downstream monopolies (folks re-archive and sell access to PACER documents on the web).
There’s got to be a way out, here. In the meantime, though, I’m going to sit back and enjoy Steve’s bar charts demonstrating how the cost recovery model currently employed by the federal judiciary is pretty much a crock.