March 25, 2012
Following Mako’s extremely ambitious lead, I have compiled an iron-clad list of iron laws. You may already be familiar with one of the two famous iron laws (one of oligarchy and the other of wages). Below, you will find some lesser known examples (that I believe would do Michels and Pareto proud):
The Iron Law of Prohibition
The Iron Law of Government Intervention
The Iron Law of Climate Politics
The Iron Law of History
The Iron Law of Responsibility
The Iron Law of Emulation
The Iron Law of Chaos
The Iron Law of Birtherism
The Iron Law of British Newspaper Stories
The Iron Law of the Horde
The Iron Law of Selfishness
The Iron Law of Tennis
The Iron Law of the Burden of Debt
The Iron Law of Bubbles
The Iron Law of Admissions
The Iron Law of Peonies
The Iron Law of Unintended Consequences
The Iron Law of Anti-Incumbency
The Iron Law of Fiefs
The Iron Law of Interest Rate Restrictions
The Iron Law of Evaluation Studies
The Iron Law of Nationalism and Federation
The Iron Law of Evaluation and Other Metallic Rules
The Iron Law of Full Faith and Credit
The Iron Law of Consensus
The Iron Law of Important Articles
The Iron Law of Currency Crises
The Iron Law of Imprisonment
The Iron Law of Paternalism
The Iron Law of Hollywood Dominance
The Iron Law of Competence Development
The Iron Law of Health Care Expenditures
The Iron Law of Happiness
March 18, 2012
As the Republican presidential candidates continue to duke it out in contentious primary elections around the country, I’ve started to notice the increasingly public signs that the Obama campaign is gearing up for battle. Not surprisingly, I tend to focus on the Obama re-election team’s uses of digital technologies, where a number of shifts may result in important changes for both the voter-facing and internal components of Obama for America’s (OFA) digital operations. I started writing this post with the intent of reviewing some of the recent news coverage of the campaign, but it turned into a bit more of a long-form reflection about what the campaign’s approach to digital tools might mean for democracy.
OFA 2.0: Bigger, Faster, & Stronger (Data)
A fair amount of media coverage has suggested that the major technology-driven innovations within OFA and the Democratic party this election cycle are likely to consist of refined collection and analysis of vast troves of voter data, as opposed to the highly visible social media tools (such as My.BarackObama.com) that made headlines in 2008.
As Daniel Kreiss & Phil Howard elaborated a few years ago, database centralization and integration became core strategic initiatives for the Democratic National Committee after the 2000 election and the Obama campaign in 2008. These efforts have been expanded in big ways during the build-up to the current campaign cycle.
According to the bulk of the (often quite breathless) reporting on the semi-secretive activities of the 2012 Obama campaign, the biggest and newest initiatives represent novel applications of the big data repositories gathered by the campaign and its allies in previous years. These include the imaginatively named “Project Narwhal” aimed at correlating diverse dimensions of citizens’ behavior with their voting, donation, and volunteering records. There is also “Project Dreamcatcher,” an attempt to harness large-scale text analytics to facilitate micro-targeted voter outreach and engagement.
For a vivid example of what these projects mean (especially if you’re on any of the Obama campaign email lists), check out ProPublica’s recent coverage comparing the text of different versions of the same fundraising email distributed by the campaign two weeks ago (the narrative is here and the actual data and analysis are here).
(Side note: in general, Sasha Issenberg’s coverage of these and related aspects of the campaigns for Slate is great.)
What’s Next: “Gamified” and Quasi-open Campaign App Development
As the Republicans sort out who will face Obama in November, OFA will, of course, roll out more social media content and tools. In this regard, last week’s release of the heavily hyped “The Road We’ve Traveled” on YouTube was only the beginning of the campaign’s more public-facing phase.
The polished, professional video suggests that OFA will build on all of the social media presence and experience they accumulated during the last cycle as well as over the intervening years of Obama’s administration.
Less visible and less certain is whether any truly new social media tools or techniques will emerge from the campaign or its allies. Here, there are two recent initiatives that I think we might be talking about more over the course of the next six months.
The first of these started late last year, when OFA experimented with a relatively unpublicized initiative called “G.O.P. Debate Watch.” Aptly characterized by Jonathan Easley in The Hill as a “drinking game style fundraiser,” the idea was that donors committed to give money every time a Republican candidate uttered particular politicized keywords identified ahead of time (e.g., “Obamacare” or “socialist”).
In its attempt to combine entertainment and a little bit of humor with small-scale fundraising, G.O.P. Debate Watch fits with a number of OFA’s other techniques aimed at using digital initiatives to lower the barriers to participation and engagement. At the same time, it incorporates much more explicit game-dynamics, setting it apart from earlier efforts and exemplifying the wider trend towards commercial gamification.
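The pledge mechanic behind G.O.P. Debate Watch is simple enough to sketch in a few lines. The details of the campaign’s actual system were never public, so everything below (the function name, the matching rules, the donor names, and the pledge amounts) is a hypothetical illustration of the general idea:

```python
def tally_pledges(transcript, pledges):
    """Compute what each donor owes given per-keyword pledges.

    transcript: full debate text (matched naively, case-insensitive)
    pledges: {donor: {keyword: dollars_per_utterance}}
    """
    # Naive tokenization: lowercase and strip common punctuation.
    words = [w.strip('.,!?";:') for w in transcript.lower().split()]
    counts = {}
    for kws in pledges.values():
        for kw in kws:
            if kw not in counts:
                counts[kw] = sum(1 for w in words if w == kw)
    # Each donor owes (utterances of keyword) x (pledged amount), summed.
    return {
        donor: sum(counts[kw] * amount for kw, amount in kws.items())
        for donor, kws in pledges.items()
    }

transcript = "Obamacare is socialism. Repeal Obamacare!"
pledges = {
    "alice": {"obamacare": 0.50},
    "bob": {"obamacare": 1.00, "socialism": 2.00},
}
print(tally_pledges(transcript, pledges))
# {'alice': 1.0, 'bob': 4.0}
```

The interesting design feature, from a participation standpoint, is that the donor’s commitment is made before the debate, so the entertainment value comes from watching the tally accumulate in real time.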
The second initiative, which only recently became public knowledge, has just begun: OFA opened a Technology Field Office in San Francisco last week.
The really unusual thing about the SF office is that it appears as though the campaign will use it primarily to try to organize and harness the efforts of volunteers who possess computer programming skills. This sort of coordinated, quasi-open tool-building effort is unprecedented within OFA, which has historically pursued a secretive and closed model of innovation and internal technology development.
If the S.F. technology field office results in even one or two moderately successful projects – I imagine there will be a variety of mobile apps, games, and related tools that it will release between now and November – it may give rise to a wave of similar semi-open innovation efforts and facilitate an even closer set of connections between Silicon Valley firms and OFA.
Is This What Digital Democracy Looks Like?
I believe that the applications of commercial data-mining tools and gamification techniques to political campaigns have contradictory implications for democracy.
On the one hand, big data and social games represent the latest and greatest tools available for campaigns to use to try to engage citizens and get them actively involved in elections. Given the generally inattentive and fragmented state of the American electorate, part of me therefore believes that these efforts ultimately serve a valuable civic purpose and may, over the long haul, help to create a vital and digitally-enhanced civic sphere in this country.
At the same time, it is difficult to see how the OFA initiatives I have discussed here (and others occurring elsewhere across the U.S. political spectrum) advance equally important goals such as promoting cross-ideological dialogue, deliberative democracy, voter privacy, political accountability, or electoral transparency. (Along related lines, Dan Kreiss has blogged his thoughts about the 2012 Obama campaign and its embodiment of a certain vision of “the technological sublime.”)
All the database centralization, data mining, and gamified platforms for citizen engagement in the world will not make a dysfunctional democratic government any more accountable to its citizens, erase broken aspects of the electoral system, or generate a more deeply democratic and representative networked public sphere. Indeed, these techniques have generally been used to grow the bottom line of private companies with little or no concern for whether any broader public goods are created or distributed. Voters, pundits, President Obama, and the members of his campaign staff would all do well to keep that in mind no matter what happens this fall.
March 11, 2012
It’s been a busy week. I spent two days of it attending the Truthiness and Digital Media symposium co-hosted by the Berkman Center and the MIT Center for Civic Media. As evidenced by the heart-warming picture above, the event featured an all-star crowd of folks engaged in media policy, research, and advocacy. Day 1 was a pretty straight-ahead conference format in a large classroom at Harvard Law School, followed on day 2 by a Hackathon at the MIT Media Lab. To learn more about the event, check out the event website, read the twitter hashtag archive, and follow the blog posts (which, I believe, will continue to be published over the next week or so).
In the course of the festivities, I re-learned an important, personal truth about conferences: I like them more when they involve a concrete task or goal. In this sense, I found the hackathon day much more satisfying than the straight-ahead conference day. It was great to break into a small team with a bunch of smart people and work on achieving something together – in the case of the group I worked with, we wanted to design an experiment to test the effects of digital (mis)information campaigns on advocacy organizations’ abilities to mobilize their membership. I don’t think we’ll ever pursue the project we designed, but it was a fantastic opportunity to tackle a problem I actually want to study and to learn from the experiences and questions of my group-mates (one of whom already had a lot of experience with this kind of research design).
The moral of the story for me is that I want to use more hackathons, sprints, and the like in the context of my future research. It is also an excellent reminder that I want to do some reading about programmers’ workflow strategies more generally. I already use a few programmer tools and tactics in my research workflow (emacs, org-mode, git, gobby, R), but the workflow itself remains a kludge of terrible habits, half-fixes, and half-baked suppositions about the conditions that optimize my putative productivity.
Matt Salganik and Karen Levy (both of the Princeton Sociology Department) recently released a working paper about what they call “Wiki Surveys” that raises several important points regarding the limitations of traditional survey research and the potential of participatory online information aggregation systems to transform the way we think about public opinion research more broadly.
Their core insight stems from the idea that traditional survey research based on probability sampling leaves a ton of potentially valuable information on the table. This graph summarizes that idea in an extraordinarily elegant (I would say brilliant) way:
Think of the plot as existing within the space of all possible opinion data on a particular issue (or set of issues). No method exists for collecting all the data from all of the people whose opinions are represented by that space, so the best you – or any researcher – can do is find a way to collect a meaningful subset of that data that will allow you to estimate some characteristics of the space.
The area under the curve thus represents the total amount of information that you could possibly collect with a hypothetical survey instrument distributed to a hypothetical population (or sample) of respondents.
Traditional surveys based on probability sampling techniques restrict their analysis to the subset of data from respondents for whom they can collect complete answers to a pre-defined subset of closed-ended questions (represented here by the small white rectangle in the bottom left corner of the plot). This approach loses at least two kinds of information:
- the additional data that some respondents would be happy to provide if researchers asked them additional questions or left questions open-ended (the fat “head” under the upper part of the curve above the white rectangle);
- the partial data that some respondents would provide if researchers had a meaningful way of utilizing incomplete responses, which are usually thrown out or, at best, used to estimate whether attrition from the study was random (this is the long “tail” under the part of the curve to the right of the white rectangle).
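To make the “white rectangle” image concrete, here is a toy simulation of my own construction (not from the paper): respondents vary in how many questions they are willing to answer, while a traditional fixed-length survey keeps only the first K answers from respondents who complete all K questions. Every number and distributional choice below is an arbitrary assumption for illustration:

```python
import random

random.seed(0)

N_RESPONDENTS = 1000
MAX_QUESTIONS = 50
K = 10  # length of the traditional fixed survey

# Each respondent will answer a random number of questions, skewed so
# that most answer a few and a handful answer many.
willing = [min(MAX_QUESTIONS, int(random.expovariate(1 / 8)) + 1)
           for _ in range(N_RESPONDENTS)]

total_cells = sum(willing)  # all the data "under the curve"

# Traditional survey: keep only respondents who complete all K questions,
# and keep only those K answers from each of them (the white rectangle).
completers = [w for w in willing if w >= K]
rect_cells = len(completers) * K

# Information left on the table:
head = sum(w - K for w in completers)        # extra answers completers would give
tail = sum(w for w in willing if w < K)      # partial responses, discarded

print(f"captured by the rectangle: {rect_cells / total_cells:.0%}")
print(f"lost to the head: {head / total_cells:.0%}")
print(f"lost to the tail: {tail / total_cells:.0%}")
```

The three quantities partition the total by construction, which is exactly the point of the figure: the rectangle, the fat head above it, and the long tail to its right together exhaust the collectable data.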
Salganik and Levy go on to argue that many wiki-like systems and other sorts of “open” online aggregation platforms that do not filter contributions before incorporating them into some larger information pool illustrate ways in which researchers could capture a larger proportion of the data under the curve. They then elaborate some statistical techniques for estimating public opinion from the subset of information under the curve and detail their experiences applying these techniques in collaboration with two organizations (the New York City Mayor’s Office and the Organization for Economic Cooperation and Development, or OECD).
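Wiki surveys of this kind collect pairwise votes (“which of these two ideas do you prefer?”), and any respondent can contribute as few or as many votes as they like. Salganik and Levy’s actual estimator is a Bayesian model well beyond a blog post, but a naive win-rate score (my simplification, with made-up example ideas) conveys the key property that partial participation still carries usable information:

```python
from collections import defaultdict

def win_rates(votes):
    """Score ideas by the fraction of pairwise contests they won.

    votes: list of (winner, loser) pairs. Because each vote is usable
    on its own, respondents who answer only one or two comparisons
    still contribute data rather than being discarded.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return {idea: wins[idea] / appearances[idea] for idea in appearances}

votes = [
    ("bike lanes", "more parking"),
    ("bike lanes", "tax cut"),
    ("tax cut", "more parking"),
]
print(win_rates(votes))
# {'bike lanes': 1.0, 'more parking': 0.0, 'tax cut': 0.5}
```

A raw win rate ignores the strength of the opposition each idea happened to face, which is one reason the paper’s model-based estimates are needed in practice; the sketch only shows why no response has to be thrown away.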
If you’re not familiar with matrix algebra and Bayesian inference, the statistical part of the paper probably won’t make much sense, but I encourage anyone interested in collective intelligence, surveys, public opinion, online information systems, or social science research methods to read the paper anyway.
Overall, I think Salganik and Levy have taken an incredibly creative approach to a very deeply entrenched set of analytical problems that most social scientists studying public opinion would simply prefer to ignore! As a result, I hope their work finds a wide and receptive audience.