January 14, 2013
Aaron Swartz’s suicide over the weekend is a tragedy. His death has affected many people very deeply, including many of my friends who were very close with Aaron.
Personally, I did not know Aaron well, but I regard him as an inspiration – as much for his quiet thoughtfulness and kindness as for his amazing achievements, intellect, projects, and democratic (small “d”) ideals.
I don’t have much to add to some of the heartfelt responses many people (including Cory Doctorow, Larry Lessig, and Matt Stoller) have posted elsewhere; however, as I have thought and read about Aaron over the past couple of days, I have decided that I want to commemorate his life and work through some concrete actions. Specifically, I have made some vows to myself about how I want to live, work, and relate to people in the future. Most of these vows are fundamentally democratic in spirit, which is part of what I find so inspiring about so much of Aaron’s work. Not all of my commitments are coherent enough or sensible enough to list here, but I will put one out there as a public tribute to Aaron:
I will promote access to knowledge by ensuring that as much of my work as possible is always available at no cost and under minimally restrictive licenses that ensure ongoing access for as many people in as many forms as possible. I will also work to convince my colleagues, students, publishers, and elected or appointed representatives that they should embrace and promote a similar position.
This is a very small and inadequate act given the circumstances.
December 11, 2011
Academic peer review tends to be slow, imprecise, labor-intensive, and opaque.
A number of intriguing reform proposals and alternative models exist, and hopefully some of these ideas will lead to improvements. However, whether they do or not, I suspect that some form of peer review will continue to exist (at least for the duration of my career) and that many reviewers (myself included) will continue to find the process of doing the reviews to be time-consuming and something of a hassle.
The most radical solution is to shred the whole process T-Rex style.
This is sort of what has already happened in disciplines that use arXiv or similar open repositories, where working papers can be posted and made available for immediate critique and citation. Such systems have their pros and cons too, but if nothing else they decrease the amount of time, money, and labor that go into reviewing for journals and conferences, while increasing transparency. As a result, they provide at least a useful complement to existing systems.
Over a conversation at CrowdConf in November, some colleagues and I came up with a related, but slightly less radical proposal: maybe you could keep some form of academic peer review, but do it without the academic peers?
Such a proposition calls into question one of the core assumptions underlying the whole process – that reviewers’ years of training and experience (and credentials!) have endowed them with special powers to distinguish intellectual wheat from chaff.
Presumably, nobody would claim that the experts make the right judgment 100% of the time, but everybody who believes in peer review agrees (at least implicitly) that they probably do better than non-experts would (at least most of the time).
And yet, I can’t think of anybody who’s ever tested this assumption in a direct way. Indeed, in all the proposals for reform I’ve ever heard, “the peers” have remained the one untouchable, un-removable piece of the equation.
That’s what got us thinking: what if you could reproduce academic peer review without any expertise, experience or credentials? What if all it took were a reasonably well-designed system for aggregating and parsing evaluations from non-experts?
The way to test the idea would be to try to replicate the outcomes of some existing review process using non-expert reviewers. In an ideal world, you would take a set of papers that had been submitted for review and had received a range of scores along some continuous scale (say, 1 to 5 – like papers reviewed for ACM conferences). Then you would develop a protocol to distribute the review process across a pool of non-expert reviewers (say, using CrowdFlower or some similar crowdsourcing platform). Once you had review scores from the non-experts, you could aggregate them in some way and/or compare them directly against the ratings from the experts.
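The aggregation-and-comparison step could be sketched roughly as follows. This is a toy simulation, not real review data: the paper scores, rater counts, and noise levels are all invented for illustration, and Spearman rank correlation is just one plausible way to measure agreement between the non-expert aggregate and the expert scores.

```python
# Toy sketch: aggregate many noisy non-expert ratings per paper and see
# how well the aggregate tracks expert scores. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_papers, n_raters = 50, 20

# Treat the expert scores as the benchmark, on a 1-5 scale.
expert = rng.uniform(1, 5, n_papers)

# Each non-expert sees the same signal plus substantial individual noise.
nonexpert = np.clip(
    expert[:, None] + rng.normal(0, 1.5, (n_papers, n_raters)), 1, 5
)

# Aggregate by taking the mean across the rater pool.
aggregate = nonexpert.mean(axis=1)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank orders."""
    rx, ry = x.argsort().argsort(), y.argsort().argsort()
    return np.corrcoef(rx, ry)[0, 1]

print(f"single non-expert vs experts: {spearman(nonexpert[:, 0], expert):.2f}")
print(f"mean of {n_raters} non-experts vs experts: {spearman(aggregate, expert):.2f}")
```

Under these (made-up) noise assumptions, the pooled average tracks the expert ranking far more closely than any single non-expert does – which is the basic bet behind distributing the review across a crowd.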
Would it work? That depends on what you would consider success. I’m not totally confident that distributed peer review would improve existing systems in terms of precision (selecting better papers), but it might not make the precision of existing peer review systems any worse and could potentially increase the speed. If it worked at all along any of these dimensions, implementing it would definitely reduce the burden on reviewers. In my mind, that possibility – together with the fact that it would be interesting to compare the judgments of us professional experts against a bunch of amateurs – more than justifies the experiment.
December 17, 2008
Chris Soghoian describes how he bumped up against Google’s questionable ad-sense trademark enforcement policies.
Soghoian’s story is troubling, and it exposes yet another way in which the structure of web traffic has positioned Google as a de facto arbiter of all kinds of legal speech, political salience, and good taste. More broadly, it demonstrates how key actors and institutions exercise influence in the networked public sphere.
For more on that idea, check out Matthew Hindman’s research. In his new book, The Myth of Digital Democracy, Hindman makes a related argument in a number of different ways, not the least of which is his compelling notion of “Googlearchy.” I disagree with Matt on a number of substantive points, but the significance of his analysis is undeniable. His work complements more established models for thinking about how social structure circumscribes certain kinds of thought and action.
One of the fascinating aspects of the Internet is that powerful forms of social order & status originate in seemingly innocuous expressions of aggregated opinions (e.g. the PageRank algorithm). Hindman’s work takes on the notion that such aggregated opinions are somehow equivalent to a utopian radical democracy or a free market of ideas.
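To make the "aggregated opinions" point concrete, here is a minimal sketch of the idea behind PageRank on a hypothetical four-page link graph (the pages and links are invented for illustration). Each link acts as a "vote," and a ranking emerges purely from iterating over those aggregated votes:

```python
# Minimal PageRank sketch on a hypothetical 4-page link graph, showing how
# a ranking emerges from nothing but aggregated link "votes" between pages.
import numpy as np

links = {  # page -> pages it links to (invented example data)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = sorted(links)
n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic transition matrix: M[j, i] = prob. of moving from i to j.
M = np.zeros((n, n))
for src, dsts in links.items():
    for dst in dsts:
        M[idx[dst], idx[src]] = 1.0 / len(dsts)

d = 0.85  # damping factor, as in the original PageRank formulation
rank = np.full(n, 1.0 / n)
for _ in range(100):  # power iteration until (approximate) convergence
    rank = (1 - d) / n + d * M @ rank

print(dict(zip(pages, rank.round(3))))
```

In this toy graph, page C ends up on top simply because the most pages "vote" for it – no editor or expert ever judged its content. That is exactly the kind of innocuous-looking aggregation that, at web scale, produces the durable hierarchies Hindman describes.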
In this sense, his argument parallels the work of economic sociologists, many of whom have analyzed the importance of the “embeddedness” of economic markets. Simply put, the thesis behind the concept of embeddedness is that the sorts of decentralized, disaggregated behaviors that occur in market-like settings are always an extension of the social and cultural contexts in which they occur. It’s a relatively simple idea, but it violates one of the core assumptions of neo-classical economic theory: that markets are a free and accurate expression of individual actors expressing rational preferences for the enhancement of their own wealth and welfare.
Sociologists such as Viviana Zelizer have shown how the economists’ assumptions break down in markets for deeply valued cultural goods such as intimacy and adoption. More recently, a number of scholars (including Marion Fourcade – a professor of mine at Berkeley) have taken up the idea that financial markets are also expressions of (economists’) cultural preferences and not merely an aggregated form of pure rationality.
Considering Hindman’s work and the continuing emergence of experiences like Soghoian’s, I think research on the embeddedness of search technology is a promising topic. Granted, I don’t know if there are many “neo-classical” information theorists out there who would be willing to defend the straw-man position that search technology serves up knowledge in a pure and rational form.
October 14, 2008
Open Access (OA) FAQ
“Why support OA?” Because there’s nothing exclusive about ideas – we can share them at no cost and still develop business models to make a living. A vibrant knowledge ecology will thrive if we learn not to treat intellectual property regulations like legal cudgels to beat others into submission.
“Why does OA matter?” Because Access to Knowledge – whether in the form of software, academic journal articles, patented medicines, technical designs, or cultural products – will facilitate education, economic development, and equitable wealth distribution throughout the world.
“But this is just a blog – you’re not really doing anything about OA.” Actually, I use my blog to promote and enact OA principles (for example, check out my creative commons attribution-share-alike license up in the top of the right-hand sidebar). Also, as a graduate student and a researcher, I try to publish in venues that support OA models of distribution and licensing.
Alright, fine. I’ll “get involved,” just spoon-feed me some more information, please! For details about the day go here and here. If you really want to drink from the firehose, I dare you to subscribe to Peter Suber’s blog. If you still want to learn more about the theories and ideas behind OA, read this article by UC Berkeley law professor Amy Kapczynski, Yochai Benkler’s Wealth of Networks (don’t worry, it’s quick!), and Larry Lessig’s Free Culture.
Aaron, you rampaging nerd, I don’t read – just point me somewhere I can give money! Okay, fine. Support and get involved in the activities of the following organizations: Knowledge Ecology International, Public Knowledge, Universities Allied for Essential Medicines, Public Library of Science, and Essential Action (esp. their Access to Medicines project).
Anything else? Don’t forget to vote this November and please tip your waiter.
June 16, 2008
Here’s an excerpt that is characteristic of Fleischer’s analysis (my emphases):
Copyright enforcement weakens general law enforcement. And it’s expensive. The proposed ACTA treaty would create international legislation turning border guards into copyright police, charged with checking laptops, iPods, and other devices for possibly infringing content, and given the authority to confiscate and destroy equipment without even requiring a complaint from a rights-holder.
It’s characteristic of the dishonesty found in copyright law that the ACTA has been promoted as a treaty aimed to save people from dangerous fake medicine, which has very little to do with issues like “ISP responsibility.” While patents, trademarks, and copyright are significantly different in many respects, copyright industry lobbyists prefer to present their draconian enforcement strategies as a matter of “intellectual property” in general.
The real dispute, once again, is not between proponents and opponents of copyright as a whole. It is between believers and non-believers. Believers in copyright keep dreaming about building a digital simulation of a 20th-century copyright economy, based on scarcity and with distinct limits between broadcasting and unit sales. I don’t believe such a stabilization will ever occur, but I fear that this vision of copyright utopia is triggering an escalation of technology regulations running out of control and ruining civil liberties. Accepting a laissez-faire attitude regarding software development and communication infrastructure can prevent such an escalation.
This argument underscores several reasons why it is completely disingenuous to equate strict IP enforcement with anything resembling a “free market.” Such an equation was an integral part of the Washington Consensus obsession with “strong property rights” and infiltrated the global trade regime with the formation of the WTO and the TRIPS agreement.
It is high time to unbundle reigning notions of property and reconsider whether digital, informational assets deserve the same treatment as scarce, physical resources.
Make no mistake: ACTA is an attempt to take corporate welfare for the copyright and trademark industries to a global level. As such, it threatens the wealth, welfare, and stability of the global political economy.
Cheers to Fleischer for providing another clear and articulate statement against ACTA. The question remains: will the USTR, EC, Japan, and the other ACTA negotiating parties listen?
June 13, 2008
Does anyone have any?
Obviously, there is a slew of industry-sponsored studies that tell us how much profit is lost through trademark and copyright infringements (all of them employing questionable methods and unrigorous theories at best).
There are also numerous press-releases like this one, demonstrating that IPR police can bring down those evil counterfeiters.
But what about a peer-reviewed empirical study that actually supports the hypothesis that punishment is the best way to deal with unauthorized reproduction and use of intangible assets?
I can’t think of any.
Seems like the US Congress, courts, private sector firms, and trade officials ought to test their enforcement hypothesis sometime.
May 21, 2008
The incomparable Jamie Love has an excellent post today on the definition (and mis-definition) of “counterfeit.” It may seem like an arcane concern, but in the context of debates about generic medicines, unlicensed software and music reproduction, and other kinds of exchange in informational goods, the terms we use and the conceptual framing of legal debates have a huge impact.