November 27, 2011
A little while ago, I thought that maybe I had an original idea. As usual, the Internet proved me wrong. This is almost certainly a common experience and may even be generalizable.
My idea was straightforward: having stumbled across a copy of Luc Sante’s NYRB Classics translation of Felix Feneon’s Novels in Three Lines, I believed that the book would make for a good twitter feed or something like that. To illustrate just one of the many reasons why Feneon and the Internet might get along, here is a portrait of Feneon painted by Paul Signac in 1890:
If you’re not familiar with Feneon, the French Wikipedia entry on him is a helpful start (and if, like me, you don’t really read French, the Google Translate version of the page is your friend). Other good places to read more about him and his work are this book review by Julian Barnes and this blogpost (authored by one “Mr. Whiskets”). Basically, Feneon was a Parisian anarchist, literature buff, translator, art critic, and journalist during the late 19th and early 20th centuries. Feneon’s “novels” were actually short “pointillist” reports of current events printed in the newspaper Le Matin. The texts are often funny, violent and ironic, maybe best characterized as a cross between the late, great “bus plunges off a cliff” NYT stories of yore; a poetic police blotter; and “metropolitan diaries” (sans smug New Yorker ‘tude). Here is another, somewhat more serious portrait of Feneon at work:
Never mind all that, though. In fact, anything of substance about Feneon is beside the point here. A few quick searches revealed that I was late to the Feneon party. There exist no fewer than thirteen (!) Feneon twitter accounts and two Feneon-themed tumblrs. As you would expect, some of this nouveau-Feneon content is good and some of it is crap. However, the point stands that long before the idea was even a glimmer in my eye, the Internet had already found Feneon and had reproduced, translated, imitated, and remixed him.
All of this suggests something like a Feneon Principle, or at least a Feneon Corollary to Rule 34 (H/T to Mako for that one).
Somebody on the Internet has already tried to turn anything you can think of into a meme.
Falsifiable hypotheses and empirical evidence to follow…
November 20, 2011
The past two weeks’ protests and police-led violence at UC Berkeley and UC Davis signal both the expansion of the occupy movement as well as the extent of the leadership vacuum at the country’s most prestigious public university. Participants and observers much more eloquent than I have offered thoughtful responses to the situation. However, after reading about the events and media reactions to them, I thought that some recent history behind these campus movements could clarify how things got so bad in California and what they might mean in the coming months.
Most news reports have depicted the protests and confrontations as an outgrowth of the occupy Wall Street and Oakland protests, but in fact, the campus movements have much deeper roots. Four years ago, UC President Mark Yudof and co. responded to the financial shortfall brought on by the California budget crisis with a series of highly unpopular initiatives designed to centralize administrative authority, slash funding for a variety of programs, and avoid any sort of public accountability or debate over these actions. The following year, the union of graduate students and academic staff faced a lengthy, contentious budget negotiation in which the university negotiating team repeatedly undermined the collective bargaining process. Around the same time, a series of unilateral tuition increases provoked rage across many of the campuses and, at Berkeley, culminated in a violent showdown between police and student protesters seeking to occupy a classroom building.
The resulting climate around the campuses has become tense and polarized as the mutual distrust between the administrations on one hand, and an alliance of highly mobilized students, faculty, and staff on the other, has escalated.
The student organizers at Berkeley made a smart tactical decision to harness the momentum of the occupy movements and, in particular, the widespread resentment against the violent police response to the occupation of Frank Ogawa plaza in Oakland. With the November 9 protests, they sought to keep the pressure on their campus administrators as the UC regents planned to approve a new round of tuition increases last week (the meeting, planned to take place in San Francisco, was canceled in the wake of the Berkeley violence).
Chancellor Birgeneau (Berkeley) and his staff, in contrast, failed to learn anything from either their own past mistakes with the budget crisis protests or the errors of mayors across the country in responding to the recent occupations. Faced with a group of students opposed to further university budget cuts, tuition increases, and the widening inequality gap in California and across the country, the administration deployed the UC and Alameda County police departments. In doing so, they chose to enforce the letter of campus rules at the cost of student and faculty safety. The resulting violence was predictable, avoidable, and (from the point of view of building a climate of constructive public debate on campus) counterproductive. Birgeneau’s subsequent defense of the brutality was inexcusable.
The Davis protesters looked to build on the momentum of their Berkeley peers, joining in non-violent solidarity against budget cuts, police brutality, and inequality. Somehow, Chancellor Katehi managed to respond in an even more ham-handed manner than Birgeneau. Not only did she deploy the police – who, along with their pepper spray, proceeded to make national headlines – but she didn’t even plan on facing protesters when she called a press conference later that evening. Not surprisingly, her actions provoked righteous anger (and a poignant, silent confrontation as she left her office) on the part of students and faculty alike.
Today, UC President Mark Yudof entered the fray, delivering slaps on the wrist to his colleagues along with some bland comments condemning the excessive use of force against students and professors. Announcing that he will hold meetings and convene committees to review the events, Yudof delivered what many have come to expect from him in times of systemic crisis: bureaucracy.
In this sense, Yudof’s response is not only inadequate to the situation; it also fails to address the complete breakdown of trust that has now occurred between the UC administrators and their respective constituents. On both campuses, the interests of the administrative elite have become so far removed from those of the students and faculty that the two groups are, perhaps a little too literally, at war. As a result, both Birgeneau and Katehi should go. They should be replaced with leaders who understand how to adopt creative responses that defend free speech and student safety at the cost of bending a few campus restrictions. These new leaders should also undertake an immediate overhaul of UC police crowd management techniques.
To close with a speculative prediction: I suspect that the intensity and extent of the violence on two UC campuses this past week will galvanize support for the students and, by proxy, the occupy movement with which they have aligned themselves. As James Fallows notes, the images coming out of New York, Portland, Oakland, Berkeley and Davis have much in common with those from Selma and Birmingham half a century ago. For many Americans, this sort of violent repression of protest speech will not resonate as either a legitimate or democratic use of state power.
November 13, 2011
I just read this short piece by Richard Van Noorden in Nature about the rising number of retractions in medical journals over the past five years and it got me thinking about the different ways in which researchers fail to deal with failure (the visualizations that accompany the story are striking).
The article specifies two potential causes behind the retraction boom: (1) increased access to data and results via the Internet facilitating error discovery; and (2) creation of oversight organizations charged with identifying scientific fraud (Van Noorden points to the US Office of Research Integrity in the DHHS as an example). It occurred to me in reading this that a third, complementary cause could be the political pressure exerted on universities and funding agencies as a result of the growing hostility towards publicly funded research. In the face of such pressure, self-policing would seem more likely.
Apparently, the pattern goes further and deeper than Van Noorden is able to discuss within the confines of such a short piece. This Medill Reports story by Daniel Peake from last year has a graph of retractions that goes all the way back to 1990, showing that the upturn has been quite sudden.
All of these claims about the causes of retractions are empirical and should/could be tested to some extent. The bigger question, of course, remains: what to do about the reality of failure in scientific research? As numerous people have already pointed out, in an environment where publication serves as the principal metric of production, the institutions, organizations & individuals that create research – universities, funding agencies, peer reviewed journals, academics & publishers – have few (if any) reasons to identify and eliminate flawed work. The big money at stake in medical research probably compounds these issues, but that doesn’t mean the social sciences are immune. In fields like Sociology or Communication where the stakes are sufficiently low (how many lives were lost in FDA trials because of the conclusions drawn by that recent AJS article on structural inequality?), the social cost of falsification, plagiarism, and fraud remains insufficient to spur either public outrage or formal oversight. Most flawed social scientific research probably remains undiscovered simply because, in the grand scheme of policy and social welfare, this research does not have a clear impact.
Presumably, stronger norms around transparency can continue to provide enhanced opportunities for error discovery in quantitative work (and I should have underscored earlier that these debates are pretty much exclusively about quantitative work). In addition, however, I wonder if it might be worth coming up with some other early-detection and response mechanisms. Here are some ideas I started playing with after reading the article:
Adopt standardized practices for data collection on research failure and retractions. I understand that many researchers, editors, funders, and universities don’t want the word to get out that they produced/published/supported anything less than the highest quality work, but it really doesn’t seem like too much to ask that *somebody* collect some additional data about this stuff and that such data adhere to a set of standards. For example, it would be great to know if my loose allegations about the social sciences having higher rates of research failure and lower rates of error discovery are actually true. The only way that could happen would be through data collection and comparison across disciplines.
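To make the idea concrete, here is a rough sketch of what a standardized retraction record might look like. Every field name below is a hypothetical suggestion of mine, not an existing standard; the point is just that a handful of agreed-upon fields would make cross-disciplinary comparison trivial.

```python
# Hypothetical sketch of a standardized retraction record.
# All field names are my own suggestions, not an existing standard.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class RetractionRecord:
    doi: str            # identifier of the retracted article
    discipline: str     # e.g. "medicine", "sociology"
    journal: str
    published: date
    retracted: date
    reason: str         # e.g. "error", "plagiarism", "fabrication"
    discovered_by: str  # e.g. "author", "reader", "oversight body"

# A made-up example record for illustration.
record = RetractionRecord(
    doi="10.0000/example.0001",
    discipline="sociology",
    journal="Example Journal",
    published=date(2008, 5, 1),
    retracted=date(2011, 11, 1),
    reason="error",
    discovered_by="reader",
)
print(asdict(record)["discipline"])
```

With records like these pooled across fields, questions like whether the social sciences really have lower rates of error discovery become simple aggregation queries.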
Warning labels based on automated meta-analyses. Imagine if you read the following in the header of a journal article: “Caution! The findings in this study contradict 75% of published articles on similar topics.” In the case of medical studies in particular, a little bit of meta-data applied to each article could facilitate automated meta-analyses and simulations that could generate population statistics and distributions of results. This is probably only feasible for experimental work, where study designs are repeated with greater frequency than in observational data collection.
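As a toy illustration of how such a warning label might be generated automatically: assume each article carries minimal meta-data, say a topic tag and the direction of its main finding. Everything here (the field names, the 50% threshold, the topic tag) is hypothetical, but the comparison logic is the whole idea.

```python
# Toy sketch of an automated "warning label" generator. Assumes each
# article ships with minimal meta-data: a topic tag and the direction
# of its main finding (+1, -1, or 0 for a null result). All names and
# thresholds here are hypothetical.

def warning_label(new_finding, corpus):
    """Compare a new article's finding against prior published results
    on the same topic; return a caution string if most disagree."""
    same_topic = [a for a in corpus if a["topic"] == new_finding["topic"]]
    if not same_topic:
        return None
    contradictions = sum(
        1 for a in same_topic if a["direction"] != new_finding["direction"]
    )
    pct = 100 * contradictions / len(same_topic)
    if pct >= 50:
        return (f"Caution! This study contradicts {pct:.0f}% of "
                f"published articles on similar topics.")
    return None

# A made-up corpus: three prior studies found a positive effect,
# one found a negative effect.
prior = [
    {"topic": "drug-X-efficacy", "direction": +1},
    {"topic": "drug-X-efficacy", "direction": +1},
    {"topic": "drug-X-efficacy", "direction": +1},
    {"topic": "drug-X-efficacy", "direction": -1},
]
new = {"topic": "drug-X-efficacy", "direction": -1}
print(warning_label(new, prior))  # contradicts 3 of 4 prior studies
```

A real version would obviously need effect sizes, confidence intervals, and some way of matching study designs rather than crude topic tags, which is why this probably only works for experimental designs that get repeated often.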
Create The Journal of Error Discovery (JEDi). If publications are the currency of academic exchange, why not create a sort of bounty for error discovery and meta-analyses by dedicating whole journals to them? At the moment, blogs like Retraction Watch are filling this gap, but there’s no reason the authors of the site shouldn’t get more formal recognition and credit for their work. Plus, the first discipline to have a journal that goes by the abbreviation JEDi clearly deserves some serious geek street cred. Existing journals could also treat error discoveries and meta-analyses as a separate category of submission and establish clear guidelines around the standards of evidence and evaluation that apply to such work. Maybe these sorts of practices already happen in the medical sciences, but they haven’t made it into my neighborhood of the social sciences yet.