Pride Parade, San Francisco 2012. Photo by torbakhopper.

This week’s edition of “Five Things” is brought to you by the letter Q and all the colors of the rainbow! That’s right, it’s pride week in San Francisco and the city has been celebrating in its usual colorful, costumed, semi-clothed and totally fabulous fashion. So put on your hottest, tightest, most colorful dancing socks, and away we go!

  • According to this fascinating article by David H. Freedman in the current issue of The Atlantic Monthly, the no-longer-so-notorious behavioral theories of B.F. Skinner are alive and well in the growing field of mobile health and dieting applications development. The article is well-written and raises a bunch of questions about everything from scientific ethics to the politics of technology to agency and theories of progress and well-being.
  • For you current or aspiring R users out there, I just came across this Cookbook for R by Winston Chang. It has some excellent code examples. Also, in the course of refining some figures for a paper earlier this week, I discovered that Hadley Wickham recently made some significant updates to ggplot2 and has released new documentation for the package.
  • Amara, a.k.a. the program formerly known as Universal Subtitles, is an awesome piece of free (as in freedom and beer) subtitling and transcription software. I’ve been tinkering with it over the past few weeks in an effort to help design a system for crowdsourced video transcription, and while it isn’t quite optimized for that purpose just yet, it seems terrific and I hope to find more reasons to use it soon.

This post marks the latest installment of Mako’s (and, to a lesser extent, my) ongoing series on cliches in academic paper titles.

My selections this time around all incorporate the phrase “old wine in new bottles.” By the numbers, this phrase may not blow away the Iron Laws, Manhattan Projects, invisible hands, frailties, and tangos of the world, but it nonetheless seems to push authors to comparably dizzying heights of rhetorical inspiration.

My favorite examples all share a little bit of extra oenological boldness – instead of merely tacking the phrase “old wine in new bottles” onto a given topic (there are, literally, thousands of paper titles following that model), these authors take the liberty of ever-so-slightly altering the formula. The result is more than just old wine in new bottles – maybe “old wine in slightly cracked, twisted, and re-labeled bottles”… or something like that.
Without further ado, here we go:

Old Wine, Cracked Bottle?
New Bottles, Old Wine: Communicative Language Teaching in China
Percutaneous absorption after twenty-five years: or “old wine in new wineskins”
Carbon-motivated Border Tax Adjustments: Old Wine in Green Bottles?
Self-efficacy and expectancy: Old wine with new labels
Old wine in new bottles, or a new vintage?
Old wine in new bottles tastes better: A case study of TQM implementation in the IRS
Old wine or warm beer: Target-specific sentiment analysis of adjectives
The “new” growth theory: Old wine in new goatskins (!)
Coal tar therapy in palmoplantar psoriasis: old wine in an old bottle?
New Wine: The Cultural Shaping of Japanese Christianity
Old wine, new ethnographic lexicography
Territorial cohesion: Old (French) wine in new bottles?
Old Wine in Old Bottles–The Renaissance of the Contract Clause
New Wine Bursting from Old Bottles: Collaborative Internet Art, Joint Works, and Entrepreneurship
Cybercrimes: New wine, no bottles
Migration, dependency and inequality in the Pacific: Old wine in bigger bottles?

Iron Lawlapalooza

March 25, 2012

Following Mako’s extremely ambitious lead, I have compiled an iron-clad list of iron laws. You may already be familiar with one of the two famous iron laws (one of oligarchy and the other of wages). Below, you will find some lesser known examples (that I believe would do Michels and Pareto proud):

The Iron Law of Prohibition
The Iron Law of Government Intervention
The Iron Law of Climate Politics
The Iron Law of History
The Iron Law of Responsibility
The Iron Law of Emulation
The Iron Law of Chaos
The Iron Law of Birtherism
The Iron Law of British Newspaper Stories
The Iron Law of the Horde
The Iron Law of Selfishness
The Iron Law of Tennis
The Iron Law of the Burden of Debt
The Iron Law of Bubbles
The Iron Law of Admissions
The Iron Law of Peonies
The Iron Law of Unintended Consequences
The Iron Law of Anti-Incumbency
The Iron Law of Fiefs
The Iron Law of Interest Rate Restrictions
The Iron Law of Evaluation Studies
The Iron Law of Nationalism and Federation
The Iron Law of Evaluation and Other Metallic Rules
The Iron Law of Full Faith and Credit
The Iron Law of Consensus
The Iron Law of Important Articles
The Iron Law of Currency Crises
The Iron Law of Imprisonment
The Iron Law of Paternalism
The Iron Law of Hollywood Dominance
The Iron Law of Competence Development
The Iron Law of Health Care Expenditures
The Iron Law of Happiness

Previously in this series (by Mako): frailty, the invisible hand, science as dance.

Truth and conferences

March 11, 2012

Craig Newmark (with an assist from the Colbert-head-on-a-stick puppet) shares his feelings about what he’d like to tell people who use the Internet to spread nefarious lies and misinformation.

It’s been a busy week. I spent two days of it attending the Truthiness and Digital Media symposium co-hosted by the Berkman Center and the MIT Center for Civic Media. As evidenced by the heart-warming picture above, the event featured an all-star crowd of folks engaged in media policy, research, and advocacy. Day 1 was a pretty straight-ahead conference format in a large classroom at Harvard Law School, followed on day 2 by a Hackathon at the MIT Media Lab. To learn more about the event, check out the event website, read the twitter hashtag archive, and follow the blog posts (which, I believe, will continue to be published over the next week or so).

In the course of the festivities, I re-learned an important, personal truth about conferences: I like them more when they involve a concrete task or goal. In this sense, I found the hackathon day much more satisfying than the straight-ahead conference day. It was great to break into a small team with a bunch of smart people and work on achieving something together – in the case of the group I worked with, we wanted to design an experiment to test the effects of digital (mis)information campaigns on advocacy organizations’ abilities to mobilize their membership. I don’t think we’ll ever pursue the project we designed, but it was a fantastic opportunity to tackle a problem I actually want to study and to learn from the experiences and questions of my group-mates (one of whom already had a lot of experience with this kind of research design).

The moral of the story for me is that I want to use more hackathons, sprints, and the like in the context of my future research. It is also an excellent reminder that I want to do some reading about programmers’ workflow strategies more generally. I already use a few programmer tools and tactics in my research workflow (emacs, org-mode, git, gobby, R), but the workflow itself remains a kludge of terrible habits, half-fixes, and half-baked suppositions about the conditions that optimize my putative productivity.

Unknown Fiddler from Southern US Field Trip, 1959 (Lomax Collection, US Library of Congress)

  1. Supposedly, much of the Alan Lomax archive of music will eventually go online. Until then, I console myself with this tiny playlist from the album versions of his “Southern Journey” (and in particular the Fred McDowell track “What’s the Matter Now”).
  2. The New York Public Library released a stereogranimator.
  3. The experimental turk website includes a nice list of Mturk experimentation resources (via John Horton).
  4. Chris Blattman offered some sound recommendations on how to be a better reviewer & respondent.
  5. Henry Farrell (and many others) are taking a stand against Elsevier and you can join.

Jeremy Freese (who I met last week during a brief trip to Evanston and who turns out to be as awesome in person as he is online and in print!) and the scatterplotters revealed this week (gasp!) that nobody who’s anybody pays attention to the page limit guidelines for ASA submissions.

Page limit? What page limit? (photo 2009 by Sara Grajeda cc-by-nc-nd)

This page limit absorbed way too much of a close friend’s time this week, but the fact that many ASA submitters do not pay any attention to it is not a shocker.

Indeed, many ASA attendees treat the conference like you might treat an annoying relative: fundamentally flawed in ways that are both too numerous to mention and too deep to try to be repaired, but nonetheless sufficiently unavoidable once a year that you reconcile your differences and do what you need to do in order to visit.

Having also spent a little bit of time at conferences that are not sociology conferences, I can say that ASA is not extraordinarily bad. Aspects of ICA, CHI, and CSCW are equally broken, and all the brokenness serves as a vivid reminder that institution-building remains a difficult process – even for people who study institutions, collaboration, and human behavior.

That said, there are some pieces of ASA that work quite well, and maybe, if, as olderwoman and Jeremy note in the comments, we want to inform future policy decisions around these issues, it’s worth distinguishing between what’s broken and what’s not a little more clearly.

So, with that in mind, here are a few things that I like about ASA:

  • Socializing with colleagues and peers (In particular, I recommend the Berkeley Sociology department’s annual party).
  • One-stop-shop access to colleagues and friends who you never see in one place otherwise.
  • Cross-generational dialogues with scholars and students of all ages.
  • The occasional great presentation or conversation about research.

And here are some negatives (beyond the page limit):

  • Socializing with colleagues and peers (has its dark side too).
  • A bizarrely large program that is painful to read and navigate.
  • Soul-crushingly boring & nearly uniform format of panels and presentations.
  • An arbitrary, unblind, single review process for submissions.
  • The horrible tools and information made available to conference attendees for searching presentations and panels.

I’d be curious what pieces of other peoples’ positive and negative ASA experiences I’m missing. Other thoughts? Feedback? See you in the comments…

A Modest Academic Fantasy

January 9, 2012

Image credit: curious zed (flickr)

For today’s post, I offer a hasty sketch of a modest academic fantasy: free syllabi.

As a graduate student, I have often found myself searching for and using syllabi to facilitate various aspects of my work.

Initially, syllabi from faculty in my department and others helped me learn about the discipline I had chosen to enter for my Ph.D. Later, I sought out syllabi to design my qualifying exam reading lists and to better understand the debates that structured the areas of research relevant to my dissertation. More recently, I have turned to syllabi yet again to learn about the curriculum and faculty in departments where I am applying for jobs and where I could potentially teach my own courses. When I design my own syllabi, I anticipate that I will, once again, search for colleagues’ syllabi on related topics in order to guide and advance my thinking.

The syllabi I find are almost always rewarding and useful in some way or another. The problem is that I am only ever able to find a tiny fraction of the syllabi that could be relevant.

This is mainly a problem of norms and partly a problem of infrastructure. On the norms side, there is no standard set of expectations or practices around whether faculty post syllabi in publicly accessible formats or locations.

Many faculty do share copies of recent course syllabi on their personal websites, but others post nothing or only a subset of the courses they currently teach.

I am not aware of any faculty who post the syllabi for every course they have ever taught in open, platform-independent file formats to well-supported, open archives with rich metadata support (this is the infrastructure problem).

Given the advanced state of many open archives and open education resources (OER) projects, I have to believe it is not completely crazy to imagine a world in which a system of free syllabi standards and archives eliminates these problems.

At minimum, a free syllabi project would require faculty to:

  • Distribute syllabi in platform-independent, machine-readable formats that adhere to truly open standards.
  • Archive syllabi in public repositories.
  • License syllabi for at least non-commercial reuse (to facilitate aggregation and meta-analysis!).
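To make the first requirement concrete, here is a minimal sketch of what a machine-readable syllabus record might look like. Everything here – the field names, the course, the structure – is a hypothetical illustration, not an existing standard; the point is simply that a format this simple would already support aggregation and meta-analysis.

```python
import json

# Hypothetical syllabus record; all field names and values are
# illustrative assumptions, not part of any real standard.
syllabus = {
    "course": "Sociology 280: Collective Action Online",
    "term": "Spring 2012",
    "instructor": "Jane Doe",
    "license": "CC BY-NC-SA 3.0",
    "readings": [
        {
            "week": 1,
            "citation": "Michels, Robert. 1911. Political Parties.",
        },
    ],
}

# Serializing to JSON keeps the record platform-independent and easy
# for an archive to index, search, and aggregate.
record = json.dumps(syllabus, indent=2)
print(record)
```

A richer version might standardize the citation field (BibTeX, DOI, or the like), which is exactly the kind of bibliographic standard mentioned below.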

In a more extreme version, you might also include some standards around citation formats and bibliographic information for the sources and readings listed in the syllabi.

In any case, some sort of free syllabi project seems doable, useful, and relatively inexpensive (at least in comparison to resource-intensive projects that involve streaming full video and audio of classes).

Update: Joseph Reagle, who is – as usual – much better informed on these topics than I am, responded to my post over a Berkman Center email list. Since Joseph’s message points to some really great ideas/references on this topic, I’m re-publishing it in full below (with his permission):

Aaron S’s posting today about “A Modest Academic Fantasy” [1] (free syllabi) reminded me I wanted to share a post of my own [2] in response to Greg Wilson’s question of “would it be possible to create a ‘GitHub for education’”? [3].

While a super-duper syllabus XML format might be great (as I’ve heard David W discuss) — but would have fork-merge-share problems as Wilson notes — I’ve always (since 2006) provided my syllabus online, in HTML, with an accompanying bibtex file for the reading list. I think this is the best way currently to share without waiting for a new standard.

On the course material front, I recently started sharing my class notes and slides. These are written in markdown — which makes them easy to collaborate on — put up at Github, and are used to generate HTML5 slides (e.g., [4]). I’ve also started putting up classroom best practices and exercises (e.g., [5]) on a personal wiki; I’d love to see something like this go collaborative.

For in class collaboration, I understand Sasha C[ostanza-Chock] has successfully used etherpad. The PiratePad variant even permits wiki-style links. I desperately want a light-weight synchronous editor with wiki-style links but none exist. (etherpad-lite is a great improvement on etherpad in terms of memory requirements, but does not have wiki-style links; I’ll probably end up using Google Docs because I don’t have to worry about any back-side maintenance.)

I’d love to hear from other people about what they are doing!?

[1]: https://fringethoughts.wordpress.com/2012/01/09/modest-academic-fantasy/
[2]: http://reagle.org/joseph/blog/career/teaching/fork-merge-share
[3]: http://software-carpentry.org/2011/12/fork-merge-and-share/
[4]: http://reagle.org/joseph/2011/nmc/class-notes.html
[5]: http://reagle.org/joseph/zwiki/teaching/Exercises/Tasks/Mindmap.html

Thanks, Joseph!

Academic peer review tends to be slow, imprecise, labor-intensive, and opaque.

A number of intriguing reform proposals and alternative models exist and hopefully, some of these ideas will lead to improvements. However, whether they do or not, I suspect that some form of peer review will continue to exist (at least for the duration of my career) and that many reviewers (myself included) will continue to find the process of doing the reviews to be time-consuming and something of a hassle.

The most radical solution is to shred the whole process T-Rex style.

Gideon Burton, 2009, cc-by-sa

This is sort of what has already happened in disciplines that use arXiv or similar open repositories where working papers can be posted and made available for immediate critique and citation. Such systems have their pros and cons too, but if nothing else they decrease the amount of time, money and labor that go into reviewing for journals and conferences, while increasing the transparency. As a result, they provide at least a useful complement to existing systems.

Over a conversation at CrowdConf in November, some colleagues and I came up with a related, but slightly less radical proposal: maybe you could keep some form of academic peer review, but do it without the academic peers?

Such a proposition calls into question one of the core assumptions underlying the whole process – that reviewers’ years of training and experience (and credentials!) have endowed them with special powers to distinguish intellectual wheat from chaff.

Presumably, nobody would claim that the experts make the right judgment 100% of the time, but everybody who believes in peer review agrees (at least implicitly) that they probably do better than non-experts would (at least most of the time).

And yet, I can’t think of anybody who’s ever tested this assumption in a direct way. Indeed, in all the proposals for reform I’ve ever heard, “the peers” have remained the one untouchable, un-removable piece of the equation.

That’s what got us thinking: what if you could reproduce academic peer review without any expertise, experience or credentials? What if all it took were a reasonably well-designed system for aggregating and parsing evaluations from non-experts?

The way to test the idea would be to try to replicate the outcomes of some existing review process using non-expert reviewers. In an ideal world, you would take a set of papers that had been submitted for review and had received a range of scores along some continuous scale (say, 1 to 5 – like papers reviewed for ACM conferences). Then you would develop a protocol to distribute the review process across a pool of non-expert reviewers (say, using CrowdFlower or some similar crowdsourcing platform). Once you had review scores from the non-experts, you could aggregate them in some way and/or compare them directly against the ratings from the experts.
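The comparison step could be sketched very simply. The following is a hypothetical illustration, not an actual study design: the scores are made up, the aggregation rule (a plain mean over each paper’s crowd ratings) is just one of many options, and the correlation is computed by hand to keep the sketch self-contained.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation, computed directly to avoid extra dependencies."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: one expert score per paper, and several
# crowd-worker scores per paper from a crowdsourcing platform.
expert_scores = [4.5, 2.0, 3.5, 1.5, 5.0]
crowd_scores = [
    [4, 5, 4],
    [2, 3, 1],
    [3, 4, 3],
    [2, 1, 2],
    [5, 5, 4],
]

# Aggregate each paper's crowd ratings (here: a simple mean), then ask
# how closely the aggregate tracks the expert judgments.
aggregated = [mean(scores) for scores in crowd_scores]
agreement = pearson(expert_scores, aggregated)
print(round(agreement, 3))
```

In a real test you would also want to vary the aggregation rule (median, trimmed mean, weighting by worker reliability) and check agreement on the accept/reject threshold, not just the raw scores.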

2007 diylibrarian cc-by-nc-sa

Would it work? That depends on what you would consider success. I’m not totally confident that distributed peer review would improve existing systems in terms of precision (selecting better papers), but it might not make the precision of existing peer review systems any worse and could potentially increase the speed. If it worked at all along any of these dimensions, implementing it would definitely reduce the burden on reviewers. In my mind, that possibility – together with the fact that it would be interesting to compare the judgments of us professional experts against a bunch of amateurs – more than justifies the experiment.

A must-read for anyone who ever has to give a slide-show presentation.

Lessig’s blog has a sort of guest post from a physicist named Chris Tunnell. In the post, Tunnell breaks down the elements of a Larry Lessig-style slide presentation. I will be studying Lessig’s techniques (and Tunnell’s description of them) in preparation for a conference in a couple of weeks. If you want people to listen to you the next time you have to give a talk, you’ll do the same.

If you’ve never seen a Lessig slide-show, here’s one he did a while back on why Google books should be covered under a fair-use exception:
