The New York Post (incorrectly) jumps to conclusions about Salah Barhoum, courtesy of crowdsourcing.
By Ian Tingen
I love that individual human behavior can be so ridiculously diverse, and yet, at the aggregate level, is often quite predictable. I love it even more when the real world presents two easily juxtaposed examples of a social phenomenon. Today, I want to tease apart a couple of recently publicized stories about crowdsourcing that met with wildly different degrees of success.
For the uninitiated, crowdsourcing is the act of posing a task or question to a group of people, in hopes that answers/products born of diversity will be better than answers coming from a single source. There are many books and articles out there on the topic; unexpectedly unearthing my copy of James Surowiecki’s 2004 The Wisdom of Crowds was the inspiration for this article. As a buzzword, crowdsourcing pops up in a number of places, and is even a key part of the business models of companies like clothing purveyor Threadless and content providers like YouTube.
Our first example of crowdsourcing comes courtesy of UCLA professor Peter Nonacs. Last week, Nonacs posted an essay that went viral (at least among academic types) called “Why I Let My Students Cheat on Their Exam”. It’s a worthy read, one that I found simultaneously inspiring and frustrating. The interesting bit, in Nonacs’ words:
A week before the test, I told my class that the Game Theory exam would be insanely hard—far harder than any that had established my rep as a hard prof. But as recompense, for this one time only, students could cheat. They could bring and use anything or anyone they liked, including animal behavior experts. (Richard Dawkins in town? Bring him!) They could surf the Web. They could talk to each other or call friends who’d taken the course before. They could offer me bribes. (I wouldn’t take them, but neither would I report it to the dean.) Only violations of state or federal criminal law such as kidnapping my dog, blackmail, or threats of violence were out of bounds.
Essentially, Nonacs set up a classroom analogue to a common real-world situation: a tough problem that had to be tackled by a team. Nonacs goes on to report that the results were stellar; his class “learned Game Theory better than any he had taught before”. He links that performance to the fact that, upon receiving the test, all but 3 of the 27 students immediately started organizing and collaborating. In short, Nonacs’ crowd was as wise as he could have hoped it would be.
Now, contrast Nonacs’ undergraduates with another news item from the end of last week. If you followed the coverage of the Boston bombings, you may have seen that a handful of internet communities tried to discern the identity of the bombers. It didn’t work, to say the least. From an Atlantic piece on the debacle:
…the most crowdsourced terror investigation in American history transformed from Internet sleuthing of FBI photos on Thursday night into a lynch mob — from Reddit to a police scanner to social media and beyond — that led to the outing of even more innocent people as would-be suspects in the Boston Marathon bombing.
While users of internet hubs like Reddit and 4chan can be technologically savvy and have had some success in cracking cases before, they failed spectacularly this go-round. The consequences extended as far as the FBI looking into the people identified, not to mention that many would-be internet detectives started harassing the social media profiles of people now known to be wrongly accused. Not all internet denizens were of one mind on the topic, but there certainly was a crowd, and it was not wise at all.
So what happened?
Crowdsourcing requires a few key conditions to be successful. Nonacs’ classroom had many of these: a manageable group was working on a clearly defined question towards a common goal (get a good grade). The information that Nonacs’ group was working with was already somewhat familiar to them, and while group members had diverse backgrounds, they were united in purpose. The internet example offers an almost perfect inversion of the strengths detailed above: an unwieldy number of people, a vague question (find terrorists?), and a variety of goals (get justice for Boston, break a story, ‘for teh lulz’). There was a HUGE amount of information being filtered by this group, from security camera footage, to amateur photographs and videos, to Twitter, to police scanners… you get the idea. Coupling vagueness with information overload, in internet parlance, can only lead to epic fail.
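Surowiecki’s statistical core, by the way, is easy to see in a toy simulation. The sketch below (all numbers hypothetical, not drawn from either story) has a crowd estimate some true value: when individual errors are independent and roughly unbiased, averaging washes the noise out; when the whole crowd shares a bias, as when everyone fixates on the same misleading photo, averaging preserves the error no matter how large the crowd gets.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0  # the quantity the crowd is guessing at (hypothetical)
N = 1000            # crowd size

# Diverse, independent guessers: individually noisy, unbiased on average.
independent = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]

# A crowd with a shared bias (say, everyone saw the same misleading clue):
# individual noise is identical, but errors no longer cancel out.
shared_bias = random.gauss(30, 5)
biased = [TRUE_VALUE + shared_bias + random.gauss(0, 20) for _ in range(N)]

avg_individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in independent)
crowd_error = abs(statistics.mean(independent) - TRUE_VALUE)
biased_crowd_error = abs(statistics.mean(biased) - TRUE_VALUE)

print(f"typical individual error: {avg_individual_error:.1f}")
print(f"independent crowd error:  {crowd_error:.1f}")
print(f"biased crowd error:       {biased_crowd_error:.1f}")
```

The averaged independent crowd beats almost every individual in it, while the biased crowd is confidently wrong together, which is a fair one-line summary of what happened on Reddit.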
- Reddit and Crowdsourcing: Valuable or Problematic? (techland.time.com)
- Boston bombing identification attempts on social media end in farce (guardian.co.uk)