Crowdsourcing, a portmanteau of “crowd” and “outsourcing,” is what happens when a large, undefined group takes on responsibilities that would normally be delegated to a few experts, trained workers, or even computers. It is a method of handling a task collaboratively whereby each individual’s responsibility for the end goal is greatly diminished, but the cumulative effect is substantial enough to be worthwhile or profitable. The point of crowdsourcing is usually to accomplish a large task both quickly and cheaply.
Currently, the most prominent example of a crowdsourced project is Wikipedia, the website that asks users to submit encyclopedia-style entries, including sources, to an ever-growing, infinitely searchable cache of information. However, crowdsourcing debuted long before the Internet was even a twinkle in “Al Gore’s” eye (for those of you not in on that laugh, there is a widely circulated myth/joke that Al Gore claimed to have invented the Internet). The Oxford English Dictionary, a compilation of over 300,000 lexicographical entries whose first edition was published in 1928, used a large group of volunteer readers to mine English-language writings for undocumented usages, which were then included in the dictionary.
Nowadays, though, crowdsourcing is a whole different game. The Internet provides the capability to reach millions of potential workers cheaply and quickly, and crowdsourcing is fast becoming an effective way to do business, conduct research, turn a profit, and get media attention.
Here are some links to articles about successful attempts at crowdsourcing, and the effects that it has on how stuff gets done in the age of Web 2.0.
In “The Growth of Citizen Science” (first link above), the authors provide a handy example of how scientists are harnessing a huge population of interested amateurs to conduct large-scale data gathering and compilation. By creating a simple interface that is easy for volunteers to use, researchers can turn amateur science hobbyists into an integral part of the scientific process.
Another excellent example of crowdsourced citizen science is occurring right now in Indiana. The Hoosier River Watch project is using lightly trained volunteers to gather water quality data from sites across the state. The information is being compiled into a database that will hopefully be useful in monitoring and predicting the quality of Indiana’s watersheds. Check this article out to see how GC students are getting involved:
In “The Answer Factory,” from Wired Magazine, the corporation Demand Media is profiled, and its strategy of content production and search engine optimization is about as close to crowdsourcing as you can get while still paying your workers. Content producers (writers, filmmakers, and copy editors) can browse a huge database of “assignments” generated by an algorithm that uses search queries and clickthroughs to predict popular topics. The content producers then create articles, instructional videos, or other media objects on those subjects and quickly upload them for between $15 and $25 a pop. It isn’t high quality stuff, but it does the job, and that is what crowdsourcing is best at: providing low caliber material or feedback by the barrel.
One possible backlash of this explosion of crowdsourcing is that by shifting more value and workload onto unpaid or underpaid amateurs, it takes market share away from skilled experts. If one expert can harness hundreds of thousands of workers via the Internet, who would pay for more than one expert?