M T Fahey

Archive for the ‘General’ Category

A Pulitzer for The Onion

In General on June 23, 2011 at 4:39 am

To mark its 1,000th issue, The Onion has launched a campaign, backed by a growing number of celebrities, to gently encourage (coerce) the Pulitzer committee (especially Thomas Friedman) into considering the fake newspaper for an award.

Some of my favorite supporting videos are a surprisingly profane clip by soft-spoken “This American Life” host Ira Glass (below) and a threatened DDoS attack on the Columbia Journalism website, made by Arianna Huffington on behalf of the Huffington Post (also below).

Check out some other clips (e.g. Tom Hanks) at the movement’s Tumblr site, and read this Business Insider article about why an Onion Pulitzer would be fairly appropriate.


 


Some Thoughts on Internet Publication and Filtering

In General, Internet and Technology, Journalism on June 17, 2011 at 1:59 am

Post-Gutenberg Economics

In his two books and many lectures, popular internet commentator Clay Shirky of NYU’s Interactive Telecommunications Program has described the end of the predominance of Gutenberg Economics in publishing. The traditional publishing house setup came to be soon after the creation of the printing press, a response to the financial risk of upfront publishing costs that put the burden of quality-control on the publisher. This economic pattern has applied to most media industries (music, television, books, radio, newspapers) for the last 500 years:

“If a printer produced copies of a new book and no one wanted to read it, he’d lose the resources that went into creating it. If he did that enough times, he’d be out of business. Printers reproducing Bibles or the works of Aristotle never had to worry that people might not want their wares, but anyone who wanted to produce a novel book faced this risk. How did printers manage that risk?

Their answer was to make the people who bore the risk – the printers – responsible for the quality of the books as well. There’s no obvious reason why people who are good at running a printing press should also be good at deciding which books are worth printing. But a printing press is expensive, requiring a professional staff to keep it running, and because the material has to be produced in advance of demand for it, the economics of the printing press put the risk at the site of production. Indeed, shouldering the possibility that a book might be unpopular marks the transition from printers (who made copies of hallowed works) to publishers (who took on the risk of novelty).”

The main point is that the economic connection between physically producing and distributing media and selecting which media are worth publishing is a historical contingency. It no longer holds in the modern world of low-cost internet self-publishing. We are now, says Shirky, in the world of Post-Gutenberg Economics. There are no longer significant financial barriers to becoming a publisher. I do not need a printing press, a network, or a radio tower with FCC-allotted frequencies to make my content available to the world.

The financial risk has been reduced to a negligible level, more or less just the time it takes to create the content itself. Volume has increased, and quality assessment is no longer an economic precondition for publication. In the recent past, Shirky argues in Cognitive Surplus: Creativity and Generosity in a Connected Age, we dedicated the collective free time of this most affluent nation to one-directional consumption (mostly television programs). Now, in the low-cost world of Post-Gutenberg Economics, any ex-consumer can choose to be a producer of content. Clay Shirky looked upon our Wikipedia entries and lolcats and saw that they were good.

Lehrer’s Objection: Consuming properly and creating poorly

In Cognitive Surplus, Shirky points to lolcats as a good candidate for the “stupidest possible creative act.” Still, he maintains that “The real gap is between doing nothing and doing something, and someone making lolcats has bridged that gap.” In a brief review, science writer Jonah Lehrer objected to the suggestion that any act of creation is superior to an act of consumption:

“There are two things to say about this. The first is that the consumption of culture is not always worthless. Is it really better to produce yet another lolcat than watch The Wire? And what about the consumption of literature? By Shirky’s standard, reading a complex novel is no different than imbibing High School Musical, and both are less worthwhile than creating something stupid online. While Shirky repeatedly downplays the importance of quality in creative production–he argues that mediocrity is a necessary side effect of increases in supply–I’d rather consume greatness than create yet another unfunny caption for a cat picture.”

This is a legitimate objection, although Shirky might well respond that what he was celebrating in the book was the rebalancing of the overall production/consumption equation, not an increase in production for its own sake. Although it is not stated explicitly in the book, I suspect that Shirky’s preferred producer, even of lolcats, is still an avid consumer of similar products. If asked directly, I doubt that Shirky would prefer a world in which everyone writes and no one reads, and I would be surprised if Shirky and Lehrer did not both rally around the value of true mental engagement: people becoming part of a network of inputs and outputs.

Looking only at the creative side of the equation, is today’s glut better than the old narrow stream of media production? Shirky acknowledges that the trade-off between quality and quantity is undoubtedly part of the bargain: “Increasing freedom to publish does diminish average quality – how could it not?”

To Shirky, the increase in experimentation and the new diversity of contributors (people and topics previously locked out by economies of scale) make the decrease in average quality worthwhile: “In comparison with a previous age’s scarcity, abundance brings a rapid fall in average quality, but over time the experimentation pays off, diversity expands the range of the possible, and the best work becomes better than what went before.” Even if that best work is buried beneath a vast excess of junk.

Identifying the problem: Filtration

This is the part of the story I find most compelling. As Shirky said in his 2008 talk at the Web 2.0 Expo in New York (but didn’t really mention in his 2010 book), the problem we are facing today isn’t properly framed as one of information overload (as it so often is), but one of filter failure. If we are to have exponentially more content, some of it better than before in some ways but much of it of stunningly low quality or simply irrelevant, how will we filter our content to find what we want?

We want to spend our limited free time consuming worthwhile media, not wading through worthless creations in search of the gems. This need represents a new niche in the ecosystem: decoupled from production, sitting at the front end of access, and therefore potentially lucrative. It is a symptom of Post-Gutenberg economics that this quality-assurance role has been separated from content production and publishing, which traditional media organizations bundled together. This functional fragmentation has some interesting consequences.

Take the Huffington Post as an example of a successful new news organization. In the HuffPost business model, traditional news organizations and independent writers produce content and publish it to the internet, and the site functions as the quality-assurance agent, aggregating content chosen to satisfy HuffPost’s specific reader demographic and presenting it with links, well within fair-use copyright law. On the internet it is content access that lends itself to advertising, so it is the point of access that makes money, especially if the expensive task of content production has been avoided (allowing Arianna Huffington to sell the company for $315 million).

The success of news aggregation services suggests that these organizations have fallen into the ideal slot left open in the new economy. In a Pew study of 199 leading news sites, 47 were classified as being based primarily on aggregation or commentary (although only three of the top ten were in this category).

Another interesting consequence of this new economic system appears in news journalism, where content filtration and quality assurance can be far from synonymous, and where the disruption in revenue flow has been especially disastrous. The reputation of a source like the New York Times is built on vigorous fact-checking and strict journalistic standards. The HuffPost can present NYT information to its readers without taking on any responsibility for its accuracy. In this case, the news producer retains responsibility for the expensive quality of accuracy, while the aggregator is responsible only for presenting interesting and recent stories. This diffusion of responsibility can lead to dangerous situations in modern journalism (see my post on churnalism), such as the recent Texas mass grave hoax and the Gay Girl in Damascus hoax. Shirky’s observation that “the filter for quality is now way downstream of the site of production” seems generally true, but it is interesting that it depends on which “quality” is being discussed.

Different Filters for Different Folks

If I were to make a sweeping prediction, it would be that filtration will necessarily be the main role of any successful new media organization. Excluding my email service, the sites I visit most frequently all serve this role: Google.com, Reddit.com, thebrowser.com, and instapaper.com.

Google’s search engine is a clear example of a super-successful algorithmic filter. The Huffington Post and Newser use human curation. A different sort of human curation (one closer to the free and collective work that Shirky glorifies) can be found on websites like Reddit and Digg, which use members’ votes to move stories to the top of a long list. Reddit is a favorite of mine, as my selection of smaller communities (subreddits) gives me a uniquely personalized list of content that swings between user-created lolcat-like material and breaking international news.
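
To make the mechanism concrete, here is a minimal sketch (in Python) of the kind of vote-driven ranking these sites rely on. It is an illustration of votes plus time decay, not the actual Reddit or Digg algorithm, and the story data is invented:

```python
import math
import time

def hot_score(upvotes, downvotes, posted_at, gravity=45000):
    """Toy 'hot' ranking: net votes scaled logarithmically, decayed by age.

    Illustration only -- not the real algorithm used by Reddit or Digg.
    """
    net = upvotes - downvotes
    # Log scaling: each additional factor of ten in net votes adds the same fixed boost.
    magnitude = math.log10(max(abs(net), 1))
    sign = 1 if net > 0 else (-1 if net < 0 else 0)
    age_seconds = time.time() - posted_at
    # Older stories sink; gravity controls how quickly.
    return sign * magnitude - age_seconds / gravity

now = time.time()
stories = [
    {"title": "Breaking international news", "up": 5200, "down": 400, "at": now - 3 * 3600},
    {"title": "Lolcat of the day",           "up": 900,  "down": 50,  "at": now - 1 * 3600},
    {"title": "Week-old analysis piece",     "up": 8000, "down": 600, "at": now - 7 * 24 * 3600},
]

for s in sorted(stories, key=lambda s: hot_score(s["up"], s["down"], s["at"]), reverse=True):
    print(s["title"])
```

Subreddit subscriptions then simply restrict which stories enter the list in the first place, which is where the personalization comes from.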

Editorial curation of long pieces of written content takes place at sites such as thebrowser, instapaper (which also provides a service for saving content from other sites to read later), longreads.com, and longform.org. These sites sort through established publications (e.g. the New Yorker, the Guardian, or the Atlantic), high-quality blogs, and occasionally user submissions, judging pieces on their individual merit. They are pleasantly egalitarian (a blogger and Noam Chomsky can be equals if they produce equal work) while still driving traffic to the old high-quality publishers.

Not only can anyone produce content; anyone can also be a curator of content. Anyone who consumes a fair amount of media can pick out the best and make a list for others, acting as a content filter that saves people with similar tastes the trouble of sorting through the ever-increasing flood of information. I do just this on my own blog’s Reading List. Twitter also works as a sort of community-driven filter: as people retweet information, they propagate that content throughout the network and make it more likely to be seen by others with similar interests.

Eli Pariser recently released a book, The Filter Bubble: What the Internet Is Hiding from You, about the degree to which many of the most popular internet filters are personalized. It is fair enough to argue that a good filter should be personalized. Although I have yet to read the book, I gather from an interview with Pariser (NPR) that his main objection is that few of us realize how the information being offered to us by these services is being filtered. If we’re filtering for relevance and quality, whose relevance and which qualities? Who does Google think you are, and what information has the algorithm decided you don’t want to see?

Google, Facebook, and Twitter already change their results to cater to the individual user. Given the same search parameters, each of us is returned a different selection when we search or look at our various information feeds. News organizations (e.g. the Washington Post) are also considering customizing their homepages to the interests of the individual. It doesn’t take too much imagination to see how this could have an isolating effect for individuals and a polarizing effect on groups, an “echo chamber” situation. I imagine that the future of information filtration will have to include a system to introduce novelty into content that is presented (Google’s algorithm already does something similar, giving an outlier link a few results down the list). I doubt that the lack of transparency in the details of content filtration will last very long.
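
A crude sketch of what such a novelty-introducing filter might look like, assuming we already have a relevance score for each item and a topic label on it (both hypothetical inputs): most slots go to the most relevant items from the user’s usual topics, but a fixed fraction is reserved for items the personalization would normally hide.

```python
import random

def filtered_feed(items, user_topics, size=10, novelty_slots=2):
    """Fill most of the feed with the most relevant familiar items, but
    reserve a few slots for items outside the user's usual topics.

    `items` is a list of dicts with hypothetical 'topic' and 'relevance' keys.
    """
    familiar = [i for i in items if i["topic"] in user_topics]
    unfamiliar = [i for i in items if i["topic"] not in user_topics]

    # Personalized portion: the most relevant items from familiar topics.
    feed = sorted(familiar, key=lambda i: i["relevance"], reverse=True)[: size - novelty_slots]
    # Novelty portion: a few outlier items, chosen at random.
    feed += random.sample(unfamiliar, min(novelty_slots, len(unfamiliar)))
    return feed
```

Even a blunt rule like this would keep a personalized feed from sealing itself off completely; a real service would presumably be far more subtle about which outliers it chooses.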

Addendum: How I consume media (the future looks bleak)

When I picked up Shirky’s book several days ago, I did so at our local library. I haven’t bought a CD, musical single, book (with the exception of used textbooks), magazine, newspaper, movie, or access to a television show in the last four years.

And yet I have seen the last several seasons of a couple of television shows, I have read the New York Times regularly for years, and I enjoy access to the Atlantic, New York magazine, and dozens of blogs and news sites. There may be advertisements, but as far as I know I’ve never clicked on one, let alone purchased a good or service. I have almost one terabyte of music and movies on my computer, and I’m working my way through the several hundred books that reside on my Amazon Kindle (the only tangible object I bought on the internet in the last year).

Yes, some fraction of the onus for the decline of the media, for the breakdown of the economics that served these industries for so many years, rests on my opportunistic, pirating shoulders. As an unemployed 20-something, I use media products without any qualms, with the rationalization that if I were somehow put in a situation where payment was necessary, I would out of necessity simply stop reading, listening, and watching.

I have no idea how any of these media are going to survive if many people share my outlook, and I suspect many do. It is not surprising that banner advertising on websites is not an adequate source of income (84% of US internet users don’t click on a single ad in a month, and of the ads that are clicked, Gawker’s Nick Denton wrote: “clickthroughs are an indicator of the blindness, senility or idiocy of readers rather than the effectiveness of the ads.”). How could ads be adequate? I don’t know anyone who makes purchases on the internet in the blithe way that those ads seem to presuppose.

Excluding music consumption, I think I spend close to thirty hours a week reading books and internet articles and watching television shows on the internet. If I had any sort of disposable income, I would be willing to pay one lump sum for each of the services I use (that is to say, I would pay one news organization and one internet television service like Netflix). This sum would be nothing close to the amount the NYT wants users to pay to get behind their easily-circumvented paywall ($455/year). It would be like the $30/year I voluntarily give to NPR, despite the fact that I clearly consume more than $30 of NPR programming.

To be clear, it’s not that I don’t think the NYT deserves $455 for the services it provides; it’s that there’s no way a debt-laden, unemployed ex-student like me could ever pay for such a thing, and I see no reason to willingly make myself any more ignorant or culturally stunted than I need to be by choosing not to use easily accessible information and services just because I can’t afford them. I have heard many things about my generation, including that we act more entitled than previous generations, and in this case I wonder if that is true.

If one of the aggregating services I use daily took a donation and spread it proportionally amongst its news contributors, I would give the small amount I could. If I could go into a music store and pay 99 cents for an entire album so I could go home and give it a listen, I would do so. If I knew of an organization that ranked news producers by the quality of their output and then distributed donated money to those groups (and maybe to the individual authors and reporters), I could get behind that financially. It’s about opportunity and convenience, not poorly-enforced commercial contracts. Could we raise money for internet media the way the Grobanites Shirky describes raise money for charity in his book? Could it work at a grassroots level (using new internet organizing resources to solve internet media problems)?
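
The bookkeeping for that first scheme would be trivial. Here is a sketch, assuming the aggregator logs which producer each article I read came from (the reading log and producer names below are made up):

```python
from collections import Counter

def split_donation(donation, reading_log):
    """Divide one lump-sum donation among producers in proportion to how many
    of their articles the reader actually consumed.

    (Rounding means the shares may not sum exactly to the donation.)
    """
    counts = Counter(reading_log)
    total = sum(counts.values())
    return {producer: round(donation * n / total, 2) for producer, n in counts.items()}

# A hypothetical month of reading through an aggregator.
log = ["NYT", "NYT", "Guardian", "NYT", "ProPublica", "Guardian"]
print(split_donation(30.00, log))  # e.g. {'NYT': 15.0, 'Guardian': 10.0, 'ProPublica': 5.0}
```

The hard part is not the arithmetic; it is getting an aggregator, its contributors, and its readers to agree that this is how the money should flow.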

I suppose my question is this: what organization will act as a single trustworthy filter, one that gives me the content I want and the news I think is important, and that lets me pay for content from all those disparate sources in one lump sum, according to my ability, such that the money makes it back to the original producers so they can continue to exist and produce? Is such a thing possible?

A Historical Comment on “Physics and the Immortality of the Soul”

In General, Philosophy, Science on May 29, 2011 at 5:12 am

I am not a physicist or a professional philosopher, and I am not interested in participating in the massive and fiery atheist v gnostic feud that takes place all over the internet every day.

I just finished reading a post by Sean Carroll over at Discover’s physics and astrophysics blog, Cosmic Variance. Sean writes:

Claims that some form of consciousness persists after our bodies die and decay into their constituent atoms face one huge, insuperable obstacle: the laws of physics underlying everyday life are completely understood, and there’s no way within those laws to allow for the information stored in our brains to persist after we die. If you claim that some form of soul persists beyond death, what particles is that soul made of? What forces are holding it together? How does it interact with ordinary matter?

Everything we know about quantum field theory (QFT) says that there aren’t any sensible answers to these questions. Of course, everything we know about quantum field theory could be wrong. Also, the Moon could be made of green cheese.

He slaps down the Dirac equation, the rather triumphant mathematical union of quantum mechanics and special relativity, and asks: what would you change to make the soul fit?
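
For reference, the free-particle form of the equation he invokes can be written (in natural units) as:

```latex
(i\gamma^\mu \partial_\mu - m)\,\psi = 0
```

Carroll’s challenge amounts to asking what extra term a soul would contribute to an equation like this, and why no experiment has ever detected its effects.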

Carroll’s objection is not new.  It is, in essence, an updated version of the argument made by Princess Elizabeth of Bohemia (1618-1680) in her letters to Descartes in 1643. In Treatise of Man, Descartes identified the pineal gland as the “seat of the rational soul,” where (much like Carroll’s “blob of spirit energy,” which “drives around our body like a soccer mom driving an SUV”) it receives sensory information from flowing “animal spirits” and controls the body’s movement through interactions with the ventricles.

Descartes picked the pineal gland because it was singular, central, and small enough for the spirits to move it around:

“My view is that this gland is the principal seat of the soul, and the place in which all our thoughts are formed. The reason I believe this is that I cannot find any part of the brain, except this, which is not double. Since we see only one thing with two eyes, and hear only one voice with two ears, and in short have never more than one thought at a time, it must necessarily be the case that the impressions which enter by the two eyes or by the two ears, and so on, unite with each other in some part of the body before being considered by the soul. Now it is impossible to find any such place in the whole head except this gland; moreover it is situated in the most suitable possible place for this purpose, in the middle of all the concavities; and it is supported and surrounded by the little branches of the carotid arteries which bring the spirits into the brain” (29 January 1640, AT III:19-20, CSMK 143)

Princess Elizabeth’s objection had to do with Descartes’ own distinction and with their (shared) contemporary understanding of physics: how does a substance not extended in space exert an influence on physical objects extended in space? The problem raised in her letters was one of soul-brain interaction:

“I beseech you tell me how the soul of man (since it is but a thinking substance) can determine the spirits of the body to produce voluntary actions. For it seems every determination of movement happens from an impulsion of the thing moved, according to the manner in which it is pushed by that which moves it, or else, depends on the qualification and figure of the superficies of this latter. Contact is required for the first two conditions, and extension for the third. You entirely exclude extension from your notion of the soul, and contact seems to me incompatible with an immaterial thing.”

Our understanding of the physical world has increased quite a bit since 1643, but Descartes was unable to respond coherently to this challenge even within the framework of his own time (he suggested that the Princess conceive of the soul as extended, even if it really is not, and that the soul’s ability to change the physical is simply an empirical fact).

How can the soul move an electron, something in an exhaustively equation-defined system, without being a part of that system (a term in the equation)? If Descartes were alive today, he would probably be worrying about other things.

A Post about Open Science and Incentives

In General, Science on May 25, 2011 at 9:04 pm

Slate Magazine recently published an excerpt from Tim Harford’s new book, Adapt: Why Success Always Starts with Failure. The selection recounts the story of researcher Mario Capecchi’s work on gene targeting, funded by the NIH despite strong recommendations from grant reviewers that he abandon the project in favor of less speculative work. His risk-taking was rewarded by a Nobel Prize, and his reviewers later wrote to him in apology: “We are glad you didn’t follow our advice.”

In defense of the thesis in his book’s title, Harford invokes an economics study on the effects of financial incentives on creative scientific achievement. The paper tests the theoretical framework suggested by Gustavo Manso (2009), in which particular incentive schemes increase the production of innovative ideas (in Harford’s words: “insanely great ideas, not pretty good ones”). According to the researchers:

“The challenge is to find a setting in which (1) radical innovation is a key concern; (2) agents are at risk of receiving different incentive schemes; and (3) it is possible to measure innovative output and to distinguish between incremental and radical ideas. We argue that the academic life sciences in the United States provides a near-ideal testing ground. ”

It was an ideal testing ground. The Howard Hughes Medical Institute (HHMI) investigator program, one source of funding, has long renewal cycles (five years), a robust and detailed review process, and the stated goals of funding “people, not projects” and “pushing the boundaries of knowledge.” This produces a system that gives investigators flexibility in the projects they pursue and time to invest in exploratory efforts. In contrast, the National Institutes of Health (NIH) R01 grant program has a shorter funding period (three years), a stricter review process with less feedback, and an emphasis on funding specific projects. By measuring the number of publications from each investigator that fell into the top citation percentiles, the novelty of the attached keywords (relative to the entire body of literature and to the researcher’s past work), and the range of journals that cited the work, the researchers found evidence that HHMI investigators produced more novel research and more high-profile papers (while also producing a greater number of flops) than NIH controls.
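
As a toy illustration of the first of those measures, this is roughly how one might compute an investigator’s share of papers landing in the field’s top citation percentile. The citation counts below are invented, and the study’s actual methodology is considerably more careful:

```python
import numpy as np

def top_percentile_share(investigator_citations, field_citations, percentile=99):
    """Fraction of an investigator's papers whose citation counts fall at or
    above the given percentile of the whole field's distribution."""
    threshold = np.percentile(field_citations, percentile)
    hits = sum(c >= threshold for c in investigator_citations)
    return hits / len(investigator_citations)

# Invented citation counts: a skewed field-wide distribution and one investigator's papers.
field = np.random.lognormal(mean=2.0, sigma=1.0, size=10000)
investigator = [3, 12, 150, 7, 480, 22]
print(f"Share of papers in the field's top 1%: {top_percentile_share(investigator, field):.2f}")
```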

The researchers are careful to note that their findings “should not be interpreted as a critique of NIH and its funding policies.” After all, researchers who are awarded HHMI grants (and the NIH MERIT controls) have been judged to exhibit extremely high potential. The investment in time (detailed project reviews) and risk (a high degree of freedom and low expectations for immediate results) that the program requires suggests that it may not be beneficial to replace the NIH incentive structure with HHMI-like criteria. The ideal situation may be closer to the one currently in place: a combination of the two incentive systems. A more project-based system may be better for younger, less experienced scientists, while the HHMI system is best reserved for the select group who would flourish with fewer constraints.

How else might small differences in incentive structure determine the quality of scientific output? What implications does the confirmation of the Manso model have for the future direction of scientific investigation?

The incentive structure in scientific research is likely to change drastically in the next decade. Like the field of journalism (another information-gathering and sharing enterprise), the practice of science is poised at the edge of an internet-driven revolution in connectivity. Driving this change are journals like the Public Library of Science (PLoS), which makes scientific papers available to the public via the internet using an author-pays model, and groups like openscience.org, which encourages the free sharing of methodologies and datasets with the public. Some major academic publishers are also moving to a hybrid system in which authors can pay a fee to have their paper made publicly available, and groups like HHMI have agreed to cover the extra charge for their grantees.

The poster child of the open science movement is the Human Genome Project, which sequenced and published the entire human genome in 2003, narrowly beating a proprietary attempt to do the same. Following in the footsteps of that success, all DNA sequences generated by recipients of NIH grants are now required to be entered into the GenBank database, creating a massive searchable repository of our genetic knowledge. Open science has the potential to help scientists wade through massive amounts of data, to find patterns in extremely complex systems, and to substantially increase the speed of scientific progress.

Despite the success of these open science experiments, scientists are understandably slow to switch over. Dan Gezelter at openscience.org writes:

“Right now, the incentive network that scientists work under seems to favor ‘closed’ science. Scientific productivity is measured by the number of papers in traditional journals with high impact factors, and the importance of a scientists work is measured by citation count. Both of these measures help determine funding and promotions at most institutions, and doing open science is either neutral or damaging by these measures. Time spent cleaning up code for release, or setting up a microscopy image database, or writing a blog is time spent away from writing a proposal or paper. The ‘open’ parts of doing science just aren’t part of the incentive structure.”

Time investment is not the only reason the scientific community is resistant to moving to the open system. Opening up your data to analysis by other scientists is necessarily risky, and not something scientists are likely to do en masse until it is fully expected by the community. It involves massive reform of the scholarly publication system, as well as the reputation bookkeeping done by that system. It requires researchers be comfortable with managing their online identity and with using new software tools. It will involve sweeping changes in the way science is reported.

An online publishing system, especially one that encourages self-publishing, would allow scientists to publish smaller chunks of work at a time. Results from individual experiments could be made available as soon as the data is gathered. Data analysis and interpretation could become a much more open process. The smallest publishable unit would shrink, allowing researchers across the world to build on each other’s findings in a fraction of the time it currently takes. Many of the publishing biases that plague the current system (e.g. the unwillingness to publish replications of experiments or negative results) could be eliminated.

It is possible that the ability to publish in this manner, with atomized units (maybe single experiments), would lead to the expectation that scientists publish much more frequently. Having shorter periods of expected achievement would mimic the incentive structure of the NIH rather than the HHMI; in the same way that shorter review periods prevent exploration, expected rapid publication of data on the internet could incentivize less risky experimentation.  In this case, perhaps an increase in HHMI-like funding incentives would be appropriate.

There is a sense in which faster publication in smaller units could cheapen the value of a publication. There would be less effort, less time, and less reputation at stake for each piece released. The current system, with the huge time investment necessary to get a paper published in a traditional journal, encourages lumping of many experiments and forces researchers to piece all of their work together into a convincing narrative.  A scientific paper today may span many years of work. It puts all of a researcher’s eggs into one basket.

Rapid publication of open data prevents researchers from shuffling their experiments to tell a better story. It makes it difficult to change how data is perceived by changing the way it is presented. Peer review could be done online, transparently, by a larger group of people. The shorter time between finishing an experiment and publishing it would automatically help put the experiment in context with other work being done. Review and meta-analysis would be more important than ever to retain the continuity and coherence of the scientific narrative being created (this could be a new role for journals as they lose their base of readers and contributors).

The face of science could be changed for the better, medical advances could be faster, and public understanding of the sciences could be improved, and it could all happen very soon if institutions with pull in the scientific community (the NIH, HHMI, journals, and universities) created the right incentives.

Old video (2009): Clay Shirky on the future of accountability journalism

In General on May 4, 2011 at 8:44 am

Great informal talk by Clay Shirky at the Joan Shorenstein Center on Press, Politics, and Public Policy at Harvard’s Kennedy School.

“So I have no idea how long this transition will take. But I don’t think that some degree of failure and decay is avoidable. I think our goal should be to minimize the depth of that trough, to constrain that trough to the areas we can constrain it to, and to hasten its end. But I don’t think we can get away with a simple and rapid alternative to what we enjoyed in the 20th century — in part because the accidents that held that landscape together in the 20th century were so crazily contingent.”

Or read the complete transcript.

Bin Laden’s Death and Pakistan

In General on May 3, 2011 at 5:21 am

The killing of Osama Bin Laden by US military forces has prompted a number of official comments from leaders around the world, varying from condemnation (Ismail Haniyeh, Hamas) to enthusiastic support. A summary of official reactions can be found at Al Jazeera.

Perhaps one of the most pointed responses came from the Indian home minister, who said:
“We take note with grave concern that part of the statement in which President Obama said that the firefight in which Osama bin Laden was killed took place in Abbottabad ‘deep inside Pakistan’. This fact underlines our concern that terrorists belonging to different organisations find sanctuary in Pakistan.”

Not a surprising comment given India’s uneasy relationship with its neighbor, and pertinent to the United States, whose operations in Afghanistan are not at all independent of events in Pakistan. Although the US tends to emphasize successful cooperation with Pakistan in the War on Terror, the recently released WikiLeaks files add to the conspicuous pile of evidence that groups within Pakistan are actively subverting US efforts. The successful Bin Laden operation is attributable in part to the President’s decision to keep the Pakistani government in the dark.

Great article by famed foreign correspondent Robert Fisk on Bin Laden (whom he interviewed three times) and Pakistan.

From the Telegraph on Wikileaks and Pakistan’s aid to Bin Laden.

The Disastrous Rise of Churnalism

In General on April 26, 2011 at 7:45 am

In a recent post on his Guardian blog, Martin Robbins discusses the apparent replacement of value-added journalism with “churnalism,” a form of reporting “in which press releases, wire stories and other forms of pre-packaged material are used to create articles in newspapers and other news media in order to meet increasing pressures of time and cost without undertaking further research or checking” (as Robbins quoted from Wikipedia). He asks whether this sort of reporting is also prevalent in science journalism, before going on to demonstrate that it certainly is, using examples from the UCLA press office.

I would go so far as to speculate that the problem is in fact much worse in science journalism than in the rest of the industry. Reporters who are not comfortable with the content and language of a press release are more likely to simply replicate its text. As I mention in my Chiang Mai Citylife Magazine article (next month’s issue) about poor press coverage of dengue fever vaccine development in Thailand, it is common for news stories to be based almost entirely on press releases, and those press releases are themselves frequently exaggerated and warped sources of information. It would be interesting to use the site Robbins used for his analysis (http://churnalism.com/) to compare various fields of journalism, but that will have to be left to someone else.
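
Churnalism.com works by comparing article text against a database of press releases. A crude sketch of that kind of comparison (an illustration, not the site’s actual method) is simply to measure how many of an article’s word sequences appear verbatim in a press release:

```python
def ngrams(text, n=5):
    """Set of all n-word sequences in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def churn_score(article, press_release, n=5):
    """Fraction of the article's five-word phrases that also appear verbatim
    in the press release; close to 1.0 means effectively copied and pasted."""
    a, p = ngrams(article, n), ngrams(press_release, n)
    return len(a & p) / len(a) if a else 0.0

# Invented example texts.
release = "Researchers at the university announced today a major breakthrough in vaccine development"
article = "Researchers at the university announced today a major breakthrough in vaccine development, officials said"
print(f"Overlap: {churn_score(article, release):.0%}")
```

Run over whole beats (science, politics, business), a score like this would give a rough way to compare how much of each field’s coverage is churned.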

In an age in which news wires can be created and distributed by anyone with free time and a computer, in which press releases are available on a company’s website, and in which newspapers are dying and the news industry is in crisis, churnalism is a dangerous thing. It’s not unexpected: it’s cheaper and takes less staff to simply move news information from one source to another. But it’s even cheaper for readers to abandon those news outlets altogether and seek that information out themselves using one of the many great computer tools created for that purpose.

H1N1 cases on the rise in Venezuela

In General on March 31, 2011 at 6:03 am

“H1N1 has spread rapidly to some other states in the country, increasing from 12 [es] to 342 [es] cases and 4 deaths in less than a month.

Marcos Díaz Orellana, the governor of the state of Merida [es], suspended classes in the University of Los Andes as part of a preventive measure.”

From Global Voices Online

Journalist Nic Robertson angrily accuses Fox of lying in their ‘human shield’ story

In General on March 23, 2011 at 4:09 am

The Future of the Field: Computational Journalism

In General on March 11, 2011 at 7:51 am

A computational journalism reading list

An excellent post by Jonathan Stray (Associated Press), in which he gives a categorical overview of introductory reading for computational journalism, including data journalism, visualization, computational linguistics, communication technology, free speech, information tracking, filtering/recommendation, and measuring public knowledge.

Investigating thousands (or millions) of documents by visualizing clusters

Stray’s talk at the recent NICAR (National Institute for Computer-Assisted Reporting) conference demonstrates the practical importance of computational techniques in modern reporting.
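
The basic recipe behind that kind of analysis is to turn each document into a word-frequency vector, cluster the vectors, and project them down to two dimensions so the clusters can be drawn. A minimal sketch using scikit-learn, as a stand-in for whatever tooling Stray actually demonstrates (the documents are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

documents = [
    "embassy cable on security cooperation",
    "field report on vaccine distribution",
    "embassy cable on trade negotiations",
    "field report on hospital supplies",
]

# Represent each document by the words that distinguish it from the rest.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)

# Group similar documents together.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Squash the high-dimensional vectors down to 2D for plotting.
coords = TruncatedSVD(n_components=2).fit_transform(vectors)

for doc, label, (x, y) in zip(documents, labels, coords):
    print(f"cluster {label} at ({x:+.2f}, {y:+.2f}): {doc}")
```

With thousands or millions of documents, the printout becomes a scatter plot, and the clusters are where a reporter starts reading.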

Many Eyes

An experiment from IBM with useful visualization tools.