Archive for the ‘Media’ Category


October 04, 2010- http://www.gather.com/viewArticle.action?articleId=281474978571983

How does technology play a role in keeping the Chilean miners both psychologically and physically fit?

As modern-day technology consumers, many people around the world have integrated technology use into their daily rituals. For example, studies have shown that at least half of us turn on our computers first thing in the morning, even before we use the bathroom or drink our coffee.

Technology has so ingrained itself into our daily rituals that it is now considered vital to our mental survival, and it has factored highly into the list of amenities currently being proffered to the 33 Chilean miners who have been stuck half a mile below the surface of the earth since August 5th, when an enormous rockslide blocked their exit from the mine.

As Newsweek noted in a recent article, the miners face incredible odds as a result of the harsh underground living conditions: “To survive, they must endure constant 90 percent humidity, avoid starvation, battle thirst, guard against fungus and bacteria, and stay sane enough to safely do the work necessary to aid their own rescue.”

However, this is not your traditional mining disaster. The 33 Chilean miners are being treated to a modern-day approach to human survival. That means the miners are able to have their laundry done, eat three hot meals a day, and occasionally enjoy ice cream.

As Newsweek has reported, the rescue effort’s lead psychiatrist, Alberto Iturra Benavides, is implementing a strategy which leaves the miners “no possible alternative but to survive” until drillers finish rescue holes, an operation whose completion date is estimated for early November.

What’s more amazing than even the basic services of laundry and hot meals is how technology has been able to play a vital role in the miners’ daily rituals and the quality of their survival a half-mile down. MSN reported that each weekend the miners have been able to communicate with their families via video chat for nearly eight minutes per miner. Also, as Newsweek reported, “When the miners do get moments to relax, they can watch television — 13 hours a day, mostly news programs and action movies or comedies, whatever is available that the support team decides won’t be depressing.” Dramatic television and movies are barred, and the news they receive is censored. The censorship is performed on the miners’ behalf, allowing them only positive and escapist entertainment: nothing too serious or grim.

Interestingly, though television and movies are allowed, personal music players are not. The reason given is that they tend to “isolate people from one another.” The rescue team feels that the most important thing the miners can do is be there for one another and stay united in their efforts to survive. Personal music or game players would impede that effort. Newsweek reports that the lead psychiatrist on the case, Iturra, has proclaimed, “What they need is to be together.”

There are, of course, some constraints on which technology may reach the miners. At this stage in the rescue efforts, any and all technology must be able to fit through the incredibly narrow holes (approximately 3.19”) which are the sole means of communication and transport between the surface of the earth and the miners.

To continue following the efforts to rescue the 33 trapped miners in Chile, including the possibility that they might be rescued as early as late October, check out these links:

http://news.yahoo.com/s/ap/lt_chile_mine_collapse

http://www.salon.com/life/feature/2010/09/16/chile_miners_waiting
http://www.cnn.com/2010/WORLD/americas/08/26/mine.disasters.survivors/index.html


September 22, 2010- http://www.gather.com/viewArticle.action?articleId=281474978539098

Hegel famously proclaimed that “history is a dialectic,” that is, a dialogue between people who may hold differing views but who seek to establish a basis of truth by debating together. In other words, history has no single discernible truth, but more closely approaches the overall goal of “truth” through discussion among all of the voices of history and their personal accounts of what happened.

This quotation of Hegel’s is often cited in the context of discussions about the literary canon, or the “Western canon,” as some refer to it. The term “Western canon” denotes the selection of books, music, art and general culture that have most influenced the shape of Western civilization and culture over time.

A simple search on Wikipedia for either of these terms will tell you much about what they are. However, what Wikipedia doesn’t explicitly tell us is that it is also holding the record of how the modern canon is determined, and how the truth of history is being shaped by the myriad voices which contribute to it every day.

A recent Bits blog post from the New York Times mentioned the trail of edits that the Internet provides to anyone who is looking for it. James Bridle, founder of BookTwo, is particularly interested in what the future of literature holds, but also in how that discussion is playing out and how we can track where it has been. In one of his recent entries Bridle points out that although an article on Wikipedia may tell a specific story, its edits reveal a process of opinion, correction, and the potential biases of each writer. In this respect Wikipedia, and every other constantly updated website, represents an archive of evolving information over time. What interests Bridle is the offer of two distinct stories: one that is front-facing to the reader and one that reveals the behind-the-scenes editing, writing and creative process.

To illustrate the point, Bridle selected the topic of the Iraq war as an entry in the Wikipedia canon and had all of the history of the entries surrounding the Iraq War published into physical volumes. In his entry, Bridle writes, “This particular book — or rather, set of books — is every edit made to a single Wikipedia article, The Iraq War, during the five years between the article’s inception in December 2004 and November 2009, a total of 12,000 changes and almost 7,000 pages.” Bridle notes that the entire set comes to twelve volumes, which nearly approximates the size of a traditional encyclopedia.
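The revision history Bridle printed is publicly queryable: Wikipedia’s MediaWiki API can return an article’s revisions as JSON. Below is a minimal sketch of working with such data. To stay self-contained it parses an inline sample shaped like an API response rather than making a live request, and the page ID, editor names and timestamps are invented for illustration.

```python
# Sketch: summarize an article's edit history from a MediaWiki-style
# JSON response. The inline sample stands in for a live reply from
# https://en.wikipedia.org/w/api.php (action=query&prop=revisions).
import json

SAMPLE_RESPONSE = """{
  "query": {"pages": {"12345": {"title": "Iraq War", "revisions": [
    {"timestamp": "2004-12-01T10:00:00Z", "user": "EditorA"},
    {"timestamp": "2005-03-14T09:30:00Z", "user": "EditorB"},
    {"timestamp": "2009-11-20T18:45:00Z", "user": "EditorA"}
  ]}}}
}"""

def revision_summary(api_json):
    """Return {title: (revision_count, distinct_editor_count)} per page."""
    pages = json.loads(api_json)["query"]["pages"]
    return {p["title"]: (len(p["revisions"]),
                         len({r["user"] for r in p["revisions"]}))
            for p in pages.values()}

print(revision_summary(SAMPLE_RESPONSE))
```

Run against the real API with a large enough revision limit, the same summary logic would reproduce the headline figure Bridle cites: roughly 12,000 changes to the article over five years.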

Which brings us to the favorite comparison of Wikipedia and your parents’ Encyclopedia. Is one or the other more reliable? Who gets to decide what is a part of the overall Western canon? Shouldn’t we all be alarmed by a process in which a child may be permitted to contribute to an online encyclopedia which many now claim is an expert source?

In fact, Bridle’s point reminds us of a standard strategy employed to defend the credibility of Wikipedia and its process against its would-be detractors. The strategy is to cite a story central to the process under which the Oxford English Dictionary was compiled in the 19th century. Simon Winchester’s book, The Professor and the Madman: A Tale of Murder, Insanity, and the Making of The Oxford English Dictionary details a Jekyll and Hyde story of the brilliant but clinically insane Dr. W.C. Minor who provided thousands of entries to the editors of the OED while he was committed at the Broadmoor Criminal Lunatic Asylum. In other words, if a madman may contribute significantly to a tome of the English language which is still very much the authoritative text today, why can a perfectly sane pre-teen not contribute to the modern canon of information about frogs, Major League Baseball, or global warming? Should we be preventing anyone from contributing to the ever-evolving conversation about what is truth and what is history?

As sites such as Twournal – which offers the narcissistic boon of publishing your very own Tweets through time in print form – begin to proliferate, each of us can possess our very own piece of the modern web canon, whether in print or online. As Twournal describes itself, “Over time tweets can tell a story or remind us of moments. In 20 years we don’t know whether twitter will be around but your Twournal will be. Who knows maybe your great grandkids will dig it up in the attic in the next century.”

That means that each of us now has access to print a credible-looking book of our own (often misspelled) musings and meanderings as representative of history, according to us. Yet in the absence of a forum in which people can engage with our Tweeted observations, there’s no real dialectic. It therefore seems safe to conclude that Hegel would have preferred Wikipedia to Twitter, or to your Twournal.


September 22, 2010- http://www.gather.com/viewArticle.action?articleId=281474978538816

Hear that? It’s the sound of the content aggregator death knell. On October 1st, popular web-based RSS reader and news aggregator Bloglines, run by the team at Ask.com, will discontinue service. When asked why, Ask’s team reported “that social media sites like Twitter and Facebook killed it.”

And Bloglines is only the first. Other aggregators such as Google Reader, Digg, Reddit, and StumbleUpon are sure to be next. According to Hitwise, “visits to Google Reader are down 27 percent year-over-year.” The New York Times recently reported “a more pivotal reason that Digg is falling behind, analysts say, is that users are simply spending more time on Facebook and Twitter than they are on Digg.”

How did this happen? Is it truly impossible for content aggregation sites such as Google Reader, StumbleUpon, Digg and Reddit to exist side-by-side with the type of social news aggregation offered by Facebook and Twitter? What does this mean for RSS? RSS, which stands for Really Simple Syndication, is a protocol which pushes website updates to readers around the world so they don’t have to search for new content or endlessly hit refresh on a favorite web page. In 2005 RSS was a game changer. Today? Not so much.
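At its core, what an RSS reader like Bloglines does is mechanical: fetch a feed’s XML and extract each item’s title and link. The sketch below shows that step using an inline feed as a stand-in for a real feed URL (the blog name and links are invented for illustration).

```python
# Minimal sketch of the core of an RSS reader: parse an RSS 2.0 feed
# and list each item's title and link. A real reader would download
# the XML from a feed URL; here it is inlined to stay self-contained.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

entries = read_feed(SAMPLE_FEED)
for title, link in entries:
    print(title, "->", link)
```

Polling many such feeds on a schedule and merging the results is essentially all an aggregator did, which is partly why a social feed of friends’ links could substitute for it so easily.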

According to the Bloglines message about its own end-date, “the Internet has undergone a major evolution. The real-time information RSS was so astute at delivering (primarily, blog feeds) is now gained through conversations, and consuming this information has become a social experience…being locked in an RSS reader makes less and less sense to people as Twitter and Facebook dominate real-time information flow. Today RSS is the enabling technology – the infrastructure, the delivery system.”

As a 2009 NielsenWire post also reported, part of this trend comes from the fact that blogs and news sites are no longer the endgame news tool; our friends are. “Socializers trust what their friends have to say and social media acts as an information filtration tool…If your friend creates or links to the content, then you are more likely to believe it and like it. And this thought plays out in the data.”

Does Mark Zuckerberg know that his company has driven content aggregators to the grave? Undoubtedly, yes. A recent New Yorker profile quoted Zuck as saying, “It’s like hardwired into us in a deeper way: you really want to know what’s going on with the people around you.” In fact, Facebook’s Open Graph feature allows users to see which articles their Facebook friends have read, shared, and liked. “Eventually,” the New Yorker observed, “the company hopes that users will read articles, visit restaurants, and watch movies based on what their Facebook friends have recommended, not, say, based on a page that Google’s algorithm sends them to.”

Some argue that content aggregators, or RSS readers, were always destined for the internet graveyard simply because they were too complicated and allowed users to become completely overwhelmed by the sheer bulk of information being pushed to them. One thing is for sure: if content aggregators don’t find a way to better integrate with, or at least successfully co-exist with, social networking offerings like Facebook and Twitter, they will soon be relegated to the ever-growing category of “old news.”


September 22, 2010 – http://www.gather.com/viewArticle.action?articleId=281474978538594

Sneaking off to smoke cigarettes. Experimenting with alcohol. Sexual experimentation. These are all Hollywood hallmarks and symbols of adolescent youth in the United States. Americans think of the teenage years as the time to get out into the world, try new (and perhaps forbidden) things, and become an adult in the process. But what if, in the process, the adolescent becomes a felon?

Every once in a while a high profile case involving teens and the internet hits the web, and parents start to squirm. Generally, however, these cases highlight how the Net may be perilous to the teen, not how the teen may be perilous to the Net. While in some situations those warnings are legitimate, it may be time for parents to begin to consider another way in which their child’s internet use may be perilous to their future: hacking.

Today’s teens are more tech-savvy than any previous generation, and the generation that follows them will be savvier still. According to a February 2010 study conducted by the Pew Research Center, “Internet use is near ubiquitous among teens and young adults. In the last decade, the young adult internet population has remained the most likely to go online. Over the past 10 years, teens and young adults have been consistently the two groups most likely to go online, even as the internet population has grown and even with documented larger increases in certain age cohorts (e.g. adults 65 and older).”

Thus it is no longer sufficient to think of every teen as wide-eyed and naive about the varied functions and uses of the Net. Many teens are way ahead of the rest of us, hacking and writing code, doing their own programming and creating the next generation’s tools. However, the same urges that drive teens to experiment with drugs and sex, those strong hits of hormones and a sense of invincibility, can also lead them to commit crimes on the web.

Just this week a 17-year-old Australian teen caused a “massive hacker attack on Twitter which sent users to Japanese porn sites and took out the White House press secretary’s feed.” The teenager, Pearce Delphin, simply discovered a security flaw in Twitter’s code and publicized it. The flaw was then exploited by hackers who subsequently wreaked havoc on Twitter’s user base of more than 100 million for nearly five hours. When asked why he would do such a thing, Delphin reportedly replied, “I did it merely to see if it could be done … that JavaScript really could be executed within a tweet…I had no idea it was going to take off how it did. I just hadn’t even considered it.”

But the story gets better: before the Associated Press could even hypothesize what the danger of this hack might have been, 17-year-old Delphin came through with it first: “Delphin said it could have been used to ‘maliciously steal user account details.’” He told the reporters, “The problem was being able to write the code that can steal usernames and passwords while still remaining under Twitter’s 140 character tweet limit.”
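The class of flaw Delphin exposed is worth a moment’s illustration: if a site renders a tweet’s text into its page without escaping it, attacker-supplied markup (such as an onmouseover attribute carrying JavaScript) executes as code. The sketch below is a generic illustration of that principle and the standard fix, not a reconstruction of Twitter’s actual rendering pipeline.

```python
# Sketch of an XSS-style flaw and its standard fix. User text echoed
# into HTML unescaped becomes live markup; escaping it with
# html.escape renders the same text as inert characters instead.
import html

# Attacker-controlled "tweet" text containing a scripted attribute.
malicious_tweet = '<a href="#" onmouseover="alert(document.cookie)">hi</a>'

unsafe_html = malicious_tweet            # inserted raw: browser runs it
safe_html = html.escape(malicious_tweet)  # inserted escaped: plain text

print(safe_html)
```

After escaping, the angle brackets and quotes become entity references (&lt;, &quot;, and so on), so a browser displays the tweet as text rather than interpreting the onmouseover attribute.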

Likewise, in 2008 another 17-year-old, this one from Pennsylvania, admitted to crashing Sony’s PlayStation site after being banned for cheating in a game called SOCOM U.S. Navy Seals. By intentionally infecting the Sony site with a virus, the teenage honors student was able to crash the site for 11 days in November 2008. In that case the kid got lucky: rather than pursue the case as a grand jury investigation, the authorities decided to let the teen’s local juvenile court handle the charges. In the end, the 11th grade student was judged delinquent and charged with unlawful use of a computer, criminal use of a computer, computer trespassing and the distribution of a computer virus.

Somewhat humorously, Net security sites like Symantec and McAfee have pages dedicated to teen use and abuse of the Net. Symantec’s is titled “The Typical Trickery of Teen Hackers,” and addresses questions such as “I discovered that my teenager had figured out my computer password and logged in, resetting the parental controls we had installed. How did this happen?” In its recent 2010 study McAfee reports that “85 percent of teens go online somewhere other than at home and under the supervision of their parents, nearly a third (32 percent) of teens say they don’t tell their parents what they do while they are online, and 28 percent engage with strangers online. The survey results should serve as a wake-up call for many parents.”

While teenage tomfoolery and trickery are generally regarded as humorous (thanks, Hollywood) and as a coming-of-age tendency, the trouble begins when a teen’s future is jeopardized because he or she has not developed a sense for the moral and ethical implications of their actions on the web. Because hacking is not as tangible as, say, stealing a T-shirt at the mall, it is harder for teens to grasp how a few keystrokes can be considered criminal. Yet it is up to today’s and tomorrow’s parents to put in the extra effort to educate teens about how their activities online may jeopardize their extremely valuable futures.


September 12, 2010- http://www.gather.com/viewArticle.action?articleId=281474978513602

As the 2010 regular NFL football season begins, fans are reminded of everything they love about the game: the rushing roar of the home team crowd, the crisp fall weather, the complex plays, the strut and swagger of the scoring players. But fans should also take note of the new technology constantly being deployed and tested on football’s biggest fan base: the TV audience.

It may not be obvious why football fans would be such early technology adopters, but it begins to make more sense as you consider how statistically obsessed and fantasy-football-involved the modern fan is. A Democrat and Chronicle article on the effects of technology on modern NFL football consumption noted that one average fan interviewed is “never without his iPhone as he is constantly fed game updates and statistics each Sunday. At home, he watches games on his new big-screen plasma high-definition television through the Dish Network and writes a fantasy football blog at http://www.ffgeekblog.com.”

The same article listed some interesting stats on NFL media consumption: “While 1 million fans watch NFL games in person each week, an average of 16 million watch on television.” TVbytheNumbers.com reported that, according to Nielsen, this year’s first regular season game between the Minnesota Vikings and New Orleans Saints on September 10th was the most watched first regular season game ever.

With technologies such as high-definition picture quality, the virtual first-down line, access to any game via the Sunday Ticket, replays, and other league scores rotating on the screen, there’s no doubt that the experience of consuming NFL games on TV is better than ever. But at stake are ticket sales for the live games, which suffer in terms of convenience and overall cost. Fewer people buying tickets to live games means more local blackouts. NFL team owners and stadium managers are investigating options such as seatback TV screens to bring that experience to the live game, but mobile and wireless technologies still reign supreme.

All of this adds up to make American football fans (college as well as NFL) some of the biggest consumers of home entertainment centers, TV equipment, and cable and satellite TV packages. However, as the future of network and cable TV looms ever more uncertain, and as web-based services work to enhance the scope of their offerings, it seems inevitable that newly emerging products that combine the TV and web-browsing experience, such as Google TV and Apple TV, will cater to these NFL early adopters with cutting-edge offerings. How they do so, and how much they cater to this influential demographic of TV fans, remains to be seen.


August 30, 2010- http://www.gather.com/viewArticle.action?articleId=281474978482524

At the beginning of this year Mark Zuckerberg famously announced that privacy was dead, stirring the pot and increasing concerns among the majority of internet users that their identities and personal information were being appropriated for capital gain.

Arguably, 2010 has been the year of “location aware technology,” whether the location is two dimensional or three dimensional. These days your computer knows where you’ve been online, where you’re going, and why you buy things there, and your phone can tell any satellite where you physically are on the globe and what advertising you’re passing at that very moment. Clearly, marketers are doing their best to collect as much of that information as possible and to use it.

One of the main issues in the ongoing debate about whether location aware technology and geotagging are net-positive or net-negative developments (or somewhere in between) centers on the concession that advertising and marketing are not going away any time soon. Advertising is an institutionalized facet of American life, especially in major urban centers. That being said, marketers like to argue that with more information they can better speak to a consumer’s interests and needs, as opposed to leading a consumer to buy something he or she doesn’t need.

Leaving that argument aside for a minute, the real concern here is privacy, and educating the masses on how to protect their own privacy. A recent article in the New York Times cautioned readers against geotagging photos at their homes, and cited the example of Adam Savage, one half of the “MythBusters” team, who had geotagged a Twitter photo of his car in front of his personal residence in the Bay Area. The Times pointed out that by doing so, Adam Savage had just informed all of his Twitter followers of his personal address, the make and model of his car, and that he was leaving for work at that very moment: “geotags… are embedded in photos and videos taken with GPS-equipped smartphones and digital cameras. Because the location data is not visible to the casual viewer, the concern is that many people may not realize it is there; and they could be compromising their privacy, if not their safety, when they post geotagged media online.”
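What a geotag actually encodes is simple: EXIF metadata stores GPS latitude and longitude as degrees, minutes and seconds plus a hemisphere reference, and converting those to decimal degrees pinpoints where a photo was taken. The sketch below shows that conversion; the coordinate values are illustrative, not taken from Savage’s photo.

```python
# Sketch of the coordinate math behind an EXIF geotag. Cameras store
# GPS position as (degrees, minutes, seconds) plus an N/S or E/W
# reference; decimal degrees are what mapping software consumes.
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style (deg, min, sec) + hemisphere ref to decimal degrees."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # South and West hemispheres are negative by convention.
    return -decimal if ref in ("S", "W") else decimal

# Illustrative values: a point in San Francisco.
lat = dms_to_decimal(37, 46, 30.0, "N")
lon = dms_to_decimal(122, 25, 12.0, "W")
print(lat, lon)
```

Because this data rides invisibly inside the image file, anyone who downloads a posted photo can run exactly this conversion on its metadata, which is the risk the Times article describes.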

Now with Facebook Places, a new feature which allows users to tag their locations in their status updates, and the increasing use of Twitter and FourSquare, organizations such as the ACLU are concerned that the spread of technology is once again outpacing usage education and awareness of the risks of information abuse: “The organization highlighted the element of the new service that allows users to ‘tag’ an accompanying friend and post his or her location to Facebook – even if the friend does not have an iPhone, which is currently the only platform on which the application is available.”

The other side of this coin involves how browsers and advertisers track our movements online. After all, this is a huge market that Facebook plans to tap: 50 percent of Facebook’s more than 400 million users log in to the site at least once a day, and more than a quarter of that number access the service from mobile devices. However, despite all of the hype, new research shows that most users still decline to announce their location publicly.

According to a recent Forrester Research report, “Just 4 percent of Americans have tried location-based services, and 1 percent use them weekly…Eighty percent of those who have tried them are men, and 70 percent are between 19 and 35.”

Returning to the modern marketer’s argument that the more information they can gather on a person’s interests, habits and locations, the more applicable an ad will be for a consumer: there is strong evidence to support this. Personalized ad retargeting, where ads for specific products that consumers have perused online follow them around while they continue to browse the web, is becoming more pervasive. And marketers are big believers: “‘The overwhelming response has been positive,’ said Aaron Magness, senior director for brand marketing and business development at Zappos, a unit of Amazon.com.”

Still, consumer sentiment about being monitored, whether online or off, reflects lingering unease and a sense of being watched. Ongoing education about how browsers and advertisers collect behavioral information both online and off might serve to eliminate the two-way-mirror feeling that many consumers experience. However, it has not yet proven to completely allay consumer fears about a potentially serious breach of privacy.

In other words, while consumers feel uncertain as to where all of this leaves their privacy, advertisers are increasingly certain of where consumers stand. Literally.


August 16, 2010- http://www.gather.com/viewArticle.action?articleId=281474978448715

The recent attention surrounding Verizon and Google’s agreement about net neutrality has unearthed manifold issues which are buzzing in the minds of the world’s web users: how free is the internet? And is that freedom an active function of American democracy? Much like free and fair elections, First Amendment rights to free speech and the right to congregate, the Internet can be a phenomenal asset to American democracy. In fact, many modern political theorists consider the Web a pillar of the modern public sphere. However, unlike those guarantees, a “neutral” Internet is not promised to Americans under constitutional law.

But the issue also pulls in the more capitalistic challenges of the internet, which include how to continue to strengthen the American broadband infrastructure and how ISPs can profit from the business of providing access without compromising the neutrality of the content. Certainly the US would not benefit from imposing stringent regulations on ISPs seeking to do business in the US, as the US must also consider the recent news that China has just surpassed Japan as the world’s second largest economy, and is pushing hard to become #1. In order to remain competitive in the global economy, the business of improving upon the network infrastructure as well as encouraging healthy competition among ISPs will remain very important for the United States.

The issue of net neutrality is also inextricably enmeshed in the ongoing debate concerning Google’s policies of “Don’t Be Evil,” a mantra that has come under fire in recent years due to political fiascos such as Google’s compromises with China. Now Google stands under fire for compromising on their commitment to net-neutrality, and their credibility in the search market may take a hit as a result.

The last issue implicated in the net-neutrality debate is whether mobile access should be treated the same way that home or PC access is treated. The strains on mobile networks, as evidenced by AT&T’s constant game of infrastructural catch-up since signing on with the iPhone, have been widely covered, so it’s easy to see why Verizon is anxious to nip that issue in the bud with Google at the outset.

Each of these issues is clearly significant enough to require full coverage by the news media, but there are deeper implications for American democracy and the freedom of information in the country. Americans often speak of the “right to access the world’s information” in the context of the glorious early days of the Internet, and of course, of Google’s appearance on the world’s stage. However, how far will Americans go to secure that access as a formal right? And would Americans vote for political regulations and requirements that may ultimately limit the quality of that access in favor of guaranteeing it for all?


July 28, 2010- http://www.gather.com/viewArticle.action?articleId=281474978401470

The Freedom of Information Act (FOIA) is frequently referenced by members of President Obama’s administration in the context of their own transparency efforts. However, most Americans are not aware that the Freedom of Information Act was actually signed into law by President Lyndon Johnson on September 6, 1966.

As a refresher, the act allows for the full or partial disclosure of previously unreleased information controlled by agencies which report to the executive branch of the American government. The act was significantly strengthened between 1995 and 1999 when President Bill Clinton extended the amendments to the FOIA to include the release of previously classified national security documents after a period of 25 years.

President Clinton’s extension of the act specifically released information which revealed a bevy of new information about the Cold War which had been previously unknown to the public, thereby creating a precedent and deadline for delayed but perhaps more educated public debate on the wartime strategies of the US government.

However, it now appears that the latest era of information dissemination might be effectively subverting the original intentions of the FOIA. On July 25th, the website Wikileaks, which operates in order to publish “leaked documents alleging government and corporate misconduct,” released a set of documents entitled the “Afghan War Diary.” The set comprises over 91,000 reports covering the war in Afghanistan from 2004 to 2010, essentially reducing FOIA’s 25-year window to zero.

Predictably, the documents have created an unhappy stir among military and intelligence leaders. The reports were written by a number of different sources at all levels of command, both within and outside the operations in Afghanistan.

The general sentiment of the modern information era claims that with more information we are all better, smarter, healthier and safer. However, this latest leak of classified documents and reports forces us all to re-examine whether we are, indeed, safer as a result of having this type of knowledge. The Wikileaks site itself admits, “We have delayed the release of some 15,000 reports from the total archive as part of a harm minimization process demanded by our source. After further review, these reports will be released, with occasional redactions, and eventually in full, as the security situation in Afghanistan permits,” which would suggest that the original restrictions allowed for by President Clinton’s extension of the FOIA may still hold some water.

While delaying those documents, can we really engage in fully educated discourse about US strategy in Afghanistan? Without the luxury of hindsight, can we accurately determine what works and what doesn’t? These are the tough questions that the US government will continue to face as they grapple with keeping secrets in the face of a world wide web that seeks every day to uncover them.


July 28, 2010- http://www.gather.com/viewArticle.action?articleId=281474978401386

On Monday, July 26, the U.S. Library of Congress reached a groundbreaking decision concerning modern copyright law, thrilling open source advocates the world over. The decision ruled that it is now legal to “jailbreak” a mobile phone. Or, in plainer terms, it is now legal to open up a phone’s controls to accommodate software that the phone maker had not previously authorized.

The ruling is the result of lobbying by the Electronic Frontier Foundation, a nonprofit digital rights group that had argued for these “exemptions” to the Digital Millennium Copyright Act for several years. Enacted in 1998, the Digital Millennium Copyright Act is a law which specifically extends US copyright law to protect intellectual property and prevent copyright infringement on the Internet.

But what are the implications of the ruling? This decision opens up a huge debate about the differences between hardware and software and the way modern users will approach each as they apply to smartphones. As smartphone adoption and the numbers of application developers continue to rise, it is highly possible that users may begin to regard smartphone services and applications as they regard wifi, music and computer software: as something that should be free, and something that should be easy to share. Could it be that in the future hardware will continue to be something that you buy and invest in, but that software is destined to be free?

The ruling has been widely projected on Apple and its dominantly successful iPhone. As the New York Times wrote in its July 26th article covering the decision, “The issue has been a topic of debate between Apple, which says it has the right to control the software on its devices, and technically adept users who want to customize their phones as they see fit.” Apparently, Apple’s arguments with the US Copyright Office in the past claimed that jailbreaking phones would infringe on Apple’s copyrights by using an altered version of Apple’s OS.

However, hackers should be forewarned that this may not prove to be the complete ‘get out of jail free’ card that they are envisioning. Some mobile phone manufacturers, such as Apple, have countered by threatening that phone warranties will not be honored once a phone has been “jailbroken.”

What do you think of the ruling? Is this the newest banner issue for the open source movement? Do you think of hardware and software separately in this regard?


July 23, 2010- http://www.gather.com/viewArticle.action?articleId=281474978387963

Ladies and gentlemen, you may officially toss out your collection of mood rings. Now there’s a better way to check if we’re happy or not: Twitter.

Computer scientists at Northeastern University have just released a study which presents conclusions regarding the relative happiness of America based on “sentiment analysis” performed on tweet content. One key finding that’s making headlines and ruffling feathers is that Americans living on the West Coast are happier than those living on the East Coast.

For the study, the scientists at Northeastern performed the content analysis on nearly 300 million tweets from Americans and then indexed them according to time of day, sentiment, and location. The analysis was performed using the Affective Norms for English Words, or ANEW, system developed at the NIMH Center for Emotion and Attention at the University of Florida. As described in its manual, ANEW was developed “in order to provide standardized materials that are available to researchers in the study of emotion and attention.” Basically the ANEW system allows researchers to assign different values to a set of pre-selected words in order to determine a relative scale of emotions.

However, as is often the case with attempts to project standardized values onto subjective human behavior, ANEW is not completely reliable. As one article discussing the paper’s findings noted, “if someone tweets ‘I am not happy’, the system counts the tweet as positive because of the word ‘happy’.” That being said, even as an imperfect tool, the practice of “sentiment analysis” using the ANEW guidelines is gaining popularity among large corporations as a tool to measure brand awareness and reactions.
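The scoring approach described above can be sketched in a few lines: each rated word carries a pre-assigned pleasantness value, and a tweet’s score is the average over its rated words. The mini-lexicon below is invented for illustration (the real ANEW list rates roughly a thousand words on a 1-to-9 scale), and the sketch deliberately reproduces the negation blind spot the article mentions.

```python
# Toy ANEW-style sentiment scoring: average the valence (1 = very
# unpleasant, 9 = very pleasant) of rated words in a text. Words not
# in the lexicon, including 'not', are simply ignored.
ANEW_LIKE_LEXICON = {"happy": 8.2, "sad": 1.6, "love": 8.7, "war": 2.1}

def tweet_valence(text):
    """Average valence of rated words in a tweet; None if none are rated."""
    scores = [ANEW_LIKE_LEXICON[w] for w in text.lower().split()
              if w in ANEW_LIKE_LEXICON]
    return sum(scores) / len(scores) if scores else None

print(tweet_valence("I am happy"))      # scored as strongly positive
print(tweet_valence("I am not happy"))  # identical score: 'not' is ignored
```

Both tweets receive the same high score because the bag-of-words average has no notion of negation, which is exactly the unreliability the cited article points out.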

Unfortunately for the Northeastern University study, most of the results aren’t exactly earth-shattering revelations. For example, apparently most of us hate our jobs and prefer the weekends. But the video the researchers put together does offer a nice visual of how happiness or relative gloominess progresses over the course of an average workday.

What do you think? Do you think text-based “sentiment analysis” is a legitimate form of making these types of conclusions? Do you think the West Coast is, in fact, happier than the East?