Archive for the ‘Industry Research’ Category


Just because Twitter is an American company, does that mean it doesn’t have to play by other countries’ laws when it becomes embroiled in legal cases involving free speech?

That’s exactly the sort of mess that Twitter finds itself in today in the U.K., where a “British soccer player has been granted a so-called super-injunction, a stringent and controversial British legal measure that prevents media outlets from identifying him, reporting on the story or even from revealing the existence of the court order itself” in order to avoid being identified by name in scandalous tweets. Unfortunately for the player, the super-injunction has been ineffective, and “tens of thousands of Internet users have flouted the injunction by revealing his name on Twitter, Facebook and online soccer forums, sites that blur the definition of the press and are virtually impossible to police.”

But I would argue that what is being blurred here is not necessarily the definition of the press, but rather the physical borders of a country where it meets the nebulous nature of the net.

How do we reconcile the physical, geographical and legal boundaries in which we live with the boundless expanses of the internet? I’m sure many people would agree that the democratic (in that it’s arguably free, fair and participatory) nature of Twitter’s platform and mission is inherently American, and that Twitter’s ‘Americanness’ is built into its very code. So how do you transplant an American messaging platform such as Twitter into other countries and then expect it to be above, or fly below, the laws of those countries?

“Last week…the athlete obtained a court order in British High Court demanding that Twitter reveal the identities of the anonymous users who had posted the messages.” But back in January of this year, as the New York Times reports, “Biz Stone, a Twitter founder, and Alex Macgillivray, its general counsel, wrote, ‘Our position on freedom of expression carries with it a mandate to protect our users’ right to speak freely and preserve their ability to contest having their private information revealed.’”

So what law should be followed in this case? According to the NYTimes, “Because Twitter is based in the United States, it could argue that it abides by the law and that any plaintiff would need to try the case in the United States, legal analysts said. But Twitter is opening a London office, and the rules are more complicated if companies have employees or offices in foreign countries.”

Yet our technologies and our corporations are very much U.S. representatives overseas. The offices of Google, Microsoft, Twitter and others in other parts of the world are nearly tantamount to U.S. embassies abroad. These companies in large part bear the brunt of representing American ideals and encapsulate American soft power. Even the average Chinese person who may never encounter a flesh-and-blood American will most likely interact with many examples of American cultural goods in his or her lifetime, largely due to the global proliferation of our technologies and media. Which is, in large part, why Twitter and Google have been banned in China: too democratic for the Chinese government’s liking.

In his chapter (Chapter 1.4) of the Global Information Technology Report (GITR), Cesar Alierta of Telefónica argues that we are in the middle of “the fifth revolution.” The first was the Industrial Revolution, then came steam power, then electricity, then oil, and now we are in the fifth: the information and communication technology revolution. He writes, “Each of these eras has entailed a paradigm shift, more or less abrupt or disruptive, which has led to profound changes in the organization of the economy, starting with individual businesses, and eventually, transforming society as a whole.”

What if we assume that, even if it’s not the original intent, the tacit intent of a technology is to become embedded in someone’s life until it’s nearly impossible to remember living before it? If America’s technologies are little carriers of soft-power democratic beliefs and practices, aren’t those beliefs becoming embedded as well? If so, where, really, do we draw the line about the use of a technology in a country other than the one where it was invented?

And though this is indeed an example of a conflict between two very developed countries (the U.S. and the U.K.), this tension may be one of the greatest barriers to ICT adoption in emerging and developing economies.

In their chapter (Chapter 1.2) of the 2011 Global Information Technology Report, Enrique Rueda-Sabater and John Garrity of Cisco Systems, Inc. argue that the “treatment of broadband networks…as basic infrastructure; the recognition that competition is one of the best drivers of technology adoption, and imaginative policies that facilitate access to spectrum and to existing infrastructure that can be shared by networks” are necessary preconditions to the accelerated adoption of Information and Communication Technologies (ICT). However, the beliefs that underlie those preconditions, 1) that all citizens of a country deserve unlimited access to the internet as a basic human right, and 2) that competition (which can be read here as capitalism) is one of the best drivers of technology adoption, do not seem to be universal values.

Certainly the belief that unlimited access to the internet is a basic human right is fast-growing among the developed economies of the world. As Karim Sabbagh, Roman Friedrich, Bahjat El-Darwiche, and Milind Singh of Booz & Company write in their chapter (Chapter 1.3) on “Building Communities Around Digital Highways,” “In July 2010…the Finnish government formally declared broadband to be a legal right and vowed to deliver high-speed access (100 megabits per second) to every household in Finland by 2015. The French assembly declared broadband to be a basic human right in 2009, and Spain is proposing to give the same designation to broadband access starting in 2011.”

But Finland and Spain are both democracies, and France is a republic with strong democratic traditions. Democracies tend to believe in transparency, accountability and the free dissemination of information, so the adoption of technologies that put the ability to freely disseminate and consume information squarely in the hands of the people naturally jibes with those beliefs. But that is not so in non-democratic societies. I would thus argue that some well-established form of democracy should also be considered a precondition for the accelerated adoption of ICT. And if a country has already heavily adopted and invested in ICT, as Britain has, then, as we have seen here, the accelerated deployment of ICT will also bring about accelerated petitioning for expanded democratic rights among its people.


The other day I was reading through my May issue of Wired Magazine, and I came across a short article about a newly developed technique in online marketing that will soon become everyone’s new reality. The new technique is called “persuasion profiling,” and it’s an offshoot of the personalization and recommendation engine modes of online marketing. As the author of this article described the new technique, “it doesn’t just find content you might enjoy. It figures out how you think.”

Basically this new technique doesn’t just monitor what you are lured into clicking on; it also takes note of which strategies of persuasion work best on you, and which don’t. As the author of the article explained, “By alternating the types of pitches, Appeal to Authority (‘Malcolm Gladwell says you’ll like this’), Social Proof (‘All your friends on Facebook are buying this book’), and the like, [the scientist] could track which mode of argument was most persuasive for each person.”

Once enough information about your psychological weaknesses is uncovered, web-based marketers will be able to profile which types of advertising most appeal to those areas of weakness, and exploit them to help sell you more stuff. Additionally, the studies found that your weaknesses are the same no matter what the product: clothes, home furnishings, cars, and so on.
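To make the mechanics concrete, here is a minimal sketch of how this kind of per-person persuasion profiling might work. The strategy names and the simple click-rate heuristic are my own illustrative assumptions, not details from the Wired article:

```python
import random
from collections import defaultdict

# Hypothetical persuasion strategies; the names are illustrative only.
STRATEGIES = ["authority", "social_proof", "scarcity"]

class PersuasionProfile:
    """Tracks, per user, how often each pitch style leads to a click."""

    def __init__(self):
        self.shown = defaultdict(int)    # strategy -> times shown
        self.clicked = defaultdict(int)  # strategy -> times clicked

    def record(self, strategy, clicked):
        self.shown[strategy] += 1
        if clicked:
            self.clicked[strategy] += 1

    def best_strategy(self):
        # Explore randomly until every strategy has been tried at least once,
        # then exploit whichever has the highest observed click rate.
        untried = [s for s in STRATEGIES if self.shown[s] == 0]
        if untried:
            return random.choice(untried)
        return max(STRATEGIES, key=lambda s: self.clicked[s] / self.shown[s])

profile = PersuasionProfile()
profile.record("authority", clicked=False)
profile.record("social_proof", clicked=True)
profile.record("scarcity", clicked=False)
print(profile.best_strategy())  # "social_proof" has the highest click rate
```

Real systems are of course far more sophisticated, but the core loop is exactly this: show, observe, and bias future pitches toward whatever worked on you before.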

So I found this article pretty interesting and well laid out. Then one day, while we were discussing the annoyingly narrow scope of our respective Facebook news feeds, my brother mentioned a concept developed by author Eli Pariser called “online filter bubbles.” Pariser’s argument is basically that the programmers of the modern web are personalizing our content to such an extent that they are, in essence, selecting what’s important FOR US, rather than the other way around. My brother suggested I watch Pariser’s TED Talk on the subject, which I did, and you should too.

Then I realized, the article in Wired and this TED Talk? Same guy (speaking of filter bubbles…?).

Well, he’s promoting a book, so we can’t blame him for being everywhere. Besides, this really is an excellent TED Talk; it’s a good example of why TED Talks are so compelling. [I should say here, if you’re not familiar with TED Talks: a) where the hell have you been?, and b) go get familiar, NOW.]

In his talk on the concept of online filter bubbles, Pariser starts with an anecdote: “A journalist was asking Mark Zuckerberg a question about the news feed. And the journalist was asking him, ‘Why is this so important?’ And Zuckerberg said, ‘A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa.’ And I want to talk about what a Web based on that idea of relevance might look like.”

So what he’s getting at is a few things. 1) The internet is really, really large. Anyone who has attempted to explore its depths knows this. 2) We are an information society; there is so much information flying at us at any given time, from any number of devices, that it’s impossible to stay afloat. 3) But that doesn’t mean that any platform, company or tool gets to decide what IS important, and what is NOT.

In response to this, my first thoughts flew to Jürgen Habermas and his theories on the centrality of a healthy public sphere to the success of a democracy.

What is a “public sphere” you ask? OK, this is where Wikipedia becomes useful, folks. Do me a favor and look it up here so I don’t have to expound that much on it. But the gist of the concept of a public sphere is this: “The public sphere is an area in social life where individuals can come together to freely discuss and identify societal problems, and through that discussion influence political action.”

Many of you who are now being introduced to this concept for the first time will see the very obvious correlation to the original intent of the internet as a social and discursive space where people could freely communicate. Only, Habermas introduced this concept pre-internet, around 1962. And he was largely referencing the importance of the press when he introduced it. Smarty pants, eh? Anyhow, the importance of the internet as the focal embodiment of a modern public sphere is what Pariser is getting at here, and he sort of spells that out later in his talk:

“In 1915, it’s not like newspapers were sweating a lot about their civic responsibilities. Then people noticed that they were doing something really important. That, in fact, you couldn’t have a functioning democracy if citizens didn’t get a good flow of information. That the newspapers were critical, because they were acting as the filter, and then journalistic ethics developed. It wasn’t perfect, but it got us through the last century. And so now, we’re kind of back in 1915 on the Web. And we need the new gatekeepers to encode that kind of responsibility into the code that they’re writing.”

This is where it got really good, because Pariser basically called out Larry & Sergey in front of the TED audience:

“I know that there are a lot of people here from Facebook and from Google — Larry and Sergey — people who have helped build the Web as it is, and I’m grateful for that. But we really need you to make sure that these algorithms have encoded in them a sense of the public life, a sense of civic responsibility. We need you to make sure that they’re transparent enough that we can see what the rules are that determine what gets through our filters. And we need you to give us some control, so that we can decide what gets through and what doesn’t.”

Which is really the battle cry of this talk, and his whole point. He is asking major web-based companies to relinquish some of the control that they have actively seized over our internet use. In essence, Pariser is waving a red flag: the new waves of much-lauded personalization and persuasion analysis are cutting the scope of each of our online experiences down into pre-conceived, pre-determined pathways based on past behavior.

Anyone who was a teenager (or was previously someone they don’t currently fully admire) can see the error in this strategy. As humans we change, we grow, we evolve. Pariser’s point is that we need to be exposed to new influences and new information in order to continue to evolve and grow: “Because I think we really need the Internet to be that thing that we all dreamed of it being. We need it to connect us all together. We need it to introduce us to new ideas and new people and different perspectives. And it’s not going to do that if it leaves us all isolated in a Web of one.”

I think he’s completely right. Actually, it makes me laugh because when you really break it down, marketers and advertisers are paying top dollar to help develop persuasion analysis and personalization technologies based on our previous behavior. All of that money and time invested goes into analyzing our histories online, but what they’re really trying to do is convince us to create a new version of ourselves by purchasing their products. Odd, no?

UPDATE: Pariser has a new article in the New York Times: http://www.nytimes.com/2011/05/23/opinion/23pariser.html?src=recg


The theme of this 10th edition of the Global Information Technology Report is “Transformations 2.0.” A large theme of this report every year, and the basis for its relevance, is the set of connections it draws between economic development and information and communication technologies (ICT). Namely,

“The next decade will see the global Internet transformed from an arena dominated by advanced countries, their businesses, and citizens, to one where emerging economies will become predominant. As more citizens in these economies go online and connectivity levels approach those of advanced markets, the global shares of Internet activity and transactions will increasingly shift toward the former.” (page x)

As the report repeats (ad nauseam) in each of its otherwise excellent, guest-authored chapters, increased traction and penetration of ICT in developing countries and emerging economies is dependent upon two factors:

1) “the availability of personal computers (PCs),” and

2) “the density of pre-existing phone lines and cable”

Which is interesting on its own because, as Chapter 1.2, authored by Enrique Rueda-Sabater and John Garrity of Cisco Systems, Inc., asserts, the adoption or existence of PCs isn’t necessarily a precondition to the use of ICT. In fact, many emerging economies in Africa have leapfrogged PC ownership and moved straight into robust mobile internet access with limited or no access to PCs.

Each of these chapters seeks to answer deeper questions provoked by this positive correlation between ICT and economic development.

“We continue to be challenged by questions that were raised by John Gage of Sun Microsystems in the first edition of the GITR: ‘Can we apply ICT to improve the condition of each individual? Can ICT, designed for one-to-one links in telephone networks, or for one-to-many links in radio and television networks, serve to bond us all? And how can new forms of ICT—peer-to-peer, edge-to-edge, many-to-many networks—change the relationship between each one of us and all of us?’” (page 3)

It is the new forms of ICT that this report largely focuses on, and in so doing introduces an interesting subset of factors to consider in evaluating new communications media:

“[While] Transformations 2.0 are difficult to accurately envisage, evolving technology trends are pointing to the most likely directions they will take over the next few years—what we term as the move toward SLIM ICT:

• S for social: ICT is becoming more intricately linked to people’s behaviors and social networks. The horizons of ICT are expanding from traditional processes and automation themes to include a human and social focus.

• L for local: Geography and local context are becoming important. ICT provides an effective medium for linking people and objects (and processes) with local environments. This will allow differentiation across local contexts and the provision of tailored services.

• I for intelligent: ICT will become even more intelligent. People’s behaviors, individual preferences, and object interactions among other elements will be more easily stored, analyzed, and used to provide intelligent insights for action.

• M for mobile: The wide adoption of the mobile phone has already brought ICT to the masses. Advances in hardware (screens, batteries, and so on), software (e.g., natural language interfaces), and communications (e.g., broadband wireless) will continue to make computing more mobile and more accessible.” (page 29)

There’s nothing absolutely groundbreaking here, or even new, really. But it is an interesting subset within which to view evolving and emerging economies.

I can already think of a number of people who would say that these lenses are actually partially contradictory. For instance, “mobile” and “local” seem at odds with each other from one vantage point, since mobile access allows anyone to reach out globally across previously restrictive limits of space and time, contradicting the notion that geography and context are becoming increasingly important.

Still, one of the most exciting things about emerging communications technologies and media is that they can develop and burst onto the scene in initially contradictory ways, later settling into their deeper contexts in a web of compatibility we would have earlier thought impossible. I think the social, local, intelligent and mobile aspects are good ones to zero in on as a platform for analysis.


(Full disclosure: While living in France, I worked with Soumitra Dutta and INSEAD’s eLab on a social media marketing project for a book he co-authored with Professor and Social Media Strategist Matthew Fraser.)

Happy Monday, all.

A few weeks ago I received links to the “2011 Global Information Technology Report” from Soumitra Dutta, one of the co-authors of the annual report and Academic Director of INSEAD’s eLab, an academic division of the university that pursues “thought leadership, community outreach and value creation in the global knowledge economy.” The report is published annually by the World Economic Forum in partnership with INSEAD, and this year marks the report’s 10th anniversary.

For those unfamiliar with the report (as I had been), it centers on an analysis of the impact of Information and Communication Technologies (which it refers to as ICT) on the global landscape. Perhaps most notably, the WEF created an index called the “Networked Readiness Index,” or NRI, through which to view the progression of ICT throughout the world and to gauge its expansion on a quantifiable level. As the 2011 Global Information Technology Report states, the NRI “has mapped out the enabling forces driving networked readiness, which is the capacity of countries to fully benefit from new technologies in their competitiveness strategies and their citizens’ daily lives.”
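The excerpt doesn’t spell out how the NRI is calculated, but composite indices of this kind typically roll a handful of pillar scores up into a single number. A toy sketch, where the pillar names, the scores, and the equal weighting are all my own assumptions for illustration and not the WEF’s actual method:

```python
# Invented pillar scores on a 1-7 scale, purely for illustration.
pillars = {
    "environment": 4.2,  # e.g., market and regulatory conditions
    "readiness": 5.1,    # e.g., infrastructure, affordability, skills
    "usage": 3.8,        # e.g., individual, business, government uptake
}

def composite_score(pillar_scores):
    """Average the pillar scores into a single index value."""
    return sum(pillar_scores.values()) / len(pillar_scores)

score = composite_score(pillars)
print(round(score, 2))  # (4.2 + 5.1 + 3.8) / 3 ≈ 4.37
```

The point of such an index is less the single number itself than the ability to compare it across countries and across years.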

While this is the first edition I have read, I’ve found that the report touches on some highly pertinent and evocative information. In fact, the NRI’s stated goal echoes a point I heard at a recent conference I had the privilege of attending.

During his presentation, one of the featured speakers made the point that, in terms of disruptive innovation, technologies are often first invented and introduced to the mass public, where early adoption then occurs. However, after the initial release of a new technology and some level of user traction, the really outstanding leaps in innovation come from subsequent innovators in the space: in plainer terms, the guys who came second.

When those leaps of innovation occur, they often occur to such an extent that the full range of technological capabilities contained within the new technology is not actually utilized by its users. In other words, the rate of innovation often significantly outpaces the rate of user adoption and mastery of that technology.

I believe the same could be said for the relative rates of disruptive innovation and the global adoption of new technologies. For instance, much in the 2011 Global Information Technology Report (henceforth GITR) centers on the adoption of mobile technologies and their use in emerging economies. While smartphones get ever smarter in Asia, with mind-boggling new capabilities introduced to Japanese, Korean and Chinese markets on a near-daily basis, basic mobile networks and mobile phone adoption have only just begun to really soar across Africa.

I’m still making my way through the report’s chapters, which are guest-authored by various tech and economics luminaries, but in subsequent entries I’m hoping to tie some of these chapters back to trends I’ve observed in recent days and days still to come.

I wanted to give you all a heads-up and invite you to read the report if you so desire. It can be found here: “2011 Global Information Technology Report”.


October 19, 2010- http://www.gather.com/viewArticle.action?articleId=281474978616650

This last week a number of well-respected analysts and research centers released reports discussing the pervasive dominance of “post-PC devices,” specifically mobile phones, smartphones, tablets and e-readers. The Pew Research Center released its report on “gadget ownership,” which demonstrated the overall dominance of cell phones in the U.S. technology market. Pew surveyed 3,001 American adults to determine the “key appliances of the Information Age.” Those that came out on top were cellphones, PCs, e-readers, and MP3 players.

Gartner and Forrester also threw their hats into the “buzz” ring. Gartner released its own report Friday proclaiming tablets the new cellphone. In the study, Gartner reports that tablet sales have reached 19.5 million units this year and estimates that sales will increase to 150 million units by 2013. In fact, Carolina Milanesi, a research vice president at Gartner, claims mini notebook computers “will suffer a ‘strong cannibalization’ as the price of media tablets” drops nearer to the $300 mark.

Forrester, for its part, chose to address head-on the new-era security concerns that companies and consumers will experience as society continues to adopt these post-PC devices, discussing the additional security measures companies will have to implement for mobile devices now commonly used both at work and at play.

Yet as these respected analysts hail the new post-PC era, tablets and e-readers are still exploring the new hazards of a post-PC world. Just this week the New York Times posted an article discussing the temperature-control problems of Apple’s iPad and comparing it to Amazon’s Kindle. Or, as the New York Times put it, “It seems that some iPads do not like direct sunlight, saunas or long walks on the beach.” And with the iPhone 4’s antenna challenges earlier this summer, it’s clear that we’re nowhere near having worked out all of the kinks, even as the mobile device market continues to innovate and we adopt its emerging products.


October 15, 2010- http://www.gather.com/viewArticle.action?articleId=281474978605059

Oh goody, as the New York Times reported on October 10th, Twitter has finally come up with a plan to make money. Only, it’s the old new plan, which is to say it’s the same plan as everyone else.

When Twitter’s Evan Williams stepped down as CEO to make room for Dick Costolo, who previously headed Twitter’s advertising program, the tech industry remarked on how the shuffle represented Twitter’s renewed commitment to monetization.

As the New York Times reported, “Twitter’s startling growth — it has exploded to 160 million users, from three million, in the last two years — is reminiscent of Google and Facebook in their early days. Those Web sites are now must-buys for advertisers online, and the ad industry is watching Twitter closely to see if it continues to follow that path.”

But there still seems to be no real innovation in the advertising models of hi-tech companies from whom the world expects a great deal of innovation. Why are hi-tech social media and social news aggregation companies having such a hard time innovating with their monetization strategies?

At this point, each new social media platform that comes along seems to jump into the online advertising market that Google forged largely on its own. Now that Google has done the heavy lifting on education and we all speak and understand the language of “click-through rates,” “impressions,” and “search engine optimization,” newcomers like Twitter don’t have to pay or do very much in order to enter this monetization space. Perhaps unsurprisingly, it would seem that they aren’t doing very much at all to evolve it.

As a result, the whole online ad framework is falling flat. After a few years of evangelizing for social media advertising and the use of new media platforms like Twitter and Hulu, are advertisers really making more money and seeing the benefits of these new media? It’s becoming an embarrassingly redundant question: “Yes, we know we are creating funny and entertaining media for our consumers to enjoy, but is it actually increasing sales?”

Interestingly, at this year’s gathering of the Association of National Advertisers, as the New York Times reported, a survey at the beginning of the opening session found that “marketers may still need some schooling on the dos and don’ts of social media. Asked to describe how its use has affected sales, 13 percent replied that they did not use social media at all. (Eleven percent said sales had increased a lot, 34 percent said sales increased ‘some’ and 42 percent said they had seen no change.)”

It would seem that media analysts continue to approach social media and search as a given element of any marketing strategy without any hard evidence as to why every company needs to integrate social media into its marketing strategy. Instead, without the numbers to make the case, analysts and marketers still discuss the virtues of earned media versus paid media, the value of eyeballs and impressions, and earned equity.

One of this year’s smashing social media success stories has a particular ability to make marketers foam at the mouth: 2010’s Procter & Gamble “Smell Like a Man” campaign for Old Spice helped increase the brand’s followers on Twitter by 2,700%, to where they “now total almost 120,000.”

Marc Pritchard, global marketing and chief branding officer at Procter & Gamble, had his moment in the sun for what was, undoubtedly, the most high-profile and successful example of how modern brands can use social media for promotion. But in the coverage of Pritchard’s talks, there is little to no mention of how the campaign is actually impacting the company’s bottom line. Instead, there is this: “The currency the campaign has earned in social media has pushed it into the popular culture. Mr. Pritchard showed the audience a spoof that was recently introduced by Sesame Workshop in which Grover suggests that his young viewers ‘smell like a monster on Sesame Street.’”

But an internet meme does not a year-over-year increase in sales make. There is no mention of how an increase in followers on Twitter converts into a percentage increase in sales. It’s as if an equation is missing, or somehow we have all misunderstood how to connect the dots. At the conference, Joseph V. Tripodi, chief marketing and commercial officer for Coca-Cola, was interviewed, and his only contribution to this dilemma was to discuss how social media can sometimes save a company money on promotions through viral videos. It cost less than $100,000 to produce one such video, he noted, demonstrating that “you don’t need huge amounts of money to engage with consumers.” However, savings on a marketing budget also do not a sales increase make.

Refreshingly, one of the conference’s keynote speakers, Mark Baynes, vice president and global chief marketing officer at the Kellogg Company, did acknowledge the missing link in the social media to profits equation by proclaiming, “In God we trust; the rest of you bring data.”


October 08, 2010- http://www.gather.com/viewArticle.action?articleId=281474978584382

The New York Times today posted a ReadWriteWeb story about Google’s recently launched contest to encourage young kids to begin learning to code: “The Google Open Source Program is announcing a new outreach effort, aimed at 13- to 18-year-old students around the world. Google Code-in will operate in a similar fashion to Google’s Summer of Code, giving students the opportunity to work in open-source projects.” While this is great PR for Google, and an admirable program to boot, it’s also a fascinating example of how today’s largest and most successful companies are assuming a significant role in the training and formation of their future workforce in the U.S.

A couple of years ago a viral video featuring a Flash-animated presentation titled “Did You Know?” made the rounds and introduced us to incredible factoids about the modern world we live in. One of the information nuggets that stood out among the many others was this: “the top 10 in-demand jobs in 2010 didn’t exist in 2004… We are currently preparing students for jobs that don’t yet exist… Using technologies that haven’t been invented… In order to solve problems we don’t even know are problems yet.” It was a startling, yet very believable statement, and one that many people have since cited.

A now-dated 2006 Forbes article addressed this fact and listed jobs that don’t yet exist but should be in high demand within 20 years, jobs that will disappear within 20 years, and jobs that will always exist. For example, some jobs that are expected to disappear are booksellers, car technicians, miners, cashiers, and encyclopedia writers (if they haven’t already). The predicted jobs of the future were slightly ominous and depressing in a sort of sci-fi way: genetic screening technicians, quarantine enforcers, drowned-city specialists (Atlantis, anyone?), robot mechanics and space tour guides. Lastly, those jobs that will always be around? Pretty self-explanatory. Prostitution is always high on the list, as are politicians, religious leaders, barbers and artists.

However, if everyone can’t be a hair stylist, how do we prepare the world’s children for an entire generation of jobs we don’t even know about? Among educators, the prevailing sentiment is that the best we can do is arm tomorrow’s kids with problem-solving skills, critical-thinking skills, and endless curiosity. But since most teachers are dealing with very archaic, traditionally designed curricula, much of the responsibility for training and forming the world’s new thinkers may continue to fall upon the shoulders of tech giants like Google, Facebook, and Twitter. It is much easier to consider what future skills will be needed when your entire survival as a company depends upon being able to look into a technological crystal ball and anticipate the future needs of an entire world.


September 22, 2010- http://www.gather.com/viewArticle.action?articleId=281474978538816

Hear that? It’s the sound of the content aggregator death knell. On October 1st, popular web-based RSS reader and news aggregator Bloglines, run by the team at Ask.com, will discontinue service. When asked why, Ask’s team reported “that social media sites like Twitter and Facebook killed it.”

And Bloglines is only the first. Other aggregators such as Google Reader, Digg, Reddit, and StumbleUpon are sure to be next. According to Hitwise, “visits to Google Reader are down 27 percent year-over-year.” The New York Times recently reported “a more pivotal reason that Digg is falling behind, analysts say, is that users are simply spending more time on Facebook and Twitter than they are on Digg.”

How did this happen? Is it truly impossible for content aggregation sites such as Google Reader, StumbleUpon, Digg, and Reddit to exist side-by-side with the type of social news aggregation offered by Facebook and Twitter? And what does this mean for RSS? RSS, which stands for Really Simple Syndication, is a protocol that pushes website updates to readers around the world so they don’t have to search for new content or endlessly hit refresh on a favorite web page. In 2005 RSS was a game changer. Today? Not so much.
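For readers who never peeked under the hood, the mechanics are simple: a site publishes an XML file listing its newest items, and a reader periodically fetches and parses that file. Here is a minimal sketch in Python; the sample feed and the function name are illustrative, not taken from any real aggregator:

```python
import xml.etree.ElementTree as ET

# A tiny, made-up RSS 2.0 document. A real reader would fetch this
# XML from a site's feed URL on a schedule instead of hard-coding it.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>First Post</title>
      <link>http://example.com/first-post</link>
      <pubDate>Wed, 22 Sep 2010 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def latest_items(feed_xml):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(latest_items(SAMPLE_FEED))
```

A service like Bloglines does essentially this at scale: poll thousands of feed URLs, compare the items against what each subscriber has already seen, and display only the new ones.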

According to the Bloglines message about its own end-date, “the Internet has undergone a major evolution. The real-time information RSS was so astute at delivering (primarily, blog feeds) is now gained through conversations, and consuming this information has become a social experience…being locked in an RSS reader makes less and less sense to people as Twitter and Facebook dominate real-time information flow. Today RSS is the enabling technology – the infrastructure, the delivery system.”

As a 2009 NielsenWire report also noted, part of this trend comes from the fact that blogs and news sites are no longer the endgame news tool; our friends are. “Socializers trust what their friends have to say and social media acts as an information filtration tool…If your friend creates or links to the content, then you are more likely to believe it and like it. And this thought plays out in the data.”

Does Mark Zuckerberg know that his company has driven content aggregators to the grave? Undoubtedly, yes. A recent New Yorker profile quoted Zuck as saying, “It’s like hardwired into us in a deeper way: you really want to know what’s going on with the people around you.” In fact, Facebook’s Open Graph feature allows users to see which articles their Facebook friends have read, shared, and liked. “Eventually,” the New Yorker observed, “the company hopes that users will read articles, visit restaurants, and watch movies based on what their Facebook friends have recommended, not, say, based on a page that Google’s algorithm sends them to.”

Some argue that content aggregators, or RSS readers, were always destined for the internet graveyard simply because they were too complicated and let users become completely overwhelmed by the sheer bulk of information being pushed to them. One thing is for sure: if content aggregators don’t find a way to better integrate with, or at least successfully co-exist with, social networking offerings like Facebook and Twitter, they will soon be relegated to the ever-growing category of “old news.”


September 12, 2010- http://www.gather.com/viewArticle.action?articleId=281474978513602

As the 2010 regular NFL football season begins, fans are reminded of everything they love about the game- the rushing roar of the home team crowd, the crisp fall weather, the complex plays, the strut and swagger of the scoring players. But fans should also take note of the new technology constantly being deployed and tested on football’s biggest fan base- the TV audience.

It may not be obvious why football fans would be such early technology adopters, but it begins to make more sense when you consider how statistically obsessed and fantasy-football-involved the modern fan is. A Democrat and Chronicle article on the effects of technology on modern NFL football consumption described one average fan who is “never without his iPhone as he is constantly fed game updates and statistics each Sunday. At home, he watches games on his new big-screen plasma high-definition television through the Dish Network and writes a fantasy football blog at http://www.ffgeekblog.com.”

The same article listed some interesting stats on NFL media consumption, “While 1 million fans watch NFL games in person each week, an average of 16 million watch on television.” TVbytheNumbers.com reported that, according to Nielsen, this year’s first regular season game between the Minnesota Vikings and New Orleans Saints on September 10th was the most watched first regular season game ever.

With technologies such as high-definition picture quality, the virtual first-down line, access to any game via the Sunday Ticket, replays, and other league scores rotating on the screen, there’s no doubt that the experience of watching NFL games on TV is better than ever. But at stake are ticket sales for the live games, which suffer in terms of convenience and overall cost. Fewer people buying tickets to live games means more local blackouts. NFL team owners and stadium managers are investigating options such as seatback TV screens to bring that experience to the live game, but mobile and wireless technologies still reign supreme.

All of this adds up to make American football fans (college as well as NFL) some of the biggest consumers of home entertainment centers, TV equipment, and cable and satellite TV packages. However, as the future of network and cable TV looms ever more uncertain, and as web-based services work harder and harder to expand what they offer, it seems inevitable that newly emerging products that merge the TV and web-browsing experience, such as Google TV and Apple TV, are perfectly suited to cater to these NFL early adopters with cutting-edge offerings. How they do so, and how much they cater to this influential demographic of TV fans, remains to be seen.


September 08, 2010- http://www.gather.com/viewArticle.action?articleId=281474978505328

Banking on the fact that people read more quickly than they type, and that they have once again designed a feature that will change the way the world searches for information, Google has launched Google Instant.

Instant provides real-time potential search results based on each letter typed into the query box, and works with lightning-quick speed.

Currently Google claims that Instant “saves two to five seconds per search” and “will cumulatively save people more than 3.5 billion seconds every day, or 11 hours every second.” Was searching taking us all too long before? Was it, say, so tediously long that it was preventing us from spending time with our families or volunteering at our local charities? Not likely. However, there are those who would say that faster is always better.
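The arithmetic behind that quote does check out: 3.5 billion seconds of savings, spread across the 86,400 seconds in a day, works out to roughly 11 hours of collective time saved during every tick of the clock.

```python
# Verify Google's claim: 3.5 billion seconds saved per day equals
# roughly 11 hours of collective time saved every single second.
seconds_saved_per_day = 3.5e9
seconds_per_day = 24 * 60 * 60                # 86,400 seconds in a day
savings_per_clock_second = seconds_saved_per_day / seconds_per_day
hours_saved_per_second = savings_per_clock_second / 3600
print(round(hours_saved_per_second, 2))       # → 11.25
```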

Still, there are bound to be skeptics, many of whom are already saying that Instant is merely a ploy to make Google look more cutting-edge, without necessarily representing a truly large change in how Google “organizes the world’s information.” In fact, Google itself admitted that “While the behind-the-scenes engineering that generates those results is a big reason Google gets the majority of searches, it can be hard for average users to notice. The instant results make this much clearer.”

PC Mag compared Google Instant to Bing’s Type Ahead functionality, which has been in place for a while, and found that Google Instant doesn’t necessarily come out on top. Specifically, reviewer Lance Ulanoff mentioned “Google Instant, for now, only works when you’re signed in and may be using some search history to intuit results. It combines type ahead with live results, while Bing only offers you a list of probable word matches. Still, the word matches in Bing are pretty solid, and if Google Instant is showing you a page you weren’t interested in anyway, then what’s the value in it?”

For the skeptics, cynics, and those with sensitive eyeballs, Google Instant does offer the chance to opt out (hint: look to the right of the search box, see the blue link reading: “Instant is on.” Click that), as Gadgetwise reports along with other tips on how to use the new feature.

As Gizmodo reported on the announcement event, Google Instant will be available on Chrome, Firefox, Safari and IE 8 starting today. Additionally, it is not yet available on browser toolbars or for mobile phones. That rollout is expected to occur in the coming months.

Every new browser innovation that is announced also reminds us of the delayed promise of the “answer engine,” embodied most famously by the “computational knowledge engine” introduced by the Wolfram Alpha team a couple of years ago. Answer engines such as Wolfram Alpha, launched in 2009, are supposed to collect and organize information from authoritative databases and engines to obtain the answer to a specific question. In other words, the next step was supposed to be skipping the list of search results entirely and being sent straight to an answer backed by authoritative, well-sourced information.

Clearly Google has not yet come to adopt that model of search engine, and Wolfram Alpha has not yet even begun to compete with Google’s search market domination. Alas, as users we shall all have to be satisfied with how much faster we are delivered the search results that Google supplies, and follow the progress and promise that answer engines still have yet to deliver on.

Looking for more about Google Instant? Check out Google’s YouTube video about Google Instant.