Archive for the ‘Academics’ Category


(Full disclosure: While living in France, I worked with Soumitra Dutta and INSEAD’s eLab on a social media marketing project for a book he co-authored with Professor and Social Media Strategist Matthew Fraser.)

Happy Monday, all.

A few weeks ago I received links to the “2011 Global Information Technology Report” from Soumitra Dutta, one of the co-authors of the annual report and Academic Director of INSEAD’s eLab, an academic division of the school that pursues “thought leadership, community outreach and value creation in the global knowledge economy.” The report is published annually by the World Economic Forum in partnership with INSEAD, and this year marks its 10th anniversary.

For those unfamiliar with the report (as I had been), it centers on analyzing the impact of Information and Communication Technologies (which it refers to as ICT) on the global landscape. Perhaps most notably, the WEF created an index called the “Networked Readiness Index,” or NRI, through which to track the progression of ICT throughout the world and to gauge its expansion in quantifiable terms. As the 2011 Global Information Technology Report states, the NRI “has mapped out the enabling forces driving networked readiness, which is the capacity of countries to fully benefit from new technologies in their competitiveness strategies and their citizens’ daily lives.”

While this is the first edition I have read, I’ve found that the report covers some highly pertinent and evocative material. In fact, the NRI’s stated goal touches on a point I heard at a recent conference I had the privilege to attend.

During his presentation, one of the featured speakers made the point that, in terms of disruptive innovation, technologies are often first invented and introduced to the general public, where early adoption then occurs. After that initial release gains a measure of user traction, however, the really outstanding leaps in innovation come from subsequent innovators in the space: in plainer terms, the people who came second.

When those leaps of innovation occur, they often occur to such an extent that the full range of capabilities contained within the new technology is never actually utilized by its users. In other words, the pace of innovation often significantly outpaces the rate of user adoption and mastery of that technology.

I believe the same could be said for the relative rates of disruptive innovation and the global adoption of new technologies. For instance, much of the 2011 Global Information Technology Report (henceforth GITR) centers on the adoption of mobile technologies and their use in emerging economies. While smartphones get ever smarter in Asia, with mind-boggling new capabilities introduced to Japanese, Korean and Chinese consumers on a near-daily basis, basic mobile networks and mobile phone adoption have only just begun to really soar across Africa.

I’m still making my way through the report’s chapters, which are guest-authored by various tech and economics luminaries, but in subsequent entries I’m hoping to tie some of them back to trends I’ve observed recently and expect to keep seeing in the days to come.

Wanted to give you all the heads-up and invite you to read the report if you so desire. It can be found here: “2011 Global Information Technology Report”.


October 19, 2010- http://www.gather.com/viewArticle.action?articleId=281474978616790

Remember that scene in the film Back to the Future where Marty McFly realizes that, in the photo he carries of his family, he is fading from existence because events in the past are not transpiring as they should? And that, as a result, he faces the possibility that the shape of his family will change forever? Well, as it turns out, that scenario may not be so far-fetched.

At least, not according to Aza Raskin, Creative Lead for Firefox and one of the lead designers and developers at Mozilla. During his keynote speech at the University of Michigan School of Information, Raskin claimed that “the human brain’s predictable fallibility leaves us susceptible to the creation of false memories by brand marketers through retroactive product placement into our photos posted on Facebook and other social networks,” and his assertions are getting a lot of coverage. Raskin, only 27 years old, is one of Mozilla’s most talented innovators, and his arguments are by no means falling on deaf ears. In essence, he is predicting that social networks will modify our uploaded photos to include product placements, and in doing so modify our memories.

Specifically addressing the advertising and marketing potential involved in this ploy, Raskin claimed, “We will have memories of things we never did with brands we never did. Our past actions are the best predictor of our future decisions, so now all of a sudden, our future decisions are in the hands of people who want to make money off of us.”

During the talk, to bolster his cautionary predictions, Raskin touched on neurological research into memory and cited the Hollywood blockbuster Inception, which imagined the future potential to tap into and manipulate dreams and memories. The concept of subliminal advertising was also recently addressed in a viral video by UK illusionist Derren Brown, “Subliminal Advertising,” in which the practice is turned on its head: two high-end advertisers are manipulated into spontaneously generating a pre-determined pitch for a product.

Raskin’s keynote came at an unfortunate time for Facebook, which this week is once again under intense scrutiny for its privacy practices. As the New York Times argued, “When you sign up for Facebook, you enter into a bargain…At the same time, you agree that Facebook can use that data to decide what ads to show you.” Yet it was Mark Zuckerberg, the much-publicized chief of Facebook, who this week apologized to users for overly complicated site settings and acknowledged that some app developers on the site had shared identifying information about users with advertisers and Web tracking companies.

However, as the New York Times reports, “Facebook has grown so rapidly, in both users and in technical complexity, that it finds it increasingly difficult to control everything that happens on its site.” If you consider that Facebook still claims just over 1,700 employees, it seems unlikely that in the next few years the social media Goliath will grow quickly enough to expand its advertising model to modify users’ uploaded content such as photos and videos. Nor is it entirely clear why it would want to do such a thing, given how infrequently users tend to revisit their photos even weeks after posting them.

On the other side of the U.S., U.C. Berkeley professor and privacy expert Deirdre Mulligan had this to say about Facebook: “This is one more straw on the camel’s back that suggests that Facebook needs to think holistically not just about its privacy policies, but also about baking privacy into their technical design.”

In the meantime, perhaps we should all pop some ginkgo biloba and back up the current versions of our photos, just in case.


October 08, 2010- http://www.gather.com/viewArticle.action?articleId=281474978584382

The New York Times today posted a ReadWriteWeb story about Google’s recently launched contest to encourage young kids to begin learning to code: “The Google Open Source Program is announcing a new outreach effort, aimed at 13- to 18-year-old students around the world. Google Code-in will operate in a similar fashion to Google’s Summer of Code, giving students the opportunity to work in open-source projects.” While this is great PR for Google, and an admirable program to boot, it’s also a fascinating example of how today’s largest and most successful companies are assuming a significant role in the training and formation of their future workforce in the U.S.

A couple of years ago a viral video featuring a Flash-animated presentation titled “Did You Know?” made the rounds and introduced us to incredible factoids about the modern world we live in. One information nugget that stood out among the many others was “the top 10 in-demand jobs in 2010 didn’t exist in 2004… We are currently preparing students for jobs that don’t yet exist… Using technologies that haven’t been invented… In order to solve problems we don’t even know are problems yet.” It was a startling, yet very believable statement, and one that many people have since cited.

A now-dated 2006 Forbes article addressed this fact and listed jobs that don’t yet exist but should be in high demand within 20 years, jobs that will disappear within 20 years, and jobs that will always exist. For example, some of the jobs expected to disappear are booksellers, car technicians, miners, cashiers, and encyclopedia writers (if they haven’t already). The projected jobs of the future were slightly ominous and depressing in a sort of sci-fi way, such as genetic screening technicians, quarantine enforcers, drowned-city specialists (Atlantis, anyone?), robot mechanics and space tour guides. Lastly, those jobs that will always be around? Pretty self-explanatory. Prostitution is always high on the list, as are politicians, religious leaders, barbers and artists.

However, if everyone can’t be a hair stylist, how do we prepare the world’s children for an entire generation of jobs we don’t even know about? Among educators, the prevailing sentiment is that the best we can do is arm tomorrow’s kids with problem-solving skills, critical-thinking skills, and endless curiosity. But since most teachers are working within very archaic, traditionally designed curricula, much of the responsibility for training and forming the world’s new thinkers may continue to fall on the shoulders of tech giants like Google, Facebook, Twitter, etc. It is much easier to consider what future skills will be needed when your entire survival as a company depends on being able to peer into a technological crystal ball and anticipate the future needs of an entire world.


September 22, 2010- http://www.gather.com/viewArticle.action?articleId=281474978539098

Hegel famously proclaimed that “history is a dialectic,” that is, a dialogue between people who may hold differing views but who seek to arrive at a shared basis of truth by debating together. In other words, history has no single discernible truth; it more closely approaches the overall goal of “truth” through discussion among all of the voices of history and their personal accounts of what happened.

This quotation of Hegel’s is often cited in discussions of the literary canon, or the “Western canon,” as some refer to it. The term “Western canon” denotes the selection of books, music, art and general culture that have most influenced the shape of Western civilization and culture over time.

As demonstrated, a simple search on Wikipedia for either of these terms will tell you much about what they are. What Wikipedia doesn’t explicitly tell us, however, is that it is also holding the record of how the modern canon is determined, and how the truth of history is being shaped by the myriad voices that contribute to it every day.

A recent Bits blog post from the New York Times mentioned the trail of edits that the Internet provides to anyone looking for it. James Bridle, founder of BookTwo, is particularly interested in what the future of literature holds, but also in how that discussion is playing out and how we can track where it has been. In one of his recent entries Bridle points out that although an article on Wikipedia may tell a specific story, its edits show a process of opinion, correction, and the potential biases of each writer. In this respect Wikipedia, and every constantly updated website, represents an archive of evolving information over time. What interests Bridle is the offer of two distinct stories: one that is front-facing to the reader and one that reveals the behind-the-scenes editing, writing and creative process.

To illustrate the point, Bridle selected the Wikipedia entry on the Iraq War and had the article’s entire edit history published as physical volumes. In his entry, Bridle writes, “This particular book — or rather, set of books — is every edit made to a single Wikipedia article, The Iraq War, during the five years between the article’s inception in December 2004 and November 2009, a total of 12,000 changes and almost 7,000 pages.” Bridle notes that the full set comes to twelve volumes, roughly the size of a traditional encyclopedia.
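For the technically curious, this layered history is not hard to get at yourself: Wikipedia exposes every article’s revision log through the public MediaWiki API. Below is a minimal Python sketch (the helper name and the use of the `requests` library are my own choices, not anything from Bridle’s project) that pulls recent revision metadata for the Iraq War article, the same raw material he bound into his volumes.

```python
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def fetch_revisions(title, limit=50):
    """Fetch recent revision metadata (timestamp, editor, edit summary) for a Wikipedia article."""
    params = {
        "action": "query",
        "format": "json",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|comment",
        "rvlimit": limit,
    }
    # A descriptive User-Agent is good API etiquette; the string here is just a placeholder.
    response = requests.get(API_URL, params=params, headers={"User-Agent": "edit-history-demo"})
    response.raise_for_status()
    pages = response.json()["query"]["pages"]
    # Results are keyed by internal page ID; we only asked for one title.
    page = next(iter(pages.values()))
    return page.get("revisions", [])

if __name__ == "__main__":
    for rev in fetch_revisions("Iraq War", limit=10):
        print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```

Paging backwards through older revisions with the API’s continuation parameters would, in principle, let you walk the same five years of edits that Bridle printed.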

Which brings us to the favorite comparison: Wikipedia versus your parents’ encyclopedia. Is one more reliable than the other? Who gets to decide what is part of the overall Western canon? Shouldn’t we all be alarmed by a process in which a child may be permitted to contribute to an online encyclopedia that many now treat as an expert source?

In fact, Bridle’s point recalls a standard strategy for defending the credibility of Wikipedia and its process against would-be detractors: citing a story central to the compilation of the Oxford English Dictionary in the 19th century. Simon Winchester’s book, The Professor and the Madman: A Tale of Murder, Insanity, and the Making of The Oxford English Dictionary, details a Jekyll-and-Hyde story of the brilliant but clinically insane Dr. W.C. Minor, who provided thousands of entries to the editors of the OED while he was committed to the Broadmoor Criminal Lunatic Asylum. In other words, if a madman could contribute significantly to a tome of the English language that remains the authoritative text today, why can a perfectly sane pre-teen not contribute to the modern canon of information about frogs, Major League Baseball, or global warming? Should we be preventing anyone from contributing to the ever-evolving conversation about what is truth and what is history?

As sites such as Twournal, which offers the narcissistic boon of publishing your very own tweets through time in print form, begin to proliferate, each of us can possess our very own piece of the modern web canon, whether in print or online. As Twournal describes itself, “Over time tweets can tell a story or remind us of moments. In 20 years we don’t know whether twitter will be around but your Twournal will be. Who knows maybe your great grandkids will dig it up in the attic in the next century.”

That means each of us can now print a credible-looking book of our own (often misspelled) musings and meanderings as representative of history, according to us. Yet in the absence of a forum in which people can engage with our tweeted observations, there’s no real dialectic. It therefore seems safe to conclude that Hegel would have preferred Wikipedia to Twitter, or to your Twournal.


September 02, 2010- http://www.gather.com/viewArticle.action?articleId=281474978491387

Smart advertisers and marketers know that building brand awareness and brand attachment these days involves allowing consumers to feel as if they are a part of the brand, and the brand is a part of them.

The most innovative way to elicit this feeling among increasingly jaded consumers is to let them participate in the way a product is sold to them, or presented to a broader audience. In other words, to integrate elements of “interactive or collaborative advertising” into the overall marketing strategy.

Some of this is revolutionary stuff, and it is still regarded as too dangerous by most traditional advertising, marketing and brand agencies the world over. In essence, it means giving consumers permission to experiment with, and command some control of, a brand. If I may go down a yellow brick road of an analogy, this is no less than cutting down the Wizard’s curtain and revealing the small man behind it, then letting the consumer revel in that discovery and, so empowered, get Dorothy back from Oz to Kansas on his or her own.

But when it works, it works so, so well.

Let us take, for example, the Old Spice Guy. If you’ve never seen or heard of Isaiah Mustafa, or any of the YouTube response videos the company launched in reply to tweets it was receiving, then you must be dead or on a remote desert island with no smartphone. This ad campaign, which has incorporated TV ads, Twitter, Facebook and YouTube so well, has dominated most of this year’s buzz conversations.

How about something more recent? Tipp-Ex is a correction fluid brand (think Wite-Out) that recently launched a YouTube video ad campaign which lets the viewer determine the end of the story. The viewer first watches a setup video in which a guy camping with his friend is alerted that a bear is right behind him, and is urged by his friend, who is videotaping the event, to shoot the bear. At this juncture the viewer gets to decide whether the man should shoot the bear or not. After making the decision, the viewer is redirected to a video in which the camper urges him or her to rewrite the story.

The whole thing is highly reminiscent of the groundbreaking 2004 “Subservient Chicken” campaign that “advertising and design factory” CP+B created for Burger King, in which visitors to the website could type in any command and a man dressed in a chicken suit would perform the requested action on webcam. So while Tipp-Ex’s overall concept isn’t new, their delivery is.

What’s largely interesting about interactive or collaborative advertising is that it neatly bridges the gap between paid media and earned media. A company pays to create the initial ad, but then, by virtue of the fun of interacting with it and collaborating on it, consumers share and continue to virally promote that ad, which is where the earned media kicks in.

These concepts aren’t exactly brand new, but their integration into basic marketing strategies is, and increasingly large companies are beginning to take notice of how much buzz can be generated through earned media without having to pay for every step of it. That said, not every company has seen skyrocketing revenues as a result of investing in interactive advertising, so the science here, and how to master it, is still relatively new.

One thing’s for sure, however: it makes advertising a lot more fun from the consumer’s perspective.


July 23, 2010- http://www.gather.com/viewArticle.action?articleId=281474978389762

Are you an early adopter or a laggard? Are you neither? Do you even know what these labels mean? Every technology company does, and how you answer determines its relative level of interest in you as a tech consumer.

In case you have never encountered these terms before, here’s a quick synopsis: Everett Rogers’ “diffusion of innovations” model organizes people based on how long it takes them to adopt, and adapt to, new technologies.

Rogers’ theory comprises five groups. First come the “innovators,” which should be relatively straightforward: these are the people inventing and pushing the envelope. Next, the “early adopters” are big believers and big influencers who adopt technology right as it enters the market. The “early majority” follows; those who belong to this group listen to early adopters and, based on their reviews and expert reviews of a technology, will generally adopt and adapt to innovations. The “late majority” are largely skeptical of emerging technologies, but realize when new technology becomes omnipresent that it’s time to get on the bandwagon. Last, but by no means least, are the “laggards,” the strong skeptics who often blatantly disregard new technologies and publicly reject the latest innovations.

Laggards may often end up adopting technology further down the road, but can actually miss whole stages of innovation in between. For instance, a laggard might go straight from a Walkman to an iPhone without ever owning a single CD or minidisc.

But how does this all really break down into numbers? Early adopters generally represent only 13.5 per cent of the population, while the early majority and late majority represent 34 per cent each, making for a combined 68 per cent of us. The laggards represent slightly more than the early adopters, at 16 per cent of the population, and the innovators account for the remaining 2.5 per cent. But why, then, does it feel as if everyone already owns (and complains about) an iPhone 4?
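For readers who like to see the arithmetic laid out, here is a quick Python sketch using the standard Rogers percentages cited above; it simply checks that the five segments cover the whole population and computes the combined shares discussed in this post.

```python
# Rogers' diffusion-of-innovations segments, as percentages of the population.
# Figures are the standard ones cited in this post; innovators make up the remainder.
segments = {
    "innovators": 2.5,
    "early adopters": 13.5,
    "early majority": 34.0,
    "late majority": 34.0,
    "laggards": 16.0,
}

# Sanity check: the five segments should cover the entire population.
assert abs(sum(segments.values()) - 100.0) < 1e-9

# The "combined 68 per cent" made up of the early and late majorities.
majority_share = segments["early majority"] + segments["late majority"]

# The slice marketers have historically courted: early adopters plus early majority.
marketing_target = segments["early adopters"] + segments["early majority"]

print(f"Early + late majority: {majority_share:.1f}%")             # 68.0%
print(f"Early adopters + early majority: {marketing_target:.1f}%")  # 47.5%
```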

One theory is that early adopters and innovators are the most talkative about new technology. They love to show off their new gadgets and discuss them constantly. The early majority and late majority assume everyone else has already heard enough about these gadgets by the time they acquire them. Laggards, for their part, are likely only interested in the fact that their technology works the way they want it to; beyond that, their interest in the subject pales.

For these reasons, early adopters and even the early majority have long been the darlings of the technology sector, which focuses most of its marketing and advertising on this 47.5 per cent segment of the population. However, now that the interwebs have been around for a while and we can reflect on some of the early fervor the Web incited, some early adopters of web apps and websites are beginning to regret the information they put out there so early.

After all, the web has proven to be incredibly sticky with information, and website privacy controls evolve over time just as much as any consumer hardware design. That means the information you entered ten years ago might be in the same condition as when you entered it, but it might be visible to about 10 million more eyes than it was in the very beginning. In addition, consumer reviews of new gadgets, especially groundbreaking new efforts by technology companies, increasingly suggest that the first generation of anything isn’t actually ready to own.

Do these frequent, and increasingly public, failures in high-tech gadgetry suggest that a new era may emerge in which the late majority and laggard segments of technology consumers grow? If so, will technology companies begin to market more to laggards, and how exactly would that look?

To quote an expression oft-used by a laggard friend of mine, “only time will tell.”


July 23, 2010- http://www.gather.com/viewArticle.action?articleId=281474978387963

Ladies and gentlemen, you may officially toss out your collection of mood rings. Now there’s a better way to check if we’re happy or not: Twitter.

Computer scientists at Northeastern University have just released a study which presents conclusions regarding the relative happiness of America based on “sentiment analysis” performed on tweet content. One key finding that’s making headlines and ruffling feathers is that Americans living on the West Coast are happier than those living on the East Coast.

For the study, the scientists at Northeastern performed the content analysis on nearly 300 million tweets from Americans and then indexed them according to time of day, sentiment, and location. The analysis was performed using the Affective Norms for English Words, or ANEW, system developed at the NIMH Center for the Study of Emotion and Attention at the University of Florida. As described in its manual, ANEW was developed “in order to provide standardized materials that are available to researchers in the study of emotion and attention.” Basically, the ANEW system allows researchers to assign values to a set of pre-selected words in order to place text on a relative scale of emotion.

However, as is often the case when standardized values are projected onto subjective human behavior, ANEW is not completely reliable. As one article discussing the paper’s findings noted, if someone tweets “I am not happy,” the system counts the tweet as positive because of the word “happy.” That being said, even as an imperfect tool, “sentiment analysis” along the lines of the ANEW guidelines is gaining popularity among large corporations as a way to measure brand awareness and reactions.
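To see concretely why a word-by-word approach stumbles on negation, here is a toy Python sketch of bag-of-words valence scoring in the spirit of ANEW. The little word-to-score dictionary is purely illustrative (the real ANEW norms assign empirically derived ratings on a 1 to 9 scale to roughly a thousand words), but it is enough to reproduce the “I am not happy” problem.

```python
# Toy valence lexicon: 1 (very negative) .. 9 (very positive).
# Values are illustrative placeholders, NOT the real ANEW norms.
VALENCE = {
    "happy": 8.2,
    "love": 8.7,
    "sad": 2.1,
    "angry": 2.5,
    "work": 4.0,
    "weekend": 7.5,
}

def tweet_valence(text):
    """Average the valence of known words; ignore everything else."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [VALENCE[w] for w in words if w in VALENCE]
    return sum(scores) / len(scores) if scores else None

# The negation problem the article points out: "not" carries no valence,
# so the tweet is scored as positive purely because "happy" appears.
print(tweet_valence("I am not happy"))   # 8.2 -> counted as positive
print(tweet_valence("I am happy"))       # 8.2 -> same score
```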

Unfortunately for the Northeastern study, most of the results aren’t exactly earth-shattering revelations. For example, apparently most of us hate our jobs and prefer the weekends. But the video the researchers put together does offer a nice visual of how happiness, or relative gloominess, progresses over the course of an average workday.

What do you think? Is text-based “sentiment analysis” a legitimate basis for drawing these kinds of conclusions? And is the West Coast, in fact, happier than the East?