Posts Tagged ‘Twitter’


In order to survive in the modern era, companies must have a firm grasp of psychology, or at least of the kind of pseudo-psychology that Edward Bernays, immortalized as the father of PR, made widely available to marketers and advertisers. Bernays was an Austrian American who wove the ideas of Gustave Le Bon and Wilfred Trotter on crowd psychology with the psychoanalytical ideas of his uncle, Sigmund Freud, and ultimately asked, “If we understand the mechanism and motives of the group mind, is it not possible to control and regiment the masses according to our will without their knowing about it?”

Historically, companies have leveraged a number of psychological devices and theories to generate desire within their target demographics and audiences in order to sell more. Advertising seeks to engender strong positive feelings about a product or company while simultaneously leaving the audience feeling emptier for not owning the advertised product. The ability to pull this off is intensely powerful, and yet not as powerful as the ability to effect this reaction within the target demographic autonomously, spontaneously.

This is the accomplishment of the new realm of mobile technologies and apps such as Twitter, Facebook and Instagram. In effect, their breakthrough in psycho-marketing is the ability to make their product habit-forming, even addictive. Merriam-Webster defines addiction as a “compulsive need for and use of a habit-forming substance (or, we could say, product) characterized by tolerance and by well-defined physiological symptoms upon withdrawal.” Addiction is the new marketing goal precisely because its inherently dangerous, cyclical nature embodies both the need and the fulfillment, all encapsulated in one.

Compulsion and habit are the key words here. Marketers and advertisers drool when they see those words, because they are truly the Holy Grail of advertising. If they can create a condition in their target audience where deprivation of the product creates a state of near-pain for the user/consumer, they are guaranteed a captive customer, possibly for life.

This is precisely what Nir Eyal describes in his TechCrunch article, “The Billion Dollar Mind Trick.” Eyal outlines a couple of critical concepts, namely “internal triggers” and “desire engines”:

“When a product is able to become tightly coupled with a thought, an emotion, or a pre-existing habit, it creates an ‘internal trigger.’ Unlike external triggers, which are sensory stimuli, like a phone ringing or an ad online telling us to “click here now!” you can’t see, touch, or hear an internal trigger. Internal triggers manifest automatically in the mind and creating them is the brass ring of consumer technology.”

As Eyal points out, “We check Twitter when we feel boredom. We pull up Facebook when we’re lonesome. The impulse to use these services is cued by emotions.” He enumerates the current approach to creating internal triggers, labeling it the manufacturing of desire:

  • “Addictive technology creates ‘internal triggers’ which cue users without the need for marketing, messaging or any other external stimuli. It becomes a user’s own intrinsic desire.”
  • “Creating internal triggers comes from mastering the ‘desire engine’ and its four components: trigger, action, variable reward, and commitment.”

The “desire engine” Eyal refers to is merely a phrase describing the pre-determined “series of experiences designed to create habits…the more often users run through them, the more likely they are to self-trigger.” All of this is to say that, increasingly, and especially when it comes to mobile consumer technologies and apps, companies find that their economic and social value is a function of the strength of the habits they create within their user/customer base.
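Purely as a toy illustration, and not as Eyal’s actual framework or any real product’s code, here is a minimal sketch of how that loop might be modeled: each pass through trigger, action, variable reward, and commitment nudges up the probability that the user will self-trigger the next time. Every class name and number below is invented.

    import random

    # Toy model of the "desire engine" loop: trigger -> action -> variable reward -> commitment.
    # All parameters are invented for illustration; nothing here reflects a real product.
    class DesireEngine:
        def __init__(self):
            self.habit_strength = 0.1  # chance the user "self-triggers" without any external prompt

        def run_cycle(self, external_trigger: bool) -> bool:
            # 1. Trigger: an external cue (a notification, an ad) or an internal one (boredom, loneliness).
            triggered = external_trigger or random.random() < self.habit_strength
            if not triggered:
                return False
            # 2. Action: the user opens the app (always succeeds in this toy model).
            # 3. Variable reward: sometimes satisfying, sometimes not; the variability is the hook.
            reward = random.choice([0.0, 0.5, 1.0])
            # 4. Commitment: each rewarded cycle strengthens the habit, making self-triggering more likely.
            self.habit_strength = min(1.0, self.habit_strength + 0.05 * reward)
            return True

    engine = DesireEngine()
    for day in range(30):
        engine.run_cycle(external_trigger=(day < 5))  # external marketing nudges only in the first few days
    print(f"habit strength after 30 days: {engine.habit_strength:.2f}")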

Interesting, yes, but perhaps not entirely new. Michel Foucault (yes, I know I talk about him a lot here, but his work is endlessly relevant to the types of communications discussions we constantly engage in nowadays) explored the same idea in his investigation of “technologies of the self,” whereby his objective was:

 “to sketch out a history of the different ways in our culture that humans develop knowledge about themselves: economics, biology, psychiatry, medicine, and penology. The main point is not to accept this knowledge at face value but to analyze these so-called sciences as very specific ‘truth games’ related to specific techniques that human beings use to understand themselves.” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

Yet the concept dates back to the Greeks, “constituted in Greek as epimelesthai sautou, ‘to take care of yourself,’ ‘the concern with self,’ ‘to be concerned, to take care of yourself.’”

Foucault posited that there were four main “technologies:”

“(1) technologies of production, (2) technologies of sign systems, (3) technologies of power, and (4) technologies of the self” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

Clearly in this case what we’re focusing on is the technology of the self, “which permit individuals to effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality.” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

You would be hard-pressed to convince me that the bulk of apps available to us all on our mobile devices these days are not, in some way, designed to fulfill some narcissistic desire to know ourselves better. Whether it’s for fitness (calorie counters, pedometers, diet analyses, jogging analyses) or for social edification (how many people you know are around you, how many “friends” you have [Facebook], what you are doing right now [Twitter], how often you visit a place [FourSquare or Yelp]), many of these tools are intended to display a mirror image of ourselves and project it onto a social web and out to others. (Hell, iPhones now include a standard photo feature that allows you to use the phone as a literal mirror by staring into the front-facing camera.) But they are also intended to help us transform ourselves and make ourselves happier by making us skinnier, healthier, more social, more aware, more productive, etc.

The importance of this is that we have been fooled into thinking we are using these apps to learn more about ourselves, but the social sharing functionality proves that this is performative: we wouldn’t be doing it repeatedly unless there were a performance aspect built in, an audience waiting to view and comment on the information, providing continuous gratification. In other words, learning more about ourselves, then amplifying that knowledge out to an audience, has become habit-forming. We have become addicted to the performance of ourselves.

 “These four types of technologies hardly ever function separately, although each one of them is associated with a certain type of domination. Each implies certain modes of training and modification of individuals, not only in the obvious sense of acquiring certain skills but also in the sense of acquiring certain attitudes.” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

In this case, though Foucault was often very careful in his diction and a master of semiotics, what if we replace the word “attitudes” with “habits”? After all, Foucault is referring to these technologies of the self as dominating, as techniques which train and modify individuals, and a habit formed is demonstrably a tangible and acquired modification of human behavior. Later he elaborates further and speaks of “individual domination”:

“I am more and more interested in the interaction between oneself and others and in the technologies of individual domination, the history of how an individual acts upon himself, in the technology of self.”

I know quite a few people who would willingly and openly admit to the individual act of domination upon themselves that they perform on a compulsive basis by updating their Twitter feeds, updating the status on their Facebook accounts, uploading their latest photos to Instagram, and checking in on FourSquare. There is a reason that Googling “Is technology the new opiate of the masses?” garners page upon page of thoughtfully written and panicky editorials and blog posts. This is a newly acknowledged and little-resisted truth of our times: we are willing slaves to the ongoing performance of our selves.


First and foremost, and quite importantly for the purpose of this post: definitions of “Persona” vs. “Identity”:

Persona

  • : a character assumed by an author in a written work
  • : an individual’s social facade or front that especially in the analytic psychology of C. G. Jung reflects the role in life the individual is playing
  • : the personality that a person (as an actor or politician) projects in public
  • : a character in a fictional presentation (as a novel or play)

Identity

  • : the distinguishing character or personality of an individual : individuality
  • : the condition of being the same with something described or asserted

Crap, that actually wasn’t as helpful as I had hoped it would be…I feel more confused now than I did before.

Nevertheless, these definitions seem to point toward the fact that a “persona” is more often something performed, developed consciously by oneself, or performatively developed by someone else, whereas an “identity” is embedded and synonymous with a person’s actual character. For the sake of this entry, that is how we will distinguish between the two.

Moving on to THE POINT.

A while ago I tried to pitch a story to This American Life which had been inspired by the experiences of my friend; we’ll call him Jim. See, Jim was looking for a new job and applying at a few different companies. One day, reminded by a friend that he should be actively managing his online persona through Google search results, Jim Googled himself to see what came up when he searched for his full name.

The search results floored him. Jim was met with a cascade of search results about a man with his same name. There were pages with warnings posted by people claiming that a gentleman with Jim’s same name was a con man, that he had tricked them out of money, that he was a pathological liar, and not to trust him. The warnings described a man with a similar build, height, weight and general hair and eye color.

Jim freaked out (understandably, I think), because he was very well aware that any prospective employer would be Googling him to do a cursory background check, and if they were met with this barrage of information he might be weeded out of even an initial pool of job applicants. He was being misrepresented by someone he had never met, someone who, simply by sharing his name and a similar physical build, was effectively stealing his online identity. How can you combat that in this day and age?

To this day, Jim (luckily employed by now) has to include disclaimers in applications and emails and hope that employers and business partners will take his word that he is not “that Jim” when embarking on new ventures. If Jim weren’t already married, presumably this would also severely impact his dating and love life.

The story I wanted (and still want) This American Life to cover is this: what happens in the modern world when all of the other folks who use your name misrepresent and sometimes even defame your character online? In a modern era where so much of our persona is developed and managed online, how do we separate what is fake from what is real, and what happens when even our fabricated online personas take on a life of their own?

What do I mean by fabricated online personas? Well, is the life you represent on Facebook an accurate snapshot of what is really going on with you? One of my favorite questions to ask is why no one ever posts photos of themselves crying alone on a Friday night, because that does happen to people. It’s widely known that our online selves, or personas, generally skew toward happiness, success, beauty, and popularity rather than honestly depicting struggles, bad hair days, and loneliness.

And having control over how we are presented online is very important to most internet users, so much so that companies like www.reputation.com now exist to help you “control how you look on the internet.” Their claim, “People searching for you are judging you, too – defend yourself against digital discrimination with Reputation.com,” may seem contrived and fear-mongery, but it still taps into some very real concerns for people.

After all, our identities are very important to us, and the gadgets and devices we are using provide a mirror of our own selves which we project onto these technologies. In fact, Michel Foucault (remember our dear friend?) called these tools “Technologies of the Self” before the internet was a thing. According to my fascinating pal Wikipedia, Technologies of the Self are “the methods and techniques (‘tools’) through which human beings constitute themselves. Foucault argued that we as subjects are perpetually engaged in processes whereby we define and produce our own ethical self-understanding. According to Foucault, technologies of the self are the forms of knowledge and strategies that ‘permit individuals to effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality.’”

In other words, these days, technology and social media help us to develop our online personas, which end up very deeply affecting our real identities. See what I did there?

For example, if you’re one of the millions of people in the world with the Indian surname Patel, trying to get a unique but still relevant Gmail address must be murder at this point. You would hardly feel like the email address represented you if you were Patel627281939464528193947273484@gmail.com.

And what about the mayhem and madness that surrounded Facebook’s push to get its users to sign up for a unique direct URL to their profiles? Sure, maybe Tatianuh Xzanadu had no problems getting her direct URL with no competition, but for the rest of us, it was like an Oklahoma land run, or a crushing Black Friday sale, waiting for the clock to hit the magic time when we could hurriedly type in our first and last name and finally claim a personalized Facebook URL, a chance at allowing people to access the real me (as far as anyone’s Facebook profile actually does that).

This would all be complicated enough, except that these days not only are people with the same names being misjudged online for the behavior of others who share those names, but poor celebrities and famous authors are having their personas, online identities and even their styles co-opted. Take, for example, the gentleman who formerly tweeted as Christopher Walken under the handle “CWalken,” and who delighted thousands on Twitter by impersonating the idiosyncratic and gloomy actor in his tweets about everyday observations and occurrences.

The Wrap interviewed “CWalken” and described the Twitter feed thus:

“What’s great about the “CWalken” feed is that it sounds like Christopher Walken, yet it’s got the consistent tone and point of view that only a committed writer can achieve. “CWalken” reads as if the actor himself were emerging from a surreal haze a few times a day to note the stupidity, oddness, and weird beauty of the everyday world:”

And the mystery Tweeter, when interviewed, similarly made some really interesting points:

“The politics, tastes and observations are my own. That is — I am not trying to speak for Christopher Walken. I am simply borrowing his voice and reworking my words in his cadence. Some people crochet, I do this.”

It’s problematic because some celebrities feel that their identity and their reputation are at stake, that something they have spent a lifetime building has been stolen from them. But in some cases, this really is high art. As The Wrap’s author points out, the CWalken tweets were focused and really well written, probably much more so than Mr. Walken himself could have achieved. Alas, the “CWalken” account was eventually shut down because Twitter maintains a policy of cracking down on impersonator accounts.

However, other online persona impersonators have had similar success, such as the perennial favorite The Secret Diary of Steve Jobs, or one of my recent obsessions, “RuthBourdain,” in which Alice Waters was rumored to be anonymously tweeting as a combined persona of Ruth Reichl mashed with Anthony Bourdain. That little venture even earned its anonymous author a humor award.

I mean, that gets really complicated. At that point we would have a celebrity chef who is world renowned and celebrated in her own right assuming the persona of not just one, but two other luminaries in the food world as an outlet for her nasty, wry, humorous side.

One last example I just came across today introduces yet another new genre, the blog as Yelp review as famous author: check out Yelping with Cormac. This Tumblr blog assumes the writing style and occasional subject favorites of Pulitzer Prize-winning author and presumed hermit Cormac McCarthy in order to write Yelp-style reviews of well-known commercial establishments in the Bay Area. A fascinating concept, but here we have clearly gone completely down the persona-stealing online rabbit hole.

Where will the rabbit hole take us next?


Any student of communications worth his or her salt will have studied the famous Nixon-Kennedy presidential debates of 1960. Why? Because they were the first televised presidential debates, and they marked an inflection point in American politics, where hearts and minds were no longer won merely by talented rhetoricians and charming radio personalities, but increasingly by physical appearance and a demonstrated ease in front of a camera.

As the story goes, Nixon looked haggard and vaguely sinister on a good day, but on the date of the first of the four debates he would have with Kennedy, his physical appearance was worse than usual: “Nixon had seriously injured his knee and spent two weeks in the hospital. By the time of the first debate he was still twenty pounds underweight, his pallor still poor. He arrived at the debate in an ill-fitting shirt, and refused make-up to improve his color and lighten his perpetual ‘5:00 o’clock shadow.’” I think we can all imagine.

However, Kennedy’s appearance was another story, “Kennedy, by contrast, had spent early September campaigning in California. He was tan and confident and well-rested. ‘I had never seen him looking so fit,’ Nixon later wrote.”

Whether Kennedy’s handlers were more prescient about the impact of TV, or whether Kennedy simply lucked out, we may never know. What we do know is that Kennedy’s appearance on TV during that debate changed the path of American politics forever. A majority of Americans who listened to the debate solely via radio pronounced Nixon the winner. A majority of the over 70 million who watched the televised debate pronounced Kennedy the easy winner.

Are you beginning to see why this appeals to comms geeks? The suggestion that a newly introduced medium could so profoundly impact the perspectives of so many people in the context of a very high stakes popularity contest was tantalizing. It remains tantalizing today.

Fast forward 51 years to Obama conducting a town hall meeting streamed on Facebook, and to GOP presidential candidates using Twitter and Facebook metrics to potentially supplant traditionally collected polling information.

What would happen if you could use Twitter, Facebook or good old Google Analytics to accurately predict the outcome of the 2012 presidential election? Some growing social media analytics companies, such as Likester, are trying to do just that by measuring the uptick in popularity of pages and social networking presences. In fact, Likester accurately predicted this year’s American Idol winner way back in April.

But how scientific is this data, and what exactly is being measured? As Mashable reports, Likester mostly measures popularity and predicts winners based on the aggregation of “likes” on Facebook in concert with high-profile events. For the GOP debate, “The stand-out frontrunner was Mitt Romney, who ended the night with the greatest number of new Facebook Likes and the greatest overall Likes on his Page.” As we can see, Likester basically started the ticker right when the debate began and distinguished unique “likes,” those that occurred after the debate had started, from overall likes. In the end Romney gained 19,658 unique or new “likes” during the debate, for 955,748 total “likes,” representing a 2.06% increase in overall “likes” during and directly following the televised debate.

Likester reported, “Michelle Bachmann ranked second in the number of new Likes on her Facebook Page.” In numbers, that came out to 9,232 unique or new “likes” and 326,225 total, representing a 2.75% increase.
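As a back-of-the-envelope check (this is not Likester’s published methodology, which the coverage does not spell out), the reported figures can be run through two plausible definitions of “percent increase,” and the choice of baseline alone changes the answer:

    # Two plausible ways to turn "new Likes during the debate" into a percentage.
    # The figures are the ones reported above; which formula Likester actually used is not stated.
    candidates = {
        "Romney":   {"new_likes": 19_658, "total_after": 955_748},
        "Bachmann": {"new_likes": 9_232,  "total_after": 326_225},
    }

    for name, c in candidates.items():
        total_before = c["total_after"] - c["new_likes"]
        share_of_final = c["new_likes"] / c["total_after"]  # new Likes as a share of the final total
        growth_rate = c["new_likes"] / total_before         # classic percent increase over the pre-debate baseline
        print(f"{name}: share of final total = {share_of_final:.2%}, growth over baseline = {growth_rate:.2%}")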


Naturally, AdWeek threw its two cents into the discussion, arguing:

“Polling has always been an important element to any electoral bid, but now a new type of impromptu assessment is coming to the fore. Third parties, such as analytics startup Likester, are carving out a space for themselves by processing data that is instantaneously available.”

I’ll give you instantaneously available, but, again, how scientific is this? After all, no one seems to be taking into account what I would call the “hipster correlate”: the number of Facebook users who “liked” a Romney or Bachmann or Ron Paul page in a stab at hipster-ish irony, thereby proving to those who check their Facebook pages or read their status updates their outstanding skills of irony in becoming a fan of a Tea Partier’s web page. If we’re really doing statistical regressions, what’s the margin of error here, Likester?

Additionally, how closely does “liking” a Facebook page track real fidelity to a living, breathing politician? On my Facebook page I think I have “liked” mayonnaise, but if there were a vote between mayo and ketchup to determine which condiment would become the new official condiment of the U.S., would I necessarily vote mayo? That’s actually kind of a crap analogy, but you get what I mean.

Before we are forced to read more blogs and news articles (like this one!) pronouncing exit polls dead and Facebook and Twitter as the new polling media, I’d like to see a very solid research study conducted as to how closely becoming a fan of a political Facebook page correlates to Americans’ actual voting behavior. In other, more web-based marketing terms, what’s the voting conversion rate for political Facebook pages?

Has anyone seen anything like that?

In other words, please, social scientists and pollsters, show us whether yet another new medium is disrupting the way that Americans individually see and interact with their political candidates, and how that medium has begun to shape the way those political candidates are regarded by an American audience as a whole.
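For what it’s worth, the “voting conversion rate” being asked for could be defined in one line; the figures below are entirely made up, precisely because the study that would supply real ones doesn’t seem to exist yet.

    # Hypothetical "voting conversion rate" for a political Facebook page:
    # of the people who Liked the page, what fraction actually voted for the candidate?
    def voting_conversion_rate(likers_who_voted: int, total_page_likes: int) -> float:
        return likers_who_voted / total_page_likes

    # Invented example values -- no such data is cited in the post above.
    print(f"{voting_conversion_rate(412_000, 955_748):.1%}")  # -> 43.1%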


Just because Twitter is an American company, does that mean it doesn’t have to play by other countries’ laws when it becomes embroiled in legal cases involving free speech?

That’s exactly the sort of mess that Twitter finds itself in today in the U.K., where a “British soccer player has been granted a so-called super-injunction, a stringent and controversial British legal measure that prevents media outlets from identifying him, reporting on the story or even from revealing the existence of the court order itself” in order to avoid being identified by name in scandalous tweets. Unfortunately for the player, the super-injunction has been ineffective and “tens of thousands of Internet users have flouted the injunction by revealing his name on Twitter, Facebook and online soccer forums, sites that blur the definition of the press and are virtually impossible to police.”

But I would argue that what is being blurred here is not necessarily the definition of the press, but rather the physical borders of a country where it meets the nebulous nature of the net.

How do we reconcile the physical, geographical and legal boundaries in which we live with the boundless expanses of the internet? I’m sure many people would agree that the democratic (in that it’s arguably free, fair and participatory) nature of Twitter’s platform and mission is inherently American; that Twitter’s ‘Americanness’ is built into its very code. So how do you transplant an American messaging platform like Twitter into other countries and then expect it to be above, or fly below, the laws of another country?

“Last week…the athlete obtained a court order in British High Court demanding that Twitter reveal the identities of the anonymous users who had posted the messages.” But back in January of this year, as the New York Times reports, “Biz Stone, a Twitter founder, and Alex Macgillivray, its general counsel, wrote, ‘Our position on freedom of expression carries with it a mandate to protect our users’ right to speak freely and preserve their ability to contest having their private information revealed.’”

So what law should be followed in this case? According to the NYTimes, “Because Twitter is based in the United States, it could argue that it abides by the law and that any plaintiff would need to try the case in the United States, legal analysts said. But Twitter is opening a London office, and the rules are more complicated if companies have employees or offices in foreign countries.”

Yet our technologies and our corporations are very much U.S. representatives overseas. The overseas offices of Google, Microsoft, Twitter and the like are nearly tantamount to U.S. embassies abroad. These companies in large part bear the brunt of representing American ideals and encapsulate American soft power. Even the average Chinese person who may never encounter a flesh-and-blood American will most likely interact with multiple examples of American cultural goods in his or her lifetime, largely due to the global proliferation of our technologies and media. Which is why, in large part, Twitter and Google have been banned in China: too democratic for the Chinese government’s liking.

In his chapter (Chapter 1.4) of the Global Information Technology Report (GITR), Cesar Alierta of Telefonica argues that we are in the middle of “the fifth revolution.” The first revolution was the Industrial Revolution; then came steam power, then electricity, then oil, and now we are in the fifth, the information and communication technology revolution. He writes, “Each of these eras has entailed a paradigm shift, more or less abrupt or disruptive, which has led to profound changes in the organization of the economy, starting with individual businesses, and eventually, transforming society as a whole.”

What if we assume that, even if it’s not the original intent, the tacit intent of a technology is to become embedded in someone’s life until it’s nearly impossible to remember living before it? If America’s technologies are little carriers of soft-power democratic beliefs and practices, aren’t those beliefs becoming embedded as well? If so, really, where do we draw the line about the use of a technology in a country other than the one where it was invented?

And though this is indeed an example of a conflict between two very first-world countries (the U.S. and the U.K.), this tension may be one of the greatest barriers to ICT adoption in emerging and developing economies.

In their chapter (Chapter 1.2) of the 2011 Global Information Technology Report, Enrique Rueda-Sabater and John Garrity of Cisco Systems, Inc. argue that the “treatment of broadband networks…as basic infrastructure; the recognition that competition is one of the best drivers of technology adoption, and imaginative policies that facilitate access to spectrum and to existing infrastructure that can be shared by networks” are necessary preconditions to the accelerated adoption of information and communication technologies (ICT). However, the beliefs that underlie those preconditions, 1) that all citizens of a country deserve unlimited access to the internet as a basic human right, and 2) that competition (which can be read as capitalism here) is one of the best drivers of technology adoption, do not seem to be universal values.

Certainly the notion that unlimited access to the internet is a basic human right is fast gaining ground among the developed economies of the world. As Karim Sabbagh, Roman Friedrich, Bahjat El-Darwiche, and Milind Singh of Booz & Company write in their chapter (Chapter 1.3) on “Building Communities Around Digital Highways,” “In July 2010…the Finnish government formally declared broadband to be a legal right and vowed to deliver high-speed access (100 megabytes per second) to every household in Finland by 2015. The French assembly declared broadband to be a basic human right in 2009, and Spain is proposing to give the same designation to broadband access starting in 2011.”

But Finland and Spain are both democracies, and France is a republic with strong democratic traditions. Democracies tend to believe in transparency, accountability and the free dissemination of information, so naturally the adoption of technologies which put the ability to freely disseminate and consume information squarely in the hands of the people jibes with those beliefs. But that is not so in non-democratic societies. I would thus argue that some form of well-established democracy should also be considered a precondition for the accelerated adoption of ICT. And if a country has already heavily adopted and invested in ICT, as Britain has, then, as we have seen here, the accelerated deployment of ICT will also bring about accelerated petitioning for expanded democratic rights among its people.


The White House finally came forward last week with the decision not to circulate the graphic images that confirmed Osama Bin Laden’s death, and I believe I immediately heard people around the U.S. (and the world, perhaps?) breathe a mostly collective sigh of relief. Or was that just me?

It is a favorite pronouncement that we are now an image-driven culture, focused chiefly on video, photos, and graphics to learn, retain and discuss the world around us. This pronouncement is made, particularly, in the context of discussions about the RSS-ification of news and information, where all the news that’s fit to print is expected to fit into 140 characters.

See, as the thinking goes, our brains are attempting to consume so much more information than ever before, so the introduction of new forms of media and imagery (read: not text) will help our brains to better retain and render more realistic those discrete and fast-coming pieces of information.

Whatever the strategy for getting information to us, as consumers of information it is still worth fighting for the chance to use our own discretion when it comes to how we, as humans, want to digest our information. Often we seem to have no choice; the newspapers, site managers, TV and movie producers and editors do that for us. But when we are presented with the choice, many of us would still choose not to see graphic images of death and violence.

[I can already hear the devil on my shoulder wanting to advocate for his side of the story, so as an aside I will say that I do believe there is power in images. And I believe that things can be rendered more real in our everyday lives by seeing them, even if only through a photographer’s lens. That is often a good thing, particularly for the politically sheltered and/or apathetic masses. But I also believe that things can be too real, and hinder a person’s ability to move on with their life. Or images can be so real, yet so staggeringly outside the context of someone’s own experience, that they are unreasonably and ineffectively disturbing. I believe the release of images of OBL’s death would have had such an effect for many Americans.]

Which is why, I believe, so many Americans have keyed in on the photo taken by the White House photographer, and posted on the White House Flickr feed, of Obama’s staff watching the live feed of the raid in Pakistan.

This photo has become the focal and symbolic image of the moment that OBL was killed, and it has stirred many different reactions. For me the photo is staggering on a number of levels:

1) President Obama is not front and center.

2) The expression on Secretary of State Clinton’s face (whether she likes it or not).

3) The fact that we are experiencing the ultimate surveillance moment: through the eyes of someone who was watching the scene through a camera lens, we are watching those who are watching live footage of what was happening.

4) It is perfect voyeurism, but it is also intensely primal. We are observing the reactions of other human beings to an event we know we must also react to. In their reactions we search for our own feelings about the event, and we take cues.

Incredibly, in their recent Opinionator entry, Gail Collins and David Brooks brought up pretty much everything that I was thinking when I first saw this photo. It’s something I think everyone should take a look at, because there is so much to discuss within the limits of this image.

On a similar note, and related to my earlier post about the news of Bin Laden’s death and the role of Twitter in breaking that news, here are some outstanding digital images of the flow of information across the Twitter-verse in the hours preceding and following the White House announcement, care of the SocialFlow blog.

For those of you who are unfamiliar with the company, SocialFlow is a social media optimization platform that is used “…to increase engagement (clicks, re-tweets, re-posts and mentions) on Twitter. Our technology determines the optimal time to release the right Tweet based on when your audience is most receptive.”
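As a rough illustration of the general idea only (SocialFlow’s actual algorithm is proprietary and certainly more sophisticated), a toy version of “post when your audience is most receptive” could simply bucket past engagement by hour of day and pick the best-performing hour; the history below is invented.

    from collections import defaultdict

    # Invented history: (hour the tweet was posted, clicks + retweets + replies it earned)
    history = [(9, 120), (9, 95), (13, 210), (13, 180), (17, 60), (21, 140)]

    engagement_by_hour = defaultdict(list)
    for hour, engagement in history:
        engagement_by_hour[hour].append(engagement)

    # Pick the hour with the highest average engagement in the past.
    best_hour = max(engagement_by_hour, key=lambda h: sum(engagement_by_hour[h]) / len(engagement_by_hour[h]))
    print(f"Post around {best_hour}:00, based on average past engagement")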


Last night at about 8:45 pm PT I found out that Osama Bin Laden had been killed. Here’s how that went down: my brother picked up his iPhone, glanced through his Twitter feed and announced the news as we were waiting for the opening credits of an action movie we had come to see to close out our Sunday night.

It being Twitter, I suspended disbelief, but felt reasonably confident that the news was true, given the groundswell of information being tweeted about it. And then I moved on with my evening.

Is there anything wrong with that?

Yes. And, no.

Yes, because it was truly a momentous event. It was, in many ways, the culmination of 10 years of searching and frustration, made and broken political careers, physical demonstrations of strength and power, alarmed admissions of weakness and ignorance, aggression and intolerance, inner turmoil and acceptance, American tragedy and dark, dark American comedy. And all I did was continue to sit in my seat and watch a very sub-par movie.

No, because I knew that the next few days would unfurl themselves before me in a constant stream of information about his whereabouts for the last ten years, where he was killed, how he was killed, Obama’s thoughts on his death, everyone’s thoughts on his death, analyses of how this will affect the Presidential race, pronouncements of how this will affect Obama’s legacy in office, and general societal responses to the news of his death. And I would be there to read, watch, listen to, and ingest it all.

The thing about fast-breaking news these days is that it breaks, and it continues to break like a wave hitting the continental shelf over and over and over. This phenomenon gives modern news consumers time to digest that information from all chosen angles, from all chosen sources.

All of that, and all I’m really taking away from this news is that I am utterly relieved to see a contingent of contacts within my sphere who are conflicted about unabashedly cheering someone else’s death, even if that person is arguably the most hated man of the 21st century. This contingent includes my brother, a member of the U.S. Army Reserves, who was twice deployed to Iraq.

I continue to believe that the greatest American patriots in the world are those who continue to question, and- where fitting- condemn, the loss of life as a necessary price of freedom and security, and who query our government about whether the loss of life abroad is a necessary precondition for maintaining American democracy.

In related news, an obituary for Osama Bin Laden in the NYTimes? A poignant statement in the city that lost the most at his hands.


October 15, 2010- http://www.gather.com/viewArticle.action?articleId=281474978605059

Oh goody, as the New York Times reported on October 10th, Twitter has finally come up with a plan to make money. Only, it’s the old new plan, which is to say it’s the same plan as everyone else.

As Twitter’s Evan Williams stepped down to make room for Dick Costolo, who previously headed Twitter’s advertising program, as the new CEO, the tech industry remarked on how the shuffle represented Twitter’s renewed commitment to monetization.

As the New York Times reported, “Twitter’s startling growth — it has exploded to 160 million users, from three million, in the last two years — is reminiscent of Google and Facebook in their early days. Those Web sites are now must-buys for advertisers online, and the ad industry is watching Twitter closely to see if it continues to follow that path.”

But there still seems to be no real innovation in the advertising models of hi-tech companies from whom the world expects a great deal of innovation. Why are hi-tech social media and social news aggregation companies having such a hard time innovating with their monetization strategies?

At this point, each new social media platform that comes along seems to jump into the online advertising market that Google forged largely on its own. Now that Google has done the heavy lifting on education and we all speak and understand the language of “click-thru rates,” “impressions,” and “search engine optimization,” newcomers like Twitter don’t have to pay or do very much in order to enter this monetization space. Not coincidentally, it would seem that they aren’t doing very much at all to evolve it.
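Since the whole monetization conversation leans on that shared vocabulary, here is the basic arithmetic behind those terms: standard definitions, with invented example numbers.

    # The standard online-advertising metrics referred to above, with made-up example figures.
    impressions = 1_000_000          # times the ad (or promoted tweet) was shown
    clicks = 4_200                   # times someone clicked through
    cost = 8_000.0                   # what the advertiser paid, in dollars

    ctr = clicks / impressions       # click-through rate
    cpm = cost / impressions * 1000  # cost per thousand impressions
    cpc = cost / clicks              # effective cost per click
    print(f"CTR: {ctr:.2%}, CPM: ${cpm:.2f}, CPC: ${cpc:.2f}")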

As a result, the whole online ad framework is falling flat. After a few years of evangelizing for social media advertising and the use of new media platforms like Twitter and Hulu, are advertisers really making more money and seeing the benefits of these new media? It’s becoming an embarrassingly redundant question: “Yes, we know we are creating funny and entertaining media for our consumers to enjoy, but is it actually increasing sales?”

Interestingly, at this year’s gathering of the Association of National Advertisers, as the New York Times reported, a survey at the beginning of the opening session found that “marketers may still need some schooling on the dos and don’ts of social media. Asked to describe how its use has affected sales, 13 percent replied that they did not use social media at all. (Eleven percent said sales had increased a lot, 34 percent said sales increased ‘some’ and 42 percent said they had seen no change.)”

It would seem that media analysts are continuing to approach social media and search as a given element of any marketing strategy without any hard evidence as to why every company needs to integrate social media into its marketing strategy. Instead, without the numbers to make the case, analysts and marketers still discuss the virtues of earned media versus paid media, the value of eyeballs and impressions, and earned equity.

One of this year’s smashing social media success stories has a particular ability to make marketers foam at the mouth. 2010’s Procter & Gamble “smell like a man” campaign for Old Spice helped increase the brand’s followers on Twitter by 2,700%, to where they “now total almost 120,000.”

Marc Pritchard, global marketing and chief branding officer at Procter & Gamble, had his moment in the sun for what was, undoubtedly, the most high-profile and successful example of how modern brands can use social media to promote themselves. But in the coverage of Pritchard’s talks, there is little to no mention of how the campaign is actually impacting the company’s bottom line. Instead, there is this: “The currency the campaign has earned in social media has pushed it into the popular culture. Mr. Pritchard showed the audience a spoof that was recently introduced by Sesame Workshop in which Grover suggests that his young viewers ‘smell like a monster on Sesame Street.’”

But an internet meme does not a year-over-year increase in sales make. There is no mention of how an increase in followers on Twitter converts into a percentage increase in sales. It’s like an equation is missing, or somehow we have all misunderstood how to connect the dots. At the conference Joseph V. Tripodi, chief marketing and commercial officer for Coca-Cola, was interviewed, and his only contribution to this dilemma was to discuss how social media can sometimes save a company money on promotions through viral videos: “It cost less than $100,000 to produce the video, he added, demonstrating that ‘you don’t need huge amounts of money to engage with consumers.’” However, savings on a marketing budget also do not a sales increase make.
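To make the complaint concrete, the “missing equation” would look something like the sketch below: a chain of conversion factors linking follower growth to incremental sales. Every factor and every number here is hypothetical; the point is exactly that nobody at the conference supplied real values for them.

    # A purely hypothetical chain from new Twitter followers to incremental revenue.
    # None of these factors were reported by P&G or anyone else quoted above.
    new_followers = 115_000      # roughly the growth implied by the 2,700% / ~120,000 figures
    see_a_campaign_tweet = 0.20  # assumed fraction of followers who actually see a given tweet
    purchase_rate = 0.01         # assumed fraction of those viewers who buy because of it
    avg_purchase = 7.50          # assumed average spend on a body-wash purchase, in dollars

    incremental_sales = new_followers * see_a_campaign_tweet * purchase_rate * avg_purchase
    print(f"Implied incremental sales: ${incremental_sales:,.0f}")  # ~$1,725 -- which is why the meme alone proves little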

Refreshingly, one of the conference’s keynote speakers, Mark Baynes, vice president and global chief marketing officer at the Kellogg Company, did acknowledge the missing link in the social media to profits equation by proclaiming, “In God we trust; the rest of you bring data.”


September 22, 2010- http://www.gather.com/viewArticle.action?articleId=281474978539098

Hegel famously proclaimed that “history is a dialectic,” that is, a dialogue between people who may hold differing views but who seek to arrive at a basis of truth by debating together. In other words, history has no single discernible truth, but more closely approaches the overall goal of “truth” through discussion among all of the voices of history and their personal accounts of what happened.

This quotation of Hegel’s is often cited in the context of discussions about the literary canon, or the “Western canon,” as some refer to it. The term “Western canon” is used to denote the selection of books, music, art and general culture that have most influenced the shape of Western civilization and culture over time.

As demonstrated, a simple search on Wikipedia for either of these terms will tell you much about what they are. What Wikipedia doesn’t explicitly tell us, however, is that it is also holding the record of how the modern canon is determined, and how the truth of history is being determined by the myriad voices which contribute to it every day.

A recent Bits blog post from the New York Times mentioned the trail of edits that the Internet provides to anyone who is looking for it. James Bridle, founder of BookTwo, is particularly interested in what the future of literature holds, but also in how that discussion is playing out and how we can track where the discussion has been. In one of his recent entries Bridle points out that although an article on Wikipedia may tell a specific story, the edits show a process of opinion, correction, and the potential biases of each writer. In this respect Wikipedia, and every constantly updated website, represents an archive of evolving information over time. What interests Bridle is the offer of two distinct stories: one that is front-facing to the reader and one that reveals the behind-the-scenes editing, writing and creative process.

To illustrate the point, Bridle selected the Iraq War entry as his example from the Wikipedia canon and had the entire edit history of the article published in physical volumes. In his entry, Bridle writes, “This particular book — or rather, set of books — is every edit made to a single Wikipedia article, The Iraq War, during the five years between the article’s inception in December 2004 and November 2009, a total of 12,000 changes and almost 7,000 pages.” Bridle notes that the entire set comes to twelve volumes, which nearly approximates the size of a traditional encyclopedia.

Which brings us to the favorite comparison of Wikipedia and your parents’ Encyclopedia. Is one or the other more reliable? Who gets to decide what is a part of the overall Western canon? Shouldn’t we all be alarmed by a process in which a child may be permitted to contribute to an online encyclopedia which many now claim is an expert source?

In fact, Bridle’s point reminds us of a standard strategy employed to defend the credibility of Wikipedia and its process against its would-be detractors. The strategy is to cite a story central to the process under which the Oxford English Dictionary was compiled in the 19th century. Simon Winchester’s book, The Professor and the Madman: A Tale of Murder, Insanity, and the Making of The Oxford English Dictionary, details a Jekyll-and-Hyde story of the brilliant but clinically insane Dr. W.C. Minor, who provided thousands of entries to the editors of the OED while he was committed at the Broadmoor Criminal Lunatic Asylum. In other words, if a madman may contribute significantly to a tome of the English language which is still very much the authoritative text today, why can a perfectly sane pre-teen not contribute to the modern canon of information about frogs, Major League Baseball, or global warming? Should we be preventing anyone from contributing to the ever-evolving conversation about what is truth and what is history?

As sites such as Twournal (which offers the narcissistic boon of publishing your very own tweets through time in print form) begin to proliferate, each of us can possess our very own piece of the modern web canon, whether in print or online. As Twournal describes itself, “Over time tweets can tell a story or remind us of moments. In 20 years we don’t know whether twitter will be around but your Twournal will be. Who knows maybe your great grandkids will dig it up in the attic in the next century.”

That means that each of us now has access to print a credible-looking book of our own (often misspelled) musings and meanderings as representative of history, according to us. Yet in the absence of a forum in which people can engage with our Tweeted observations, there’s no real dialectic. It therefore seems safe to conclude that Hegel would have preferred Wikipedia to Twitter, or to your Twournal.


September 22, 2010 – http://www.gather.com/viewArticle.action?articleId=281474978538594

Sneaking off to smoke cigarettes. Experimenting with alcohol. Sexual experimentation. These are all Hollywood hallmarks and symbols of adolescent youth in the United States. Americans think of the teenage years as the time to get out into the world, try new (and perhaps forbidden) things, and become an adult in the process. But what if, in the process, the adolescent becomes a felon?

Every once in a while a high profile case involving teens and the internet hits the web, and parents start to squirm. Generally, however, these cases highlight how the Net may be perilous to the teen, not how the teen may be perilous to the Net. While in some situations those warnings are legitimate, it may be time for parents to begin to consider another way in which their child’s internet use may be perilous to their future: hacking.

Today’s teens are more tech savvy than any other generation, and the generation that follows them will be all the more savvy. According to a February 2010 study conducted by the Pew Research Center, “Internet use is near ubiquitous among teens and young adults. In the last decade, the young adult internet population has remained the most likely to go online. Over the past 10 years, teens and young adults have been consistently the two groups most likely to go online, even as the internet population has grown and even with documented larger increases in certain age cohorts (e.g. adults 65 and older).”

Thus it is no longer sufficient to think of every teen as wide-eyed and naive about the varied functions and uses of the Net. Many teens are way ahead of the rest of us, hacking and writing code, doing their own programming and creating the next generation’s tools. However, the same teenage urges that drive them to experiment with drugs and sex (those strong hits of hormones and a sense of invincibility) today also lead them to commit crimes on the web.

Just this week a 17-year-old Australian teen caused a “massive hacker attack on Twitter which sent users to Japanese porn sites and took out the White House press secretary’s feed.” The teenager, Pearce Delphin, simply revealed a security flaw in Twitter’s code and publicized it. The flaw was then exploited by hackers who subsequently wreaked havoc on Twitter’s user base of more than 100 million for nearly five hours. When asked why he would do such a thing, Delphin reportedly replied, “I did it merely to see if it could be done … that JavaScript really could be executed within a tweet…I had no idea it was going to take off how it did. I just hadn’t even considered it.”

But the story gets better: before the Associated Press could even hypothesize what the danger of this hack might have been, 17-year-old Delphin came through with it first. “Delphin said it could have been used to ‘maliciously steal user account details.’” He told the reporters, “The problem was being able to write the code that can steal usernames and passwords while still remaining under Twitter’s 140 character tweet limit.”

Likewise, in 2008 another 17-year-old, this one from Pennsylvania, admitted to crashing Sony’s PlayStation site after being banned for cheating in a game called SOCOM U.S. Navy SEALs. By intentionally infecting the Sony site with a virus, the teenage honors student was able to crash the site for 11 days in November 2008. In that case the kid got lucky: rather than pursue the case as a grand jury investigation, the authorities decided to let the teen’s local juvenile court handle the charges. In the end, the 11th-grade student was judged delinquent on charges of unlawful use of a computer, criminal use of a computer, computer trespassing and the distribution of a computer virus.

Somewhat humorously, Net security companies like Symantec and McAfee have pages dedicated to teen use and abuse of the Net. Symantec’s is titled “The Typical Trickery of Teen Hackers,” and addresses questions such as “I discovered that my teenager had figured out my computer password and logged in, resetting the parental controls we had installed. How did this happen?” In its recent 2010 study, McAfee reports that “85 percent of teens go online somewhere other than at home and under the supervision of their parents, nearly a third (32 percent) of teens say they don’t tell their parents what they do while they are online, and 28 percent engage with strangers online. The survey results should serve as a wake-up call for many parents.”

While teenage tomfoolery and trickery is generally regarded as humorous (thanks, Hollywood) and as a coming-of-age tendency, the trouble begins when a teen’s future is jeopardized because he or she has not developed a sense for the moral and ethical implications of their actions on the web. Because hacking is not as tangible as, say, stealing a T-shirt at the mall, it is harder for teens to grasp how a few keystrokes can be considered criminal. Yet it is up to today’s and tomorrow’s parents to put in the extra effort to educate teens about how their activities online may jeopardize their extremely valuable futures.


September 02, 2010- http://www.gather.com/viewArticle.action?articleId=281474978491387

Smart advertisers and marketers know that part of building awareness of a brand and attachment to a brand these days involves allowing the consumer to feel as if they are a part of the brand, and the brand is a part of them.

The most innovative way to elicit this feeling among increasingly jaded consumers is to allow them to participate in the way a product is sold to them, or presented to an overall greater audience. In other words, to integrate elements of “interactive or collaborative advertising” into their overall marketing strategy.

Some of this is revolutionary stuff, and it is still regarded as too dangerous by most traditional advertising, marketing and brand agencies the world over. Ostensibly, what it means is giving consumers permission to experiment with, and command some control of, a brand. If I may go down a yellow brick road of an analogy, this is no less than pulling back the Wizard’s curtain and revealing the small man behind it, subsequently allowing the consumer to revel in his or her discovery of the small man and, as a result of said revelation, being amply empowered to get Dorothy back from Oz to Kansas him- or herself.

But when it works, it works so, so well.

Let us take, for example, the Old Spice Guy. If you’ve never seen or heard of Isaiah Mustafa, or any of the YouTube response videos that the company launched in response to tweets it was receiving, then you must be dead or on a remote desert island with no smartphone. This ad campaign, which has incorporated TV ads, Twitter, Facebook and YouTube so well, has dominated most of this year’s buzz conversations.

How about something more recent? Tipp-Ex is a correction fluid brand (think Wite-Out) which recently launched a YouTube video ad campaign that allows the viewer to determine the end of the story. The viewer first watches the setup video, in which a guy camping with his friend is alerted that a bear is right behind him and is urged by his friend, who is videotaping the event, to shoot the bear. The viewer is at this juncture permitted to decide whether the man should shoot the bear or not. After making the decision, the viewer is redirected to a video in which the camper urges the viewer to rewrite the story.

The whole thing is highly reminiscent of “advertising and design factory” CP+B’s groundbreaking 2004 “Subservient Chicken” campaign for Burger King, where visitors to the website could type in any command and a man dressed in a chicken suit on a webcam would perform the requested action. So while Tipp-Ex’s overall concept isn’t new, their delivery is.

Largely what’s interesting about interactive or collaborative advertising is that it nicely straddles the line between earned media and paid media. A company pays to create the initial ad, but then, by virtue of the fun of interacting with it and collaborating on it, consumers share and continue to virally promote that ad, which is where your earned media begins to kick in.

These concepts aren’t exactly brand new, but their integration into basic marketing strategies is, and increasingly, larger companies are beginning to take notice of how much buzz can be generated through earned media without having to pay for every step of it. In addition, not every company has experienced skyrocketing revenues as a result of investing in interactive advertising, so the science here, and how to master it, is still relatively new.

One thing’s for sure, however: it makes advertising a lot more fun from the consumer’s perspective.