Posts Tagged ‘Facebook’


In order to survive in the modern era, companies must have a strong grasp of psychology, or at least of the type of pseudo-psychology that Edward Bernays, immortalized as the father of PR, made widely available to marketers and advertisers. Bernays was an Austrian American who wove the ideas of Gustave Le Bon and Wilfred Trotter on crowd psychology together with the psychoanalytical ideas of his uncle, Sigmund Freud, and ultimately asked, “If we understand the mechanism and motives of the group mind, is it not possible to control and regiment the masses according to our will without their knowing about it?”

Historically, companies have leveraged a number of psychological devices and theories to generate desire within their target demographics and audiences in order to sell more. Advertising seeks to engender strong positive feelings about a product or company while simultaneously leaving the audience feeling emptier for not owning the advertised product. The ability to pull this off is intensely powerful, and yet not as powerful as the ability to effect this reaction within the target demographic autonomously, spontaneously.

This is the accomplishment of the new realm of mobile technologies and apps such as Twitter, Facebook and Instagram. In effect, their breakthrough in psycho-marketing is the ability to make their product habit-forming, even addictive. Merriam-Webster defines addiction as: compulsive need for and use of a habit-forming substance (or, we could say, product) characterized by tolerance and by well-defined physiological symptoms upon withdrawal. Addiction is the new marketing goal precisely because its inherently dangerous, cyclical nature embodies both the need and the fulfillment- all encapsulated in one.

Compulsion and habit are the key words here. Marketers and advertisers drool when they see those words, because they are truly the Holy Grail of advertising. If they can create a condition in their target audience where deprivation of the product creates a state of near-pain for the user/consumer, they are guaranteed a captive customer, possibly for life.

This is precisely what Nir Eyal describes in his TechCrunch article, “The Billion Dollar Mind Trick.” Eyal outlines a couple of critical concepts, namely “internal triggers” and “desire engines”:

“When a product is able to become tightly coupled with a thought, an emotion, or a pre-existing habit, it creates an ‘internal trigger.’ Unlike external triggers, which are sensory stimuli, like a phone ringing or an ad online telling us to “click here now!” you can’t see, touch, or hear an internal trigger. Internal triggers manifest automatically in the mind and creating them is the brass ring of consumer technology.”

As Eyal points out, “We check Twitter when we feel boredom. We pull up Facebook when we’re lonesome. The impulse to use these services is cued by emotions.” He enumerates the current approach to creating internal triggers, labeling it “the manufacturing of desires”:

  • “Addictive technology creates ‘internal triggers’ which cue users without the need for marketing, messaging or any other external stimuli. It becomes a user’s own intrinsic desire.”
  • “Creating internal triggers comes from mastering the ‘desire engine’ and its four components: trigger, action, variable reward, and commitment.”

The “desire engine” Eyal refers to is merely a phrase that describes the pre-determined “series of experiences designed to create habits…the more often users run through them, the more likely they are to self-trigger.” All of this is to say that, especially when it comes to mobile consumer technologies and apps, companies increasingly find that their economic and social value is a function of the strength of the habits they create within their user/customer base.
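
If it helps to see the loop spelled out, here is a toy sketch in Python- my own illustration, not code from Eyal- of those four components. The detail worth noticing is that the reward is variable: the user doesn’t know whether this particular check of the app will pay off, which is precisely what makes running the loop compulsive.

```python
import random

def run_desire_engine(cycles: int) -> int:
    """Toy model of Eyal's loop: trigger -> action -> variable reward -> commitment."""
    commitment = 0
    for _ in range(cycles):
        # 1. Trigger: starts external (a notification), becomes internal (boredom, loneliness).
        # 2. Action: the user opens the app and checks for activity.
        # 3. Variable reward: sometimes there's a new like or mention, sometimes nothing.
        rewarded = random.random() < 0.4
        # 4. Commitment: every pass deepens the user's investment; a reward deepens it more.
        commitment += 2 if rewarded else 1
    return commitment

print(run_desire_engine(cycles=30))  # commitment accumulates whether or not each check pays off
```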

Interesting, yes, but perhaps not entirely new. Michel Foucault (yes, I know I talk about him a lot here, but his work is endlessly relevant to the types of communications discussions we constantly engage in nowadays) explored this same idea in his investigation of “technologies of the self,” whereby his objective was:

 “to sketch out a history of the different ways in our culture that humans develop knowledge about themselves: economics, biology, psychiatry, medicine, and penology. The main point is not to accept this knowledge at face value but to analyze these so-called sciences as very specific ‘truth games’ related to specific techniques that human beings use to understand themselves.” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

Yet the concept dates back to the Greeks, “constituted in Greek as epimelesthai sautou, ‘to take care of yourself,’ ‘the concern with self,’ ‘to be concerned, to take care of yourself.’”

Foucault posited that there were four main “technologies”:

“(1) technologies of production, (2) technologies of sign systems, (3) technologies of power, and (4) technologies of the self” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

Clearly in this case what we’re focusing on is the technology of the self, “which permit individuals to effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality.” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

You would be hard-pressed to convince me that the bulk of apps available to us all on our mobile devices these days are not, in some way, designed to fulfill some narcissistic desire to know ourselves better. Whether it’s for fitness (calorie counters, pedometers, diet analyses, jogging analyses) or for social edification (how many people you know are around you, how many “friends” you have [Facebook], what you are doing right now [Twitter], how often you visit a place [FourSquare or Yelp]), many of these tools are intended to display a mirror image of ourselves and project it onto a social web and out to others. (Hell, iPhones now include a standard photo feature that allows you to use the phone as a literal mirror by staring into the front-facing camera.) But they are also intended to help us transform ourselves and make ourselves happier by making us skinnier, healthier, more social, more aware, more productive, etc.

The importance of this is that we have been fooled into thinking we are using these apps to learn more about ourselves, but the social sharing functionality proves that this is performative- we wouldn’t be doing it repeatedly unless there was a performance aspect built-in, an audience waiting to view and comment on the information, providing continuous gratification. In other words, learning more about ourselves, then amplifying that knowledge out to an audience has become habit-forming. We have become addicted to the performance of ourselves.

 “These four types of technologies hardly ever function separately, although each one of them is associated with a certain type of domination. Each implies certain modes of training and modification of individuals, not only in the obvious sense of acquiring certain skills but also in the sense of acquiring certain attitudes.” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

In this case, though Foucault was often very careful in his diction and a master of semiotics, what if we replace the word “attitudes” with “habits”? After all, Foucault is referring to these technologies of the self as dominating, as techniques which train and modify individuals, and a habit formed is demonstrably a tangible and acquired modification of human behavior. Later he elaborates, speaking of “individual domination”:

“I am more and more interested in the interaction between oneself and others and in the technologies of individual domination, the history of how an individual acts upon himself, in the technology of self.”

I know quite a few people who would willingly and openly admit to the individual act of domination they perform upon themselves on a compulsive basis by updating their Twitter feeds, updating their Facebook statuses, uploading their latest photos to Instagram, and checking in on FourSquare. There is a reason that Googling “Is technology the new opiate of the masses?” garners page upon page of thoughtfully written and panicky editorials and blog posts. This is a newly acknowledged and little resisted truth of our times- we are willing slaves to the ongoing performance of our selves.


In my opinion, discussions of identity- what it means and what constitutes it- have never been more interesting. With the web as a mirror for each of us, as well as a playland of impersonation, self-invention, reincarnation and improvisation, the platforms and dimensions where human identity is played out have never been more abundant or easily accessible.

I was reading my New Yorker last night and came across an article about Aadhaar, an enormous project on a scale never before attempted, one that could have far-reaching implications for generations to come. The project aims to officially identify and document the existence of every Indian living in India by collecting certain biometrics from each individual and issuing each person an ID number tied to those specific biometric features.

As summarized here:

“Aadhaar, launched by Nandan Nilekani, a genial software billionaire, intends to create a national biometric database ten times larger than the world’s next-largest biometric database.”

One of the stated aims of the project is to “help reduce the extraordinary economic distances between those who have benefitted from India’s boom of the past two decades and those who have not.”

Some of the stunning details of this project that I read about raised very fundamental issues of modern identity as being tied to a specific nation-state. For instance:

“India has no equivalent of Social Security numbering, and just thirty-three million Indians, out of 1.2 billion, pay income tax, and only sixty million have passports. The official opacity of hundreds of millions of Indians hampers economic growth and emboldens corrupt bureaucrats.”

It’s just incredible to think that in this day and age, not everyone who is born into the world is documented as even being alive. It’s incredible, as an American, to consider a situation where your government has no idea that you exist, and you survive outside the limits of its systems.

Though that all sounds like fodder for a summer anarchist action movie trailer, the reality is that, though the lines and boundaries between physical country borders and cultures are slowly disappearing as a result of widespread globalization and commercialization, our identities are still very strongly tied to the countries in which we are born, or in which we live. And I should clarify that I am not saying that if you are born in America, you automatically strongly identify as an American. I am saying that if you are born in America and hate the U.S., you are still defined by the country in which you were born, or in which you live, even if you hate it. You are defined in reference to being a part of it, however tenuous that connection is. But if you are born in India and your country does not know you, nor acknowledge that you exist, how is your identity derived?

And if you do not really exist in the eyes of the government, and are not automatically considered a citizen in having been born there, what right does the government have to come and claim you later, as India and the Unique Identification Authority of India (the government agency that is directing this program) are attempting to do with Aadhaar?

It is ironic in this instance that India, so woefully behind in identifying its own citizens due to an outsized human population, is poised to leapfrog other more-developed countries’ systems of social security and citizen identification. Aadhaar’s system is based on biometrics, or “methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits.” Interestingly, Wikipedia’s entry describes two different applications of biometric information: “In computer science, in particular, biometrics is used as a form of identity access management and access control. It is also used to identify individuals in groups that are under surveillance.” Oh Foucault, why do you plague us so?!

Still, in this case, it is hard not to question the intense nature of these methods of identification and the database of information they will generate. Concerns about privacy have, of course, been raised:  “Alongside arguments about social policy, there is also some Indian disquiet about Aadhaar’s threat to privacy.”

A Times of India article also mentioned the potential for exploitation of this info:

“For Nandan Nilekani, the chairman of Unique Identification Authority of India, the challenge now is not just to roll out one lakh or more Aadhaar numbers a day, but to create an ecosystem for players to build applications on top of this identity infrastructure. Now, Nilekani has been negotiating with the Reserve Bank of India to allow banks to treat Aadhaar number as the only document for opening an account. In a free-wheeling interview with Shantanu Nandan Sharma, Nilekani talks about life after Aadhaar when a villager would be able to use a micro-ATM in his locality, or a migrant from Bihar would be able to flash out his number in Mumbai as an identity proof.”

So the ability to identify people, truly down to their physical core, can be both exploitative and empowering. As the New Yorker article claims, “If the project is successful, India would abruptly find itself at the forefront of citizen-identification technology, outperforming Social Security and other non-biometric systems.” Of course it would: it’s the physical data collection analog of what Facebook has been doing all along.

In fact, the arguments for embarking upon this venture to issue ID numbers to each Indian are manifold, yet the one that seems to float upward most often is the assertion that this system of identification will help cut down on abuse of government resources and correct inaccurate snapshots of how many people are affected by official policy.

Interesting that this is the very reason that Facebook famously insists upon banning pseudonyms on its ever-popular social platform. As Gawker puts it, “the idea that anonymity or multiple identities leads inexorably to a cesspool of abuse, cyberbullying, and spam is Facebook’s strongest argument for a monolithic online identity—one they come back to again and again in defending their controversial real name policy.”

This is written in the context of an article highlighting Chris Poole, shadowy head of the online meme-maker 4chan, and his remarks at the recent Web 2.0 conference that “true identity is prismatic,” and that online mega-sites like Facebook are “eroding our options” when they lock us into a single identity. In reality, Poole argues, humans are not defined in only one way or another, but by multiple simultaneously performed identities.

Gawker writes, “At this week’s Web 2.0 conference, Poole criticized Facebook’s real name, one profile-per-person policies. Facebook are, he said, ‘consolidating identity and making people seem more simple than they really are… our options are being eroded.’

True identity is ‘prismatic,’ according to Poole. You want to be able to present a different identity in different contexts, and be able to experiment without risking a permanent stain on your identity—something Facebook is making increasingly impossible as it colonizes everything from games, to blog comments, to your favorite music service.”

Synthesizing these ideas and arguments for a minute: the very idea that a retinal scan can prove someone was born somewhere, and that those two elements of identity correlate to provide a human identity, is interesting enough to a modern mind. How about when we suppose that each of us- though we may physically have blue, green, or brown eyes; blond, black, brunette or red hair; have been born in Bali, Mexico, Singapore, or Tunisia- is actually a shapeshifter, constantly adapting our personality and persona to best complement a new group of people or a new context? Are these quests to nail down our identity through increasingly scientific pursuits even worth their salt if we are each many people simultaneously?

No matter how we think our own identities are constituted and shaped, whether we believe we are multiple people or just one, the quest to collect information and data about how we behave, who we are, and what we look like is always evolving. Just recently Facebook filed paperwork to form a Facebook Political Action Committee (PAC) that “will fund candidates who support ‘giving people the power to share.’”

According to the Gawker story about the PAC, it’s “dedicated to ‘mak[ing] the world more open and connected,’ a spokesman tells The Hill. It will be funded by Facebook employees. Meanwhile, Facebook’s lobbying budget is metastasizing; the company spent $550,000 so far this year, compared to $350,000 all of last year.”

Will a new era of online conmen and women emerge as a result of this movement to collect identity data? Is privacy officially dead? How do you choose to identify yourself, what do you identify with?


I have recently become obsessed with analytics. I just love the idea of using solid data to make informed choices toward action. It’s the ultimate voyeurism. After all, the internet is a window through which you can peer to monitor other people’s activity. It’s also seductive, instant gratification- I post a document and then check in just an hour later to see how many people have clicked on it, how long they spent reviewing it, where they went after they read it, where they came from before reading it…

The power that platforms like Google Analytics and Omniture offer excites me in ways I shouldn’t even publicize- the possibility that all of that information about online actions and behavior is at my fingertips to exploit in order to be more productive, more effective is intoxicating. This is probably why it’s a good thing that I don’t work in marketing or advertising.
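
To give a flavor of what this looks like in practice, here is a minimal sketch- with invented event data, not a real Google Analytics or Omniture API- of the per-document questions described above: how many people clicked, how long they stayed, and where they came from.

```python
from collections import Counter

# Each event: (visitor_id, seconds_on_page, referrer) - invented sample data
events = [
    ("v1", 45,  "twitter.com"),
    ("v2", 210, "google.com"),
    ("v3", 12,  "facebook.com"),
    ("v4", 95,  "google.com"),
]

clicks = len(events)
avg_time = sum(seconds for _, seconds, _ in events) / clicks
referrers = Counter(ref for _, _, ref in events)

print(f"clicks: {clicks}, average time on page: {avg_time:.0f}s")
print("top referrers:", referrers.most_common(2))
```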

But apparently the harvesting, sorting, and exploitation of human information no longer stop with marketers and advertisers- now the government wants in.

According to an article in yesterday’s NY Times, “social scientists are trying to mine the vast resources of the Internet — Web searches and Twitter messages, Facebook and blog posts, the digital location trails generated by billions of cellphones” to predict the future. This is all being conducted in the name of the U.S. Government, or in this case, the Intelligence Advanced Research Projects Activity unit of the Office of the Director of National Intelligence.

Why? Because they believe “that these storehouses of ‘big data’ will for the first time reveal sociological laws of human behavior — enabling them to predict political crises, revolutions and other forms of social and economic instability, just as physicists and chemists can predict natural phenomena.”

Remember our dear friend Michel Foucault, who opined on systems of surveillance in modern society? He just rolled over so many times in his grave he’s now a taquito. But putting the panopticon aside for a moment, let us instead turn to “chaos theory” to underline why this whole venture isn’t necessarily a very good idea.

Chaos theory, as a discipline, studies:

“the behavior of dynamical systems that are highly sensitive to initial conditions, an effect which is popularly referred to as the butterfly effect.”

The “butterfly effect theory” is basically this:

Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible in general. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable.
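
A toy demonstration of that sensitivity, using nothing more exotic than the standard logistic map in its chaotic regime: two trajectories that start one part in ten billion apart bear no resemblance to each other within a few dozen steps.

```python
def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list:
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 4.0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # perturb the start by one part in ten billion

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.6f}")
# The system is fully deterministic, yet by step ~40 the two runs have
# diverged completely: determinism does not buy long-term predictability.
```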

Yes, if this is ringing a bell, it’s because you’ve heard of the anecdote the theory is named for, whereby a hurricane’s formation occurred because a distant butterfly had flapped its wings several weeks before. Ridiculous, but it does vividly illustrate the point that the entire globe is a system, and there are infinite factors within that system interacting every day to produce outcomes- and needless to say, these factors are not all diligently recorded in Brooke Shields’ Twitter stream.

Ever since analytics, Facebook, and Twitter broke onto the human information scene, the embedded hubris of men has convinced us that if we’re just smart enough to design a program to parse all of this information, then all of our inane yet determined recordings of our daily details will finally mean something- that they will be useful!

Right? Wrong.

The mashed potatoes are just mashed potatoes. If you want to see anything in the figurative mashed potatoes, then see this: the Tower of Babel, people.

“Tower of Babel?” you say? Yes. The Tower of Babel. My favorite of all biblical references (we all have one, right? Right?).

Need a quick brush-up? OK!

In the story of the Tower of Babel, from Genesis, “a united humanity of the generations following the Great Flood, speaking a single language and migrating from the east, came to the land of Shinar, where they resolved to build a city with a tower ‘with its top in the heavens…lest we be scattered abroad upon the face of the Earth.’ God came down to see what they did and said: ‘They are one people and have one language, and nothing will be withholden from them which they purpose to do.’ So God said, ‘Come, let us go down and confound their speech.’ And so God scattered them upon the face of the Earth, and confused their languages, and they left off building the city, which was called Babel ‘because God there confounded the language of all the Earth’” (Genesis 11:5-8).

In other words, setting aside chaos theory’s conclusion that all of the world’s data is basically worthless, unreliable crap- this “big data eye in the sky” can and will never be.

First, because, without God’s intervention, we are perfectly great at getting in our own way, thankyouverymuch.

For example, the NY Times article cites IARPA’s claim that “It will use publicly accessible data, including Web search queries, blog entries, Internet traffic flow, financial market indicators, traffic webcams and changes in Wikipedia entries.”

About that, the U.S. Government would do well to recall the response to every single privacy change that Facebook has ever made about user data.

Also, the public’s responses to the Patriot Act.

Also, the public response to News Corp’s recent phone hacking scandal.

I could go on. The point is, I don’t think folks will accept the government’s efforts to exploit the aggregation of their online and publicly collected information in order to predict when we might all come down with whooping cough.

The second problematic claim: “It is intended to be an entirely automated system, a ‘data eye in the sky’ without human intervention.” Errrr…what about all of that human-generated information? Isn’t that, um, human intervention?

I recently had the absolute pleasure of hearing Stephen J. Dubner- author of Freakonomics and creator or host of every other program, show, or book that came along with it- speak at a conference. He gave an excellent and very compelling lecture on the dangers of relying too much on “self-reported data.”

His point is that, for industries or disciplines where data in large part determines future strategy and action, a little outside consulting and collection is merited. Self-reported data is, by virtue of the fact that humans are involved, problematic when it comes to accuracy.

This means that every tweet, Facebook update and comment flame war on a review site should be read and collected with a massive grain of Kosher salt. It is hard to imagine how the government would calculate this unreliability into its system through error analysis and standard deviation. Suffice it to say, there is still much work to be done on human reported data, sentiment analysis and social statistics before we could get anywhere close to sorting this all out in any meaningful fashion.
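
For what it’s worth, the kind of correction Dubner is gesturing at can be sketched in a few lines. This is purely illustrative- the sample size, sentiment rate, and misreporting rates below are all made up- but it shows the sort of adjustment any “data eye in the sky” would need before trusting self-reported inputs.

```python
import math

n = 10_000                 # tweets sampled (invented)
observed_positive = 0.62   # fraction self-reporting a positive sentiment (invented)

# Hypothetical assumption: people over-report positives 15% of the time
# and under-report them 5% of the time.
false_positive_rate = 0.15
false_negative_rate = 0.05

# observed = true*(1 - fnr) + (1 - true)*fpr, solved for the true rate:
true_positive = (observed_positive - false_positive_rate) / (1 - false_positive_rate - false_negative_rate)

# Standard error of the raw proportion (ignores the correction's own uncertainty)
se = math.sqrt(observed_positive * (1 - observed_positive) / n)

print(f"observed: {observed_positive:.3f} +/- {1.96 * se:.3f} (95% CI)")
print(f"bias-corrected estimate: {true_positive:.3f}")
```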

Luckily, as the NY Times reports in the article, not everyone is convinced this is even worthwhile:

“‘I’m hard pressed to say that we are witnessing a revolution,’ said Prabhakar Raghavan, the director of Yahoo Labs, who is an information retrieval specialist. He noted that much had been written about predicting flu epidemics by looking at Web searches for ‘flu,’ but noted that the predictions did not improve significantly on what could already be found in data from the Centers for Disease Control and Prevention.”

So, though I myself am drinking the cherry Kool-Aid of acting and strategizing based on measured results from analytical data, I feel the U.S. Government is seriously overstepping its bounds on this one- both in terms of infringing on people’s data rights and in terms of outpacing the world’s statistical abilities when applied to cultural data.

Hit me in the comments if you have thoughts of your own on the matter…


First and foremost, and quite importantly for the purpose of this post: definitions of “Persona” vs. “Identity”:

Persona

  • : a character assumed by an author in a written work
  • : an individual’s social facade or front that especially in the analytic psychology of C. G. Jung reflects the role in life the individual is playing
  • : the personality that a person (as an actor or politician) projects in public
  • : a character in a fictional presentation (as a novel or play)

Identity

  • : the distinguishing character or personality of an individual : individuality
  • : the condition of being the same with something described or asserted

Crap, that actually wasn’t as helpful as I had hoped it would be…I feel more confused now than I did before.

Nevertheless, these definitions seem to point toward the fact that a “persona” is more often something performed, consciously developed by oneself, or performatively developed by someone else, whereas an “identity” is embedded and synonymous with a person’s actual character. For the sake of this entry, that is how we will distinguish between the two.

Moving on to THE POINT.

A while ago I tried to pitch a story to This American Life, inspired by the experiences of my friend- we’ll call him Jim. See, Jim was looking for a new job and applying at a few different companies. One day, reminded by a friend that he should be actively managing his online persona through Google search results, Jim Googled himself to see what came up when he searched for his full name.

The search results floored him. Jim was met with a cascade of search results about a man with the same name. There were pages of warnings posted by people claiming that a gentleman with Jim’s name was a con man, that he had tricked them out of money, that he was a pathological liar, and that he was not to be trusted. The warnings described a man with a similar build, height, weight and general hair and eye color.

Jim freaked out (I think, understandably), because he was very well aware that any prospective employer would be Googling him to do a cursory background check, and if they were met with this barrage of information he might be weeded out of even a beginner pool of job applicants. He was being framed by someone he had never met, and who, due only to sharing the same name and a similar physical build, was stealing his online identity. How can you combat that in this day and age?

To this day, Jim (luckily employed by now) has to include disclaimers in applications and emails and hope that employers and business partners will take his word that he is not “that Jim” when embarking on new ventures. If Jim weren’t already married, presumably this would also severely impact his dating and love life.

The story I wanted (and still want) This American Life to cover is this: what happens in the modern world when all of the other folks who use your name misrepresent and sometimes even defame your character online? In a modern era where so much of our persona is developed and managed online, how do we separate what is fake from what is real, and what happens when even our fabricated online personas take on a life of their own?

What do I mean by fabricated online personas? Well, is the life you represent on Facebook an accurate snapshot of what is really going on with you? One of my favorite questions to ask is why no one ever posts photos of themselves crying alone on a Friday night- because that does happen to people. It’s widely known that our online selves, or personas, generally skew toward happiness, success, beauty, and popularity rather than honestly depicting struggles, bad hair days, and loneliness.

And having control over how we are presented online is very important to most internet users- so much so that companies like www.reputation.com now exist to help you “control how you look on the internet.”  Their claim, “People searching for you are judging you, too – defend yourself against digital discrimination with Reputation.com” may seem contrived and fear-mongery, but it still taps into some very real concerns for people.

After all, our identities are very important to us, and the gadgets and devices we are using provide a mirror of our own selves which we project onto these technologies. In fact, Michel Foucault (remember our dear friend?) called these tools “Technologies of the Self” before the internet was a thing. According to my fascinating pal Wikipedia, Technologies of the Self are “the methods and techniques (‘tools’) through which human beings constitute themselves. Foucault argued that we as subjects are perpetually engaged in processes whereby we define and produce our own ethical self-understanding. According to Foucault, technologies of the self are the forms of knowledge and strategies that ‘permit individuals to effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality.’”

In other words, these days, technology and social media help us to develop our online personas, which end up very deeply affecting our real identities. See what I did there?

For example, if you’re one of the world’s millions of people surnamed Patel, trying to get a unique but still relevant Gmail address must be murder at this point. You would hardly feel like the email address represented you if you were Patel627281939464528193947273484@gmail.com.

And what about the mayhem and madness that surrounded Facebook’s push to get its users to sign up for a unique direct URL to their profiles? Sure, maybe Tatianuh Xzanadu had no problems getting her direct URL with no competition, but for the rest of us, it was like an Oklahoma land run, or a crushing Black Friday sale, waiting for the clock to hit the magic time when we could hurriedly type in our first and last name and finally claim a personalized Facebook URL, a chance at allowing people to access the real me (as far as anyone’s Facebook profile actually does that).

This would all be complicated enough, except that these days not only are people being misjudged online for the behavior of others with the same name, but poor celebrities and famous authors are having their personas, online identities and even their styles co-opted. Take, for example, the gentleman who formerly tweeted as Christopher Walken under the handle “CWalken,” and who delighted thousands on Twitter by impersonating the idiosyncratic and gloomy actor in his tweets about everyday observations and occurrences.

The Wrap interviewed “CWalken” and described the Twitter feed thusly:

“What’s great about the ‘CWalken’ feed is that it sounds like Christopher Walken, yet it’s got the consistent tone and point of view that only a committed writer can achieve. ‘CWalken’ reads as if the actor himself were emerging from a surreal haze a few times a day to note the stupidity, oddness, and weird beauty of the everyday world.”

And the mystery Tweeter, when interviewed, similarly made some really interesting points:

“The politics, tastes and observations are my own. That is — I am not trying to speak for Christopher Walken. I am simply borrowing his voice and reworking my words in his cadence. Some people crochet, I do this.”

It’s problematic because some celebrities feel that their identity and their reputation are at stake, that something they have spent a lifetime building has been stolen from them. But in some cases, this really is high art. As The Wrap author points out, the CWalken tweets were focused and really well-written, probably much more so than Mr. Walken himself could have achieved. Alas, the “CWalken” account was eventually shut down because Twitter maintains a policy of cracking down on impersonator accounts.

However, other online persona impersonators have had similar success, such as the perennial favorite The Secret Diary of Steve Jobs, or one of my recent obsessions, “RuthBourdain,” where Alice Waters was rumored to be anonymously tweeting as a combined persona of Ruth Reichl mashed with Anthony Bourdain. That little venture even earned its anonymous author a humor award.

I mean, that gets really complicated. At that point we would have a celebrity chef, world renowned and celebrated in her own right, assuming the persona of not just one but two other luminaries in the food world as an outlet for her nasty, wry, humorous side.

One last example I just came across today introduces yet another new genre- blog as Yelp review as famous author: check out Yelping with Cormac. This Tumblr blog assumes the writing style and occasional subject favorites of Pulitzer Prize-winning author and presumed hermit Cormac McCarthy in order to write Yelp-style reviews of well-known commercial establishments in the Bay Area. A fascinating concept, but here we have clearly gone completely down the persona-stealing online rabbit hole.

Where will the rabbit hole take us next?


Nielsen just dropped its Q3 2011 Social Media Usage Report, and some of the stats here are pretty interesting.

At a Glance:

  • Social networks and blogs continue to dominate Americans’ time online, now accounting for nearly a quarter of total time spent on the Internet
  • Social media has grown rapidly – today nearly 4 in 5 active Internet users visit social networks and blogs
  • Americans spend more time on Facebook than they do on any other U.S. website
  • Close to 40 percent of social media users access social media content from their mobile phone
  • Social networking apps are the third most-used among U.S. smartphone owners
  • Internet users over the age of 55 are driving the growth of social networking through the Mobile Internet
  • Although a larger number of women view online video on social networks and blogs, men are the heaviest online video users overall, streaming more videos and watching them longer
  • 70 percent of active online adult social networkers shop online, making them 12 percent more likely to shop online than the average adult Internet user
  • 53 percent of active adult social networkers follow a brand, while 32 percent follow a celebrity
  • Across a snapshot of 10 major global markets, social networks and blogs reach over three-quarters of active Internet users
  • Tumblr is an emerging player in social media, nearly tripling its audience from a year ago



Any student of communications worth his or her salt will have studied the infamous Nixon-Kennedy Presidential debates of 1960. Why? Because they were the first ever televised presidential debates, and they marked an inflection point in American politics, where hearts and minds were not won merely by talented rhetoricians and charming radio personalities, but increasingly by physical appearances and a demonstrated ease in front of a camera.

As the story goes, Nixon was ugly and evil-looking normally, but on the date of the first of the four debates he would have with Kennedy, his physical appearance was worse than usual: “Nixon had seriously injured his knee and spent two weeks in the hospital. By the time of the first debate he was still twenty pounds underweight, his pallor still poor. He arrived at the debate in an ill-fitting shirt, and refused make-up to improve his color and lighten his perpetual ‘5:00 o’clock shadow.’” I think we can all imagine.

However, Kennedy’s appearance was another story, “Kennedy, by contrast, had spent early September campaigning in California. He was tan and confident and well-rested. ‘I had never seen him looking so fit,’ Nixon later wrote.”

Whether Kennedy’s handlers were much more prophetic about the impact of TV, or whether Kennedy just lucked out, we may never know. What we do know is that Kennedy’s appearance on TV during that debate changed the path of American politics forever. A majority of Americans who listened to the debate solely via radio pronounced Nixon the winner. A majority of the over 70 million who watched the televised debate pronounced Kennedy the easy winner.

Are you beginning to see why this appeals to comms geeks? The suggestion that a newly introduced medium could so profoundly impact the perspectives of so many people in the context of a very high stakes popularity contest was tantalizing. It remains tantalizing today.

Fast forward 51 years to Obama conducting a Town Hall meeting streamed on Facebook, and to GOP Presidential candidates using Twitter and Facebook metrics to potentially supplant traditionally collected polling information.

What would happen if you could use Twitter, Facebook or good old Google Analytics to accurately predict the outcome of the 2012 Presidential Election? Some growing social media analytics companies such as Likester are doing just that by measuring the uptick in popularity of pages and social networking presences. In fact, Likester accurately predicted this year’s American Idol winner way back in April.

But how scientific is this data, and what exactly is being measured? As Mashable reports, Likester mostly measures popularity and predicts winners based on the aggregation of “likes” on Facebook in concert with high-profile events. For the GOP debate, “The stand-out frontrunner was Mitt Romney, who ended the night with the greatest number of new Facebook Likes and the greatest overall Likes on his Page.” As we can see, Likester basically started the ticker right when the debate began and distinguished unique “likes”- those that occurred after the debate had started- from overall likes. In the end Romney gained 19,658 unique or new “likes” during the debate, out of 955,748 total “likes,” representing a 2.06% increase in overall “likes” during and directly following the televised debate.

Likester reported, “Michele Bachmann ranked second in the number of new Likes on her Facebook Page.” In numbers, that came out to 9,232 unique or new “likes” out of 326,225 total, representing a 2.75% increase.
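
A quick back-of-the-envelope check of those figures (the reporting doesn’t say whether the quoted totals are pre- or post-debate, so both readings of “percent increase” are shown- and note that neither exactly reproduces the 2.75% quoted for Bachmann):

```python
# Figures as quoted above from Likester's debate-night numbers
candidates = {
    "Romney":   {"new_likes": 19_658, "total_likes": 955_748},
    "Bachmann": {"new_likes": 9_232,  "total_likes": 326_225},
}

for name, c in candidates.items():
    share_of_total = c["new_likes"] / c["total_likes"]                       # new / final total
    growth_over_base = c["new_likes"] / (c["total_likes"] - c["new_likes"])  # new / pre-debate base
    print(f"{name}: new likes as share of total = {share_of_total:.2%}, "
          f"growth over pre-debate base = {growth_over_base:.2%}")
```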


Naturally, AdWeek threw their two cents into the discussion, arguing:

“Polling has always been an important element to any electoral bid, but now a new type of impromptu assessment is coming to the fore. Third parties, such as analytics startup Likester, are carving out a space for themselves by processing data that is instantaneously available.”

I’ll give you instantaneously available, but, again, how scientific is this? After all, no one seems to be taking into account what I would call the “hipster correlate”: the number of Facebook users who “liked” a Romney or Bachmann or Ron Paul page in a stab at hipster-ish irony, thus proving to those who check their Facebook pages or read their status updates their outstanding skills of irony in becoming a fan of a Tea Partier’s web page. If we’re really doing statistical regressions here, what’s the margin of error, Likester?

Additionally, how closely can we attach the fidelity of someone who “likes” a Facebook page to a living, breathing politician? On my Facebook page I think I have “liked” mayonnaise, but if there were to be a vote between mayo and ketchup to determine which condiment would become the new official condiment of the U.S., would I necessarily vote mayo? That’s actually kind of a crap analogy, but you get what I mean.

Before we are forced to read more blogs and news articles (like this one!) pronouncing exit polls dead and Facebook and Twitter as the new polling media, I’d like to see a very solid research study conducted as to how closely becoming a fan of a political Facebook page correlates to Americans’ actual voting behavior. In other, more web-based marketing terms, what’s the voting conversion rate for political Facebook pages?
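
No such study exists as far as I know, so every number below is invented, but the metric itself would be trivial to compute once you could match a page’s fans to voter records:

```python
def voting_conversion_rate(fans_matched_to_voter_rolls: int,
                           fans_who_voted_for_candidate: int) -> float:
    """Fraction of a candidate's Facebook fans who actually voted for that candidate."""
    return fans_who_voted_for_candidate / fans_matched_to_voter_rolls

# Invented example: 50,000 fans matched to voter records, 21,500 of whom
# turned out and voted for the candidate they "liked."
rate = voting_conversion_rate(50_000, 21_500)
print(f"voting conversion rate: {rate:.1%}")  # 43.0%
```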

Has anyone seen anything like that?

In other words, please, social scientists and pollsters, show us whether yet another new medium is disrupting the way that Americans individually see and interact with their political candidates, and how that medium has begun to shape the way those political candidates are regarded by an American audience as a whole.


Ooohhh ho ho! This one is good. Really, really good, people.

We interrupt our analysis of the 2011 Global Information Technology Report to give you news of some gossipy tech-rivalry backstabbing.

What do you get when you take one of the biggest powerhouse PR firms in the world and plug it in between two of the most influential global technology companies? Modern info wars, people. Modern information warfare!

As Dan Lyons wrote in his Daily Beast report, over the last week or so word got out that Burson-Marsteller had been retained to pitch an anti-Google PR campaign that urged credible news outlets to investigate claims that Google was invading people’s privacy.

Word got out because Burson “offered to help an influential blogger write a Google-bashing op-ed, which it promised it could place in outlets like The Washington Post, Politico, and The Huffington Post.” The offer, it appears, was turned down by blogger Chris Soghoian who then publicized the emails BM sent him after they refused to reveal their patron.

Next, “USA Today broke a story accusing Burson of spreading a ‘whisper campaign’ about Google ‘on behalf of an unnamed client,’” and after that, it was revealed that Facebook was the crooked, Whispering Wizard behind the curtain.

This is the kind of stuff that makes comms geeks like me drool! PR, search and social networking combined in one story?

So let’s break down the elements that make this so juicy. First, for Facebook to be accusing anyone else of being flippant or irresponsible about user privacy is ridiculous. Plain ridiculous. When your founder and CEO is Mr. “Privacy is Dead,” you cannot take that position. Period.

Second, it’s so interesting to see Facebook getting upset about Google doing what it was invented to do, i.e. cull information from every relevant source on the net and organize it in a meaningful way to those searching for it. For Facebook to think that it would be immune to the reach of the Google information engine’s grasp is delusional. In essence, the crux of Facebook’s whole problem with this situation lies herein: “just as Google built Google News by taking content created by hundreds of newspapers and repackaging it, so now Google aims to build a social-networking business by using that rich user data that Facebook has gathered.”

Third, I love how Lyons cuts through all of that and gets down to the brass tacks: “The clash between Google and Facebook represents one of the biggest battles of the Internet Age. Basically, the companies are vying to see who will grab the lion’s share of online advertising.” Yup.

He continues, “Facebook has 600 million members and gathers information on who those people are, who their friends are, and what they like. That data let Facebook sell targeted advertising. It also makes Facebook a huge rival to Google.” There I actually don’t agree with him, because of what I see as their divergent relative scopes.

Although Facebook has done a remarkable job of positioning itself as a competitor to Google in the eyes of the internet public, it’s just not remotely possible. It is a David and Goliath story, where Goliath wins hands down, and then, laughing about squishing little David, goes outside to have a margarita in the sun.

Facebook’s scope started out much too small to later tack and take on the search giant. Facebook wanted to provide an exclusive network online where people could share information about themselves with other people. Google began as a creature that wanted to dominate the world and all of its information, and has proven how badly it wants to by successfully venturing into myriad other arenas. Google aims to “organize the world’s information,” whereas Facebook’s stated goal is to…wait, what is Facebook’s stated goal? A cursory search came up with this article from the Observer about Facebook’s mission statement, which apparently started as “Facebook helps you connect and share with the people in your life,” and has now, rather tellingly, become “Facebook’s mission is to give people the power to share and make the world more open and connected.” Interesting.

But back to the matter at hand- there’s no doubt that Google has performed so well in other arenas that they are well positioned now to really take on the social angle. And as Lyons points out, they have already begun, “Last month, Google CEO and co-founder Larry Page sent out a memo telling everyone at Google that social networking was a top priority for Google—so much so that 25 percent of every Googler’s bonus this year will be based on how well Google does in social.” That may be the first sound of the bugle in Google’s hunt for Facebook’s market share that should play out over the course of the next few years. But if this was Facebook’s “shot across the bow” in that race, then it has made them look, well, ridiculous.

Fourth, I find it interesting how Facebook took down some of Burson-Marsteller’s credibility with it. In politics, usually when a smear campaign is run, the focus of criticism for having done so falls largely upon the candidate himself or herself- and discussions generally center on their morals or ethics for having chosen to go that route. Occasionally the blame falls on the chief campaign manager for having persuaded them to do so, but generally not. In this case BM seems to have taken a lot of the heat for attempting to carry out orders under a condition of anonymity.

This political angle raises a few questions. Namely, in an era when civic engagement is diminishing by the minute for a largely apathetic American audience, are huge corporations fighting the new political battles for our attention? It’s safe to say that large technology corporations such as Microsoft, Apple, Google and Facebook are much more relevant and identifiable to your average American than the 2008 class of Presidential candidates. With this new era of political and business landscapes converging, will the political and business practices of smear tactics converge as well?


The fallacy that social media platforms such as Facebook provide “two-way communication,” or a “virtual dialogue,” is getting a day in the sun today, following President Obama’s Town Hall at Facebook headquarters yesterday. While media enthusiasts and modern-day communications professionals choose to see Facebook as the future of interactive social media due to live streaming capabilities, instant messaging, Q&A mechanisms, and the ability to cull an audience of thousands, yesterday’s Town Hall event proved that nothing beats a physically present audience.

As the SFGate (SF Chronicle) article declared, “Despite the promise that President Obama’s first Facebook town hall would open a new level of two-way communication with his constituents, social-networking technology didn’t add much to the conversation.”

In all, the President answered eight questions, a few of which were asked by the physically present audience of Facebook employees, and ignored hundreds which were posted by the thousands of virtual attendees. As the SFGate article quotes, “Cynthia Spurling posted: ‘What a joke Facebook! So glad you had this town hall for your employees. The Ask Question button is a joke!’”

As a President of the people, and as a politician campaigning for re-election, why would he do such a thing?

Well, how much time do you have? How about:

A) A politician is always trained to play to the flesh and blood right in front of him or her, because he or she can see the audience’s eyes and expressions and is trained, as an orator, to digest that physical information and use it to sway an audience one way or another. But hell, the normal human reaction is to play to the live audience right in front of you, so that’s not saying much.

B) The physical audience was a group of employees of Facebook, a cutting-edge technology company that employs young, top-tier people from all over the country, meaning most of them are equipped with at least a bachelor’s degree, if not a master’s. And historically, studies have suggested that a higher level of attained education generally correlates with a more liberal standpoint among Americans.

C) Facebook HQ is located in California, a very liberal state. Thus the President is keenly aware that the average person in the room will be more aligned with his own political standpoints and the standpoints of his party. Knowing that, and knowing he can field their softball questions, why would he cater to the wildcard attendees from other states?

Having said all this, you probably won’t believe it, but I should disclose that I am a big Obama fan. I mean, a BIG OBAMA FAN. But this is just common sense. What I find interesting about it from the communication point of view is not the choices that were made by his team to keep him on the “safe” side of rhetoric, but how surprised people seem to be that he basically ignored the online audience.

Yes, we have become a very virtualized population of individuals, often more comfortable with interacting with screens and mobile devices when given the choice between that and a real person. But the actual act of speaking publicly has not changed much. A skilled orator thrives off of the energy he or she receives back from an audience, and the computer, iPad, iPhone, or Android screens don’t offer any love back.

As I’m wrapping this up, I need to mention something a theater professor of mine once said; it has stayed with me every day of my life. He told us, “Don’t ever agree to appear on stage with babies, small children or animals. They will upstage you every time. The difference is their authenticity of emotion, of movement, of reaction. The second you step on stage with them, you have already lost the audience’s attention to their absolutely authentic behavior, which no actor can match.”

How does this correlate to the fallacy of interactivity, as proven by yesterday’s Facebook Town Hall? The same rule, it would seem, applies: don’t ever agree to attend a webcast or live-streamed event if you know there will be people in the room, physically, with the performer or speaker. As the virtual audience, you will always lose. The physical audience will upstage you every time.


October 19, 2010- http://www.gather.com/viewArticle.action?articleId=281474978616790

Remember that scene in the film Back to the Future where Marty McFly realizes that in the photo he carries of his family, he is fading from existence because of the events of the past not transpiring as they should? As a result, he faces the possibility that the shape of his family will change forever? Well, as it turns out, that’s not necessarily impossible.

At least, not according to one of the lead designers and developers at Mozilla: Aza Raskin, Creative Lead for Firefox. During his keynote speech at the University of Michigan School of Information, Raskin claimed that “the human brain’s predictable fallibility leaves us susceptible to the creation of false memories by brand marketers through retroactive product placement into our photos posted on Facebook and other social networks,” and his assertions are getting a lot of coverage. Raskin, only 27 years old, is one of Mozilla’s most talented innovators, and thus his arguments are by no means falling upon deaf ears. In essence, he’s predicting that social networks will modify our uploaded photos to include product placements and thereby modify our memories.

Specifically addressing the advertising and marketing potential involved in this ploy, Raskin claimed, “We will have memories of things we never did with brands we never did. Our past actions are the best predictor of our future decisions, so now all of a sudden, our future decisions are in the hands of people who want to make money off of us.”

During the talk, to bolster his cautionary predictions Raskin touched upon neurological research into memories and cited the Hollywood blockbuster Inception, which addressed the future potential to tap into and manipulate dreams and memories. This concept of subliminal advertising was also recently addressed in a viral video created by UK illusionist Derren Brown, “Subliminal Advertising,” where the practice of advertising is turned on its head when two high-end advertisers are manipulated into spontaneously generating a pre-determined pitch for a product.

Raskin’s keynote came at an unfortunate time for Facebook, which this week is once again suffering intense scrutiny for its privacy practices. As the New York Times argued, “When you sign up for Facebook, you enter into a bargain…At the same time, you agree that Facebook can use that data to decide what ads to show you.” Yet it was Mark Zuckerberg, the much publicized chief of Facebook, who this week apologized to his users for overly complicated site settings and acknowledged that some app developers on its site shared identifying information about users with advertisers and Web tracking companies.

However, as the New York Times reports, “Facebook has grown so rapidly, in both users and in technical complexity, that it finds it increasingly difficult to control everything that happens on its site.” If you consider that Facebook still claims just over 1,700 employees, it seems unlikely that in the next few years the social media Goliath will grow rapidly enough to expand its advertising model to modify users’ uploaded content such as photos and videos. Nor is it entirely clear why it would want to do such a thing, given how infrequently users tend to revisit their photos even weeks after posting them.

On the other side of the U.S., U.C. Berkeley professor and privacy expert Deirdre Mulligan had this to say about Facebook: “This is one more straw on the camel’s back that suggests that Facebook needs to think holistically not just about its privacy policies, but also about baking privacy into their technical design.”

In the meantime, perhaps we should all pop some ginkgo biloba and back up the current versions of our photos- just in case.


October 15, 2010- http://www.gather.com/viewArticle.action?articleId=281474978605059

Oh goody, as the New York Times reported on October 10th, Twitter has finally come up with a plan to make money. Only, it’s the old new plan, which is to say it’s the same plan as everyone else.

As Twitter’s Evan Williams stepped down to make room for Dick Costolo, who previously headed Twitter’s advertising program, as the new CEO, the tech industry remarked on how the shuffle represented Twitter’s renewed commitment to monetization.

As the New York Times reported, “Twitter’s startling growth — it has exploded to 160 million users, from three million, in the last two years — is reminiscent of Google and Facebook in their early days. Those Web sites are now must-buys for advertisers online, and the ad industry is watching Twitter closely to see if it continues to follow that path.”

But there still seems to be no real innovation in the advertising models of hi-tech companies from whom the world expects a great deal of innovation. Why are hi-tech social media and social news aggregation companies having such a hard time innovating with their monetization strategies?

At this point, each new social media platform that comes along seems to jump into the online advertising market that Google forged largely on its own. Now that Google has done the heavy lifting on education and we all speak and understand the language of “click-thru rates,” “impressions,” and “search engine optimization,” newcomers like Twitter don’t have to pay or do very much in order to enter this monetization space. Tellingly, it would seem that they aren’t doing very much at all to evolve it.
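
For readers who never sat through that education, the vocabulary boils down to a little arithmetic. The numbers here are invented, but the definitions are the standard ones:

```python
# The ad-market vocabulary Google taught everyone, as arithmetic (invented numbers)
impressions = 1_200_000          # times an ad was shown
clicks = 8_400                   # times it was clicked
cost = 4_200.00                  # dollars spent on the campaign

ctr = clicks / impressions       # click-thru rate
cpc = cost / clicks              # cost per click
cpm = cost / impressions * 1000  # cost per thousand impressions

print(f"CTR: {ctr:.2%}, CPC: ${cpc:.2f}, CPM: ${cpm:.2f}")
```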

As a result, the whole online ad framework is falling flat, and after a few years of evangelizing for social media advertising and the use of new media platforms like Twitter and Hulu, are advertisers really making more money and seeing the benefits of these new media? It’s becoming an embarrassingly redundant question- “yes, we know we are creating funny and entertaining media for our consumers to enjoy, but is it actually increasing sales?”

Interestingly, at this year’s gathering of the Association of National Advertisers, as the New York Times reported, a survey at the beginning of the opening session found that “marketers may still need some schooling on the dos and don’ts of social media. Asked to describe how its use has affected sales, 13 percent replied that they did not use social media at all. (Eleven percent said sales had increased a lot, 34 percent said sales increased ‘some’ and 42 percent said they had seen no change.)”

It would seem that media analysts are continuing to approach social media and search as a given element of any marketing strategy, without any hard evidence as to why every company needs to integrate social media into its marketing strategy. Instead, without the numbers to make the case, analysts and marketers still discuss the virtues of earned media versus paid media, the value of eyeballs and impressions, and earned equity.

One of this year’s smashing social media success stories has a particular ability to make marketers foam at the mouth: 2010’s Procter & Gamble “smell like a man” campaign for Old Spice helped increase the brand’s followers on Twitter by 2,700%, to where they “now total almost 120,000.”
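
Quick arithmetic on that figure, assuming “2,700% increase” means growth on top of the original base: the implied pre-campaign following was surprisingly small.

```python
final_followers = 120_000  # "now total almost 120,000"
growth = 27.0              # a 2,700% increase means the base grew 27x on top of itself

starting_base = final_followers / (1 + growth)
print(f"implied pre-campaign followers: {starting_base:,.0f}")  # ~4,286
```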

Marc Pritchard, global marketing and chief branding officer at Procter & Gamble, had his moment in the sun for what was, undoubtedly, the most high-profile and successful example of how modern brands can use social media to promote themselves. But in the coverage of Pritchard’s talks, there is little to no mention of how the campaign is actually impacting the company’s bottom line. Instead, there is this: “The currency the campaign has earned in social media has pushed it into the popular culture. Mr. Pritchard showed the audience a spoof that was recently introduced by Sesame Workshop in which Grover suggests that his young viewers ‘smell like a monster on Sesame Street.’”

But an internet meme does not a year-over-year increase in sales make. There is no mention of how an increase in followers on Twitter converts into a percentage increase in sales. It’s like an equation is missing, or somehow we have all misunderstood how to connect the dots. At the conference, Joseph V. Tripodi, chief marketing and commercial officer for Coca-Cola, was interviewed, and his only contribution to this dilemma was to discuss how social media can sometimes save a company money on promotions through viral videos: “It cost less than $100,000 to produce the video,” he added, demonstrating that “you don’t need huge amounts of money to engage with consumers.” However, savings on a marketing budget also do not a sales increase make.

Refreshingly, one of the conference’s keynote speakers, Mark Baynes, vice president and global chief marketing officer at the Kellogg Company, did acknowledge the missing link in the social media to profits equation by proclaiming, “In God we trust; the rest of you bring data.”