Archive for the ‘Media’ Category


When you think about it, you really have to give Starbucks a hell of a lot of credit, whether you like their coffee and their business practices or not. Here’s why: Starbucks could easily fall down on the job, lean back a bit and coast on their complete global dominance of the coffee and café market, but they don’t. Case in point: their latest pumpkin spice latte campaign.

Laugh if you will, but my best friend from high school and I have this tradition of alerting each other when the first pumpkin spice lattes hit Starbucks. I think we’ve been doing this since we were 15. And what’s funny is, we’re not alone in this. This has become an annual tradition for people all around the world.

And this year’s campaign to promote the PSL’s official arrival is some of the most creative marketing I’ve seen in a while.

Here’s how it works:

On Twitter, I saw that Starbucks had promoted the Pumpkin Spice Latte account, @TheRealPSL.

No, I am not a follower of Starbucks OR the Pumpkin Spice Latte account on Twitter, but I saw the (paid) promotion in my Twitter feed:

[Image: the promoted @TheRealPSL tweet in my Twitter feed]

Instantly, I thought about my friend and sent this to her in an email, feeling that sense of fun over our inside joke and the excitement that our favorite little commercial harbinger of Fall had arrived. Then I looked closer, and clicked on the link in the tweet. It brought me to the landing page for their campaign:

[Image: the campaign landing page]

On the landing page you are told to solve a riddle relating to fall, and once you solve the riddle, you are given a secret code.

You are then told to bring this code to a barista at your local Starbucks to unlock the PSL for everyone at that specific location, officially unleashing “fall’s favorite beverage.”

[Image: the riddle and secret code page]

So, let’s go back and take stock of what’s happened just in the course of my following through with these steps, and let’s assume I go to the closest Starbucks and hand them this code.

This campaign has combined elements of social (Twitter), mobile (on my phone), gamification (solving riddles, getting there first), physical brick-and-mortar sales (bring the code to your closest/local store), has turned a product into an event, has combated the threat of a stale menu (promotion of a new, seasonally available item) and brand fatigue (so you don’t have to order the same vanilla latte yet again), and promotes values of community (you’re doing this for the people at your local Starbucks) and that ever-elusive aura of seasonality (only available in the Fall, a sign that Autumn has arrived) at the same time.

It’s relatively simple in execution and doesn’t require a lot of effort, but the experience is seamless. It’s an excellent example of how companies can tie social promotion and social communities to web campaigns, and then to in-store sales or physical events. Quite an elegant and effective design, and a great example for anyone in marketing.

Thought it was worth sharing.

 

 


In order to survive in the modern era, companies must have a strong grasp of psychology, or at least of the type of pseudo-psychology that Edward Bernays, immortalized as the father of PR, made widely available to marketers and advertisers. Bernays was an Austrian American who wove together the ideas of Gustave Le Bon and Wilfred Trotter on crowd psychology with the psychoanalytical ideas of his uncle, Sigmund Freud, and ultimately asked, “If we understand the mechanism and motives of the group mind, is it not possible to control and regiment the masses according to our will without their knowing about it?”

Historically, companies have leveraged a number of psychological devices and theories to generate desire within their target demographics and audiences in order to sell more. Advertising seeks to engender strong positive feelings about a product or company while simultaneously leaving the audience feeling emptier for not owning the advertised product. The ability to pull this off is intensely powerful, and yet not as powerful as the ability to produce this reaction within the target demographic autonomously, spontaneously.

This is the accomplishment of the new realm of mobile technologies and apps such as Twitter, Facebook and Instagram. In effect, their breakthrough in psycho-marketing is the ability to make their product habit-forming, even addictive. Merriam-Webster defines addiction as a “compulsive need for and use of a habit-forming substance (or, we could say, product) characterized by tolerance and by well-defined physiological symptoms upon withdrawal.” Addiction is the new marketing goal precisely because its inherently dangerous, cyclical nature embodies both the need and its fulfillment, all encapsulated in one.

Compulsion and habit are the key words here. Marketers and advertisers drool when they see those words, because they are truly the Holy Grail of advertising. If they can create a condition in their target audience where deprivation of the product produces a state of near-pain for the user/consumer, they are guaranteed a captive customer, possibly for life.

This is precisely what Nir Eyal describes in his TechCrunch article, “The Billion Dollar Mind Trick.” Eyal outlines a couple of critical concepts, namely “internal triggers” and “desire engines”:

“When a product is able to become tightly coupled with a thought, an emotion, or a pre-existing habit, it creates an ‘internal trigger.’ Unlike external triggers, which are sensory stimuli, like a phone ringing or an ad online telling us to “click here now!” you can’t see, touch, or hear an internal trigger. Internal triggers manifest automatically in the mind and creating them is the brass ring of consumer technology.”

As Eyal points out, “We check Twitter when we feel boredom. We pull up Facebook when we’re lonesome. The impulse to use these services is cued by emotions.” He enumerates the current approach to creating internal triggers, labeling it “the manufacturing of desires”:

  • “Addictive technology creates ‘internal triggers’ which cue users without the need for marketing, messaging or any other external stimuli. It becomes a user’s own intrinsic desire.”
  • “Creating internal triggers comes from mastering the ‘desire engine’ and its four components: trigger, action, variable reward, and commitment.”

The “desire engine” Eyal refers to is merely a phrase describing a pre-determined “series of experiences designed to create habits…the more often users run through them, the more likely they are to self-trigger.” All of this is to say that, especially when it comes to mobile consumer technologies and apps, companies increasingly find that their economic and social value is a function of the strength of the habits they create within their user/customer base.
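To make the mechanics concrete, here is a minimal toy sketch of that trigger, action, variable reward, commitment loop. This is my own illustration, not Eyal’s implementation, and every probability and increment in it is made up; the only point it demonstrates is how repeated cycles can shift a user from external prompts toward self-triggering.

```python
import random

class DesireEngine:
    """Toy model of the trigger -> action -> variable reward -> commitment loop.
    Purely illustrative: the numbers are invented for demonstration."""

    def __init__(self):
        # The habit starts out almost entirely dependent on external prompts.
        self.internal_trigger_strength = 0.05

    def run_cycle(self):
        # Trigger: internal (a felt urge) or external (a notification, an ad)?
        trigger = "internal" if random.random() < self.internal_trigger_strength else "external"

        # Action: the user opens the app (assumed to follow every trigger here).
        # Variable reward: the payoff is unpredictable, which keeps it compelling.
        reward = random.choice(["big", "small", "none"])

        # Commitment: any reward leads to a small investment (a post, a follow),
        # which strengthens the habit and makes the next trigger more likely to be internal.
        if reward != "none":
            self.internal_trigger_strength = min(1.0, self.internal_trigger_strength + 0.02)

        return trigger, reward

engine = DesireEngine()
for _ in range(100):
    engine.run_cycle()
print(f"Habit strength after 100 cycles: {engine.internal_trigger_strength:.2f}")
```

Run it a few times and the trend is the same: the more cycles completed, the less the product needs to ask for your attention.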

Interesting, yes, but perhaps not entirely new. Michel Foucault (yes, I know I talk about him a lot here, but his work is endlessly relevant to the types of communications discussions we constantly engage in nowadays) explored this same idea in his investigation of “technologies of the self,” where his objective was:

 “to sketch out a history of the different ways in our culture that humans develop knowledge about themselves: economics, biology, psychiatry, medicine, and penology. The main point is not to accept this knowledge at face value but to analyze these so-called sciences as very specific ‘truth games’ related to specific techniques that human beings use to understand themselves.” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

Yet the concept dates back to the Greeks, “constituted in Greek as epimelesthai sautou, ‘to take care of yourself,’ ‘the concern with self,’ ‘to be concerned, to take care of yourself.’”

Foucault posited that there were four main “technologies:”

“(1) technologies of production, (2) technologies of sign systems, (3) technologies of power, and (4) technologies of the self” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

Clearly in this case what we’re focusing on is the technology of the self, “which permit individuals to effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality.” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

You would be hard-pressed to convince me that the bulk of apps available to us all on our mobile devices these days are not, in some way, designed to fulfill some narcissistic desire to know ourselves better. Whether it’s for fitness (calorie counters, pedometers, diet analyses, jogging analyses) or for social edification (how many people you know are around you, how many “friends” you have [Facebook], what you are doing right now [Twitter], how often you visit a place [FourSquare or Yelp]), many of these tools are intended to display a mirror image of ourselves and project it onto the social web and out to others. (Hell, iPhones now include a standard photo feature that lets you use the phone as a literal mirror by staring into the front-facing camera.) But they are also intended to help us transform ourselves and make ourselves happier by making us skinnier, healthier, more social, more aware, more productive, etc.

The importance of this is that we have been fooled into thinking we are using these apps to learn more about ourselves, but the social sharing functionality proves that this is performative: we wouldn’t be doing it repeatedly unless there were a performance aspect built in, an audience waiting to view and comment on the information, providing continuous gratification. In other words, learning more about ourselves, then amplifying that knowledge out to an audience, has become habit-forming. We have become addicted to the performance of ourselves.

 “These four types of technologies hardly ever function separately, although each one of them is associated with a certain type of domination. Each implies certain modes of training and modification of individuals, not only in the obvious sense of acquiring certain skills but also in the sense of acquiring certain attitudes.” (http://foucault.info/documents/foucault.technologiesOfSelf.en.html)

In this case, though Foucault was often very careful in his diction and a master of semiotics, what if we replace the word “attitudes” with “habits”? After all, Foucault is referring to these technologies of the self as dominating, as techniques which train and modify individuals, and a habit formed is demonstrably a tangible and acquired modification of human behavior. Later he elaborates further and speaks of “individual domination”:

“I am more and more interested in the interaction between oneself and others and in the technologies of individual domination, the history of how an individual acts upon himself, in the technology of self.”

I know quite a few people who would willingly and openly admit to the individual act of domination they perform upon themselves on a compulsive basis by updating their Twitter feeds, updating the status on their Facebook accounts, uploading their latest photos to Instagram, and checking in on FourSquare. There is a reason that Googling “Is technology the new opiate of the masses?” garners page upon page of thoughtfully written and panicky editorials and blog posts. This is a newly acknowledged and little-resisted truth of our times: we are willing slaves to the ongoing performance of our selves.


I go through the occasional bout of nostalgia, I admit it. Sometimes I muse that it would have been much more fun to be alive during the Wild West, or during the American Revolutionary War. Mostly this is clearly symptomatic of the fact that I feel disconnected and I want to feel a part of a movement, something significant that is taking hold of history and making it sit up and pay attention.

While I lived in Paris I was privileged to see the works of famous, modern-era, groundbreaking schools of art: the Blaue Reiter, the Futurists, Alexander Calder and the mobile sculptors, the Impressionists, Fauvists, Surrealists, Cubists, Pointillists, you name it. As I browsed the carefully curated collections of work and imagined what it would be like to exist in a time of such intense creation, innovation and turn-the-world-on-its-head thinking, I remember wondering: does anyone ever really know when they’re living smack-dab in the middle of one of those eras?

Now that I’m back in the U.S. working, and no longer have the luxury of wandering the streets of Paris, being a flâneur and contemplating my navel, those questions have gone mostly by the wayside in favor of, oh, I dunno, buying toilet paper and writing corporate emails again. Sigh. But they don’t have to, because it may actually be true that we are in the middle of a cohesive, burgeoning artistic, cultural and technological movement! It even has a name, folks, which is huge, because without a name it would be hard to reference: The New Aesthetic.

What is it about? Well, significantly, it’s pretty all-encompassing, which it has to be in this era of multimedia, consolidated and integrated channels, and myriad modes of communication and access. In a nutshell (though that is a depressingly analog expression to use in this context), it’s about taking the time to understand how technology is affecting, and has already impacted, the way we see the world, how we see everything. The movement is built on the observation that most of the world now increasingly experiences things not directly through the eyeball, but through the eyes of a technological device, whether it’s a camera, a smartphone, GPS, a tablet, an e-reader, a computer screen, etc.

This opportunity for reflection is significant, first because the pace of technology and its adoption simply hasn’t historically allowed us to do this: we adopt a technology, learn it, deploy it, and then we’re off and running with barely a glance backward. In the super-charged modern era of technology, have we really reflected on its impact on how we see things? Yes, the visionary artists, influencers and politicians of our time have, in small numbers. But this movement has finally identified certain themes about how we have all been shaped by new technologies, and it’s just so interesting.

Another great facet of the New Aesthetic is how it is playing out. This is not a genre reserved for the intellectual or artistic elite. So far the movement has invited everyone to participate, thereby furthering the impact of the act of reflecting. It poses questions to its members: How is the world different from how I saw it before? Can we actually evaluate whether things were better or worse before this technology/gadget/access/knowledge? Show us what you see and how you see it. Can you find us other examples of where this is playing out?

From Bruce Sterling’s Wired piece on the topic:

“The “New Aesthetic” is a native product of modern network culture…it was born digital, on the Internet. The New Aesthetic is a “theory object” and a “shareable concept.”

The New Aesthetic is “collectively intelligent.” It’s diffuse, crowdsourcey, and made of many small pieces loosely joined. It is rhizomatic, as the people at Rhizome would likely tell you. It’s open-sourced, and triumph-of-amateurs. It’s like its logo, a bright cluster of balloons tied to some huge, dark and lethal weight.” (http://www.wired.com/beyond_the_beyond/2012/04/anessayonthenewaesthetic/)

It should come as no surprise that this discussion largely began at the recent South by Southwest (SXSW) conference in Austin, Texas. Here is the description of the panel discussion:

“Slowly, but increasingly definitively, our technologies and our devices are learning to see, to hear, to place themselves in the world. Phones know their location by GPS. Financial algorithms read the news and feed that knowledge back into the market. Everything has a camera in it. We are becoming acquainted with new ways of seeing: the Gods-eye view of satellites, the Kinect’s inside-out sense of the living room, the elevated car-sight of Google Street View, the facial obsessions of CCTV.

As a result, these new styles and senses recur in our art, our designs, and our products. The pixelation of low-resolution images, the rough yet distinct edges of 3D printing, the shifting layers of digital maps. In this session, the participants will give examples of these effects, products and artworks, and discuss the ways in which ways of seeing are increasingly transforming ways of making and doing.” (http://schedule.sxsw.com/2012/events/event_IAP11102)

James Bridle is sort of the figurehead of the discourse around the New Aesthetic and he has done an excellent job of laying out what it means to him and helping to provide spaces for the conversation about it to unfold. In fact, he’s downright poetic in some of his descriptions:

“And what of the render ghosts, those friends who live in our unbuilt spaces, the first harbingers of our collective future? How do we understand and befriend them, so that we may shape the future not as passive actors but as collaborators? (I don’t have much truck with the ‘don’t complain, build’ / ‘make stuff or shut up’ school, but I do believe in informed consent.) Because a line has been crossed, technology/software/code is in and of the world and there’s no getting out of it.” (http://booktwo.org/notebook/sxaesthetic/)

“My point is, all our metaphors are broken. The network is not a space (notional, cyber or otherwise) and it’s not time (while it is embedded in it at an odd angle) it is some other kind of dimension entirely.

BUT meaning is emergent in the network, it is the apophatic silence at the heart of everything, that-which-can-be-pointed-to. And that is what the New Aesthetic, in part, is an attempt to do, maybe, possibly, contingently, to point at these things and go but what does it mean?” (http://booktwo.org/notebook/sxaesthetic/)

That’s good stuff, right? I think so.

But let’s take a step back from the philosophical implications of the movement and do some of our own shell collecting in the sand. Where do we see the New Aesthetic playing out?

Here are a few that I found:

1) My latest favorite Tumblr: ScreenshotsofDespair. Apart from appealing to that deep and sinister Schadenfreude bone that I have, this Tumblr is a perfect example of the New Aesthetic. We take photos of screens, which we see delivering ambiguous and subtly insulting messages that seem to mirror our own loneliness, unpopularity, failure, despair. So good.

From "Screenshots of Despair"

2) Where am I?: Google Maps and StreetView. The fact that we now actively use archived and ongoing screenshots of satellite maps and digital photography to show us what the world looks like, rather than having to travel there physically. I know what my friend Anna’s house in Berlin looks like without ever having been there, but I only know what it looks like on one sunny day: April 2nd, 2009.

3) Tweet-note: I’m coining this term (unless it has been coined before) to mean seeing a live event through the lens of what is being said about it by the Twitter-verse. See my piece on sentiment analysis for a more nuanced examination of the implications of this, but it’s pretty crazy that these days (especially at ANY high-tech conference) you can sit in a room of thousands of people, listening to/watching the same keynote, and yet about 98% of the audience is simultaneously tracking what is being said about that event via Twitter on their smartphones, thereby letting the rest of the audience color their opinion in real time.

4) Art: This is obvious, but the emergence of re-pixelating, of bringing digital back to analog, and a nostalgia for real film are all playing out in the art world. The pixelation movement really interests me because it’s such a blatant reversion to pointillism, but it represents more of a re-education for a younger generation in how the greater whole is amassed as the result of millions and millions of tiny components. It’s also a throwback to so many other modernist movements: Duchamp’s Nude Descending a Staircase and Picasso’s Cubism come to mind, especially here, when we talk about the New Aesthetic in terms of trying to represent the everywhere-at-once nature of things today. You can look at a book, just a simple book, with your own eyes. But you can also look up reviews of the book on Amazon or Goodreads, research Google images of the book, see how much people will pay for it on eBay, read reviews of it in the NYTimes, take a weathered, antique-y snapshot of it with Hipstamatic, text message your friend about the book with its photo attached, and many other options that I can’t even think of right now. All of that is a more-than-360-degree representation of that book: what it is, what it looks like, what it represents, where it is, and how it is. Just as in Cubism, the object ends up being transformed, rendered nearly unrecognizable from its original form by having been taken apart and conveyed based on its components, then reconstructed on more planes than the naked eye can fully behold. The same is true of my next example…

5) Does This Photoshop Make Me Look Fat?: We are no longer satisfied with truthful representations of human bodies. In fact, we might not even really believe the truth any more if it were given to us. We have been carried away, in the beginning unaware, later blissfully aware, by the movement to re-architect human anatomy through Photoshop. I admit I have visited blogs and websites that showcase the blunders of graphic artists, and I often STILL can’t see that anything is wrong with the images. It is that nefarious. We are more content to see human bodies through the lens of Photoshop than through reality.

6) Branded Space: This is an old feature, the fact that we see in everything a chance to advertise or place products, but one recent example was so blatant I can’t fail to mention it here. It was very recently announced that in his next movie, James Bond will be sipping not a martini, but a Heineken. That’s right, 007’s drink of choice has been ousted in favor of product placement. Needless to say, the reaction has not been, er, positive. But it is yet another example of the New Aesthetic: not only do we see even everyday objects and products through new physical lenses, we continue to see them through figurative lenses that are colored according to which advertiser has the most money to spend that day. So the object is not permitted to exist alone for us any more. Its meaning is always stamped across its face.

In fact, maybe the weirdest aspect of this movement is how eminently consumable it is. It’s practically Warhol-esque in its commercial viability. A perfect example is how Facebook just gobbled up Instagram, the popular hipster-making photography app, for $1B. But there are thousands more examples on the official New Aesthetic Tumblr. Let the New Aesthetic binge begin.

One last expression for you: Analog Recidivism. Actually, I’m just hoping this will somehow emerge as a reaction to the New Aesthetic. I think one of the next evolutions of the movement will be to feature in art, culture, social customs, etc. what we just don’t see any more as a result of our attachment to viewing the world through the lens of our gadgets and technology. Instead of showing us how our views have changed and been modified, somehow we will be shown what we just didn’t see as a result of staring at a phone, a computer, a tablet, etc. The little things we no longer notice or take note of will be featured as once again novel by virtue of the fact that we, physically, are no longer trained to see or look for them. Did I just blow your mind?

To read more on the New Aesthetic:

http://booktwo.org/notebook/sxaesthetic/

http://www.riglondon.com/blog/2011/05/06/thenewaesthetic/

http://newaesthetic.tumblr.com/

http://www.wired.com/beyond_the_beyond/2012/04/anessayonthenewaesthetic/


Hi All, sorry for the hiatus. But I’m back in black. New year, and lots to discuss. Let’s get to it!

Clearly I couldn’t let discussion about SOPA and PIPA and the ensuing takedowns architected by Anonymous go untouched in this discussion space, so let’s delve into this, shall we?

For those who aren’t aware (where in the hell have you been?), let’s first break these two down to their most elemental forms:

Here’s what the (admittedly biased on this matter) Wikipedia has to say about what SOPA is: “The Stop Online Piracy Act (SOPA) is a United States bill introduced by U.S. Representative Lamar S. Smith (R-TX) to expand the ability of U.S. law enforcement to fight online trafficking in copyrighted intellectual property and counterfeit goods. Provisions include the requesting of court orders to bar advertising networks and payment facilities from conducting business with infringing websites, and search engines from linking to the sites, and court orders requiring Internet service providers to block access to the sites. The law would expand existing criminal laws to include unauthorized streaming of copyrighted material, imposing a maximum penalty of five years in prison.”

Basically, this was legislators catering to big media companies’ interests by proposing a law that would give the U.S. government the right to prosecute people who propagated intellectual property they didn’t own online. In other words, large parts of the internet would exist only if the government felt they should.

“Proponents of the bill say it protects the intellectual property market and corresponding industry, jobs and revenue, and is necessary to bolster enforcement of copyright laws, especially against foreign websites.”

“Opponents say the proposed legislation threatens free speech and innovation, and enables law enforcement to block access to entire internet domains due to infringing material posted on a single blog or webpage. They have raised concerns that SOPA would bypass the “safe harbor” protections from liability presently afforded to Internet sites by the Digital Millennium Copyright Act.”

So that’s SOPA, in the House of Representatives. A second, nearly identical piece of legislation was simultaneously up for consideration in the Senate, called PIPA, or the Protect IP Act. On this legislation, Wikipedia says:

“The PROTECT IP Act (Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act, or PIPA) is a proposed law with the stated goal of giving the US government and copyright holders additional tools to curb access to ‘rogue websites dedicated to infringing or counterfeit goods’, especially those registered outside the U.S. The bill defines infringement as distribution of illegal copies, counterfeit goods, or anti-digital rights management technology. Infringement exists if ‘facts or circumstances suggest [the site] is used, primarily as a means for engaging in, enabling, or facilitating the activities described.’ The bill was introduced on May 12, 2011, by Senator Patrick Leahy (D-VT) and 11 bipartisan co-sponsors.”

So that’s the housekeeping. They’re essentially the same piece of legislation, the same clear and dangerous threat to internet freedoms and the Internet’s intrinsic ability to allow for the wide and free dissemination of information. Now down to the brass tacks.

Let’s begin with the fact that, just pragmatically, taking on the Internets is always stupid. Why? Because Congress and the President are centralized forces of power: quite well identified and held to certain moral and legal standards of behavior and comportment. The internet is none of those things. It is a nebulous, unscrupulous, largely anonymous and completely decentralized force of power, and it will not be stopped. Which is why the internet has certainly won this round of the fight and will ultimately win the war on issues of intellectual property online.

So Wikipedia was one of the largest of many web resources (others included Reddit, the social news site, and BoingBoing, a technology and culture blog) that decided to shut down for a 24-hour period in public protest against these two bills. As the NYTimes Bits Blog reported: “Visitors around the globe who try to reach the English-version of Wikipedia will be greeted with information about the bills and details about how to reach their local representatives. Mr. [Jimmy] Wales said 460 million people around the world visited the site each month, and he estimated that the blackout could reach as many as 100 million people. In addition, some international Wikipedia communities, including the one in Germany, have decided to post notices on their home pages leading to information about the protests, although they will remain functioning as usual.”

“The government could tell us that we could write an entry about the history of the Pirate Bay but not allow us to link to it,” he said, referring to the popular file-sharing site. “That’s a First Amendment issue.”

But then Anonymous had to go and get involved, taking matters into its own hands and making this no longer a seemingly noble protest. And this is where decentralization begins to get really interesting.

For Anonymous it wasn’t enough to shut down one’s own site and make one’s own decision to go dark: Anonymous wanted to prove once and for all to big media companies such as CBS and Universal Music that it is but for the grace of Anonymous that their sites exist at all. In a bold move, HIGHLY under-publicized and under-discussed if you weren’t online (I think largely because of Anonymous’s reputation as an anarchist and borderline-terrorist non-organization), Anonymous temporarily removed CBS.com and Universal Music, as well as Universal’s parent company Vivendi, from online view. There has been much speculation about whether the sites were full-on deleted, redirected, etc., and I won’t debate that here, but the major point is that a decentralized network of self-labeled “hacktivists” holds the power to completely destroy someone else’s online presence as retribution. So while the politicians, PACs and lobbyists seek to pass these bills the old-fashioned way, through our system of government and legislation, the internet turns up its nose at their efforts and operates completely independently.

The repercussions of these acts by Anonymous are massive. Is it to be said once and for all that the internet is ungovernable? Certainly any jurisdiction over internet content and domains is highly debatable and obscure: who has the right or the resources to police the net? Where is that online security task force? Is it a branch of the UN peacekeeping forces? Which country’s government holds the right to censor content? If Google’s tangle with China and the Arab Spring have taught us anything, it’s that the rules for who gets to yea or nay internet content are still being written, and continue to be written, by unknown authors sitting in dark corners, leading their revolutions with armies of revolutionaries who couldn’t recognize them if they passed by on the sidewalk.

President Obama said no to the current versions of these bills, but SOPA and PIPA are by no means dead in the water. This will be an ongoing discussion, but I stand by my opinion that even if SOPA or PIPA were to pass in Congress, the government would have a completely unmanageable time attempting to enforce either in the chaotic and decentralized network that is the Net. The point, it would seem, my friends, is moot.


In my opinion, discussions of identity, what identity means, and what constitutes identity have never been more interesting. With the web as a mirror for each of us, as well as a playland of impersonation, self-invention, reincarnation and improvisation, the platforms and dimensions on which human identity is played out have never been more abundant or easily accessible.

I was reading my New Yorker last night and came across an article about Aadhaar, an enormous project on a scale never before attempted that could have far-reaching implications for generations to come. The project aims to officially identify and document the existence of every Indian living in India by collecting certain biometrics from each individual and then issuing an ID number tied to those specific biometric features.

As summarized here:

“Aadhaar, launched by Nandan Nilekani, a genial software billionaire, intends to create a national biometric database ten times larger than the world’s next-largest biometric database.”

One of the stated aims of the project is to “help reduce the extraordinary economic distances between those who have benefitted from India’s boom of the past two decades and those who have not.”

Some of the stunning details of this project that I read about raised very fundamental issues of modern identity as being tied to a specific nation-state. For instance:

“India has no equivalent of Social Security numbering, and just thirty-three million Indians, out of 1.2 billion, pay income tax, and only sixty million have passports. The official opacity of hundreds of millions of Indians hampers economic growth and emboldens corrupt bureaucrats.”

It’s just incredible to think that in this day and age, not everyone who is born into the world is documented as even being alive. It’s incredible, as an American, to consider a situation where your government has no idea that you exist, and you survive outside the limits of its systems.

Though that all sounds like fodder for a summer anarchist action movie trailer, the reality is that, though the lines and boundaries between physical country borders and cultures are slowly disappearing as a result of widespread globalization and commercialization, our identities are still very strongly tied to the countries in which we are born, or in which we live. And I should clarify that I am not saying that if you are born in America, you automatically strongly identify as an American. I am saying that if you are born in America and hate the U.S., you are still defined by the country in which you were born, or in which you live, even if you hate it. You are defined in reference to being a part of it, however tenuous that connection is. But if you are born in India and your country does not know you, nor acknowledge that you exist, how is your identity derived?

And if you do not really exist in the eyes of the government, and are not automatically considered a citizen in having been born there, what right does the government have to come and claim you later, as India and the Unique Identification Authority of India (the government agency that is directing this program) are attempting to do with Aadhaar?

It is ironic in this instance that India, so woefully behind in identifying its own citizens due to an outsized human population, is poised to leap the gap immediately and overtake other, more developed countries’ systems of social security and citizen identification. Aadhaar’s system is based on biometrics, or “methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits.” Interestingly, Wikipedia’s entry describes two different applications of biometric information: “In computer science, in particular, biometrics is used as a form of identity access management and access control. It is also used to identify individuals in groups that are under surveillance.” Oh Foucault, why do you plague us so?!

Still, in this case, it is hard not to question the intense nature of these methods of identification and the database of information they will generate. Concerns about privacy have, of course, been raised:  “Alongside arguments about social policy, there is also some Indian disquiet about Aadhaar’s threat to privacy.”

A Times of India article also mentioned the potential for exploitation of this info:

“For Nandan Nilekani , the chairman of Unique Identification Authority of India , the challenge now is not just to roll out one lakh or more Aadhaar numbers a day, but to create an ecosystem for players to build applications on top of this identity infrastructure. Now, Nilekani has been negotiating with the Reserve Bank of India to allow banks to treat Aadhaar number as the only document for opening an account. In a free-wheeling interview with Shantanu Nandan Sharma, Nilekani talks about life after Aadhaar when a villager would be able to use a micro-ATM in his locality, or a migrant from Bihar would be able to flash out his number in Mumbai as an identity proof.”

So the ability to identify people, truly down to their physical core, can be both exploitative and empowering. As the New Yorker article claims, “If the project is successful, India would abruptly find itself at the forefront of citizen-identification technology, outperforming Social Security and other non-biometric systems.” Of course it would; it’s the physical-data-collection analog of what Facebook has been doing all along.

In fact, the arguments for embarking upon this venture to issue ID numbers to each Indian are manifold, yet the one that seems to float upward most often is the assertion that this system of identification will help cut down on abuse of government resources and on inaccurate snapshots of how many people are affected by official policy.

Interesting that this is the very reason that Facebook famously insists upon banning pseudonyms on its ever-popular social platform. As Gawker puts it, “the idea that anonymity or multiple identities leads inexorably to a cesspool of abuse, cyberbullying, and spam is Facebook’s strongest argument for a monolithic online identity—one they come back to again and again in defending their controversial real name policy.”

This is written in the context of an article highlighting Chris Poole, shadowy head of the online meme-maker 4chan, and his remarks at the recent Web 2.0 conference that “true identity is prismatic,” and that the actions of online mega-sites like Facebook are “eroding our options” when they lock us into a single identity. In reality, Poole argues, humans are not defined in only one way or another, but by multiple simultaneously performed identities.

Gawker writes, “At this week’s Web 2.0 conference, Poole criticized Facebook’s real-name, one-profile-per-person policies. Facebook are, he said, ‘consolidating identity and making people seem more simple than they really are… our options are being eroded.’

True identity is ‘prismatic,’ according to Poole. You want to be able to present a different identity in different contexts, and be able to experiment without risking a permanent stain on your identity—something Facebook is making increasingly possible as it colonizes everything from games, to blog comments to your favorite music service.”

Synthesizing these ideas and arguments for a minute: the very idea that a retinal scan can prove someone was born somewhere, and that those two elements of identity correlate to provide a human identity, is interesting enough to a modern mind. How about when we suppose that each of us (though we may physically have blue, green, or brown eyes; blond, black, brunette or red hair; and have been born in Bali, Mexico, Singapore, or Tunisia) is actually a shapeshifter, constantly adapting our personality and persona to best complement a new group of people or a new context? Are these quests to nail down our identity through increasingly scientific pursuits even worth their salt if we are each many people, simultaneously?

No matter how we think our own identities are constituted and shaped, whether we believe we are multiple people or just one, the quest to collect information and data about how we behave, who we are, and what we look like is always evolving. Just recently Facebook filed paperwork to form a Facebook Political Action Committee (PAC) that “will fund candidates who support ‘giving people the power to share.’”

According to the Gawker story about the PAC, it’s “dedicated to ‘mak[ing] the world more open and connected,’ a spokesman tells The Hill. It will be funded by Facebook employees. Meanwhile, Facebook’s lobbying budget is metastasizing; the company spent $550,000 so far this year, compared to $350,000 all of last year.”

Will a new era of online conmen and women emerge as a result of this movement to collect identity data? Is privacy officially dead? How do you choose to identify yourself, what do you identify with?


I have recently become obsessed with analytics. I just love the idea of using solid data to make informed choices toward action. It’s the ultimate voyeurism. After all, the internet is a window through which you can peer to monitor other people’s activity. It’s also seductive, instant gratification: I post a document and then check in just an hour later to see how many people have clicked on it, how long they spent reviewing it, where they went after they read it, where they came from before reading it…

The power that platforms like Google Analytics and Omniture offer excites me in ways I shouldn’t even publicize: the possibility that all of that information about online actions and behavior is at my fingertips to exploit in order to be more productive and more effective is intoxicating. This is probably why it’s a good thing that I don’t work in marketing or advertising.
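For a concrete sense of what that fingertip data looks like, here is a minimal sketch that boils a raw pageview log down to the kinds of numbers I obsess over: clicks, average time on page, and top referrers. The event data is made up, and this deliberately doesn’t touch the actual Google Analytics or Omniture APIs; it just shows the shape of the exercise.

```python
from collections import Counter

# Hypothetical raw pageview events for a single posted document.
events = [
    {"visitor": "a", "time": "2011-10-03 09:00", "seconds_on_page": 45,  "referrer": "twitter.com"},
    {"visitor": "b", "time": "2011-10-03 09:12", "seconds_on_page": 210, "referrer": "google.com"},
    {"visitor": "c", "time": "2011-10-03 09:40", "seconds_on_page": 15,  "referrer": "twitter.com"},
]

# Roll the raw events up into the headline metrics an analytics dashboard shows.
clicks = len(events)
avg_time = sum(e["seconds_on_page"] for e in events) / clicks
top_referrers = Counter(e["referrer"] for e in events).most_common()

print(f"Clicks: {clicks}")
print(f"Average time on page: {avg_time:.0f} seconds")
print(f"Top referrers: {top_referrers}")
```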

But apparently the harvesting, sorting, and exploitation of human information no longer stop with marketers and advertisers: now the government wants in.

According to an article in yesterday’s NY Times, “social scientists are trying to mine the vast resources of the Internet — Web searches and Twitter messages, Facebook and blog posts, the digital location trails generated by billions of cellphones” to predict the future. This is all being conducted in the name of the U.S. Government, or in this case, the Intelligence Advanced Research Projects Activity unit of the Office of the Director of National Intelligence.

Why? Because they believe “that these storehouses of ‘big data’ will for the first time reveal sociological laws of human behavior — enabling them to predict political crises, revolutions and other forms of social and economic instability, just as physicists and chemists can predict natural phenomena.”

Remember our dear friend Michel Foucault who opined on systems of surveillance in modern society? He just rolled over so many times in his grave he’s now a taquito. But putting the panopticon aside for a moment, let us instead turn to “chaos theory” to underline why this whole venture isn’t necessarily a very good idea.

Chaos theory, as a discipline, studies:

“the behavior of dynamical systems that are highly sensitive to initial conditions, an effect which is popularly referred to as the butterfly effect.”

The “butterfly effect theory” is basically this:

Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible in general. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable.

Yes, if this is ringing a bell, it’s because you’ve heard of the anecdote the theory is named for, whereby a hurricane forms because a distant butterfly flapped its wings several weeks before. Ridiculous, but it does vividly illustrate the point that the entire globe is a system, and there are infinite factors within that system interacting every day to produce outcomes; needless to say, these factors are not all diligently recorded in Brooke Shields’ Twitter stream.
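If you want to see the butterfly effect in miniature, here is a tiny sketch (my own illustration, using the logistic map, a textbook chaotic system, rather than anything from the NY Times piece). Two starting values that differ by one part in a million follow the same deterministic rule, yet their trajectories diverge completely within a few dozen steps.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4).
def logistic_trajectory(x0, steps=50, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # initial condition differs by one part in a million

for n in (0, 10, 25, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.6f})")
```

Same rule, no randomness anywhere, and yet by step 50 the two runs have nothing to do with each other: deterministic, but not predictable.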

Ever since analytics, Facebook, and Twitter broke onto the human information scene, the embedded hubris of man has convinced us that if we’re just smart enough to design a program to parse all of this information, then all of our inane yet determined recordings of our daily details will finally mean something, that they will be useful!

Right? Wrong.

The mashed potatoes are just mashed potatoes. If you want to see anything in the figurative mashed potatoes, then see this: the Tower of Babel, people.

“Tower of Babel?” you say? Yes. The Tower of Babel. My favorite of all biblical references (we all have one, right? Right?).

Need a quick brush-up? OK!

In the story of the Tower of Babel, from Genesis, “a united humanity of the generations following the Great Flood, speaking a single language and migrating from the east, came to the land of Shinar,” where they resolved to build a city with a tower “with its top in the heavens…lest we be scattered abroad upon the face of the Earth.” God came down to see what they did and said: “They are one people and have one language, and nothing will be withholden from them which they purpose to do.” So God said, “Come, let us go down and confound their speech.” And so God scattered them upon the face of the Earth, and confused their languages, and they left off building the city, which was called Babel “because God there confounded the language of all the Earth” (Genesis 11:5-8).

In other words, setting aside chaos theory’s conclusion that all of the world’s data is basically worthless, unreliable crap, this “big data eye in the sky” can and will never be.

First, because, without God’s intervention, we are perfectly great at getting in our own way, thankyouverymuch.

For example, the NY Times article cites IARPA’s claim that “It will use publicly accessible data, including Web search queries, blog entries, Internet traffic flow, financial market indicators, traffic webcams and changes in Wikipedia entries.”

About that, the U.S. Government would do well to recall the response to every single privacy change that Facebook has ever made about user data.

Also, the public’s responses to the Patriot Act.

Also, the public response to News Corp’s recent phone hacking scandal.

I could go on. The point is, I don’t think folks will accept the government’s efforts to exploit the aggregation of their online and publicly collected information in order to predict when we might all come down with whooping cough.

Second problematic claim: “It is intended to be an entirely automated system, a ‘data eye in the sky’ without human intervention.” Errrr… what about all of that human-generated information? Isn’t that, um, human intervention?

I recently had the absolute pleasure of hearing Stephen J. Dubner, author of Freakonomics and creator or host of every other program, show, or book that came along with it, speak at a conference. He gave an excellent and very compelling lecture on the dangers of relying too much on “self-reported data.”

His point is that, for industries or disciplines where data in large part determines future strategy and action, a little outside consulting and collection is merited. Self-reported data is, by virtue of the fact that humans are involved, problematic when it comes to accuracy.

This means that every tweet, Facebook update and comment flame war on a review site should be read and collected with a massive grain of kosher salt. It is hard to imagine how the government would account for this unreliability in its system through error analysis and standard deviation. Suffice it to say, there is still much work to be done on human-reported data, sentiment analysis and social statistics before we could get anywhere close to sorting this all out in any meaningful fashion.
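As a crude illustration of Dubner’s point (with entirely made-up numbers, of course): even a modest, systematic rosiness in what people self-report shifts the aggregate in a way the error bars never flag, because a constant bias moves the mean without changing the spread.

```python
import statistics

# Hypothetical "true" sentiment scores (0-10) versus what people self-report online,
# assuming a uniform optimistic skew of +1.5 points in the self-reports.
true_scores   = [3, 4, 5, 5, 6, 6, 7, 8]
self_reported = [t + 1.5 for t in true_scores]

for label, data in [("true", true_scores), ("self-reported", self_reported)]:
    print(f"{label:>13}: mean={statistics.mean(data):.2f}, stdev={statistics.stdev(data):.2f}")

# The standard deviation is identical for both series, so the data "looks" just as
# reliable -- but every conclusion drawn from the mean is off by the size of the bias.
```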

Luckily, as the NY Times reports in the article, not everyone is convinced this is even worthwhile:

“‘I’m hard-pressed to say that we are witnessing a revolution,’ said Prabhakar Raghavan, the director of Yahoo Labs, who is an information retrieval specialist. He noted that much had been written about predicting flu epidemics by looking at Web searches for ‘flu,’ but noted that the predictions did not improve significantly on what could already be found in data from the Centers for Disease Control and Prevention.”

So, though I myself am drinking the cherry Kool-Aid of acting and strategizing based on measured results from analytical data, I feel the U.S. Government is seriously overstepping its bounds on this one, both in terms of infringing on other people’s data rights and in terms of outpacing the world’s statistical abilities when applied to cultural data.

Hit me in the comments if you have thoughts of your own on the matter…


First and foremost, and quite importantly for the purposes of this post: definitions of “Persona” vs. “Identity”:

Persona

  • : a character assumed by an author in a written work
  • : an individual’s social facade or front that especially in the analytic psychology of C. G. Jung reflects the role in life the individual is playing
  • : the personality that a person (as an actor or politician) projects in public
  • : a character in a fictional presentation (as a novel or play)

Identity

  • : the distinguishing character or personality of an individual : individuality
  • : the condition of being the same with something described or asserted

Crap, that actually wasn’t as helpful as I had hoped it would be…I feel more confused now than I did before.

Nevertheless, these definitions seem to point toward the fact that a “persona” is more often something performed, developed consciously by oneself, or performatively developed by someone else, whereas an “identity” is embedded in and synonymous with a person’s actual character. For the sake of this entry, that is how we will distinguish between the two.

Moving on to THE POINT.

A while ago I tried to pitch a story to This American Life, inspired by the experiences of my friend; we’ll call him Jim. See, Jim was looking for a new job and applying at a few different companies. One day, reminded by a friend that he should be actively managing his online persona through Google search results, Jim Googled himself to see what came up when he searched for his full name.

The search results floored him. Jim was met with a cascade of search results about a man with his same name. There were pages with warnings posted by people claiming that a gentleman with Jim’s same name was a con man, that he had tricked them out of money, that he was a pathological liar, and not to trust him. The warnings described a man with a similar build, height, weight and general hair and eye color.

Jim freaked out (understandably, I think), because he was very well aware that any prospective employer would be Googling him to do a cursory background check, and if they were met with this barrage of information he might be weeded out of even the initial pool of job applicants. He was being framed by someone he had never met, someone who, due only to sharing the same name and a similar physical build, was stealing his online identity. How can you combat that in this day and age?

To this day, Jim (luckily employed by now) has to include disclaimers in applications and emails and hope that employers and business partners will take his word that he is not “that Jim” when embarking on new ventures. If Jim weren’t already married, presumably this would also severely impact his dating and love life.

The story I wanted (and still want) This American Life to cover is this: what happens in the modern world when all of the other folks who use your name misrepresent and sometimes even defame your character online? In a modern era where so much of our persona is developed and managed online, how do we separate what is fake from what is real, and what happens when even our fabricated online personas take on a life of their own?

What do I mean by fabricated online personas? Well, is the life you represent on Facebook an accurate snapshot of what is really going on with you? One of my favorite questions to ask is why no one ever posts photos of themselves crying alone on a Friday night- because that does happen to people. It’s widely known that our online selves, or personas, generally skew toward happiness, success, beauty, and popularity rather than honestly depicting struggles, bad hair days, and loneliness.

And having control over how we are presented online is very important to most internet users- so much so that companies like www.reputation.com now exist to help you “control how you look on the internet.”  Their claim, “People searching for you are judging you, too – defend yourself against digital discrimination with Reputation.com” may seem contrived and fear-mongery, but it still taps into some very real concerns for people.

After all, our identities are very important to us, and the gadgets and devices we are using provide a mirror of our own selves which we project onto these technologies. In fact, Michel Foucault (remember our dear friend?) called these tools “Technologies of the Self” before the internet was a thing. According to my fascinating pal Wikipedia, Technologies of the Self are “the methods and techniques (‘tools’) through which human beings constitute themselves. Foucault argued that we as subjects are perpetually engaged in processes whereby we define and produce our own ethical self-understanding. According to Foucault, technologies of the self are the forms of knowledge and strategies that ‘permit individuals to effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality.’”

In other words, these days, technology and social media help us to develop our online personas, which end up very deeply affecting our real identities. See what I did there?

For example, if you’re one of the millions of people in the world with the Indian surname Patel, trying to get a unique but still relevant Gmail address must be murder at this point. You would hardly feel like the email address represented you if you were Patel627281939464528193947273484@gmail.com.

And what about the mayhem and madness that surrounded Facebook’s push to get its users to sign up for a unique direct URL to their profiles? Sure, maybe Tatianuh Xzanadu had no problems getting her direct URL with no competition, but for the rest of us, it was like an Oklahoma land run, or a crushing Black Friday sale, waiting for the clock to hit the magic time when we could hurriedly type in our first and last name and finally claim a personalized Facebook URL, a chance at allowing people to access the real me (as far as anyone’s Facebook profile actually does that).

This would all be complicated enough, except that these days not only are people being misjudged online for the behavior of others with the same name, but poor celebrities and famous authors are having their personas, online identities and even their styles co-opted. Take, for example, the gentleman who formerly tweeted as Christopher Walken under the handle “CWalken” and delighted thousands on Twitter by impersonating the idiosyncratic and gloomy actor in his tweets about everyday observations and occurrences.

The Wrap interviewed “CWalken” and described the Twitter feed thusly:

“What’s great about the “CWalken” feed is that it sounds like Christopher Walken, yet it’s got the consistent tone and point of view that only a committed writer can achieve. “CWalken” reads as if the actor himself were emerging from a surreal haze a few times a day to note the stupidity, oddness, and weird beauty of the everyday world:”

And the mystery Tweeter, when interviewed, similarly made some really interesting points:

“The politics, tastes and observations are my own. That is — I am not trying to speak for Christopher Walken. I am simply borrowing his voice and reworking my words in his cadence. Some people crochet, I do this.”

It’s problematic because some celebrities feel that their identity and their reputation are at stake, that something they have spent a lifetime building has been stolen from them. But in some cases, this really is high art. As The Wrap author points out, the CWalken tweets were focused and really well written, probably much more so than Mr. Walken himself could have achieved. Alas, the “CWalken” account was eventually shut down because Twitter maintains a policy of cracking down on impersonator accounts.

However, other online persona impersonators have had similar success, such as the perennial favorite The Secret Diary of Steve Jobs, or one of my recent obsessions, “RuthBourdain,” where Alice Waters was anonymously tweeting as a combined persona of Ruth Reichl mashed with Anthony Bourdain. That little venture even earned Waters a humor award.

I mean, that gets really complicated. At that point we have a celebrity chef who is world renowned and celebrated in her own right assuming the persona of not just one, but two other luminaries in the food world, as an outlet for her nasty, wry, humorous side.

One last example I just came across today introduces yet another new genre, the blog as Yelp review as famous author: check out Yelping with Cormac. This Tumblr blog assumes the writing style and favorite subjects of Pulitzer Prize-winning author and presumed hermit Cormac McCarthy in order to write Yelp-style reviews of well-known commercial establishments in the Bay Area. A fascinating concept, but here we have clearly gone completely down the persona-stealing online rabbit hole.

Where will the rabbit hole take us next?


Nielsen just dropped its Q3 2011 Social Media Usage Report, and some of the stats here are pretty interesting.

At a Glance:

  • Social networks and blogs continue to dominate Americans’ time online, now accounting for nearly a quarter of total time spent on the Internet
  • Social media has grown rapidly – today nearly 4 in 5 active Internet users visit social networks and blogs
  • Americans spend more time on Facebook than they do on any other U.S. website
  • Close to 40 percent of social media users access social media content from their mobile phone
  • Social networking apps are the third most-used among U.S. smartphone owners
  • Internet users over the age of 55 are driving the growth of social networking through the Mobile Internet
  • Although a larger number of women view online video on social networks and blogs, men are the heaviest online video users overall, streaming more videos and watching them longer
  • 70 percent of active online adult social networkers shop online, making them 12 percent more likely to do so than the average adult Internet user
  • 53 percent of active adult social networkers follow a brand, while 32 percent follow a celebrity
  • Across a snapshot of 10 major global markets, social networks and blogs reach over three-quarters of active Internet users
  • Tumblr is an emerging player in social media, nearly tripling its audience from a year ago

Here are a few of the graphics that go into a little more detail:


Any student of communications worth his or her salt will have studied the famous Nixon-Kennedy Presidential debates of 1960. Why? Because they were the first-ever televised presidential debates, and they marked an inflection point in American politics, where hearts and minds were no longer won merely by talented rhetoricians and charming radio personalities, but increasingly by physical appearance and a demonstrated ease in front of a camera.

As the story goes, Nixon was ugly and evil looking normally, but on the date of the first of the four debates he would have with Kennedy, his physical appearance was worse than usual: “Nixon had seriously injured his knee and spent two weeks in the hospital. By the time of the first debate he was still twenty pounds underweight, his pallor still poor. He arrived at the debate in an ill-fitting shirt, and refused make-up to improve his color and lighten his perpetual ‘5:00 o’clock shadow.’” I think we can all imagine.

However, Kennedy’s appearance was another story, “Kennedy, by contrast, had spent early September campaigning in California. He was tan and confident and well-rested. ‘I had never seen him looking so fit,’ Nixon later wrote.”

Whether Kennedy’s handlers were much more prophetic about the impact of TV, or whether Kennedy just lucked out, we may never know. What we do know is that Kennedy’s appearance on TV during that debate changed the path of American politics forever. A majority of Americans who listened to the debate solely via radio pronounced Nixon the winner. A majority of the over 70 million who watched the televised debate pronounced Kennedy the easy winner.

Are you beginning to see why this appeals to comms geeks? The suggestion that a newly introduced medium could so profoundly impact the perspectives of so many people in the context of a very high stakes popularity contest was tantalizing. It remains tantalizing today.

Fast forward 51 years to Obama conducting a town hall meeting streamed on Facebook, and to GOP Presidential candidates using Twitter and Facebook metrics to potentially supplant traditionally collected polling information.

What would happen if you could use Twitter, Facebook, or good old Google Analytics to accurately predict the outcome of the 2012 Presidential Election? Some growing social media analytics companies, such as Likester, are doing just that by measuring the uptick in popularity of pages and social networking presences. In fact, Likester accurately predicted this year’s American Idol winner way back in April.

But how scientific is this data, and what exactly is being measured? As Mashable reports, Likester mostly measures popularity and predicts winners based on the aggregation of “likes” on Facebook in concert with high-profile events. For the GOP debate, “The stand-out frontrunner was Mitt Romney, who ended the night with the greatest number of new Facebook Likes and the greatest overall Likes on his Page.” As we can see, Likester basically started the ticker right when the debate began and distinguished unique “likes,” i.e. “likes” that occurred after the debate had started, from overall likes. In the end Romney gained 19,658 unique or new “likes” during the debate, bringing him to 955,748 total “likes,” which represents a 2.06% increase in overall “likes” during and directly following the televised debate.

Likester reported, “Michelle Bachmann ranked second in the number of new Likes on her Facebook Page.” In numbers, that came out to 9,232 unique or new “likes” out of 326,225 total, representing a 2.75% increase.

Care of nation.foxnews.com
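For what it’s worth, the arithmetic here is not exactly rocket science. Here’s a rough sketch, in Python, of the kind of “uptick” calculation being described; since Likester doesn’t spell out whether the percentage is taken against the pre-debate or post-debate total, the sketch simply computes both, using the figures quoted above:

```python
# A back-of-the-envelope sketch of the "like uptick" math quoted above.
# Assumption: Likester doesn't publish its exact formula, so this divides
# new likes by the post-debate total and by the pre-debate total.

candidates = {
    "Romney":   {"new_likes": 19_658, "total_likes": 955_748},
    "Bachmann": {"new_likes": 9_232,  "total_likes": 326_225},
}

for name, c in candidates.items():
    pre_debate = c["total_likes"] - c["new_likes"]
    pct_of_total = c["new_likes"] / c["total_likes"] * 100   # vs. post-debate total
    pct_growth = c["new_likes"] / pre_debate * 100           # vs. pre-debate total
    print(f"{name}: +{c['new_likes']:,} likes "
          f"({pct_of_total:.2f}% of total, {pct_growth:.2f}% growth)")
```

Either way you slice it, we’re talking about a bump of two or three percent, which is exactly why I keep asking how scientific any of this is.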

Naturally, AdWeek threw their two cents into the discussion, arguing:

“Polling has always been an important element to any electoral bid, but now a new type of impromptu assessment is coming to the fore. Third parties, such as analytics startup Likester, are carving out a space for themselves by processing data that is instantaneously available.”

I’ll give you instantaneously available, but, again, how scientific is this? After all, no one seems to be taking into account what I would call the “hipster correlate”: the number of Facebook users who “liked” a Romney or Bachmann or Ron Paul page in a stab at hipster-ish irony, thus proving to everyone who checks their Facebook page or reads their status updates their outstanding skills of irony in becoming a fan of a Tea Partier’s web page. If we’re really doing statistical regressions here, what’s the margin of error, Likester?

Additionally, how closely can we attach the fidelity of someone who “likes” a Facebook page to a living, breathing politician? On my Facebook page I think I have “liked” mayonnaise, but if there were to be a vote between mayo and ketchup to determine which condiment would become the new official condiment of the U.S., would I necessarily vote mayo? That’s actually kind of a crap analogy, but you get what I mean.

Before we are forced to read more blogs and news articles (like this one!) pronouncing exit polls dead and crowning Facebook and Twitter as the new polling media, I’d like to see a very solid research study on how closely becoming a fan of a political Facebook page correlates with Americans’ actual voting behavior. In other, more web-based marketing terms: what’s the voting conversion rate for political Facebook pages?

Has anyone seen anything like that?
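To be concrete about the number I’m asking for, here’s a toy illustration of a “voting conversion rate.” Every figure below except the Romney like count quoted earlier is completely made up; you’d need a real survey or a voter-file match to fill them in:

```python
# Hypothetical illustration of a "voting conversion rate" for a political
# Facebook page. The fan count is the Romney total quoted earlier; the
# turnout and vote numbers are invented purely to show the calculation.

page_fans           = 955_748   # people who "liked" the candidate's page
fans_who_voted      = 410_000   # hypothetical: fans who actually turned out
fans_who_voted_for  = 350_000   # hypothetical: fans who voted for that candidate

turnout_rate    = fans_who_voted / page_fans
conversion_rate = fans_who_voted_for / page_fans

print(f"Fan turnout: {turnout_rate:.1%}, voting conversion: {conversion_rate:.1%}")
```

Until someone publishes numbers like that, a “like” is just a “like.”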

In other words, please, social scientists and pollsters, show us whether yet another new medium is disrupting the way that Americans individually see and interact with their political candidates, and how that medium has begun to shape the way those political candidates are regarded by an American audience as a whole.


OK, stay with me, because this entry will be jam-packed with seemingly unrelated elements, but I promise (hope?) it will all come together in the end.

In today’s NYTimes, Thomas Friedman wrote an open letter to Chinese President Hu Jintao called “Advice for China.” In the open letter, Friedman asserts that President Hu had asked for impressions about what has now been termed the Arab Spring (I wish that I were creative enough to attach a pre-landing page to that link that first asked, “Seriously? You don’t know what this is?”).

In his column, Friedman reports,

“Our conclusion is that the revolutions in the Arab world contain some important lessons for the rule of the Chinese Communist Party, because what this contagion reveals is something very new about how revolutions unfold in the 21st century and something very old about why they explode.”

As you can imagine, this particular article is chock full of rhetoric about how social media platforms like Facebook and Twitter are changing the way that revolutions are born, are changing the way revolutionaries connect, etc. Read the article if you want the whole gist.

What stuck out for me in here was:

“The second trend we see in the Arab Spring is a manifestation of ‘Carlson’s Law,’ posited by Curtis Carlson, the C.E.O. of SRI International, in Silicon Valley, which states that: ‘In a world where so many people now have access to education and cheap tools of innovation, innovation that happens from the bottom up tends to be chaotic but smart. Innovation that happens from the top down tends to be orderly but dumb.’ As a result, says Carlson, the sweet spot for innovation today is “moving down,” closer to the people, not up, because all the people together are smarter than anyone alone and all the people now have the tools to invent and collaborate.”

As someone who read Surowiecki’s “The Wisdom of Crowds” and found it to be such a breathtakingly accurate portrait of why social media matters in a modern political context, this paragraph really struck me. I guess I’m wondering if we have, in fact, all agreed that “all the people together are smarter than anyone alone.”

Care of noobpreneur.com

I mean, I have personally read enough to be convinced that such a statement is quite accurate. Despite my fears of groupthink and mob mentality, I can now see very real and very tangible examples of why democracy is actually better than any other style of government (note, please, that I say better, not perfect). But have we all agreed on that?

I’m particularly inclined to believe in democracy as the best-yet model for government not only against the backdrop of what has been happening in the Arab world, but also because I have been reading up on my insects (cue the confused silence of the readers- Really? I thought that was an excellent segue).

Peter Miller’s “The Smart Swarm” is a great book for any communication or “wisdom of crowds” geek. The book is subtitled “What ants, bees, fish, and smart swarms can teach us about communication, organization, and decision-making,” and boy has it been teaching me a few things.

So far I’ve read about the fascinating networks and collaborative processes which exist inherently within colonies of bees, ants and termites. Difficult tasks such as locating new shelter, finding and foraging for food, and building a geometrically (and one might even say architecturally) complex living vessel are undertaken and achieved on a daily basis by insects to whom we ascribe the smallest of intellectual abilities. These insects all have different ways of building consensus about the best way to proceed. Bees have a special “figure 8” dance that they perform in sequences, at particular angles, to ostensibly vote with a dancing fervor for their particular choice for the next nesting area. Ants leave scent trails behind them when striking out for food, and the scent grows stronger as more and more ants follow the same trail, collecting food and bringing it back to the rest of the colony.
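If you want to see how little machinery this kind of consensus actually requires, here’s a toy simulation (my own sketch, not code from Miller’s book) of ant-style trail reinforcement: ants pick a trail in proportion to its scent, successful trips lay down more scent, and the scent slowly evaporates, so the colony “agrees” on the better food source without any single ant being in charge:

```python
import random

# Toy sketch of decentralized consensus via pheromone trails (my own
# illustration, not from "The Smart Swarm"). Two candidate food sources;
# the richer one rewards ants more often, so its trail gets reinforced
# more and ends up carrying most of the traffic.

pheromone = {"source_A": 1.0, "source_B": 1.0}   # starting scent on each trail
payoff    = {"source_A": 0.9, "source_B": 0.4}   # chance an ant finds food there
EVAPORATION = 0.99                               # scent fades a little each step

for _ in range(500):
    # Each ant follows a trail with probability proportional to its scent.
    trail = random.choices(list(pheromone), weights=list(pheromone.values()))[0]
    if random.random() < payoff[trail]:
        pheromone[trail] += 1.0                  # a successful trip reinforces the trail
    for t in pheromone:
        pheromone[t] *= EVAPORATION              # every trail evaporates slowly

share_A = pheromone["source_A"] / sum(pheromone.values())
print(f"After 500 ants, {share_A:.0%} of the scent points to source_A")
```

No individual ant ever compares the two sources; the comparison happens in the aggregate, which is pretty much Surowiecki’s point wearing an exoskeleton.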

“The Smart Swarm” basically goes to some lengths to offer a window onto how specific populations of insects and animals can offer clues as to how consensus and productivity might be achieved differently. The problem with humans, it would often seem, is that we have these big brains and these big mouths. Both of those things often get in the way of our agreeing, and of getting things done.

Care of brainz.org

The principle behind all of these comparisons between insects and humans is the study of biomimicry, which, as Wikipedia describes it, is “the examination of nature, its models, systems, processes, and elements to emulate or take inspiration from in order to solve human problems.” For example, Velcro is one of biomimicry’s earliest and most famous products. [Would anyone like to go down the rabbit hole with me on this one: please provide any comments or feedback on why you think biomimicry is generally regarded as a cool, smart-people thing, while anthropomorphism is generally considered to be the realm of lunatics and cat ladies.]

For communication geeks who love to examine how different groups of people can get together to solve big problems, this stuff is gold. If you’re a real biomimicry zealot, the amazing and tantalizing fact of it is that nature holds all of the answers to our problems already, as long as you’re ready to go out and closely watch it play out. Which brings me back to this notion of the democratization of information, which Cesar Alierta writes about in Chapter 1.4 of the Global Information Technology Report.

In the chapter, Alierta focuses mainly on ICT as the platform which brings about the democratization of information. But in reality, if you follow biomimicologists(?) like Miller, information is already everywhere around us, in nature, just waiting to be plucked and used to solve problems. Alierta refers to the so-called “Solow Paradox,” which asserts that “there is a lag between investing in or deploying ICT and the generation of positive effects on productivity.” And he goes on to say “no less important (than ICT to productivity gains) is the extent to which the impact of new technologies in the social sphere benefits the entire economy.”

But as most of us know, the investment in resources such as ICT is often a top-down decision. So naturally, if Friedman’s assessment is to be believed, that “innovation that happens from the bottom up tends to be chaotic but smart. Innovation that happens from the top down tends to be orderly but dumb,” then we’re constantly handing the purse strings, and the power to invest in better innovation, to the wrong folks.

A hive of bees leaves the decision of where to locate the next hive squarely in the capable hands (wings?) of its scout worker bees, who go out in search of suitable locations and come back and perform a vigorous dance for the location of their choice until a decision is made through consensus. A colony of ants puts the decision of where and how to find food for the colony solidly in the hands of its forager ants, and as they forge new trails and leave their scents behind, more and more ants find and retrace those steps, making the scent stronger and stronger and creating consensus in that fashion. The difference is, these are largely decentralized systems of building consensus, making decisions, and acting in favor of the greater good.
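The bee version can be sketched just as simply. Here’s another toy model (mine, not Miller’s, with invented site names and numbers): scouts advertise candidate nest sites by dancing in proportion to site quality, undecided scouts are recruited by whoever is dancing hardest, and the swarm commits once any one site gathers a quorum of supporters:

```python
import random

# Toy model of the honeybee waggle-dance quorum (my own illustration).
# Dance effort scales with site quality, recruitment scales with dance
# effort, and the first site to reach a quorum wins.

site_quality = {"hollow_oak": 0.8, "old_chimney": 0.5, "mailbox": 0.2}
scouts = {site: 1 for site in site_quality}   # one scout has visited each site
undecided = 50                                # scouts not yet committed to a site
QUORUM = 30                                   # supporters needed to trigger the move

recruits = 0
while max(scouts.values()) < QUORUM and undecided > 0:
    # Dance "airtime" at each site = number of supporters times site quality.
    airtime = {s: scouts[s] * site_quality[s] for s in scouts}
    chosen = random.choices(list(airtime), weights=list(airtime.values()))[0]
    scouts[chosen] += 1                       # one undecided scout is won over
    undecided -= 1
    recruits += 1

winner = max(scouts, key=scouts.get)
print(f"Swarm committed to {winner} after {recruits} recruits: {scouts}")
```

Again, no queen weighs the options; the decision emerges from the dance floor.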

Which all brings me back to Friedman’s assessment of the use of social networking and messaging platforms during the incredible revolutions of the Arab Spring! As Alierta writes in Chapter 1.4 of the Global Information Technology Report, “technological change has not led to a progressive isolation of the individual. Instead, technology is facilitating the emergence of new forms of interaction - among individuals, groups, and companies - creating a new kind of cooperative that overcomes limitations of space, time and place.”

In other words, the Arab Spring was inevitable from both a technological and a biological standpoint. The accelerated adoption of mass communication technologies in the Arab world, coupled with a new awareness that what had been done, and how it had been done, was harming a far greater community of people than anyone could have felt before access to these ICTs was available, made the hive revolt against its nasty queens in favor of what is believed to be a system for the greater good - that is, a system closer to democracy.

Gee, I hope Hu Jintao reads my blog, too.