Archive for the ‘Mobile Technology’ Category


Just because Twitter is an American company, does that mean it doesn’t have to play by other countries’ laws when it becomes embroiled in legal cases involving free speech?

That’s exactly the sort of mess that Twitter finds itself in today in the U.K., where a “British soccer player has been granted a so-called super-injunction, a stringent and controversial British legal measure that prevents media outlets from identifying him, reporting on the story or even from revealing the existence of the court order itself” in order to avoid being identified by name in scandalous tweets. Unfortunately for the player, the super-injunction has been ineffective, and “tens of thousands of Internet users have flouted the injunction by revealing his name on Twitter, Facebook and online soccer forums, sites that blur the definition of the press and are virtually impossible to police.”

But I would argue that what is being blurred here is not necessarily the definition of the press, but rather the physical borders of a country where it meets the nebulous nature of the net.

How do we reconcile the physical, geographical and legal boundaries in which we live with the boundless expanses of the internet? I’m sure many people would agree that the democratic (in that it’s arguably free, fair and participatory) nature of Twitter’s platform and mission is inherently American, that Twitter’s ‘Americanness’ is built into its very code. So how do you transplant an American messaging platform such as Twitter’s into other countries and then expect it to sit above, or fly below, the laws of those countries?

“Last week…the athlete obtained a court order in British High Court demanding that Twitter reveal the identities of the anonymous users who had posted the messages.” But back in January of this year, as the New York Times reports, “Biz Stone, a Twitter founder, and Alex Macgillivray, its general counsel, wrote, ‘Our position on freedom of expression carries with it a mandate to protect our users’ right to speak freely and preserve their ability to contest having their private information revealed.’”

So what law should be followed in this case? According to the NYTimes, “Because Twitter is based in the United States, it could argue that it abides by the law and that any plaintiff would need to try the case in the United States, legal analysts said. But Twitter is opening a London office, and the rules are more complicated if companies have employees or offices in foreign countries.”

Yet our technologies and our corporations are very much U.S. representatives overseas. Google’s, Microsoft’s, and Twitter’s offices in other parts of the world are nearly tantamount to U.S. embassies abroad. These companies in large part bear the brunt of representing American ideals and encapsulate American soft power. Even the average Chinese person who may never encounter a flesh-and-blood American will most likely interact with many examples of American cultural goods in his or her lifetime, largely due to the global proliferation of our technologies and media. Which is why, in large part, Twitter and Google have been blocked in China: too democratic for the Chinese government’s liking.

In his chapter (Chapter 1.4) of the Global Information Technology Report (GITR), César Alierta of Telefónica argues that we are in the middle of “the fifth revolution.” The first revolution was the Industrial Revolution, then came steam power, then electricity, then oil, and now we are in the information and communication technology revolution, the fifth. He writes, “Each of these eras has entailed a paradigm shift, more or less abrupt or disruptive, which has led to profound changes in the organization of the economy, starting with individual businesses, and eventually, transforming society as a whole.”

What if we assume that, even if it’s not the original intent, the tacit intent of a technology is to become embedded in someone’s life until it’s nearly impossible to remember living before it? If America’s technologies are little carriers of soft-power democratic beliefs and practices, aren’t those beliefs becoming embedded as well? If so, where do we draw the line on the use of a technology in a country other than the one where it was invented?

And though this is an example of a conflict between two first-world countries (the U.S. and the U.K.), this same tension may be one of the greatest barriers to ICT adoption in emerging and developing economies.

In their chapter (Chapter 1.2) of the 2011 Global Information Technology Report, Enrique Rueda-Sabater and John Garrity of Cisco Systems, Inc. argue that the “treatment of broadband networks…as basic infrastructure; the recognition that competition is one of the best drivers of technology adoption, and imaginative policies that facilitate access to spectrum and to existing infrastructure that can be shared by networks” are necessary preconditions to the accelerated adoption of information and communication technologies (ICT). However, the beliefs that underlie those preconditions, 1) that all citizens of a country deserve unlimited access to the internet as a basic human right, and 2) that competition (which can be read as capitalism here) is one of the best drivers of technology adoption, do not necessarily seem to be universal values.

Certainly the belief that unlimited access to the internet is a basic human right is fast growing among the developed economies of the world. As Karim Sabbagh, Roman Friedrich, Bahjat El-Darwiche, and Milind Singh of Booz & Company write in their chapter (Chapter 1.3) on “Building Communities Around Digital Highways,” “In July 2010…the Finnish government formally declared broadband to be a legal right and vowed to deliver high-speed access (100 megabits per second) to every household in Finland by 2015. The French assembly declared broadband to be a basic human right in 2009, and Spain is proposing to give the same designation to broadband access starting in 2011.”

But Finland and Spain are both democracies, and France is a republic with strong democratic traditions. Democracies tend to believe in transparency, accountability and the free dissemination of information, so naturally the adoption of technologies that put the ability to freely disseminate and consume information squarely in the hands of the people jibes with those beliefs. But that is not so in non-democratic societies. I would thus argue that some well-established form of democracy should also be considered a precondition for the accelerated adoption of ICT. And if a country has already heavily adopted and invested in ICT, as Britain has, then, as we have seen here, the accelerated deployment of ICT will also bring about accelerated petitioning for expanded democratic rights among its people.


The theme of this 10th edition of the Global Information Technology Report is “Transformations 2.0.” And a large theme of this report every year, and the basis for its relevance, is the set of connections it draws between economic development and information and communication technologies (ICT). Namely,

“The next decade will see the global Internet transformed from an arena dominated by advanced countries, their businesses, and citizens, to one where emerging economies will become predominant. As more citizens in these economies go online and connectivity levels approach those of advanced markets, the global shares of Internet activity and transactions will increasingly shift toward the former.” (page x)

As the report repeats (ad nauseam) in each of its otherwise excellent, guest-authored chapters, increased traction and penetration of ICT in developing countries and emerging economies is dependent upon two factors:

1) “the availability of personal computers (PCs),” and

2) “the density of pre-existing phone lines and cable”

Which is interesting on its own, because, as Chapter 1.2, authored by Enrique Rueda-Sabater and John Garrity of Cisco Systems, Inc., asserts, the adoption or existence of PCs isn’t necessarily a precondition to the use of ICT. In fact, many emerging economies in Africa have leapfrogged PC ownership and moved straight into robust mobile internet access with limited or no access to PCs.

Each of these chapters seeks to answer deeper questions provoked by this positive correlation between ICT and economic development.

“we continue to be challenged by questions that were raised by John Gage of Sun Microsystems in the first edition of the GITR: ‘Can we apply ICT to improve the condition of each individual? Can ICT, designed for one-to-one links in telephone networks, or for one-to-many links in radio and television networks, serve to bond us all? And how can new forms of ICT—peer-to-peer, edge-to-edge, many-to-many networks—change the relationship between each one of us and all of us?’” (page 3)

It is the new forms of ICT that this report largely focuses on, and in so doing introduces an interesting subset of factors to consider in evaluating new communications media:

“[While] Transformations 2.0 are difficult to accurately envisage, evolving technology trends are pointing to the most likely directions they will take over the next few years—what we term as the move toward SLIM ICT:

• S for social: ICT is becoming more intricately linked to people’s behaviors and social networks. The horizons of ICT are expanding from traditional processes and automation themes to include a human and social focus.

• L for local: Geography and local context are becoming important. ICT provides an effective medium for linking people and objects (and processes) with local environments. This will allow differentiation across local contexts and the provision of tailored services.

• I for intelligent: ICT will become even more intelligent. People’s behaviors, individual preferences, and object interactions among other elements will be more easily stored, analyzed, and used to provide intelligent insights for action.

• M for mobile: The wide adoption of the mobile phone has already brought ICT to the masses. Advances in hardware (screens, batteries, and so on), software (e.g., natural language interfaces), and communications (e.g., broadband wireless) will continue to make computing more mobile and more accessible.” (page 29)

There’s nothing absolutely groundbreaking here, or even new, really. But it is an interesting subset within which to view evolving and emerging economies.

I can already think of a number of people who would say that these lenses are actually partially contradictory. For instance, “mobile” and “local” seem at odds with each other from one vantage point: mobile access allows anyone to reach out globally across previously restrictive limits of space and time, which contradicts the notion that geography and context are becoming increasingly important.

Still, one of the most exciting things about emerging communications technology and media is that they can develop and burst on the scene in initially contradictory ways, later settling into their deeper contexts in a web of compatibility we would have earlier thought impossible. I think the social, local, intelligent and mobile aspects are good ones to zero in on as a platform for analysis.


The White House finally came forward last week with the decision not to circulate the graphic images that confirmed Osama bin Laden’s death, and I believe I immediately heard people around the U.S. (and the world, perhaps?) breathe a mostly collective sigh of relief. Or was that just me?

It is a favorite pronouncement that we are now an image-driven culture, focused chiefly on video, photos, and graphics to learn, retain and discuss the world around us. This pronouncement is made, particularly, in the context of discussions about the RSS-ification of news and information, where all the news that’s fit to print is expected to fit into 140 characters.

See, as the thinking goes, our brains are attempting to consume so much more information than ever before, so the introduction of new forms of media and imagery (read: not text) will help our brains to better retain and render more realistic those discrete and fast-coming pieces of information.

Whatever the strategy for getting information to us, as consumers of information it is still worth fighting for the chance to use our own discretion when it comes to how we, as humans, want to digest our information. Often we seem to have no choice: the newspapers, site managers, TV and movie producers and editors make that choice for us. But when we are presented with the choice, many of us would still choose not to see graphic images of death and violence.

[I can already hear the devil on my shoulder wanting to advocate for his side of the story, so as an aside, I will say that I do believe there is power in images. And I believe that things can be rendered more real in our everyday lives by seeing them, even if only through a photographer’s lens. That is often a good thing, particularly for the politically sheltered and/or apathetic masses. But I also believe that things can be too real, and hinder a person’s ability to move on with his or her life. Or images can be so real, yet so staggeringly outside the context of someone’s own experience, that they are unreasonably and ineffectively disturbing. I believe the release of images of OBL’s death would have had such an effect for many Americans.]

Which is why, I believe, so many Americans have keyed in on the photo, taken by the White House photographer and posted to the White House Flickr feed, of Obama’s staff watching the live feed of the raid in Pakistan.

This has become the focal and symbolic photo of the moment that OBL was killed, and it has stirred many different reactions. For me the photo is staggering on a number of levels:

1) President Obama is not front and center.

2) The expression on Secretary of State Clinton’s face (whether she likes it or not).

3) The fact that we are experiencing the ultimate surveillance moment: through the eyes of someone who was watching the scene through a camera lens, we are watching those who are watching live footage of what was happening.

4) It is perfect voyeurism, but it is also intensely primal. We are observing the reactions of other human beings to an event we know we must also react to. In their reactions we search for our own feelings about the event, and we take cues.

Incredibly, in their recent Opinionator entry, Gail Collins and David Brooks brought up pretty much everything I was thinking when I first saw this photo. It’s something I think everyone should take a look at, because there is so much to discuss within the limits of this image.

On a similar note, and related to my earlier post about the news of Bin Laden’s death and the role of Twitter in breaking that news, here are some outstanding digital images of the flow of information across the Twitter-verse in the hours preceding and following the White House announcement, care of the SocialFlow blog.

For those of you who are unfamiliar with the company, SocialFlow is a social media optimization platform that is used, in the company’s words, “…to increase engagement (clicks, re-tweets, re-posts and mentions) on Twitter. Our technology determines the optimal time to release the right Tweet based on when your audience is most receptive.”
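SocialFlow hasn’t published the details of that timing algorithm, but the underlying idea, posting when your audience has historically been most responsive, is easy to sketch. The toy example below is only an illustration under that assumption; the engagement log, field names, and hour-of-day heuristic are invented for the example, not SocialFlow’s actual method.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical engagement log: (timestamp, clicks + retweets + mentions) pairs.
# In a real system this would come from an analytics store or the Twitter API.
engagement_log = [
    ("2011-05-02 14:05", 42),
    ("2011-05-02 21:30", 130),
    ("2011-05-03 09:45", 18),
    ("2011-05-03 21:10", 155),
]

def best_posting_hour(log):
    """Return the hour of day with the highest average historical engagement."""
    by_hour = defaultdict(list)
    for stamp, engagement in log:
        hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
        by_hour[hour].append(engagement)
    # Average engagement per hour, then pick the best-performing hour.
    return max(by_hour, key=lambda h: sum(by_hour[h]) / len(by_hour[h]))

print(best_posting_hour(engagement_log))  # -> 21 for the sample data above
```

A production system would obviously layer per-follower modeling, content relevance, and real-time signals on top of this, but the point stands: “optimal time” starts as a statistics problem over past engagement.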


In March of 2011, Pelago, the company known for having produced Whrrl, wrote a mini essay detailing its ideas about a concept it labeled “anti-search.” Anti-search, Pelago claimed, was a movement in search of “serendipitous world discovery”: “Search engines are good at addressing those ‘high intent’ situations, like ‘where’s the closest Starbucks?’ or ‘what kind of food does this place serve?’ or ‘how are the reviews for this restaurant?’ You know what you’re looking for and it’s easy to express your intent as a query.” The essay continues, “Serendipity is ‘zero intent’ discovery, i.e., when you aren’t actually looking for something, but a great idea finds you. Between these two extremes are discovery missions of varying degrees of intent, e.g., ‘I’m hungry’ or ‘I’m bored.’”

Pelago represented this spectrum of intent with an interesting little graphic.

For me, this brings up the question: has the deliberate searching and querying of our surroundings via technology, whether those surroundings are natural or unnatural, really precluded the opportunities for actually, well, discovering places and things? Is there a chance that with the proliferation of location-aware technologies and geographic social mobility, coupled with mobile internet access, we are no longer actually capable of physically seeing and interacting with what is around us? Are we completely incapable of tripping down a little ivy-laden alley and discovering a mural, or a coffee shop, or a funky shoe store without the aid of a mobile device or online coupon website?

According to Pelago, anti-search comprises three elements (a rough sketch of the second element follows the list):

  1. “The right data in order to “know” a user.  I.e. user actions like check-ins, the social graph, interactions among users (which I’ll talk about in a second), etc.
  2. The right algorithms.  We need to take all this data and turn it into personalized recommendations.
  3. The right social ecosystem.  This is decidedly the hardest part.  The necessary content and data is locked up in people’s heads and hearts – we need to make it motivating and easy to get that content out, to get people taking the necessary actions to create the data to feed the algorithms that ultimately allow us to provide an amazing discovery experience.” (http://www.pelago.com/blog/community/2011/03/its-time-for-real-world-anti-search/)
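Pelago never published the algorithms behind element 2, so the following is only a minimal, hypothetical sketch of how check-in data might become personalized recommendations: score the places a user hasn’t visited by how much their visitors overlap with the user’s own check-ins. The users, places, and similarity measure are all invented for illustration.

```python
from collections import Counter

# Hypothetical check-in data: user -> set of places they have checked in to.
checkins = {
    "ana":   {"Oddfellows Cafe", "Elliott Bay Books", "Vivace"},
    "ben":   {"Oddfellows Cafe", "Vivace", "Stumptown"},
    "carla": {"Elliott Bay Books", "Stumptown", "Bauhaus"},
}

def recommend(user, data, top_n=3):
    """Suggest unvisited places, weighted by overlap with similar users."""
    mine = data[user]
    scores = Counter()
    for other, places in data.items():
        if other == user:
            continue
        overlap = len(mine & places)  # crude similarity: shared check-ins
        for place in places - mine:   # only places the user hasn't been
            scores[place] += overlap
    return [place for place, _ in scores.most_common(top_n)]

print(recommend("ana", checkins))  # -> ['Stumptown', 'Bauhaus'] for this sample
```

Even this toy version makes Pelago’s third point concrete: without people volunteering check-ins, there is no data for the algorithm to chew on.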

But I would argue that the act of discovery does not rule out the possibility that the discoverer will stumble upon something they don’t like, something they wouldn’t have chosen. I would also argue that to prevent each of us from doing so is robbery, plain and simple, of the experience of having our sense of taste challenged. How are we supposed to define what we don’t like about something if we’re never faced with the distasteful something in the first place?

Besides, the word serendipity refers, in part, to an unintended experience. How can you possibly achieve that if your intention is to plug a social recommendation engine full of data to steer you toward intended unintended situations or experiences?

Which is why, with Groupon’s reported acquisition of Pelago, the whole ridiculous ethos of these sites and recommendation engines (which are, at their heart, merely designed to sell you things) has come full circle in a doomed cycle of self-mockery.

This acquisition clearly runs counter to Whrrl’s stated “anti-search” goal of “serendipitous world discovery.”

Case in point: how many among us have purchased at least one Groupon at this point (i.e., are unique Groupon users)? There aren’t any solid numbers on that, but it’s safe to say the figure is in the millions, given that roughly 40 million Groupons had been bought at the time this was published. Yet how many of us have subsequently struggled to find the time or the energy to use said coupon, or let the coupons pile up until one or two expired unused? I’d wager that number is in the high hundreds of thousands, if not also in the millions.

So someone tell me how that’s not intent: a deliberate attempt to make the time to go somewhere and use something that was purchased with that specific purpose in mind. It’s not serendipity; it’s a scheduled appointment to go spend money at a pre-determined location.

At the risk of sounding like a complete Luddite, the next time someone wants to indulge in a little “serendipitous world discovery,” I would honestly recommend that they go for a walk in their neighborhood: no headphones, no phone, just them and the buildings, parks, animals, and people around them.


October 19, 2010 – http://www.gather.com/viewArticle.action?articleId=281474978616650

This last week a number of well-respected analysts and research centers released reports discussing the pervasive dominance of “post-PC devices.” Specifically, the discussion revolves around mobile phones, smartphones, tablets and e-readers. The Pew Research Center released its report on “gadget ownership,” which demonstrated the overall dominance of cell phones in the U.S. technology market. Pew surveyed 3,001 American adults to identify the “key appliances of the Information Age.” Those that came out on top were cell phones, PCs, e-readers, and MP3 players.

Gartner and Forrester also threw their hats into the “buzz” ring. Gartner released its own report Friday, proclaiming tablets the new cellphone. In the study Gartner reports that tablet sales have reached 19.5 million units this year and estimates that tablet sales will increase to 150 million units by 2013. In fact, Carolina Milanesi, a research vice president at Gartner, claims mini notebook computers will suffer “strong cannibalization” as the price of media tablets drops nearer to the $300 mark.

In its report, Forrester chose to address head-on the new-era security concerns that companies and consumers will experience as society continues to adopt these post-PC devices, discussing the additional security companies will have to implement for mobile devices now commonly used both at work and at play.

Yet as these respected analysts hail the new post-PC era, tablets and e-readers are still exploring the new hazards of a post-PC world. Just this week the New York Times posted an article discussing the temperature-control problems of Apple’s iPad and comparing it to Amazon’s Kindle. Or, as the New York Times put it, “It seems that some iPads do not like direct sunlight, saunas or long walks on the beach.” And with the iPhone 4’s antenna challenges earlier this summer, it’s clear that we’re nowhere near having worked out all of the kinks, even as the mobile device market continues to innovate and we adopt its emerging products.


October 04, 2010 – http://www.gather.com/viewArticle.action?articleId=281474978571983

How does technology play a role in keeping the Chilean miners both psychologically and physically fit?

As modern-day technology consumers, many people around the world have integrated technology use into their daily rituals. For example, studies have shown that at least half of us turn on our computers first thing in the morning, even before we use the bathroom or drink our coffee.

Technology has so ingrained itself into our daily rituals that it is now considered vital to our mental survival, and it has factored highly into the list of amenities currently being proffered to the 33 Chilean miners who have been stuck a half-mile below the surface of the earth since August 5th, after an enormous rock slide blocked their exit from the mine.

As Newsweek noted in a recent article, the miners face incredible odds as a result of the harsh underground living conditions: “To survive, they must endure constant 90 percent humidity, avoid starvation, battle thirst, guard against fungus and bacteria, and stay sane enough to safely do the work necessary to aid their own rescue.”

However, this is not your traditional mining disaster. The 33 Chilean miners are being treated to a modern-day approach to human survival. That means the miners are able to have their laundry done, eat three hot meals a day, and occasionally enjoy ice cream.

As Newsweek has reported, the rescue effort’s lead psychiatrist, Alberto Iturra Benavides, is implementing a strategy which leaves the miners “no possible alternative but to survive” until drillers finish rescue holes, an operation whose completion date is estimated for early November.

What’s more amazing than even the basic services of laundry and hot meals is how technology has been able to play a vital role in the miners’ daily rituals and the quality of their survival a half-mile down. MSN reported that each weekend the miners have been able to communicate with their families via video chat for nearly eight minutes per miner. Also, as Newsweek reported, “When the miners do get moments to relax, they can watch television — 13 hours a day, mostly news programs and action movies or comedies, whatever is available that the support team decides won’t be depressing.” Dramatic television and movies are barred, and the news they receive is being censored. The censorship is performed on the miners’ behalf, allowing them only positive and escapist entertainment: nothing too serious or grim.

Interestingly, though television and movies are allowed, personal music players are not. The reason given is that they tend to “isolate people from one another.” The rescue team feels that the most important thing the miners can do is to be there for one another and remain united in their efforts to survive; personal music or game players would impede that effort. Newsweek reports that Iturra has proclaimed, “What they need is to be together.”

There are, of course, some constraints on what technology may reach the miners. At this stage in the rescue efforts, any and all technology must be able to fit through the incredibly narrow boreholes (approximately 3.19 inches in diameter) that are the sole means of communication and transport between the surface of the earth and the miners.

To continue following the efforts to rescue the 33 trapped miners in Chile, including the possibility that they might be rescued as early as late October, check out these links:

http://news.yahoo.com/s/ap/lt_chile_mine_collapse

http://www.salon.com/life/feature/2010/09/16/chile_miners_waiting
http://www.cnn.com/2010/WORLD/americas/08/26/mine.disasters.survivors/index.html


September 22, 2010 – http://www.gather.com/viewArticle.action?articleId=281474978538816

Hear that? It’s the sound of the content aggregator death knell. On October 1st, the popular web-based RSS reader and news aggregator Bloglines, run by the team at Ask.com, will discontinue service. When asked why, Ask’s team reported that “social media sites like Twitter and Facebook killed it.”

And Bloglines is only the first. Other aggregators such as Google Reader, Digg, Reddit, and StumbleUpon are sure to be next. According to Hitwise, “visits to Google Reader are down 27 percent year-over-year.” The New York Times recently reported “a more pivotal reason that Digg is falling behind, analysts say, is that users are simply spending more time on Facebook and Twitter than they are on Digg.”

How did this happen? Is it truly impossible for content aggregation sites such as Google Reader, StumbleUpon, Digg and Reddit to exist side-by-side with the type of social news aggregation offered by Facebook and Twitter? What does this mean for RSS? RSS, which stands for Really Simple Syndication, is a protocol that pushes website updates to readers around the world so they don’t have to search for new content or endlessly hit refresh on a favorite web page. In 2005 RSS was a game changer. Today? Not so much.
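For anyone who never looked under the hood, an RSS feed is just an XML document listing a site’s latest items, which a reader polls on a schedule. As a rough sketch (the feed URL is a placeholder, and real aggregators add caching, scheduling and de-duplication on top), fetching and parsing a feed takes only a few lines of Python’s standard library:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder feed URL; any RSS 2.0 feed shares the same <channel>/<item> shape.
FEED_URL = "https://example.com/feed.rss"

def fetch_headlines(url):
    """Download an RSS feed and return (title, link) pairs for each item."""
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    channel = tree.getroot().find("channel")
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in channel.findall("item")
    ]

for title, link in fetch_headlines(FEED_URL):
    print(title, "->", link)
```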

According to the Bloglines message about its own end-date, “the Internet has undergone a major evolution. The real-time information RSS was so astute at delivering (primarily, blog feeds) is now gained through conversations, and consuming this information has become a social experience…being locked in an RSS reader makes less and less sense to people as Twitter and Facebook dominate real-time information flow. Today RSS is the enabling technology – the infrastructure, the delivery system.”

As a 2009 NielsenWire post also reported, part of this trend comes from the fact that blogs and news sites are no longer the endgame news tool; our friends are. “Socializers trust what their friends have to say and social media acts as an information filtration tool…If your friend creates or links to the content, then you are more likely to believe it and like it. And this thought plays out in the data.”

Does Mark Zuckerberg know that his company has driven content aggregators to the grave? Undoubtedly, yes. A recent New Yorker profile quoted Zuck as saying, “It’s like hardwired into us in a deeper way: you really want to know what’s going on with the people around you.” In fact, Facebook’s Open Graph feature allows users to see which articles their Facebook friends have read, shared, and liked. “Eventually,” the New Yorker observed, “the company hopes that users will read articles, visit restaurants, and watch movies based on what their Facebook friends have recommended, not, say, based on a page that Google’s algorithm sends them to.”

Some argue that content aggregators, or RSS readers, were always destined for the internet graveyard simply because they were too complicated and allowed users to become completely overwhelmed by the sheer bulk of information being pushed to them. One thing is for sure: if content aggregators don’t find a way to better integrate with, or at least successfully co-exist with, social networking offerings like Facebook and Twitter, they will soon be relegated to the ever-growing category of “old news.”


September 22, 2010 – http://www.gather.com/viewArticle.action?articleId=281474978538594

Sneaking off to smoke cigarettes. Experimenting with alcohol. Sexual experimentation. These are all Hollywood hallmarks and symbols of adolescence in the United States. Americans think of the teenage years as the time to get out into the world, try new (and perhaps forbidden) things, and become an adult in the process. But what if, in the process, the adolescent becomes a felon?

Every once in a while a high-profile case involving teens and the internet hits the web, and parents start to squirm. Generally, however, these cases highlight how the Net may be perilous to the teen, not how the teen may be perilous to the Net. While in some situations those warnings are legitimate, it may be time for parents to consider another way in which their child’s internet use may be perilous to his or her future: hacking.

Today’s teens are more tech-savvy than any previous generation, and the generation that follows them will be more savvy still. According to a February 2010 study conducted by the Pew Research Center, “Internet use is near ubiquitous among teens and young adults. In the last decade, the young adult internet population has remained the most likely to go online. Over the past 10 years, teens and young adults have been consistently the two groups most likely to go online, even as the internet population has grown and even with documented larger increases in certain age cohorts (e.g. adults 65 and older).”

Thus it is no longer sufficient to think of every teen as wide-eyed and naive about the varied functions and uses of the Net. Many teens are way ahead of the rest of us, hacking and writing code, doing their own programming and creating the next generation’s tools. However, the same urges that drive teens to experiment with drugs and sex, those strong hits of hormones and a sense of invincibility, today also lead some of them to commit crimes on the web.

Just this week a 17-year-old Australian teen caused a “massive hacker attack on Twitter which sent users to Japanese porn sites and took out the White House press secretary’s feed.” The teenager, Pearce Delphin, simply discovered a security flaw in Twitter’s code and publicized it. The flaw was then exploited by hackers who wreaked havoc on Twitter’s user base of more than 100 million for nearly five hours. When asked why he would do such a thing, Delphin reportedly replied, “I did it merely to see if it could be done … that JavaScript really could be executed within a tweet…I had no idea it was going to take off how it did. I just hadn’t even considered it.”

But the story gets better: before the Associated Press could even hypothesize about what the danger of this hack might have been, the 17-year-old Delphin beat them to it. “Delphin said it could have been used to ‘maliciously steal user account details.’” He told reporters, “The problem was being able to write the code that can steal usernames and passwords while still remaining under Twitter’s 140-character tweet limit.”
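The flaw Delphin surfaced was a classic cross-site scripting (XSS) hole: tweet text was rendered into the page without being fully escaped, so a crafted tweet could smuggle in an onMouseOver script. Twitter’s actual fix isn’t documented here; the sketch below only illustrates the general defense, HTML-escaping user content before display, and uses an inert placeholder payload rather than the real exploit.

```python
import html

# A hostile "tweet" that tries to inject markup. The payload here is inert;
# it only shows the shape of the problem, not the actual 2010 exploit.
tweet = '<a href="#" onmouseover="alert(\'xss\')">hover me</a>'

def render_unsafely(text):
    # BAD: user content dropped straight into the page markup.
    return f"<p class='tweet'>{text}</p>"

def render_safely(text):
    # GOOD: escape <, >, and quotes so the browser treats it as plain text.
    return f"<p class='tweet'>{html.escape(text, quote=True)}</p>"

print(render_unsafely(tweet))  # the injected onmouseover attribute survives
print(render_safely(tweet))    # &lt;a href=&quot;#&quot;... rendered as text
```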

Likewise, in 2008 another 17-year-old, from Pennsylvania, admitted to crashing Sony’s PlayStation site after being banned for cheating in a game called SOCOM U.S. Navy SEALs. By intentionally infecting the Sony site with a virus, the teenage honors student was able to crash the site for 11 days in November 2008. In that case the kid got lucky: rather than pursue a grand jury investigation, the authorities decided to let the teen’s local juvenile court handle the charges. In the end, the 11th-grade student was judged delinquent on charges of unlawful use of a computer, criminal use of a computer, computer trespassing and the distribution of a computer virus.

Somewhat humorously, Net security companies like Symantec and McAfee have pages dedicated to teen use and abuse of the Net. Symantec’s is titled “The Typical Trickery of Teen Hackers,” and it addresses questions such as “I discovered that my teenager had figured out my computer password and logged in, resetting the parental controls we had installed. How did this happen?” In its recent 2010 study, McAfee reports that “85 percent of teens go online somewhere other than at home and under the supervision of their parents, nearly a third (32 percent) of teens say they don’t tell their parents what they do while they are online, and 28 percent engage with strangers online. The survey results should serve as a wake-up call for many parents.”

While teenage tomfoolery and trickery is generally regarded as humorous (thanks, Hollywood) and as a coming-of-age tendency, the trouble begins when a teen’s future is jeopardized because he or she has not developed a sense of the moral and ethical implications of their actions on the web. Because hacking is not as tangible as, say, stealing a T-shirt at the mall, it is harder for teens to grasp how a few keystrokes can be considered criminal. Yet it is up to today’s and tomorrow’s parents to put in the extra effort to educate teens about how their activities online may jeopardize their extremely valuable futures.


September 12, 2010 – http://www.gather.com/viewArticle.action?articleId=281474978513602

As the 2010 NFL regular season begins, fans are reminded of everything they love about the game: the rushing roar of the home team crowd, the crisp fall weather, the complex plays, the strut and swagger of the scoring players. But fans should also take note of the new technology constantly being deployed and tested on football’s biggest fan base: the TV audience.

It may not be obvious why football fans would be such early technology adopters, but it begins to make more sense when you consider how statistically obsessed and fantasy-football-involved the modern fan is. A Democrat and Chronicle article on the effects of technology on modern NFL consumption reported that one average fan it interviewed is “never without his iPhone as he is constantly fed game updates and statistics each Sunday. At home, he watches games on his new big-screen plasma high-definition television through the Dish Network and writes a fantasy football blog at http://www.ffgeekblog.com.”

The same article listed some interesting stats on NFL media consumption: “While 1 million fans watch NFL games in person each week, an average of 16 million watch on television.” TVbytheNumbers.com reported that, according to Nielsen, this year’s season opener between the Minnesota Vikings and New Orleans Saints on September 9th was the most-watched first regular season game ever.

With technologies such as high-definition broadcasts, the virtual first-down line, access to any game via Sunday Ticket, replays, and other league scores rotating on the screen, there’s no doubt that the ability to consume NFL games on TV is better than ever. But at stake are ticket sales for the live games, which suffer by comparison in terms of convenience and overall cost. Fewer people buying tickets to live games means more local blackouts. NFL team owners and stadium managers are investigating options such as seat-back TV screens to bring that experience to the live game, but mobile and wireless technologies still reign supreme.

All of this adds up to make American football fans (college as well as NFL) some of the biggest consumers of home entertainment centers, TV equipment, and cable and satellite TV packages. However, as the future of network and cable TV looms ever more uncertain, and as web-based offerings work harder and harder to expand their scope, it seems inevitable that newly emerging products that combine the TV and web-browsing experience, such as Google TV and Apple TV, are perfectly suited to court these NFL early adopters with cutting-edge offerings. How they do so, and how much they cater to this influential demographic of TV fans, remains to be seen.


August 30, 2010 – http://www.gather.com/viewArticle.action?articleId=281474978482524

At the beginning of this year Mark Zuckerberg famously announced that privacy was dead, stirring the pot and heightening concerns among internet users that their identities and personal information were being appropriated for commercial gain.

Arguably, 2010 has been the year of “location-aware technology,” whether that location is virtual or physical. These days your computer knows where you’ve been online, where you’re going, and why you buy things there, and your phone can broadcast where you physically are on the globe and what advertising you’re passing at that very moment. Clearly, marketers are doing their best to collect as much of that information as possible and to use it.

One of the main issues in the ongoing debate about whether location aware technology and geotagging are net-positive or net-negative developments (or somewhere in between) centers on the concession that advertising and marketing are not going away any time soon. Advertising is an institutionalized facet of American life, especially in major urban centers. That being said, marketers like to argue that with more information they can better speak to a consumer’s interests and needs, as opposed to leading a consumer to buy something he or she doesn’t need.

Leaving that argument aside for a minute, the real concern here is privacy, and educating the masses on how to protect their own privacy. A recent article in the New York Times cautioned readers against geotagging photos at their homes, and cited the example of Adam Savage, one half of the “MythBusters” team, who had geotagged a Twitter photo of his car in front of his personal residence in the Bay Area. The Times pointed out that by doing so, Adam Savage had just informed all of his Twitter followers of his personal address, the make and model of his car, and that he was leaving for work at that very moment: “geotags… are embedded in photos and videos taken with GPS-equipped smartphones and digital cameras. Because the location data is not visible to the casual viewer, the concern is that many people may not realize it is there; and they could be compromising their privacy, if not their safety, when they post geotagged media online.”
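The “invisible” location data the Times describes lives in the photo’s EXIF metadata. As a rough illustration, the sketch below uses the third-party Pillow imaging library (an assumed tool choice; many EXIF utilities do the same job) to check whether an image carries a GPS tag and to strip metadata before sharing. The filename is a placeholder.

```python
from PIL import Image          # Pillow, assumed to be installed
from PIL.ExifTags import TAGS

def has_geotag(path):
    """Return True if the image's EXIF metadata contains GPS information."""
    exif = Image.open(path).getexif()
    return any(TAGS.get(tag_id) == "GPSInfo" for tag_id in exif)

def strip_metadata(src, dst):
    """Re-save only the pixels, dropping EXIF (and with it any geotag)."""
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)

if has_geotag("photo.jpg"):                    # placeholder filename
    strip_metadata("photo.jpg", "photo_clean.jpg")
```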

Now with Facebook Places, a new feature that allows users to tag their locations in their status updates, and the increasing use of Twitter and Foursquare, organizations such as the ACLU are concerned that the spread of technology is once again outpacing usage education and awareness of the risks of information abuse: “The organization highlighted the element of the new service that allows users to ‘tag’ an accompanying friend and post his or her location to Facebook – even if the friend does not have an iPhone, which is currently the only platform on which the application is available.”

The other side of this coin involves how browsers and advertisers track our movements online. After all, this is a huge market that Facebook plans to tap: 50 percent of Facebook’s more than 400 million users log in to the site at least once a day, and more than a quarter of them access the service from mobile devices. However, despite all of the hype, new research shows that most users still decline to announce their location publicly.

According to a recent Forrester Research report, “Just 4 percent of Americans have tried location-based services, and 1 percent use them weekly…Eighty percent of those who have tried them are men, and 70 percent are between 19 and 35.”

Returning to the modern marketer’s argument that the more information they can gather on a person’s interests, habits and locations, the more applicable an ad will be for that consumer: there is strong evidence to support this. Personalized ad retargeting, where ads for specific products that consumers have perused online follow them around as they continue to browse the web, is becoming more pervasive. And marketers are big believers: “‘The overwhelming response has been positive,’ said Aaron Magness, senior director for brand marketing and business development at Zappos, a unit of Amazon.com.”
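Mechanically, retargeting is simple: a tracking cookie (or server-side profile) records which product pages a visitor viewed, and the ad server later prefers ads for exactly those products when the visitor turns up elsewhere. The sketch below is a toy version of that loop; the visitor IDs, product names, and ad copy are invented for illustration.

```python
# Toy retargeting loop: remember what a visitor browsed, then serve ads for it.
viewed_products = {}  # stands in for a tracking cookie / server-side profile

def record_view(visitor_id, product_id):
    """Called when a visitor loads a product page on the retailer's site."""
    viewed_products.setdefault(visitor_id, []).append(product_id)

def pick_ad(visitor_id, default_ad="generic banner"):
    """Later, on an unrelated site, prefer an ad for something already browsed."""
    history = viewed_products.get(visitor_id, [])
    if history:
        return f"ad for {history[-1]}"  # most recently viewed product
    return default_ad

record_view("visitor-42", "running shoes")
record_view("visitor-42", "rain jacket")
print(pick_ad("visitor-42"))   # -> "ad for rain jacket"
print(pick_ad("visitor-99"))   # -> "generic banner"
```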

Still, consumer sentiment about being monitored, whether online or off, reflects overall concern and a sense of creepiness. Ongoing education about how browsers and advertisers collect behavioral information both online and off might serve to eliminate the two-way-mirror feeling that many consumers experience, but it has not yet proven to completely allay consumer fears about a potentially serious breach of privacy.

In other words, while consumers feel uncertain as to where all of this leaves their privacy, advertisers are increasingly certain of where consumers stand. Literally.