Archive for the ‘Search’ Category


September 22, 2010- http://www.gather.com/viewArticle.action?articleId=281474978539098

Hegel famously proclaimed that “history is a dialectic,” that is, a dialogue between people who may hold differing views but who seek to arrive at a shared basis of truth by debating together. In other words, history has no single discernible truth; it approaches the overall goal of “truth” through the discussion of all of history’s voices and their personal accounts of what happened.

This quotation of Hegel’s is often cited in discussions of the literary canon, or the “Western canon,” as some refer to it. The term “Western canon” denotes the selection of books, music, art and general culture that have most influenced the shape of Western civilization and culture over time.

A simple search on Wikipedia for either of these terms will tell you much about what they are. What Wikipedia doesn’t explicitly tell us, however, is that it is also holding the record of how the modern canon is determined, and how the truth of history is being shaped by the myriad voices that contribute to it every day.

A recent Bits blog post from the New York Times noted the trail of edits that the Internet preserves for anyone who goes looking for it. James Bridle, founder of BookTwo, is particularly interested in what the future of literature holds, but also in how that discussion is playing out and how we can track where it has been. In a recent entry, Bridle points out that although a Wikipedia article may tell a specific story, its edits reveal a process of opinion, correction, and the potential biases of each writer. In this respect Wikipedia, like every constantly updated website, is an archive of information evolving over time. What interests Bridle is that this offers two distinct stories: one front-facing to the reader, and one that reveals the behind-the-scenes editing, writing and creative process.

To illustrate the point, Bridle selected the Wikipedia entry on the Iraq War and had its entire edit history published in physical volumes. In his entry, Bridle writes, “This particular book — or rather, set of books — is every edit made to a single Wikipedia article, The Iraq War, during the five years between the article’s inception in December 2004 and November 2009, a total of 12,000 changes and almost 7,000 pages.” Bridle notes that the complete set comes to twelve volumes, roughly the size of a traditional encyclopedia.

Which brings us to the favorite comparison: Wikipedia versus your parents’ encyclopedia. Is one more reliable than the other? Who gets to decide what is part of the overall Western canon? Shouldn’t we all be alarmed by a process in which a child may be permitted to contribute to an online encyclopedia that many now treat as an expert source?

In fact, Bridle’s point recalls a standard strategy for defending the credibility of Wikipedia and its process against would-be detractors: citing a story central to the compilation of the Oxford English Dictionary in the 19th century. Simon Winchester’s book, The Professor and the Madman: A Tale of Murder, Insanity, and the Making of The Oxford English Dictionary, details the Jekyll-and-Hyde story of the brilliant but clinically insane Dr. W.C. Minor, who supplied thousands of entries to the editors of the OED while committed at the Broadmoor Criminal Lunatic Asylum. In other words, if a madman could contribute significantly to a tome of the English language that remains the authoritative text today, why can’t a perfectly sane pre-teen contribute to the modern canon of information about frogs, Major League Baseball, or global warming? Should we be preventing anyone from contributing to the ever-evolving conversation about what is truth and what is history?

As sites such as Twournal (which offers the narcissistic boon of publishing your very own Tweets through time in print form) begin to proliferate, each of us can possess our very own piece of the modern web canon, whether in print or online. As Twournal describes itself, “Over time tweets can tell a story or remind us of moments. In 20 years we don’t know whether twitter will be around but your Twournal will be. Who knows maybe your great grandkids will dig it up in the attic in the next century.”

That means each of us can now print a credible-looking book of our own (often misspelled) musings and meanderings as representative of history, according to us. Yet in the absence of a forum in which people can engage with our Tweeted observations, there’s no real dialectic. It therefore seems safe to conclude that Hegel would have preferred Wikipedia to Twitter, or to your Twournal.


September 22, 2010- http://www.gather.com/viewArticle.action?articleId=281474978538816

Hear that? It’s the sound of the content aggregator death knell. On October 1st, popular web-based RSS reader and news aggregator Bloglines, run by the team at Ask.com, will discontinue service. When asked why, Ask’s team reported “that social media sites like Twitter and Facebook killed it.”

And Bloglines is only the first. Other aggregators such as Google Reader, Digg, Reddit, and StumbleUpon are sure to be next. According to Hitwise, “visits to Google Reader are down 27 percent year-over-year.” The New York Times recently reported “a more pivotal reason that Digg is falling behind, analysts say, is that users are simply spending more time on Facebook and Twitter than they are on Digg.”

How did this happen? Is it truly impossible for content aggregation sites such as Google Reader, StumbleUpon, Digg and Reddit to exist side-by-side with the kind of social news aggregation offered by Facebook and Twitter? And what does this mean for RSS? RSS, which stands for Really Simple Syndication, is a web feed format that lets readers automatically collect website updates from around the world so they don’t have to search for new content or endlessly hit refresh on a favorite web page. In 2005 RSS was a game changer. Today? Not so much.
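For anyone curious what RSS actually looks like under the hood, here is a minimal sketch: an invented RSS 2.0 feed and the few lines of Python a feed reader needs to pull headlines out of it. The feed contents and URLs below are made up for illustration.

```python
# A minimal sketch of an RSS 2.0 document and how a reader consumes it,
# using only the Python standard library. The feed below is invented.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>http://example.com/</link>
    <item>
      <title>First post</title>
      <link>http://example.com/first-post</link>
      <pubDate>Wed, 22 Sep 2010 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(SAMPLE_FEED)
channel = root.find("channel")
print("Feed:", channel.findtext("title"))

# A reader polls this document periodically and lists any new <item>
# elements, so the user never has to revisit the site or hit refresh.
for item in channel.findall("item"):
    print("-", item.findtext("title"), "->", item.findtext("link"))
```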

According to the Bloglines message about its own end-date, “the Internet has undergone a major evolution. The real-time information RSS was so astute at delivering (primarily, blog feeds) is now gained through conversations, and consuming this information has become a social experience…being locked in an RSS reader makes less and less sense to people as Twitter and Facebook dominate real-time information flow. Today RSS is the enabling technology – the infrastructure, the delivery system.”

As a 2009 NielsenWire post also reported, part of this trend comes from the fact that blogs and news sites are no longer the endgame news tool; our friends are. “Socializers trust what their friends have to say and social media acts as an information filtration tool…If your friend creates or links to the content, then you are more likely to believe it and like it. And this thought plays out in the data.”

Does Mark Zuckerberg know that his company has driven content aggregators to the grave? Undoubtedly, yes. A recent New Yorker profile quoted Zuck as saying, “It’s like hardwired into us in a deeper way: you really want to know what’s going on with the people around you.” In fact, Facebook’s Open Graph feature allows users to see which articles their Facebook friends have read, shared, and liked. “Eventually,” the New Yorker observed, “the company hopes that users will read articles, visit restaurants, and watch movies based on what their Facebook friends have recommended, not, say, based on a page that Google’s algorithm sends them to.”

Some argue that content aggregators, or RSS readers, were always destined for the internet graveyard simply because they were too complicated and let users become overwhelmed by the sheer bulk of information being pushed at them. One thing is for sure: if content aggregators don’t find a way to better integrate with, or at least successfully coexist with, social networking offerings like Facebook and Twitter, they will soon be relegated to the ever-growing category of “old news.”


September 08, 2010- http://www.gather.com/viewArticle.action?articleId=281474978505328

Banking on the fact that people read more quickly than they type, and betting that it has once again designed a feature that will change the way the world searches for information, Google has launched Google Instant.

Instant provides real-time potential search results based on each letter typed into the query box, and works with lightning-quick speed.
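Google has not published the mechanics, but the interaction it describes can be sketched in a few lines: on every keystroke, match the partial query against known popular queries and immediately show results for the best guess. The query list and popularity counts below are invented for illustration; the real system draws on vastly more data and ranking signals.

```python
# A toy illustration of keystroke-by-keystroke "instant" results.
# Query popularity counts are invented for this sketch.
POPULAR_QUERIES = {
    "weather boston": 9500000,
    "wikipedia": 8000000,
    "wolfram alpha": 150000,
}

def instant_guess(partial):
    """Return the most popular known query starting with the typed prefix."""
    matches = [q for q in POPULAR_QUERIES if q.startswith(partial.lower())]
    return max(matches, key=POPULAR_QUERIES.get) if matches else None

# Simulate a user typing one letter at a time.
for typed in ("w", "wi", "wo", "wol"):
    print(repr(typed), "->", instant_guess(typed))
```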

Currently Google claims that Instant “saves two to five seconds per search” and “will cumulatively save people more than 3.5 billion seconds every day, or 11 hours every second.” (The arithmetic checks out: 3.5 billion seconds spread over the 86,400 seconds in a day is roughly 40,500 seconds, or about 11 hours, saved per second.) Was searching taking us all too long before? Was it, say, so tediously long that it was preventing us from spending time with our families or volunteering at our local charities? Not likely. However, there are those who would say that faster is always better.

Still, there are bound to be skeptics, many of whom are already saying that Instant is merely a ploy to make Google look more cutting-edge, without necessarily representing a truly large change in how Google “organizes the world’s information.” In fact, Google itself admitted that “While the behind-the-scenes engineering that generates those results is a big reason Google gets the majority of searches, it can be hard for average users to notice. The instant results make this much clearer.”

PC Mag compared Google Instant to Bing’s Type Ahead functionality, which has been in place for a while, and found that Google Instant doesn’t necessarily come out on top. Specifically, reviewer Lance Ulanoff mentioned “Google Instant, for now, only works when you’re signed in and may be using some search history to intuit results. It combines type ahead with live results, while Bing only offers you a list of probable word matches. Still, the word matches in Bing are pretty solid, and if Google Instant is showing you a page you weren’t interested in anyway, then what’s the value in it?”

For the skeptics, cynics, and those with sensitive eyeballs, Google Instant does offer the chance to opt out (hint: look to the right of the search box for the blue link reading “Instant is on” and click it), as Gadgetwise reports, along with other tips on how to use the new feature.

As Gizmodo reported from the announcement event, Google Instant will be available on Chrome, Firefox, Safari and IE 8 starting today. It is not yet available on browser toolbars or for mobile phones; that rollout is expected in the coming months.

Every new search innovation that is announced also reminds us of the delayed promise of the “answer engine,” embodied most famously by the “computational knowledge engine” that the Wolfram Alpha people introduced last year. Answer engines such as Wolfram Alpha, launched in 2009, are supposed to collect and organize information from authoritative databases and engines in order to produce the answer to a specific question. In other words, the next step was supposed to be skipping the list of search results entirely and sending us straight to an answer backed by authoritative, well-sourced information.
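To make the contrast concrete, here is a deliberately tiny sketch of the answer-engine idea: match a question against curated, structured facts and return the answer itself rather than a ranked list of pages. The fact table and the crude keyword matching below are invented for illustration; Wolfram Alpha’s actual system computes over vast curated datasets.

```python
# A toy "answer engine": map a question to a curated, structured fact
# and return the answer directly, rather than a list of links.
# Facts and matching rules here are invented for illustration only.
FACTS = {
    ("population", "france"): "about 65 million (2010 estimate)",
    ("boiling point", "water"): "100 degrees Celsius at sea level",
}

def answer(question):
    q = question.lower()
    for (topic, subject), fact in FACTS.items():
        if topic in q and subject in q:
            return fact   # the answer itself...
    return None           # ...or nothing, but never a pile of links

print(answer("What is the population of France?"))
```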

Clearly Google has not yet adopted that model of search, and Wolfram Alpha has not yet begun to compete with Google’s domination of the search market. Alas, as users we shall all have to be satisfied with how much faster Google delivers its search results, and keep following the progress that answer engines have yet to deliver.

Looking for more about Google Instant? Check out Google’s YouTube video about Google Instant.


August 30, 2010- http://www.gather.com/viewArticle.action?articleId=281474978482524

At the beginning of this year Mark Zuckerberg famously announced that privacy was dead, stirring the pot and deepening concerns among many internet users that their identities and personal information were being appropriated for commercial gain.

Arguably, 2010 has been the year of “location-aware technology,” whether that location is virtual or physical. These days your computer knows where you’ve been online, where you’re going, and why you buy things there, and your phone can tell any app that asks exactly where you are on the globe and what advertising you’re passing at that very moment. Clearly, marketers are doing their best to collect as much of that information as possible and to use it.

One of the main issues in the ongoing debate about whether location aware technology and geotagging are net-positive or net-negative developments (or somewhere in between) centers on the concession that advertising and marketing are not going away any time soon. Advertising is an institutionalized facet of American life, especially in major urban centers. That being said, marketers like to argue that with more information they can better speak to a consumer’s interests and needs, as opposed to leading a consumer to buy something he or she doesn’t need.

Leaving that argument aside for a minute, the real concern here is privacy, and educating the masses on how to protect their own. A recent article in the New York Times cautioned readers against geotagging photos taken at home, citing the example of Adam Savage, one half of the “MythBusters” team, who had geotagged a Twitter photo of his car in front of his residence in the Bay Area. The Times pointed out that by doing so, Savage had just informed all of his Twitter followers of his home address, the make and model of his car, and the fact that he was leaving for work at that very moment. As the article explains, “geotags… are embedded in photos and videos taken with GPS-equipped smartphones and digital cameras. Because the location data is not visible to the casual viewer, the concern is that many people may not realize it is there; and they could be compromising their privacy, if not their safety, when they post geotagged media online.”
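For readers who want to see this invisible data for themselves, here is a short sketch that digs the GPS fields out of a photo’s EXIF metadata. It assumes the PIL/Pillow imaging library is installed, and “photo.jpg” is a placeholder path, not a real file.

```python
# A sketch of how hidden location data rides along inside a JPEG photo.
# Requires the PIL/Pillow library; "photo.jpg" is a placeholder path.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def extract_geotag(path):
    """Return the raw GPS EXIF fields from a JPEG, if any are present."""
    exif = Image.open(path)._getexif() or {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            # Decode numeric GPS sub-tags (latitude, longitude, etc.)
            return {GPSTAGS.get(k, k): v for k, v in value.items()}
    return None  # no geotag embedded

print(extract_geotag("photo.jpg"))  # e.g. {'GPSLatitude': (37, 46, 29.7), ...}
```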

Now, with Facebook Places, a new feature that lets users tag their locations in status updates, and with the growing use of Twitter and Foursquare, organizations such as the ACLU are concerned that the spread of the technology is once again outpacing usage education and awareness of the risks of information abuse. As one report noted, “The organization highlighted the element of the new service that allows users to ‘tag’ an accompanying friend and post his or her location to Facebook – even if the friend does not have an iPhone, which is currently the only platform on which the application is available.”

The other side of this coin involves how browsers and advertisers track our movements online. After all, this is a huge market that Facebook plans to tap: 50 percent of Facebook’s more than 400 million users log in to the site at least once a day, and more than a quarter of them access the service from mobile devices. Yet despite all of the hype, new research shows that most users still decline to announce their location publicly.

According to a recent Forrester Research report, “Just 4 percent of Americans have tried location-based services, and 1 percent use them weekly…Eighty percent of those who have tried them are men, and 70 percent are between 19 and 35.”

Returning to the modern marketer’s argument that more information about a person’s interests, habits and locations makes for more relevant ads, there is strong evidence to support it. Personalized ad retargeting, in which ads for specific products consumers have perused online follow them around as they continue to browse the web, is becoming more pervasive. And marketers are big believers: “‘The overwhelming response has been positive,’ said Aaron Magness, senior director for brand marketing and business development at Zappos, a unit of Amazon.com.”

Still, consumer sentiment about being monitored, whether online or off, reflects both concern and a general sense of creepiness. Ongoing education about how browsers and advertisers collect behavioral information might help dispel the two-way-mirror feeling many consumers experience, but it has not yet proven to completely allay fears about a potentially serious breach of privacy.

In other words, while consumers feel uncertain as to where all of this leaves their privacy, advertisers are increasingly certain of where consumers stand. Literally.


August 16, 2010- http://www.gather.com/viewArticle.action?articleId=281474978448715

The recent attention surrounding Verizon and Google’s agreement on net neutrality has unearthed manifold issues buzzing in the minds of the world’s web users: how free is the internet? And is that freedom an active function of American democracy? Much like free and fair elections, First Amendment rights to free speech and the right to assemble, the Internet can be a phenomenal asset to American democracy; indeed, many modern political theorists consider the Web a pillar of the modern public sphere. Unlike those guarantees, however, a “neutral” Internet is not promised to Americans under constitutional law.

But the issue also pulls in the more capitalistic challenges of the internet: how to keep strengthening the American broadband infrastructure, and how ISPs can profit from the business of providing access without compromising the neutrality of the content. Certainly the US would not benefit from imposing stringent regulations on ISPs seeking to do business here, especially given the recent news that China has just surpassed Japan as the world’s second largest economy and is pushing hard to become number one. To remain competitive in the global economy, improving the network infrastructure and encouraging healthy competition among ISPs will remain very important for the United States.

The issue of net neutrality is also inextricably enmeshed in the ongoing debate over Google’s “Don’t Be Evil” mantra, which has come under fire in recent years after political fiascos such as Google’s compromises with China. Now Google faces similar criticism for compromising on its commitment to net neutrality, and its credibility in the search market may take a hit as a result.

The last issue implicated in the net-neutrality debate is whether mobile access should be treated the same way that home or PC access is. The strain on mobile networks, evidenced by AT&T’s constant game of infrastructural catch-up since signing on with the iPhone, has been widely covered, so it’s easy to see why Verizon is anxious to nip the issue in the bud with Google at the outset.

Each of these issues is clearly significant enough to require full coverage by the news media, but there are deeper implications for American democracy and the freedom of information in the country. Americans often speak of the “right to access the world’s information” in the context of the glorious early days of the Internet, and of course, of Google’s appearance on the world’s stage. However, how far will Americans go to secure that access as a formal right? And would Americans vote for political regulations and requirements that may ultimately limit the quality of that access in favor of guaranteeing it for all?


July 22, 2010- http://www.gather.com/viewArticle.action?articleId=281474978387268

The news recently broke that Foursquare is forming agreements to start charging search engines such as Google and Bing for its geographic location data. Various news sources instantly ran stories positing what these information transactions might lead to. Among the many educated guesses were enhanced real-time search, social mapping, and more strongly developed mobile search. I would add one more: more tightly targeted traditional advertising and marketing media.

Internet analysts and emerging-media connoisseurs may write disproportionately about innovative new technologies, but ask the advertising and marketing executives of the world whether they have abandoned traditional media as part of their integrated campaigns, and the answer will be a resounding “no.” The data that Foursquare will provide is a solid argument for retaining those traditional marketing strategies: what we physically see and interact with outside the realm of our computer and television screens still matters.

Still, it might surprise most people to learn that the data they generate by using Foursquare’s geo-location technology may be used to determine what shows up on their local billboards. Yes, you read that right: billboards. Even if, admittedly, these days a billboard might be digital and therefore closer to a television than to the enormous printed posters the term still conjures.

If you think about it, it makes perfect sense. Geo-location data brings the internet back down to earth by recording where you were when you saw what. With apps like Foursquare, suddenly it’s not who you are but where you are, and when, that matters most. That means physical advertising such as billboards can become data-driven too, targeted to the interests of local populations.
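Nobody outside these deals knows exactly how the data would be used, but the basic idea can be sketched: aggregate recent check-ins near a billboard and choose the ad category that best matches local foot traffic. The check-in data and categories below are entirely invented for illustration.

```python
# A hypothetical sketch of check-in-driven billboard targeting.
# The venue data is invented; Foursquare's real data and any actual
# billboard products would of course look very different.
from collections import Counter

# Recent venue check-ins within a few blocks of one billboard.
checkins_near_billboard = [
    "coffee shop", "coffee shop", "gym", "bookstore",
    "coffee shop", "gym", "coffee shop",
]

def pick_ad_category(checkins):
    """Pick the ad category matching the most common nearby venue type."""
    return Counter(checkins).most_common(1)[0][0]

print(pick_ad_category(checkins_near_billboard))  # -> "coffee shop"
```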

How do you feel about these types of emerging social media and GPS-oriented advertising ventures that will know where you go, where you shop, and where you eat? Do you think of this type of geographically-targeted advertising as convenience, or as an invasion of privacy?