
gene smith proposes in his post “search tagging” the seemingly logical extension (or anticipation) of the recent folksonomy hype into the area of search. while social search might indeed be the next big thing in search, i think – contrary to IA-guru louis rosenfeld, whom i admire – that there are some fundamental flaws in gene’s concept:

Search terms are tags on an URL, based on clickthroughs.

seems obvious at first glance, but give it a closer look: tags would thus be applied to URLs by search engine algorithms, with the user playing only a minor role – judging the relevance of the snippet on the SERP. machines applying tags is not really the classic definition of a folksonomy.. so at least a feedback cycle would have to be closed to reflect the actual relevance of the query to the URL, for example with the help of the google toolbar – and i’m not sure that would be sufficient to create a really useful taxonomy. besides that, clickthroughs can easily be manipulated by search engine spammers.
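to make the point concrete, here is a minimal sketch of what closing such a feedback cycle might look like – a raw clickthrough applies the query as only a weak tag, and a toolbar-style signal (dwell time) confirms or revokes it. all names and thresholds are hypothetical, not part of gene’s proposal:

```python
from collections import defaultdict

# hypothetical store: search term (tag) -> url -> relevance score
tag_scores = defaultdict(lambda: defaultdict(float))

def record_click(query, url):
    # a raw clickthrough applies the query as a weak tag on the url
    tag_scores[query][url] += 1.0

def record_feedback(query, url, stayed_seconds):
    # feedback cycle: count the click as a real endorsement only if the
    # user actually stayed on the page (as a toolbar could report)
    if stayed_seconds >= 30:
        tag_scores[query][url] += 5.0
    else:
        tag_scores[query][url] -= 1.0  # quick bounce: probably irrelevant

record_click("celiac disease", "http://example.org/celiac")
record_feedback("celiac disease", "http://example.org/celiac", 120)
print(tag_scores["celiac disease"]["http://example.org/celiac"])  # 6.0
```

even this toy version shows the weakness: nothing stops a spammer from generating both the clicks and the “dwell time” signals automatically.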

Search history is shared. Search terms and selected results are shared in the same way tags and URLs are shared.

that’s where the problem lies. while it would be incredibly useful to access other people’s searches, this very usefulness will cause the multi-million-dollar SEO industry to quickly find ways to fake truckloads of users who sneak their carefully designed doorway and affiliate pages prominently into your (ex-precious but now useless) tag pages. by the way: exactly the same can and will happen to social bookmarking services as soon as it’s commercially interesting enough.

Search terms and results selection help improve search results. (I wonder if anyone’s doing this now?)

google does click tracking on a very small sample of result pages, for quality assurance (it can sometimes be spotted in the tracking redirects they use). they clearly state, however, that they don’t use this data to give clicked pages a boost (sidenote: i wouldn’t be sure the same is true for data from the google toolbar..), precisely to keep search engine spammers from producing clickthroughs automatically. so: a good idea theoretically, but absolutely not feasible – because of spammers, again.

Exploration and recommendations. Users can explore tags, URLs, users and their visited results. For each search they see weighted recommendations (“People who searched for ‘celiac disease’ also searched for…”) and recommended links based on others’ searches.

the only way i can see that happening is within trusted networks. i imagine a closed system of mutual approval of users (maybe including “friends of friends”, to get a bigger sample) would work, and keep spammers locked out.
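a minimal sketch of what such trust-scoped recommendations could look like – co-occurring search terms are counted only across a user’s mutually approved contacts, so a spammer’s fake search history simply never enters the computation. the data and names are invented for illustration:

```python
from collections import Counter

# hypothetical data: per-user search histories, plus mutual-approval contacts
searches = {
    "alice":   ["celiac disease", "gluten free recipes"],
    "bob":     ["celiac disease", "lactose intolerance"],
    "spammer": ["celiac disease", "cheap-pills.example.com"],
}
trusted = {"alice": {"bob"}, "bob": {"alice"}}

def recommendations(user, query):
    # "people who searched for X also searched for..." -- but counting only
    # people inside the user's trusted network, which locks spammers out
    counts = Counter()
    for other in trusted.get(user, set()):
        if query in searches.get(other, []):
            counts.update(t for t in searches[other] if t != query)
    return [term for term, _ in counts.most_common()]

print(recommendations("alice", "celiac disease"))  # ['lactose intolerance']
```

extending the loop one hop further to friends of friends would enlarge the sample while still keeping unapproved accounts out.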

Ad hoc social networks. No adding people as contacts or joining networks.

while i do think that ad hoc social networks work best in most cases, they are also the most vulnerable to spam. as the search engine industry is – together with email – the most affected victim of spam, i’m quite sure this model wouldn’t work here.

despite everything said, i’m quite sure dozens of googlers are thinking hard about cooperative search at this very moment. keyword sharing is just a tiny step away from their existing search history feature; the “only” problem is spam.

any developer out there feel like building a prototype using the google API? i’d be pleased to contribute concept and specification.. ;-)

