data

Everything is data.

Visualizations for Navigation: Experiments on my blog

This is a meta post describing two features on this blog that I don’t think I’ve documented before. Apologies for the navel-gazing; I hope there’s enough useful information here to make it worth reading.

Most folks read my blog through the RSS feed, but those who peruse the web version get to see many different forms of navigational aids to help them find their way around the website. Since the blog runs on Drupal, I get to deploy all sorts of fun stuff. One example is the Similar Entries module, which uses MySQL’s FULLTEXT similarity to show possibly related posts1. This allows you to jump around the website reading posts similar to each other, which is especially useful for readers who come in from a search engine results page. For example, they may come in looking for Magic Bus for the iPhone, but given that they’re probably iPhone users, they may also be interested in the amusing DIY iPhone Speakers post.
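
For the curious, here is a rough sketch of the kind of FULLTEXT query involved, written in Python for brevity; the table name, column names, and query shape are guesses for illustration, not the Similar Entries module’s actual schema or code.

# A hedged sketch of a FULLTEXT "related posts" lookup, assuming a MySQL
# table `node` with a FULLTEXT index on (title, body); this is not the
# Similar Entries module's actual implementation.
import mysql.connector

def related_posts(conn, post_id, text):
    cursor = conn.cursor()
    cursor.execute(
        """SELECT nid, title,
                  MATCH(title, body) AGAINST (%s) AS score
           FROM node
           WHERE MATCH(title, body) AGAINST (%s) AND nid != %s
           ORDER BY score DESC
           LIMIT 5""",
        (text, text, post_id))
    return cursor.fetchall()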

The Timeline Footer

However, given that this blog has amassed about a thousand posts over seven years now, it becomes hard to expose an “overview” of that much information to the reader in a concise manner. Serendipitous browsing can only go so far. Since this is a personal blog, it is interesting to appreciate the chronological aspect of posts. Many blogs have a “calendar archive” to do this, but somehow I find them unappealing; they occupy too much screen space for the amount of information they deliver. My answer to this is a chronological histogram, which shows the frequency of posts over time:

Each bar represents the number of blog posts I posted that month, starting from August 2002 until now2. Moving your mouse over each bar tells you which month it is. This visualization presents many interesting bits of information. On a personal note, it clearly represents many stages of my life. June of 2005 was a great month for my blog: it had the highest number of posts, possibly related to the fact that I had just moved to Bangalore, a city with an active blogging community. There are noticeable dips that reflect extended periods of travel and bigger projects.

In the background, this is all done with a simple SELECT COUNT(*) FROM nodes GROUP BY month style query. Because the raw counts have high variance, some smoothing is applied; for my usage, height = log4(frequency) gave me pretty good results. This goes into a PHP block, which is then displayed in the footer of every blog page. The Drupal PHP snippets section is a great place to start for things like this. Note that the chart is pure HTML / CSS; there is no Javascript involved3.
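
If you want to try something similar, here is a rough sketch of the idea in Python (the real thing lives in a Drupal PHP block; the node table and the Unix-timestamp created column are assumptions about the schema):

# Sketch of the timeline footer: count posts per month, smooth with a
# log-base-4 scale, and emit simple HTML/CSS bars. Table and column
# names are assumptions, not the actual PHP block used on the site.
import math

def monthly_counts(cursor):
    cursor.execute(
        """SELECT DATE_FORMAT(FROM_UNIXTIME(created), '%Y-%m') AS month,
                  COUNT(*) AS posts
           FROM node
           GROUP BY month
           ORDER BY month""")
    return cursor.fetchall()

def bar_html(month, posts, scale=8):
    # Smooth out the high variance: height = log base 4 of the post count.
    height = int(round(math.log(posts, 4) * scale)) if posts > 1 else 1
    return ('<div class="bar" title="%s: %d posts" style="height:%dpx">'
            '</div>' % (month, posts, height))

def timeline_footer(cursor):
    return '\n'.join(bar_html(m, c) for m, c in monthly_counts(cursor))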

The Dot Header

Many of my posts are manually categorized using Drupal’s excellent taxonomy system. A traditional solution to this is to create sections, so that the user can easily browse through all my Poems or my nerdy posts. The problem is that this blog contains notes and links to things that I think are “interesting”, a classification that has constantly evolved as my interests have changed over the past decade. Not only is it hard for me to box myself into a fixed set of categories, but maintaining the evolution of these categories across 7+ years is not something I want to deal with every day.

This is where tags and automatic term extraction come in. As you can see in the header of the blog mainpage, each dot is a topic, automatically extracted from all posts on the website. I list the top 60 topics in alphabetical order, where each topic is also a valid taxonomy term. The aesthetics are inspired by the RaphaelJS dots demo, but just like the previous visualization, it is done using pure CSS + HTML. The size and color of each dot are based on the number of items that contain that term. Hovering over a dot gives you its label and count; clicking it takes you to an index of posts with that term. This gives me a concise and maintainable way to tell the user what kinds of things I write about. It also addresses a problem that a lot of my readers have: they either care only about the tech-related posts (click on the biggest purple dot!), or only about the non-tech posts (look for the “poetry” dot in the last row!).

This visualization works by first automatically extracting terms from each post. This is done using the OpenCalais module (I previously used Yahoo’s Term Extractor, but switched since it seems Yahoo!‘s extractor is scheduled to be decommissioned soon). The visualization is updated constantly using a cached GROUP BY block similar to the previous visualization, this time grouped on the taxonomy term. This lets me add new posts as often as I like; tags are automatically generated and reflected in the visualization without me having to do anything.
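
Roughly, the per-term counts and dot sizes come from something like the following (again sketched in Python rather than the actual Drupal PHP block; the term_data/term_node table names and the size formula are assumptions):

# Sketch of the dot header: count posts per taxonomy term, keep the 60
# most frequent terms, sort them alphabetically, and scale each dot by
# its count. Table names follow old Drupal conventions but are
# assumptions here, as is the sizing formula.
import math

def top_terms(cursor, limit=60):
    cursor.execute(
        """SELECT td.name, COUNT(tn.nid) AS posts
           FROM term_data td JOIN term_node tn ON td.tid = tn.tid
           GROUP BY td.name
           ORDER BY posts DESC
           LIMIT %s""" % limit)
    return sorted(cursor.fetchall())   # alphabetical order for display

def dot_html(name, posts):
    size = 6 + int(math.log(posts + 1, 2) * 4)   # bigger dot, more posts
    return ('<a class="dot" href="/category/%s" title="%s (%d posts)" '
            'style="width:%dpx;height:%dpx"></a>'
            % (name, name, posts, size, size))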

So that’s it: two simple graphical ways to represent content. I know that the two visualizations aren’t the best thing since sliced bread and probably won’t solve World Peace, but they’re an attempt to encourage discoverability of content on the site. Comments are welcome!


Footnotes:

1 I actually created that module (and the CAPTCHA module) over four years ago; they’ve been maintained and overhauled by other good folks since.

2 Arnab’s World is older than that (possibly 1997 — hence the childish name!), but that’s the oldest blog post I could recover.

3 I have nothing against Javascript; it’s just that CSS tends to be easier to manage and is usually more responsive. Also, the HTML generated is probably not valid and is SUPER inefficient + ugly. Hopefully I will have time to clean this up sometime in the future.

HAMSTER: Using Search Clicklogs for Schema and Taxonomy Matching

Just got done with the HAMSTER presentation; here is the paper, and here are my abstract and slides:

We address the problem of unsupervised matching of schema information from a large number of data sources into the schema of a data warehouse. The matching process is the first step of a framework to integrate data feeds from third-party data providers into a structured-search engine’s data warehouse. Our experiments show that traditional schema-based and instance-based schema matching methods fall short. We propose a new technique based on the search engine’s clicklogs. Two schema elements are matched if the distributions of keyword queries that cause click-throughs on their instances are similar. We present experiments on large commercial datasets that show the new technique has much better accuracy than traditional techniques.
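
To give a flavour of the core idea, here is a toy sketch of the matching step in Python; cosine similarity and the threshold are illustrative stand-ins, not the measure the paper actually uses:

# Toy sketch of clicklog-based schema matching: two schema elements
# match if the keyword-query distributions that lead to clicks on their
# instances are similar. Cosine similarity and the 0.6 threshold are
# placeholders for illustration only.
import math

def cosine(p, q):
    dot = sum(c * q.get(term, 0) for term, c in p.items())
    norm = (math.sqrt(sum(c * c for c in p.values())) *
            math.sqrt(sum(c * c for c in q.values())))
    return dot / norm if norm else 0.0

def match_elements(feed_dists, warehouse_dists, threshold=0.6):
    # feed_dists / warehouse_dists: {element: {query: click count}}
    mapping = {}
    for elem, dist in feed_dists.items():
        scored = [(cosine(dist, wdist), welem)
                  for welem, wdist in warehouse_dists.items()]
        best_score, best_elem = max(scored)
        if best_score >= threshold:
            mapping[elem] = best_elem
    return mapping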

I received a few questions after the talk, so I thought I’d put up a quick FAQ:

Q: Doesn’t the time period of the clicklog affect your integration quality?

A: Yes. And we consider this a good thing. This allows trend information to come into the system, e.g. “pokemon” queries will start coming in, and merge “japanese toys” with “children’s collector items”. Unpopular items that are not searched for may not generate a mapping, but then again, this may be ok since the end goal was to integrate searched-for items.

Q: You use clicklogs. I am X, the owner of a small, old-fashioned company/website. Since my company’s name doesn’t start with G, M, or Y, I don’t have clicklogs. How do I use your method?

A: You already have clicklogs. Let’s say you are trying to merge your company/website X’s data with company Y’s data. Since both you (X) and Y have websites, you both run HTTP servers, which have the facility to log requests. Look through your HTTP server referrer logs for strings like:
URL: http://x.com
REFERRER: http://www.google.com/?q=$search_string$

This is your clicklog. The URL http://x.com has the query $search_string$. You can grep both websites’ logs to create clicklogs, which can then be used for integration.
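
Here is a rough sketch of that grep in Python; the combined-log regex and the 'q' parameter name are assumptions, so adjust them for your server and for the engines that send you traffic:

# Sketch: build a clicklog from an HTTP server's referrer logs by pulling
# the search string out of search-engine referrer URLs. The log-format
# regex and the 'q' parameter are assumptions about your setup.
import re
from urllib.parse import urlparse, parse_qs

LOG_RE = re.compile(r'"(?:GET|POST) (\S+) [^"]*" \d+ \S+ "([^"]*)"')

def clicklog(log_lines):
    pairs = []
    for line in log_lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        url, referrer = m.groups()
        if 'google.' not in referrer:      # extend for other engines
            continue
        query = parse_qs(urlparse(referrer).query).get('q')
        if query:
            pairs.append((url, query[0]))
    return pairs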

Q: My website is not very popular and I don’t have that many clicks from search engines. What do I do?

A: Yup, this is a very real case. Specifically, you might have a lot of queries for some of your items, but not for others. This can be balanced out. See the section in our paper about Surrogate Clicklogs. Basically you can use a popular website’s clicklog as a “surrogate” log for your database. From the paper:

…we propose a method by which we identify surrogate clicklogs for any data source without significant web presence. For each candidate entity in the feed that does not have a significant presence in the clicklogs (i.e. clicklog volume is less than a threshold), we look for an entity in our collection of feeds that is most similar to the candidate, and use its clicklog data to generate a query distribution for the candidate object.
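
In code, the surrogate idea looks roughly like this (a sketch that leaves the entity-similarity function and the volume threshold abstract; the paper defines the real ones):

# Sketch of surrogate clicklogs: an entity with too little clicklog
# volume borrows the query distribution of its most similar click-rich
# entity. `similar` and `min_clicks` are placeholders; the paper
# specifies the actual similarity measure and threshold.
def query_distribution(entity, clicklogs, all_entities, similar,
                       min_clicks=50):
    dist = clicklogs.get(entity, {})
    if sum(dist.values()) >= min_clicks:
        return dist
    # Not enough web presence: fall back to the closest entity that has a
    # substantial clicklog and use its distribution as a surrogate.
    rich = [e for e in all_entities
            if sum(clicklogs.get(e, {}).values()) >= min_clicks]
    if not rich:
        return dist
    surrogate = max(rich, key=lambda e: similar(entity, e))
    return clicklogs[surrogate]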

Q: I am an academic and do not have access to a public clicklog, or a public website to get clicklogs from. How do I use this technique?

A: Participate in the Lemur project and get your friends to participate too.


All Sorts of Visualization

Aldo Cortesi uses Cairo and Python to come up with these neat visualizations of sorting algorithms:

(via)


Maintained Relationships on Facebook

Facebook Research Scientist Cameron Marlow has some interesting thoughts about Maintained Relationships: relationships where people often stalk each other’s feeds, but don’t necessarily talk that much:


In the diagram, the red line shows the number of reciprocal relationships, the green line shows the one-way relationships, and the blue line shows the passive relationships as a function of your network size. This graph shows the same data as the first graph, only combined for both genders. What it shows is that, as a function of the number of people a Facebook user actively communicates with, they are passively engaging with between 2 and 2.5 times more people in their network. I’m sure many people have had this feeling, but these data make this effect more transparent.

I’m really jealous of the Facebook Data Team. They get to play with all that data!

Getting django-authopenid to work with Google Accounts

Update: This blog post is meant for older versions of django-authopenid. The latest version available on PyPI has implemented a fix similar to this one, and hence works out of the box; you won’t need this fix.
Thanks to Mike Huynh for pointing this out!

I've been playing with Django over the past few days, and it's been an interesting ride. For a person who really likes PHP's shared-nothing, file-based system model (I'm mostly a Drupal guy), Django comes across as overengineered at first, but I'm beginning to see why it's done that way.

I was trying to get single sign-on working, and settled on django-authopenid over the other Django OpenID libraries (django-openid, django-openid-auth and django-oauth). It was easy to use and understand, and wasn't seven million lines of code.

My intention was to use the OpenID extension to get the user's email address during the sign-on process. However, it doesn't seem to work with Google's OpenID implementation, because Google uses the Attribute Exchange (ax) extension instead of the Simple Registration (sreg) OpenID extension that is implemented in the library. A quick hack to django-authopenid's views.py makes it work:


51c51
- from openid.extensions import sreg
---
+ from openid.extensions import ax
94c82
- sreg_request=None):
---
+ ext_request=None):
113,114c101,102
- if sreg_request:
- auth_request.addExtension(sreg_request)
---
+ if ext_request:
+ auth_request.addExtension(ext_request)
195,210c172,185
- sreg_req = sreg.SRegRequest(optional=['nickname', 'email'])
- redirect_to = "%s%s?%s" % (
- get_url_host(request),
- reverse('user_complete_signin'),
- urllib.urlencode({'next':next})
- )
-
- return ask_openid(request,
- form_signin.cleaned_data['openid_url'],
- redirect_to,
- on_failure=signin_failure,
- sreg_request=sreg_req)
---
+ ax_req = ax.FetchRequest()
+ ax_req.add(ax.AttrInfo('http://schema.openid.net/contact/email', alias='email', required=True))
+ redirect_to = "%s%s?%s" % (
+ get_url_host(request),
+ reverse('user_complete_signin'),
+ urllib.urlencode({'next':next})
+ )
+
+ return ask_openid(request,
+ form_signin.cleaned_data['openid_url'],
+ redirect_to,
+ on_failure=signin_failure,
+ ext_request=ax_req)

Obviously this is a very cursory edit. I'm too lazy to improve and submit this as a patch, so readers are encouraged to submit it to all relevant projects!
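
If you go this route, you will also want to read the attribute back on the return leg; with python-openid that looks roughly like this (a sketch; exactly where you hook it into django-authopenid's completion view depends on the version you're running):

# Sketch: pulling the e-mail address out of the AX response in the
# completion view. `openid_response` is the SuccessResponse object that
# django-authopenid hands to its success handler; the exact hook point
# is an assumption.
from openid.extensions import ax

EMAIL_URI = 'http://schema.openid.net/contact/email'

def email_from_response(openid_response):
    fetch = ax.FetchResponse.fromSuccessResponse(openid_response)
    if fetch is None:
        return None
    return fetch.getSingle(EMAIL_URI, None)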


Database Autocompletion Demo

The video overview of the SIGMOD 2007 demo is now up!


My Research Papers, now more accessible

Many readers have complained that this blog is always full of artsy and time-wasting material… “what about all the technical stuff? Aren’t you a computer person?!” they ask. To pacify these masses, I have just converted three of my recent papers to HTML format. For the first two, I used the HEVEA LaTeX to HTML converter, which I found slightly better than LaTeX2HTML. For the 3rd paper, I have inexplicably misplaced the source files, and hence the HTMLization was done via Gmail’s PDF Viewer.

The picture above is from the 2007 SIGMOD demo paper. I’ll post videos of the demo in a later post. Here’s a quick preview of each paper:

  • Qunits: queried units in database search (CIDR 2009)
    Keyword search against structured databases has become a popular topic of investigation, since many users find structured queries too hard to express, and enjoy the freedom of a “Google-like” query box into which search terms can be entered. Attempts to address this problem face a fundamental dilemma. Database querying is based on the logic of predicate evaluation, with a precisely defined answer set for a given query. On the other hand, in an information retrieval approach, ranked query results have long been accepted as far superior to results based on boolean query evaluation. As a consequence, when keyword queries are attempted against databases, relatively ad-hoc ranking mechanisms are invented (if ranking is used at all), and there is little leverage from the large body of IR literature regarding how to rank query results.
  • Effective Phrase Prediction (VLDB 2007)
    Autocompletion is a widely deployed facility in systems that require user input. Having the system complete a partially typed “word” can save user time and effort. In this paper, we study the problem of autocompletion not just at the level of a single “word”, but at the level of a multi-word “phrase”. There are two main challenges: one is that the number of phrases (both the number possible and the number actually observed in a corpus) is combinatorially larger than the number of words; the second is that a “phrase”, unlike a “word”, does not have a well-defined boundary, so that the autocompletion system has to decide not just what to predict, but also how far. We introduce a FussyTree structure to address the first challenge and the concept of a significant phrase to address the second. We develop a probabilistically driven multiple completion choice model, and exploit features such as frequency distributions to improve the quality of our suffix completions. We experimentally demonstrate the practicability and value of our technique for an email composition application and show that we can save approximately a fifth of the keystrokes typed.
  • Assisted querying using instant-response interfaces (SIGMOD 2007)
    We demonstrate a novel query interface that enables users to construct a rich search query without any prior knowledge of the underlying schema or data. The interface, which is in the form of a single text input box, interacts in real-time with the users as they type, guiding them through the query construction. We discuss the issues of schema and data complexity, result size estimation, and query validity, and provide novel approaches to solving these problems. We demonstrate our query interface on two popular applications: an enterprise-wide personnel search, and a biological information database.