my year in cities, 2011

To continue the tradition from past years, here’s a list of cities I have been to this year:

End of an era: 5.5 years in the making, the thesis is submitted, bags are packed, memories are tucked away into a special part of my heart. It’s a little crazy to think about, but I’ve called A2 “home” for just about as long as any other city I’ve ever lived in!

  • Ann Arbor, Michigan

Job hunt: The end of a Ph.D. brings with it an epic multi-month fly-everywhere-and-talk-about-yourself ordeal. Scheduling it was quite the traveling-salesman setup. (Turns out, we didn't find an optimal solution: I flew DTW->PHX->SJC, I think, 4 times this year!)

  • Bay Area (Palo Alto, Sunnyvale, Santa Clara, San Jose and San Francisco)
  • Columbus, Ohio
  • Redmond, Washington: I came back here a few months later for a friend’s wedding
  • Madison, Wisconsin
  • Florham Park, New Jersey
  • New York City, New York

One last epic summer internship:

  • Mountain View, California

My first conference travel as a non-student:

  • Seattle, Washington

Going home: back to India after many years!:

  • Delhi
  • Mumbai: though Delhi is technically “home”, after 2 weeks in Delhi I just had to leave for Mumbai. Maybe it’s the waves at Juhu beach. Maybe it’s the bustle of the trains. Or maybe it’s the huge list of seafood restaurants I just had to eat through!
  • Kolkata: Like an old, wizened grandma, it doesn’t matter HOW many years you come back after, Kolkata is always the same.

Southeast-Asia vacation with parents:

  • Singapore
  • Putrajaya & Kuala Lumpur, Malaysia
  • Genting, Malaysia
  • Penang, Malaysia
  • Phuket, Thailand
  • Krabi Island, Thailand

Seven days in Rajasthan: Longest I’ve been away from my laptop!

  • Udaipur
  • Jodhpur
  • Jaisalmer
  • Jaipur

Next up:

  • Columbus, Ohio!

21 cities, 5 countries, 2 continents — looking forward to 2012!


Guided Interaction: Rethinking the Query-Result Paradigm

Just got done with my presentation at VLDB 2011's Challenges and Visions Track! The paper is available here, as are the talk slides. Abstract and browsable slides are below!

Abstract: Decades of research, coupled with continuous increases in computing power, have enabled highly efficient execution of queries on large databases. For many databases, far more time is spent by users formulating queries than by the system evaluating them. It stands to reason that, looking at the overall query experience we provide users, we should pay attention to how we can assist users in the holistic process of obtaining the information they desire from the database, and not just the constituent activity of efficiently generating results given a complete precise query.

In this talk, we examine the conventional query-result paradigm employed by databases and demonstrate challenges encountered when following this paradigm. We recognize that the process of query specification itself is a major stumbling block. With current computational abilities, we are at a point where we can make use of the data in the database to aid in this process. To this end, we propose a new paradigm, guided interaction, to solve the noted challenges, by using interaction to guide the user through the query formulation, query execution and result examination processes.

There are significant engineering challenges to constructing the system we envision, but the technological building blocks to address these challenges exist today.


We'll be right back after the following messages from our sponsor

Recent life changes have kept me away from the blog. I hope to resume posting soon; until then, watch this space!



BaconSnake to core: Pig 0.8 released!

My contribution to Python UDFs in Pig has finally released as part of the new and shiny 0.8 release! I’ve been meaning to blog about how this came about when I had time, but Dmitriy saved me the work, so I’ll just quote him instead:

This is the outcome of PIG-928; it was quite a pleasure to watch this develop over time — while most Pig tickets wind up getting worked on by at most one or two people, this turned into a collaboration of quite a few developers, many of them new to the project — Kishore Gopalakrishna’s patch was the initial conversation starter, which was then hacked on or merged into similar work by Woody Anderson, Arnab Nandi, Julien Le Dem, Ashutosh Chauhan and Aniket Mokashi (Aniket deserves an extra shout-out for patiently working to incorporate everyone’s feedback and pushing the patch through the last mile).

Yay Open Source! Documentation is available on the Pig website, but here’s a one-line example of what you can now do:

register '' using jython as myfuncs;

Notice that we’re registering a Python script instead of a jar. All Python functions in the registered script become available as UDFs to Pig!
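To make that concrete, here’s a sketch of what a registered script might contain. The file and function names are made up for illustration; the @outputSchema decorator is what Pig’s Jython runtime provides, and it’s stubbed out below so the sketch also runs under plain Python:

```python
# A sketch of a hypothetical registered script, e.g. "myfuncs.py".
# Pig's Jython runtime injects the @outputSchema decorator when the
# script is registered; stub it here so the file also runs standalone.
try:
    outputSchema  # provided by Pig's JythonScriptEngine
except NameError:
    def outputSchema(schema):
        def decorator(func):
            return func
        return decorator

@outputSchema("reversed:chararray")
def reverse(word):
    # Reverse a chararray field.
    return word[::-1]

@outputSchema("len:long")
def str_len(word):
    # Length of a chararray field.
    return len(word)
```

Once registered, these become callable from Pig Latin as myfuncs.reverse(...) and myfuncs.str_len(...).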

Going forward, there are two things I didn’t get to do, which I’d love any of my dear readers to pick up on. First, we’re still not on par with SCOPE’s ease of use, since UDFs can’t be inlined yet (a la BaconSnake). This is currently open as a separate Apache JIRA issue; it would be great to see patches there!

The second feature that I really wanted to make part of this patch submission, and didn’t have time for, was Java source UDF support as a JaninoScriptEngine. Currently, Pig users have to bundle their UDF code as jars, which quickly becomes painful. The goal of the ScriptEngine architecture is to let multiple languages be implemented and plugged in, so I really hope someone takes the time to look at the Janino project, which will allow Java source UDFs with zero performance penalty. This would also make it the first dynamically compiled query system over Hadoop (as far as I know), and opens up a world of query optimization work to explore.

United States of Autocomplete, the Live Version

This blog post by Dorothy at Very Small Array caught my attention; it's very cute. Eytan suggested I do an automated version. So here we go, a live, javascript version of The United States of Autocomplete that is personalized to what you see in Google suggest!

(note: seems to work on Google Chrome / Firefox + SVG only for now, will fix it later)

Click here to see the live map!


The Story of Stuff: Electronics

This short film should be made mandatory watching for anyone who buys electronics! From their website:

The Story of Electronics, released on November 9th, 2010, takes on the electronics industry’s “design for the dump” mentality and champions product take back to spur companies to make less toxic, more easily recyclable and longer lasting products. Produced by Free Range Studios and hosted by Annie Leonard, the eight-minute film explains ‘planned obsolescence’—products designed to be replaced as quickly as possible—and its often hidden consequences for tech workers, the environment and us. The film concludes with an opportunity for viewers to send a message to electronics companies demanding that they “make ‘em safe, make ‘em last, and take ‘em back.”



iFixit’s for-profit approach to solving this problem is pretty great: by making every device on the planet repairable, we save so many devices from being trashed.


Fixing Facebook's Font Size Frustration

I don't know if I'm part of an alpha/beta test or if this was rolled out to all of Facebook's hundreds of millions of users, but Facebook recently reduced the font size of all status messages on its homepage without any explanation:


As a heavy user, this has been causing me a lot of eye strain. So I decided to create a quick script to fix this. You can either:

Note: I've tested it for Safari, Chrome and Firefox; for other browsers, fix it yourself! :)

Also, standard and obvious disclaimer: I am not responsible for any code you run.


XKCD, the NLP remix

Just saw this old XKCD strip and came up with a more real-world-applicable, NLP version:

[ original image © Randall Munroe ]

For my non-NLP readers, I might as well explain the terms:

Wrapper Induction is “a technique for automatically constructing wrappers from labeled examples of a resource’s content.” More details here.

CRF stands for Conditional Random Field and “is a type of discriminative probabilistic model most often used for the labeling or parsing of sequential data, such as natural language text or biological sequences.”


Shadow Tables using MySQL Triggers

One popular principle behind database-driven applications (websites, services, etc) is to never throw anything away. For example, if you’re writing a “To-do list” application, you might be tempted to run a DELETE FROM TODO WHERE ID = 123 whenever things are checked off.

However, this means that you’ve lost the data forever, so if you wanted to mine the data to gain insights, or wanted to provide features like Undo, you’re out of luck. One way to solve this problem is to always use UPDATE to set a “deleted” flag on the tuple, but this means your deleted data and your active data are in the same table, and you have to rewrite all your SELECTs to include the delete flag.

An alternative way is to move all the deleted tuples into a shadow1 table using a TRIGGER. Here’s how:

CREATE TRIGGER todo_shadow
BEFORE DELETE ON todo
FOR EACH ROW
INSERT DELAYED INTO _todo_shadow VALUES (OLD.col1, OLD.col2, ...);

Now, every time a tuple is deleted from the “todo” table, it first gets inserted into the “_todo_shadow” table. “_todo_shadow” has the same columns as “todo”, but none of its keys, constraints or indexes; MyISAM works well here, since we don’t plan to delete or update rows in this table. Note the use of DELAYED, an optional optimization2 that defers the shadow inserts.
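One way to create such a shadow table (a sketch, reusing the “todo” example above) is to clone the column layout without the keys and indexes; CREATE TABLE ... AS SELECT copies columns but not indexes, and WHERE 1=0 ensures no rows come along:

```sql
-- Clone the columns of "todo" without its keys or indexes;
-- WHERE 1=0 copies the column layout but no rows.
CREATE TABLE _todo_shadow ENGINE=MyISAM
  AS SELECT * FROM todo WHERE 1=0;
```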

While I use MySQL as an example, you can tweak this to work with Postgres, SQLite and SQL Server as well; all of them support triggers. Note that triggers have varying impacts on sharding, indexes and transactions, so make sure you read up on table locks before you deploy this at a nuclear plant!

1 Shadow tables usually store all provenance, not just deletes. For full support, you can create an additional trigger for updates as well. You can even add a timestamp attribute that defaults to NOW() to store the time of deletion.
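The footnote’s suggestions might look like this sketch (col1, col2 are placeholder column names, as above): a companion update trigger that archives the pre-update image of each row, plus a timestamp column recording when each shadow row was written:

```sql
-- Record when each shadow row was written (defaults to insert time)
ALTER TABLE _todo_shadow
  ADD archived_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP;

-- Archive the pre-update image of a row, mirroring the delete trigger
CREATE TRIGGER todo_shadow_upd
BEFORE UPDATE ON todo
FOR EACH ROW
INSERT INTO _todo_shadow (col1, col2) VALUES (OLD.col1, OLD.col2);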

2 See docs for reasons why you may want to omit DELAYED.


Adwords CPC Dips: Google Instant and Ad Pricing

I was explaining Google Instant to my housemate yesterday and had this thought1:

Are Google Ads and SEO going to be targeted on prefixes of popular words now?

For example, let’s consider the word “insurance”. There are a lot of people bidding on the whole word, and a lot of people bidding on the word “in”. Since Google Instant can show ads at every keystroke2, perhaps it would be cheaper to buy ads on the word “insura”, where the number of searches will be just as high, but since there are fewer people bidding on it, the CPC is low?
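The mechanics of the idea are trivial to sketch: under Instant, every keystroke on the way to a keyword is itself a potential ad target (prefixes is a hypothetical helper for illustration, not anything Adwords exposes):

```python
def prefixes(keyword):
    """All prefixes a searcher types on the way to the full keyword,
    shortest first; under Instant, each one is a potential ad target."""
    return [keyword[:i] for i in range(1, len(keyword) + 1)]
```

For “insurance”, this yields “i”, “in”, ..., “insura”, ..., “insurance”; the hypothesis above is that CPC dips somewhere in the middle while search volume stays nearly as high.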

Here’s some data I pulled from the Google Adwords Estimator:

The charts superimpose CPC, ad position (evidence of competition), daily clicks and monthly searches for prefixes of four words: “target”, “insurance”, “doctor” and “lawyer”. Note the dips in CPC at various lengths, and that they’re not always correlated with ad position or search volume. I’m assuming these numbers will change rapidly over the next few months as instant search gets rolled out, uncovering interesting arbitrage opportunities for those who’re looking hard enough!

1 Disclaimer: I am not an expert on ads or auction markets, this stuff is just fun to think about.

2 While it can show ads, Adwords may not show ads based on various confidence metrics.
