Google

United States of Autocomplete, the Live Version

This blog post by Dorothy at Very Small Array caught my attention; it's very cute. Eytan suggested I do an automated version. So here we go: a live, JavaScript version of The United States of Autocomplete, personalized to what you see in Google Suggest!

(Note: it seems to work only on Google Chrome / Firefox with SVG for now; I'll fix that later.)

Click here to see the live map!


AdWords CPC Dips: Google Instant and Ad Pricing

I was explaining Google Instant to my housemate yesterday and had this thought[1]:

Are Google Ads and SEO going to be targeted on prefixes of popular words now?

For example, let’s consider the word “insurance”. There are a lot of people bidding on the whole word, and a lot of people bidding on the word “in”. Since Google Instant can show ads at every keystroke[2], perhaps it would be cheaper to buy ads on the prefix “insura”, where the number of searches should be just as high but, with fewer people bidding on it, the CPC lower?

Here’s some data I pulled from the Google AdWords Estimator:
[Chart: CPC dips across keyword prefixes]

The charts superimpose CPC, ad position (evidence of competition), daily clicks, and monthly searches for prefixes of four words: “target”, “insurance”, “doctor” and “lawyer”. Note the dips in CPC at various prefix lengths, and the fact that they’re not always correlated with ad position or search volume. I’m assuming these numbers will change rapidly over the next few months as Instant search gets rolled out, uncovering interesting arbitrage opportunities for those who’re looking hard enough!
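
To make the arbitrage intuition concrete, here's a toy back-of-the-envelope sketch. The numbers and the savings formula are made up purely for illustration; real AdWords pricing depends on the auction, quality scores, and much more.

```typescript
// Toy illustration of the prefix-arbitrage idea: if Instant shows ads on a
// prefix, and the prefix gets roughly the same eyeballs as the full keyword
// but has fewer bidders, the cheaper CPC translates into savings.
// All numbers below are invented for the example.
interface KeywordEstimate {
  keyword: string;
  cpc: number;         // estimated cost per click, in dollars
  dailyClicks: number; // estimated clicks per day
}

const fullWord: KeywordEstimate = { keyword: "insurance", cpc: 9.5, dailyClicks: 1000 };
const prefix: KeywordEstimate   = { keyword: "insura",    cpc: 2.1, dailyClicks: 1000 };

const dailySavings = (fullWord.cpc - prefix.cpc) * prefix.dailyClicks;
console.log(
  `Bidding on "${prefix.keyword}" instead of "${fullWord.keyword}" ` +
  `would save roughly $${dailySavings.toFixed(0)} per day, all else being equal.`
);
```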

[1] Disclaimer: I am not an expert on ads or auction markets; this stuff is just fun to think about.

[2] While it can show ads, AdWords may not show ads, based on various confidence metrics.


Thoughts on Scribe

As someone who works on autocompletion, I’ve had a good week: Google launched two products relevant to my research. The first was Google Scribe, a Labs experiment that uses Web n-grams to assist in sentence construction. This system addresses the same problem as my VLDB’07 paper, “Effective Phrase Prediction” (paper, slides). The paper proposes a data structure called FussyTree to efficiently serve phrase suggestions, and provides a metric I called the “Total Profit Metric” (TPM) to evaluate phrase prediction systems. Google Scribe looks quite promising, and I thought I’d share my observations.

To simplify writing, let’s quickly define the problem using a slide from the slide deck:

Query Time:
Latency while typing is quite impressive. There is no evidence of speculative caching (à la Google Instant), but interaction is fairly fluid, despite the fact that an HTTP GET is sent to a Google frontend server on every keystroke. I’m a little surprised that there isn’t a latency check (or if there is one, its threshold is too low) — GET requests are made even when I’m typing too fast for the UI to keep up, making many of the requests useless even before the server has responded to them.

Length of Completion:
My experience with Google Scribe is that the length of completion is quite small; I was expecting it to produce longer completions as I gave it more data, but I couldn’t get it to suggest beyond three words.

Length of Prefix+Context:
It looks like the length of the prefix + context (context being the text before the prefix, used to bias completions) is 40 characters, with no special treatment of word boundaries. At every keystroke, the previous 40 characters are sent to the server, and completions are returned. So as I was typing in the sentence, this is what the requests looked like:

this is a forty character sentence and i
his is a forty character sentence and it
is is a forty character sentence and it
s is a forty character sentence and it i
(and so on)

I’m not sure what the benefit of sending requests for partial words is. It’s hard to discern the prefix from the context by inspection, but the prefix seems to be quite small (2–3 words), which sounds right.
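
Here's a small sketch that reproduces the observed behaviour: on every keystroke, (at most) the last 40 characters typed so far become the request payload. The endpoint URL in the comment is a placeholder, not the actual Scribe URL.

```typescript
// Sketch: take the last 40 characters of whatever has been typed so far,
// which is what appears to be sent to the server on every keystroke.
function contextWindow(typed: string, maxChars = 40): string {
  return typed.slice(-maxChars);
}

const sentence = "this is a forty character sentence and it is";
for (let i = 1; i <= sentence.length; i++) {
  const payload = contextWindow(sentence.slice(0, i));
  // In the real system this is an HTTP GET per keystroke, something like:
  // fetch(`https://scribe.example/complete?context=${encodeURIComponent(payload)}`);
  console.log(payload);
}
```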

Prediction Confidence:
Google Scribe always displays a list of completions. This isn’t ideal, since it’s often making arbitrary low-confidence predictions. That makes sense from a demo perspective, but since there is a distraction cost associated with the completions, it would be valuable to show completions only when they are high-confidence. Confidence can either be calculated using TPM or learned from usage data (which I hope Scribe is collecting!).

Prediction Quality:
People playing with Scribe have produced sentences such as “hell yea it is a good idea to have a look at the new version of the Macromedia Flash Player to view this video” and “Designated trademarks and brands are the property of their respective owners and are”. I find these sentences interesting because they are both very topical; i.e., they look more like artifacts of counting boilerplate text on webpages than the “generic” sentences you’d find in, say, an email. To produce more “generic” completions, one solution is to cluster the corpus into multiple topic domains and ensure that a completion is not just popular in one isolated domain.

I was also interested in knowing: “How many keystrokes will this save?” To measure this, we can use TPM. In these two slides, I describe the TPM metric with an example calculation:
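
As a rough, simplified sketch of the kind of bookkeeping involved, assume the metric rewards keystrokes saved by accepted suggestions and charges a fixed distraction cost for every suggestion shown; this is my simplification, see the paper for the exact definition.

```typescript
// Simplified illustration (not the paper's exact formula): profit is the number
// of characters the user accepted, minus a distraction cost d for every
// suggestion that was displayed, whether or not it was accepted.
interface SuggestionEvent {
  shown: boolean;        // was a suggestion displayed at this point?
  acceptedChars: number; // characters the user accepted (0 if ignored)
}

function totalProfit(events: SuggestionEvent[], d = 0.5): number {
  return events.reduce(
    (profit, e) => profit + e.acceptedChars - (e.shown ? d : 0),
    0
  );
}

// Example: three suggestions shown, one accepted, saving 12 keystrokes.
const profit = totalProfit([
  { shown: true, acceptedChars: 0 },
  { shown: true, acceptedChars: 12 },
  { shown: true, acceptedChars: 0 },
]);
console.log(profit); // 10.5: low-confidence suggestions eat into the savings
```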

While it would be nice to see a comparison of the FussyTree method vs. Google Scribe in terms of Precision, Recall and TPM, constructing such an experiment is hard, since training FussyTree over web-sized corpora would require significant instrumentation. Based on a few minutes of playing with it, I think Scribe will outperform the FussyTree method in Recall due to its small window size — i.e., it will produce short suggestions that are often correct. However, if we take into account the distraction cost of the suggestions themselves, then Scribe in its current form will do poorly, since it pulls up a suggestion for every word. This can be fixed by making longer suggestions and by considering prediction confidence.

Overall, I am really glad that systems like these are making it into the mainstream. The more exposure these systems get, the better their chances of becoming more accurate, saving us time and letting us interact with computers better!


Reputation Misrepresentation, Trail Paranoia and other side effects of Liking the World

[Chart: traffic spike from the post going viral]

A few months ago, I wrote up some quick observations about Facebook’s then just-launched “Like” button, pitching “Newsfeed Spam” as a problem exacerbated by the new Like Buttons. The post went “viral”, so to speak, bouncing off Techmeme, ReadWriteWeb / NYTimes, even German news websites. Obviously this is nothing compared to “real” traffic on the Internet, but it was fun to watch the link spread. This is meant to be a follow-up to that post, based on thoughts I’ve had since.

In this post, I'll be writing about five "issues" with the Like button, followed by four "solutions" to them. Since this is a slightly long post, here's an outline:

  • Big Deal!
  • Issues with the Like Button: Reputation Misrepresentation, Browse Trail Inference, Newsfeed Spam, "Likejacking", and Like Switching
  • "Solutions": Facebook's approach, Secure Likes, a browser-based approach, and my current solution


Big Deal!


[Image: Facebook Like button adoption statistics]

The Facebook Like Button has been a huge success. With over 3 billion buttons served, and major players such as IMDB and CNN signing up to integrate the button (and other social plugins) into their websites, the chance of encountering a Facebook Like button while browsing the web is quite high, if not certain. Many folks have questioned whether this is a big deal -- IFRAME and javascript based widgets have been around for a long time (shameless self-plug: Blogsnob used a javascript-based widget to cross-pollinate blogs across the internet as early as 8 years ago). Using the social concept of showing familiar faces to readers isn't new either; MyBlogLog has been doing it for a while. Then why is this silly little button such an issue?

The answer is persistent user engagement. With 500 million users, of whom 50% log into Facebook on any given day, you're looking at an audience of 250 million users. If you're logged into Facebook while browsing any website with a social plugin, the logged-in session is used. Now if you're like me, you'll probably have "remember me" checked at login, which means you're always logged into Facebook. What this means is that on any given day, Facebook has the opportunity to reach 250 million people throughout their web browsing experience, not just when they're on Facebook.com[1]. So clearly, from a company's perspective, this is important. It is a pretty big deal! But why is this something Facebook users need to be educated about? Onwards to the next section!

Issues with the Like Button


Readers should note the use of the word "Issues", as opposed to "Security Vulnerability", "Privacy Leak", "Design Flaw", "Cruel Price of Technology", or "Horrible Transgression Against Humankind". Each issue has its own kind of impact on the user; you're welcome to decide which is which!


To better understand the issues with the Like button, let's look at what it provides:
1) It provides a count of the number of people who currently "Like" something.
2) It provides a list of people you know who have liked said object, with profile pictures.
3) It provides the ability to click the button and instantaneously "Like" something, triggering an update on your newsfeed.
All of this is done using an embedded IFRAME -- a little Facebook page within the main page that displays the button.

In the next few paragraphs, we'll see some implications of this button on the web.

Reputation Misrepresentation


The concept of reputation misrepresentation is quite simple:
a not-so-popular website can use another website's reputation to make itself seem more reputable or established to the user.

Here's a quick diagram to explain it:

[Diagram: reputation misrepresentation]

Simply put, as of now, any website (e.g. a web store) can claim it is popular (especially with your friends) to gain your trust. Since Facebook doesn't check referrer information, it really doesn't have the power to do anything about this either. A possible solution is to include verifying information inside the Like button, but that ruins the simplicity of it all.
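
To illustrate the mechanism, here's a sketch of what a shady page could embed. The URLs are placeholders, and the plugin endpoint is written from memory, so treat it as an approximation rather than the exact embed code.

```typescript
// Sketch: a shady page embeds a Like button whose `href` points at a
// different, well-known site. Visitors then see that site's like count and
// their friends' faces, even though they are nowhere near that site.
const likedUrl = encodeURIComponent("http://www.some-famous-site.example/"); // not the page you're actually on
const decoy = document.createElement("iframe");
decoy.src = `https://www.facebook.com/plugins/like.php?href=${likedUrl}`; // approximate plugin URL
decoy.style.border = "none";
decoy.style.width = "300px";
decoy.style.height = "25px";
document.body.appendChild(decoy);
```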

Browse Trail Inference


This one is a more paranoid concept, but I've noticed that people don't realize it until I spell it out for them:
Facebook is indirectly collecting your entire browsing history across all websites that have Facebook widgets. You don't have to click any Like buttons; just visiting sites like IMDB.com or CNN.com or BritneySpears.com is enough.

Here's how it works:

[Diagram: browse trail inference]

Here, our favorite user Jane is logged into Facebook. She visits two pages on IMDB.com, checks the news on CNN, and then heads to Yelp to figure out where to eat. Interestingly enough, Facebook records all of this information, can tie it to her Facebook profile, and can thus come up with inferences like "Jane likes romantic movies, international news and Thai food -- let's show her some ads for romantic getaways to Bali!"

(Even worse, if Jane unwittingly visits a nefarious website which coincidentally happens to have the Like button, Facebook gets to know about that too!)

Most modern browsers send the parent document's URL as HTTP_REFERER information to Facebook via the Like IFRAME, which allows Facebook to implicitly record a fraction of your browsing history. Since this information is much more voluminous than your explicit "Likes", a lot more can be data-mined from it, which can then be used for "Good" (i.e. adding value to Facebook) or "Evil" (i.e. ads! market data!).

What I like about this is that it is an ingenious system for tracking users' browsing behavior. Currently, companies like Google, Yahoo and Microsoft (Bing/Live/MSN) have to convince you to install a browser toolbar with a minuscule clause in its agreement saying that you share back ALL your browsing history, which can be used to better understand the Web (and make more money, etc., etc.). Since Facebook is getting websites to install this widget, it gets the job done without getting you to install a toolbar! I'll discuss how I deal with this in the last section, "My Current Solution".
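
To see why no toolbar is needed, here's a toy sketch of what the widget host's side of this could look like: a server that logs, for every iframe load, the session cookie and the embedding page's URL as reported by the Referer header. This is a hypothetical Node server for illustration, not Facebook's actual code.

```typescript
import { createServer } from "http";

// Toy widget host: every time the embedded iframe is loaded, the browser
// (usually) sends the embedding page's URL in the Referer header, and the
// session cookie identifies the logged-in user. Logging the pair over time
// yields a browse trail without the user ever clicking anything.
createServer((req, res) => {
  const user = req.headers.cookie ?? "anonymous visitor"; // session cookie, if logged in
  const embeddingPage = req.headers.referer ?? "(no referrer)";
  console.log(`${new Date().toISOString()} ${user} was on ${embeddingPage}`);
  res.end("<!-- widget markup would go here -->");
}).listen(8080);
```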

Newsfeed Spam


In a previous post, I demonstrated how users could be tricked into "Liking" things they didn't intend to, leading to spam in their friends' newsfeeds. A month later, security firm Sophos reported an example of this, where users were virally tricked into spreading a trojan through Facebook Likes, something that could just as easily be initiated by Like buttons across the web, where you can be tricked into liking arbitrary things.

Again, this issue has the same root cause as Reputation Misrepresentation: since all the Like button shows you is a user count, profile pictures and the button itself, there really is no way to know what you're liking. A solution is to use a bookmarklet in your browser, which is under your control.

"Likejacking"


This interesting demo by Eric Kerr demonstrates how to force unwitting users into clicking arbitrary Like buttons. It works by making a transparent Like button and moving it along with the user's mouse cursor. Since the user is bound to click somewhere on the page at some point, they're bound to click the Like button instead.
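
A minimal sketch of the trick, assuming the attacking page can embed the button in an iframe (the URL is a placeholder):

```typescript
// Sketch of "likejacking": an effectively invisible Like button that shadows
// the cursor, so that the user's next click lands on it.
const button = document.createElement("iframe");
button.src = "https://widgets.example/like";  // placeholder for the real button URL
button.style.opacity = "0.01";                // practically invisible
button.style.position = "absolute";
button.style.width = "80px";
button.style.height = "20px";
button.style.zIndex = "9999";
document.body.appendChild(button);

document.addEventListener("mousemove", (e) => {
  // Keep the invisible button centred under the cursor at all times.
  button.style.left = `${e.pageX - 40}px`;
  button.style.top = `${e.pageY - 10}px`;
});
```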

Like Switching


[Diagram: like switching]

Like switching is an alternative take on Likejacking. The difference is that the user is first shown a genuine-looking Like button with a prestigious like count and familiar friends. When the user reaches out to click on it, the Like button is swapped out for a different one, triggered by an onmouseover event on the rectangle around the button.

"Solutions"

Given these issues, let's discuss some solutions, responses and fixes. Note the use of quotes -- many people can argue that nothing is broken, so we don't need solutions! Regardless, one piece of good news is that the W3C is aware of the extensive use of IFRAMEs on the web and has introduced a new "sandbox" attribute for them. This should allow more fine-grained control of social widgets. For example, if we could set our browsers to force "sandbox" settings on all Facebook IFRAMEs, we could avoid handing over our browsing history to Facebook.
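
As a sketch of what that might look like on an embedding page (the sandbox attribute is real HTML5; forcing it from the browser side onto third-party iframes is the hypothetical part, and the URL is a placeholder):

```typescript
// Sketch: embedding a third-party widget with the HTML5 sandbox attribute.
// With an empty sandbox value, the framed document gets a unique origin and
// may not run scripts or submit forms, so it is far harder for the widget
// host to tie the page view back to a logged-in session.
const widget = document.createElement("iframe");
widget.src = "https://widgets.example/like";   // placeholder URL
widget.setAttribute("sandbox", "");            // most restrictive setting
// Individual capabilities can be re-enabled selectively, e.g.:
// widget.setAttribute("sandbox", "allow-scripts");
document.body.appendChild(widget);
```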


Facebook's approach


While I don't expect companies to rationalize every design decision with their users, I am glad that some Facebook engineers are reaching out via online discussions. Clearly this is not representative of the whole company, but here's a snippet:
Also, in case it wasn't clear, as soon as we identify a domain or url to be bad, it's impossible to reach it via any click on facebook, so even if something becomes bad after people have liked it, we still retroactively protect users.

I like this approach because it fits in well with the rest of the security infrastructure that large companies have: the moment a URL is deemed insecure anywhere on the site, all future users are protected from that website. However, this approach doesn't solve problems with user trust -- it relies on Facebook having flagged every evil website in the world before you chanced upon it, something I wouldn't bet my peace of mind on. It's as if the police told you, "We will pursue serial killers only after the first murder!" Would you sleep better knowing that? In essence, this approach is great when you're looking at it from the side of protecting 500 million users. But as one of the 500 million, it kind of leaves you out in the dark!


Secure Likes

As mentioned in the Reputation Misrepresentation section, another interesting improvement would be to include some indication of the URL being "Liked" inside the button itself. One option is to display the URL as a tooltip when the user hovers over the button, especially if it disagrees with the parent frame's URL. Obviously, placing the whole URL in the button would make it large and ugly. A possible compromise is to include the favicon (the icon that shows up for each site in your browser) right inside the Like button. The user can simply check whether the browser's icon matches the one on the Like button to make sure it's safe. This way, if a website wants to (mis)use BritneySpears.com's Like button, it will be forced to use BritneySpears.com's favicon too! Here's a mockup of what a "Secure Like" would look like for IMDB:

[Mockup: a "Secure Like" button showing the IMDB favicon]


A browser-based approach



This approach, best exemplified by the "social web" browser Flock and recently acknowledged by folks at Mozilla, makes you log into the browser, not a web site. All user-sensitive actions (such as "Liking" a page) have to go through the browser, making them inherently more secure.

My Current Solution


[Screenshot: my dock, with separate GMail and Facebook apps alongside Chrome]

At this point, it's best to conclude with how I deal with all these issues myself. My solution is simple: I run Google and Facebook services in their own browsers, separate from my general web surfing. As you can see from the picture of my dock, my GMail and Facebook are separate from my Chrome browser. That way, I appear logged out[2] to Google Search and Facebook Likes when I surf the web or search for things. On a Mac, you can do this using Fluid.app; on Windows, you can use Mozilla Prism.

And that brings us to the end of this rather long-winded discussion about such a simple "Like" button! Comments are welcome. Until the next post -- surf safe, and surf smart!

 

 

Footnotes:
[1] To my knowledge, there is only one other company that has this level of persistent engagement: Google's GMail remembers logins more aggressively than Facebook. When you're logged into Gmail, you're also logged into Google Search, which means they log your search history as a recognized user. This is usually a good thing for the user, since Google then has a chance to personalize your search. Google actually takes it a step further and personalizes even for non-logged in users.

[2] Yes, they can still get me by my IP, but that's unlikely when I'm usually behind firewalls.

 

Cite this post!:


@article{reputationmisrepresentation,
title={{Reputation Misrepresentation, Trail Paranoia and other side effects of Liking the World}},
author={Nandi, A.},
year={2010},
journal={{Arnab's World}}
}

"Move Fast, Break Trust?"

This week’s blog post is written by fellow PhD candidate Nicholas Gorski, who came across yet another bug in Facebook’s privacy controls during the latest rollout. The post germinated from a discussion about how the motto “Move Fast, Break Stuff” sounds fun to an engineer, but is that attitude apt when it comes to your relationships with your friends and family? As an explicit clarification to the engineers at Facebook: this post is intended to provoke thought about attitudes towards privacy models, not to make claims about coding abilities or the inevitability of bugs. —arnab

 

Mark Zuckerberg’s motto for Facebook, now used as a company differentiator in engineering recruiting pitches, is “move fast, break stuff.” As previously reported, Facebook certainly broke things in the changes pushed out Tuesday evening: by previewing the effects of your privacy settings, you were briefly able to see your profile as if you were logged in to a friend’s account, which let you view your friends’ live chats as they were taking place, as well as look at pending friend requests.

Tuesday’s changes apparently also broke another privacy setting. By now, everyone is aware that Facebook exposes privacy settings for the personal information in your profile. This includes items such as your Bio, description, Interested In and Looking For, and Religious and Political Views. However, Tuesday’s changes appear to expose this information to everyone in your network, regardless of your privacy settings and regardless of whether or not they are your friend.


Try it out for yourself. First, set the privacy settings for some of your personal information to exclude certain friends of yours that are in your network, and then preview your profile as them. If the privacy breach hasn’t been fixed yet, your friend will still be able to see your personal information even though they shouldn’t be able to according to your privacy settings. As we mentioned, this extends beyond your friends: anyone in your network may be able to view your personal information (it may even extend beyond your network).


(Note: the privacy leak may have since been fixed… although an awful lot of people now have public quotations on their profiles.)

Unfortunately, it’s unlikely that this bug will get the attention it deserves. Facebook exposes a privacy policy to its users, but the site is broken in a way that ignores that policy. Upon rolling out Buzz, Google was lambasted in the press for defaulting to a public privacy policy for your contacts if you opted in to creating a public profile. In this case, Facebook let you set an explicit privacy policy, but then exposed your information anyway.

How could this seemingly minor privacy leak hurt anyone, you might ask? The canonical example of the danger of Buzz’s public contacts was the case of the female blogger with an abusive ex-husband. No harm actually befell this security-conscious blogger, but it certainly could have. In the case of Facebook’s privacy breach, the information that was made public was only profile information relating to your biography, religion and romantic preferences. Given the masses of Facebook users, how many people’s sexual preferences could have been inadvertently outed? How many people could have had potentially embarrassing biography information exposed to their parents, people in their network, or potential employers? The privacy safeguards are there for a reason, after all.

One might be inclined to write it off as a mistake, potentially a bug in a PHP script written by a junior software engineer — something hard to believe, given the reported talent of Facebook’s employees. But Facebook’s motto, and its current agenda, make it clear that the privacy leaks that have come to light this week are more than that. They are a product of corporate indifference to privacy; indeed, Facebook’s corporate strategy for monetizing the site depends on making as much of your information public as it can. The EFF has repeatedly sounded alarms about the erosion of privacy on Facebook, but is it too late?

Much of the information that was once personal and guarded by privacy settings has now migrated to the public portion of the site, and has been standardized in order to facilitate companies using your personal information to tie in to their marketing and advertising campaigns. The books that you like, the music that you listen to, your favorite movies: all of these are valuable data that companies will pay Facebook for, in aggregate. It will allow them to target you more specifically. When you expose this information publicly, though, are you really aware of how it will be used – not just today, but tomorrow? Information will persist forever in Facebook’s databases, long after you delete it from your profile.

In the meantime, Facebook’s corporate attitude of playing fast and loose with your profile information makes it likely that future privacy leaks will occur — that is, if any of your profile information remains private for much longer.

Google Search's Speed-based Ranking, Baking and Frying

I am looking for confirmation and corroborating details from other Drupal developers. Comments are welcome. PHBs need not worry: your Drupal site is just fine.

This post is about an inherent problem with Google’s recently announced speed-as-a-ranking-feature and how it interacts with content-management systems like Drupal and WordPress. For an auto-generated website, Google is often the first and only visitor to a lot of pages. Since Drupal spends a lot of time on the first render of a page, Google will likely see this delay. This is due both to how Drupal generates pages and to how Google measures speed.

Google recently announced that, as part of its quest to make the web a faster place, it will penalize slow websites in its rankings:

today we’re including a new signal in our search ranking algorithms: site speed. Site speed reflects how quickly a website responds to web requests.

Since Google’s nice enough to provide webmaster tools, I looked up how my site was doing, and got this disappointing set of numbers:

[Screenshot: Webmaster Tools site performance report]

I’m aware 3 seconds is too long. Other Drupal folks have reported averages around 600 ms, and my own measurements put my site under 1 second on average. The gap is probably because I occasionally have funky experiments running in some parts of the site that issue expensive queries. Still, some of the other findings were surprising:

Investigating further, it looks like there are 3 problems:

[Screenshot: Page Speed suggestions for the site]

DNS issues & multiple CSS files: Since Google Analytics is on a large number of websites, I’d expect its DNS entry to already be resolved for most visitors. CSS is not an issue, since the two files are media-specific (print / screen).

GZip compression: Now this is very odd. I’m pretty sure I have gzip compression enabled in Drupal (Admin > Performance > Compression), so why is Google reporting a lack of compression? To check, I ran some tests and discovered that since Google usually sees a page before it’s cached, it gets a non-gzipped version. This happens because of the way Drupal’s cache behaves, and it is fixable. Ordinarily this is a small problem, since uncached pages are rendered only for the first visitor. But since Google is the first visitor to a majority of the pages on a less popular site, it concludes that the entire site is uncompressed. I’ve started a bug report for the uncached-page gzip problem.
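
A quick way to see this yourself (a rough diagnostic sketch; the URL is a placeholder for a rarely visited page on your own site):

```typescript
import { get } from "https";

// Request the same rarely-visited page twice, asking for gzip, and compare
// the Content-Encoding header. If the first (uncached) response is not
// gzipped but the second (cached) one is, a crawler that mostly hits
// uncached pages will see the site as uncompressed.
function checkOnce(url: string, attempt: number): void {
  get(url, { headers: { "Accept-Encoding": "gzip" } }, (res) => {
    const encoding = res.headers["content-encoding"] ?? "(none)";
    console.log(`attempt ${attempt}: Content-Encoding = ${encoding}`);
    res.resume(); // discard the body
  });
}

const url = "https://example.invalid/some-rarely-visited-page"; // placeholder
checkOnce(url, 1);                         // likely uncached: expect "(none)"
setTimeout(() => checkOnce(url, 2), 2000); // now cached: expect "gzip"
```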

A flawed metric: The other problem is that Drupal (and WordPress, etc.) uses a “fry” model: pages are generated on the fly, per request. Movable Type and friends, on the other hand, “bake” their pages beforehand, so anything served up doesn’t go through the CMS. Caching in fry-based systems is typically done on first render: the first visit to a page is generated from scratch and written to the database/filesystem, and any subsequent visitor to that page sees a render from the cache.

Since the Googlebot is usually the first (and only) visitor to many pages on a small site, the average crawl hits a large number of pages where Drupal is writing things to the cache for the next visitor. This means every page Googlebot visits costs a write to the database. While, as far as I know, Drupal runs page_set_cache after rendering the entire page (so the user experience stays snappy), I’m assuming Google counts time to connection close rather than time to the closing </html> tag, resulting in a bad render-time evaluation.
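
Here's a minimal sketch of the first-render cache in a fry-model CMS; the names are made up, and Drupal's actual implementation writes to the database via page_set_cache.

```typescript
// Fry model with a first-render cache: the first visitor to a page pays for
// the full render *and* the cache write; everyone after that reads the cache.
const pageCache = new Map<string, string>();

function renderExpensively(path: string): string {
  // Stands in for the CMS assembling the page from the database.
  return `<html><body>rendered ${path}</body></html>`;
}

function servePage(path: string): string {
  const cached = pageCache.get(path);
  if (cached !== undefined) {
    return cached;                       // second, third, ... visitors: fast
  }
  const html = renderExpensively(path);  // first visitor (often Googlebot): slow
  pageCache.set(path, html);             // plus the cost of the cache write
  return html;
}

servePage("/blog/some-post"); // full render + cache write
servePage("/blog/some-post"); // cache read only
```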

This means that Google’s Site Speed is not representative of the average user (i.e. the second, third, fourth, etc. visitors, who read from the cache); it only represents the absolute worst-case situation for the website, which is hardly a fair metric. (Note that this is based on my speculation about what Site Speed means, given the existing documentation.)

Getting middle-click to work for tab browsing in Mac OS X

If you’re a recent convert to Mac OS X (Tiger / Leopard / Snow Leopard / etc.) or someone who uses multiple operating systems at the same time, the differences in mouse and keyboard shortcuts can get confusing, even irritating at times. One of the most irritating for me is what happens when you middle-click the mouse.

In Windows / Linux, middle-clicking in browsers is used to open and close tabs. In OS X, this doesn’t work because middle-click is used to trigger Dashboard. Every time I wanted to open or close a tab, Dashboard would show up! To disable this, all you have to do is go to System Preferences > Exposé & Spaces and set the Dashboard mouse shortcut to “–”.

For the newbies, here’s a screenshot guide. First select “System Preferences”:

Then click on the “Exposé & Spaces” button:

Set the “Dashboard” mouse shortcut to “–”:

So that it looks like this:

And that’s it! You will now be able to middle-click to open and close tabs in Mozilla Firefox and Google Chrome. In Safari, you can open tabs this way, but closing tabs doesn’t work.

And there you have it: middle-click tabs on Mac OS X!

Street View Fun

Here’s a fun video:

Ever wonder what it’s like for the dudes who have to drive those Google camera cars around? I think it’s a little something like this…

 
On the other side of the Atlantic, to promote the UK band Editors’ new album, Sony has produced a Street View hack where you browse around London to discover hidden things:

This is how it works: a cleverly hacked version of Google Street View allows users to preview tracks from the album in the areas of London that inspired them. As well as being able to move around as you would in the normal Google Street View, there are red arrows to find in nine different London locations (one for each track of the album) that each point to a location off the road – click it to find custom panoramic photographs of the band, shot at night by photographer James Royall.

Here’s a preview video of the game:


Barack Obama Nobel Prize Sound Bites

From the Wikipedia:

On October 9, 2009, U.S. President Barack Obama was awarded the Nobel Peace Prize less than one year after his taking office (in fact, the nominations closed on February 1, about 11 days after Obama took office). While the committee praised his ambitious foreign policy agenda, it acknowledged that he had not yet actually achieved many of the goals that he had set out to accomplish. Former Polish President Lech Wałęsa, a 1983 Nobel Peace laureate, commented: “So soon? Too early. He has no contribution so far. He is still at an early stage. He is only beginning to act.”

This is pretty amazing news. My Facebook, news and IM streams are flooded with one-liners. I thought I’d collect them all:

  • “I too would like a Nobel Peace Prize for the thesis I am about to write in the future.” — me
  • “it’s a pretty swell booby prize for losing out on the Olympics” – n.d.
  • “Surely preventing Sarah Palin from taking over the free world deserves a prize… even if it is a Nobel?” — v.b.
  • ““NASA bombs moon”; “Obama wins Nobel Prize” — is today Onion News Day?” — me
  • “Barack Obama linked to terrorist Yasser Arafat” — fark via a.a.
  • “The Nobel? Really? I mean, cool…but it seems like we have our cart on the wrong side of the horse. Not that it isn’t a very nice cart.” — c.m.
  • “…thinks they might as well have given him the Nobel Prize for Literature, Chemistry (we’ve all seen the shirtless photos), Physics and Economics as well. Oh and made him a Knight Commander of the Order of the Bath” — r.d.
  • “Nobel Committee Rewards Obama For Not Being Bush” — f.n.
  • “I just want to point out that the Nobel Committee made its decision BEFORE Miley Cyrus quit Twitter.” — j.h.
  • “Obama will win a second Nobel next year if he can restrain himself from reacting to the snark generated by this one.” — m.w.
  • “Pretty sure Obama will just trade in his Nobel for a Google Wave invite.” — t.b.
  • “The news of Obama’s Nobel Peace Prize spreads. Across the miles I can almost HEAR my dad’s eyes rolling.” — p.g.
  • “Obama wins Nobel Peace Prize? About time Rakhi Sawant wins an Oscar, then.” — s
  • “If you don’t think Obama deserves that Nobel, then you’ve never seen Sasha and Malia fight.” — a.e.
  • “Apparently Arizona State has a higher standard than the Nobel Committee. Good thing I never tried to apply there.” — r.m.
  • Business Insider has some more.

Yahoo: Just like the old times

I’m excited to go to work today, knowing that I will be witness, first hand, to one of the more incredible business deals being announced in the valley: Microsoft powering Yahoo Search.

There’s a lot that I want to say about this, but for now, I will leave you with this image from when Yahoo! used to be powered by Google. (Many people believe that powering Yahoo was what made Google popular with mainstream audiences, and that Google owes who it is today to Yahoo.)

An excerpt from the Wikipedia:

In 2002, they bought Inktomi, a “behind the scenes” or OEM search engine provider, whose results are shown on other companies’ websites and powered Yahoo! in its earlier days. In 2003, they purchased Overture Services, Inc., which owned the AlltheWeb and AltaVista search engines.

AlltheWeb, Altavista, Overture, Inktomi. That’s a lot of heritage.
