drupal

Google Search's Speed-based Ranking, Baking and Frying

I am looking for confirmation and corroborating details from other Drupal developers. Comments are welcome here. PHBs need not worry: your Drupal site is just fine.

This post is about an inherent problem with Google’s recently announced “Speed-as-a-ranking-feature” and how it interacts with content-management systems like Drupal and WordPress. For an auto-generated website, Google is often the first and only visitor to a lot of pages. Since Drupal spends a lot of time on the first render of a page, Google will likely see that delay. This is due both to how Drupal generates pages and to a flaw in Google’s metric.

Google recently announced that, as part of its quest to make the web a faster place, it will penalize slow websites in its ranking:

today we’re including a new signal in our search ranking algorithms: site speed. Site speed reflects how quickly a website responds to web requests.

Since Google’s nice enough to provide webmaster tools, I looked up how my site was doing, and got this disappointing set of numbers:

[Screenshot: Webmaster Tools site performance report, showing an average page load time of about 3 seconds]

I’m aware 3 seconds is too long. Other Drupal folks have reported ~600ms averages. Based on my own measurements, my site averages under 1 second; the discrepancy is probably because I occasionally have funky experiments running in some parts of the site that execute expensive queries. Still, some of the other results were surprising:

Investigating further, I found three problems:

[Screenshot: Page Speed suggestions flagging DNS lookups, multiple CSS files, and missing gzip compression]

DNS issues & Multiple CSS: Since Google Analytics is on a large number of websites, I’m expecting its DNS lookup to already be prefetched. The CSS is not an issue either, since the two files are client-media specific (print / screen).

GZip Compression: Now this is very odd. I’m pretty sure I have gzip compression enabled in Drupal (Admin > Performance > Compression), so why is Google reporting a lack of compression? To check, I ran some tests and discovered that since Google usually sees a page before it’s cached, it gets a non-gzipped version. This happens due to the way Drupal’s cache behaves, and it is fixable. Ordinarily this is a small problem, since uncached pages are rendered only for the first visitor. But since Google is the first visitor to a majority of the pages on a less popular site, it thinks the entire site is uncompressed. I’ve started a bug report for the uncached-page gzip problem.
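
Until that bug is resolved, one possible stopgap is to let PHP compress every response, including the first, uncached render. The settings.php sketch below is only an illustration under the assumptions noted in the comments (zlib available, no server-level compression such as mod_deflate, and Drupal’s own page compression turned off so nothing is gzipped twice); it is not the patch from the bug report.

    // Illustrative settings.php workaround (see assumptions above).
    // Drupal normally gzips a page only when writing it to the page cache, so
    // the first, uncached render (the one Googlebot usually sees) goes out raw.
    // Turning off Drupal's page compression and letting ob_gzhandler do the
    // work compresses that first render too; ob_gzhandler checks the client's
    // Accept-Encoding header before compressing anything.
    $conf['page_compression'] = 0;
    if (extension_loaded('zlib') && !ini_get('zlib.output_compression')) {
      ob_start('ob_gzhandler');
    }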

A flawed metric: The other problem is that Drupal (and WordPress, etc.) use a fry model: pages are generated on the fly, per request. Movable Type and friends, on the other hand, bake their pages beforehand, so anything served up doesn’t go through the CMS. Caching in fry-based systems is typically done on the first render, i.e. the first visit to a page is generated from scratch and written to the database or filesystem; any subsequent visitor to that page is served from the cache.
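
To make the fry model concrete, here is a minimal sketch of that first-render caching flow. It is illustrative only: the helper and temp-file cache are made up for this example, whereas Drupal’s real page cache writes to a database table.

    // Minimal fry-model sketch: the first visit builds the page and writes the
    // cache, every later visit is a cheap read. (Illustrative code, not
    // Drupal's actual API.)
    function serve_page($path) {
      $cache_file = sys_get_temp_dir() . '/page_' . md5($path) . '.html';
      if (is_file($cache_file)) {
        // Second and later visitors: cheap read, no page building.
        return file_get_contents($cache_file);
      }
      // First visitor (often Googlebot): the expensive render from scratch...
      $html = '<html><body>Rendered page for ' . htmlspecialchars($path) . '</body></html>';
      // ...plus a cache write before the request finishes.
      file_put_contents($cache_file, $html);
      return $html;
    }

    echo serve_page('node/1');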

Since the Googlebot is usually the first (and only) visitor to many pages on a small site, the average crawl hits a large number of pages where Drupal is writing things to cache for the next visitor. This means every page Googlebot visits costs a write to the database. While, afaik, Drupal runs page_set_cache after rendering the entire page, so the user experience stays snappy, I’m assuming Google counts time to connection close rather than time to the closing </html> tag, which makes the measured rendering time look worse than it is.

This means that Google’s Site Speed is not representative of the average user (i.e. the second, third, fourth, etc. visitors, who read from the cache); it represents only the absolute worst case for the website, which is hardly a fair metric. (Note that this is based on my speculation about what Site Speed means, going by the existing documentation.)

Web 2.0 and the relational database

Yes, this is yet another rant about how people incorrectly dismiss state-of-the-art databases. (Famous people have done it, why shouldn’t I?) It’s amazing how much the Web 2.0 crowd abhors relational databases. Some people have declared real SQL-based databases dead, while others have proclaimed them not cool any more. Amazon’s SimpleDB, Google’s BigTable and Apache’s CouchDB are trendy, bloggable ideas that, to be honest, are ideal for very specific, specialized scenarios. Most of the other use cases, and that comprises 95 out of 100 web startups, can do just fine with a memcached + Postgres setup, but there seems to be a constant attitude of “nooooo if we don’t write our code like google they will never buy us…!” that just doesn’t go away, spreading like a malignant cancer through the web development community. The constant argument is “scaling to thousands of machines”, and “machines are cheap”. What about the argument “I just spent an entire day implementing the equivalent of a join and group by using my glorified key-value-pair library”? And what about the mantra “smaller code that does more”?
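
For concreteness, here is the kind of thing being reimplemented by hand. The schema and data are made up purely for illustration: the SQL version is one declarative statement, while the key-value version (plain PHP arrays standing in for the store) redoes the join and the group-by in application code.

    // The declarative version: one statement does the join and the group-by.
    $sql = "SELECT u.name, COUNT(*) AS order_count
            FROM users u JOIN orders o ON o.user_id = u.id
            GROUP BY u.name";

    // The glorified key-value-pair version: fetch everything and redo the same
    // work by hand. Arrays stand in for the key-value store in this sketch.
    $users  = array(1 => array('name' => 'alice'), 2 => array('name' => 'bob'));
    $orders = array(array('user_id' => 1), array('user_id' => 1), array('user_id' => 2));

    $counts = array();
    foreach ($orders as $order) {                        // the hand-rolled join
      $name = $users[$order['user_id']]['name'];
      $counts[$name] = isset($counts[$name]) ? $counts[$name] + 1 : 1;  // the group-by
    }
    print_r($counts);  // Array ( [alice] => 2 [bob] => 1 )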

Jon Holland (who shares his name with the father of genetic algorithms) performs a simple analysis which points out a probable cause: People are just too stupid to properly use declarative query languages, and hence would rather roll their own reinvention of the data management wheel, congratulating themselves on having solved the “scaling” problem because their code is ten times simpler. It’s also a hundred times less useful, but that fact is quickly shoved under the rug.

It’s not that all Web-related / Open Source code is terrible. If you look at Drupal code, you’ll notice the amount of sane coding that goes on inside the system: JOINs are used where needed, caching and throttling are assumed as part of core, and the schema is flexible enough to do fun stuff. (Not to say I don’t have a bone to pick with the Drupal core devs; the whole “views” and “workflow” ideas are soon going to snowball into a reinvention of Postgres’s ADTs, all written in PHP on top of a database-abstraction layer that may itself be sitting on Postgres.)

If Drupal can do this, why can’t everyone else? Dear Web 2.0, I have a humble request. Pick up the Cow book (Ramakrishnan and Gehrke’s Database Management Systems) if you have access to a library, or attend a database course at your school. I don’t care if you use an RDBMS after that, but at least you’ll reinvent the whole thing in a proper way.

postgres and drupal

I strongly urge all Drupal developers to read through this paper on Postgres. It is a 15-year-old design document from PostgreSQL’s predecessor, but there are many gems in it that apply directly to where (I think) Drupal is headed. I intended to write this up properly as an informal presentation, but it came up on #drupal today, so I thought I might as well share it with everyone.


aadl surprise

My town uses my code: the Ann Arbor District Library uses Drupal for its web interface, including my captcha module!


leaving drupal for drup.us

Frustrated with the never-ending betas of the imminent Drupal 4.7, I have decided to switch to a better project, Drup.us.


maintainers wanted

Today I woke up and decided that I will give my Drupal modules away, in the hope that they will be better maintained. If you’re a Drupal developer who would like to take over the Captcha, TextImage and Similar modules, please contact me. Note, I’m not abandoning them; I’m only looking for someone with more time and energy than me who will work out the kinks, keep the modules working for every Drupal version, avoid bloating the features, and prefer making things more efficient. If I don’t find such a person, I’ll just keep sporadically maintaining them like I have been. Interested? Just send me your drupal.org username and which module you’d like to take over.


myspace basics

Russell Beattie figures out MySpace, in component terms. Funny how every sentence of his reminded me of a specific Drupal module that provides that feature.

Then of course, we’re not counting the sentences that talk about hotties and risque photographs. I don’t think there’s a module (yet) to provide that.


yahoo web widgets!

The Yahoo! UI people just opened up their widget library, complete with graded browser support. Hmm. Maybe we could use the calendar widget for the Drupal archive.


multisite drupal: the importance of the sequence

Recent versions of Drupal have the oh-so-cool feature that allows you to host many websites off a single Drupal codebase. The coolest part is that you can share some tables across multiple websites, which means you can do things like have a single username/password table across all the websites. This can easily be done, as described in the settings.php comments:

    $db_prefix = array(
      'default'   => 'main_',
      'users'     => 'shared_',
      'sessions'  => 'shared_',
      'role'      => 'shared_',
      'authmap'   => 'shared_',
      'sequences' => 'shared_',
    );

Now here’s an important thing to note: The first table you have to share is the sequences table. This is the table that handles all the id counters, so if you don’t share this one, something like this can happen:

[you shared only the users table]

1. User 1 signs up on Site A, gets user id#1
2. User 2 signs up on Site A, gets user id#2
3. User 3 signs up on Site B, gets user id#….? The correct answer is not 3!

This happens because you didn’t share sequences: Site B uses its own sequence generator and produces a duplicate user id, which the users table will not accept. This goes on until Site B’s sequence catches up with Site A’s, and only then do things return to normal. The code quality in user.module protects the users table from data corruption, but with a setup like this many signups will simply disappear into thin air. Hence, all you need to do is share the sequences table along with users, and you’re all set!
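
Here is a toy simulation of the failure mode described above. It is plain PHP with made-up names, not Drupal’s actual db_next_id() code, but it shows why unshared counters collide against a shared users table.

    // Toy simulation: two sites, one shared users table, separate counters.
    $shared_users = array();   // stands in for the shared users table
    $site_a_seq = 0;           // Site A's own sequences counter
    $site_b_seq = 0;           // Site B's own sequences counter

    function signup(&$users, &$seq, $name) {
      $uid = ++$seq;           // what an unshared sequences table hands out
      if (isset($users[$uid])) {
        // Duplicate uid: the insert into the shared users table fails and the
        // signup silently disappears.
        return FALSE;
      }
      $users[$uid] = $name;
      return $uid;
    }

    signup($shared_users, $site_a_seq, 'User 1');              // Site A: uid 1
    signup($shared_users, $site_a_seq, 'User 2');              // Site A: uid 2
    var_dump(signup($shared_users, $site_b_seq, 'User 3'));    // Site B: tries uid 1, gets FALSE

    // With 'sequences' => 'shared_', both sites increment the same counter, so
    // Site B's first signup would get uid 3 instead of colliding.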

Btw, hello Planet people!


drupal captcha 2.0

FINALLY finished the captcha.module for Drupal. This is ONLY a first draft; lots of improvements to come. Features:
* ability to protect any drupal form
* captcha API – make your own challenge response! (math and image are included in package)

(use a CVS checkout to get it)
