Over the weekend, we rolled out a few speed improvements to the Stack Overflow engine.
First, we did a quick pass with ANTS Profiler (which is great, by the way) and identified a few places where redundant or unnecessary database queries slipped into our code. We like to do this every few months on common pages as a sanity check. We start a trace, refresh a given page 50 times, then view the hot code paths in the trace. It’s almost always database queries gunking up the works, but once in a blue moon we’ll write code so bad that it actually registers in the hot code paths. Anyway, the golden rule is to measure, then optimize, and that’s what we try to do.
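The measure-first workflow isn't specific to ANTS Profiler; here's a minimal sketch of the same idea using Python's built-in cProfile, with a hypothetical page-render function standing in for real application code:

```python
import cProfile
import io
import pstats

def db_query():
    # hypothetical stand-in for a database round trip
    return sum(i * i for i in range(10_000))

def render_page():
    # a page that accidentally issues the same query twice
    return db_query() + db_query()

profiler = cProfile.Profile()
profiler.enable()
for _ in range(50):  # "refresh the page 50 times"
    render_page()
profiler.disable()

# the hot-path report: db_query dominates with 100 calls (2 per render),
# flagging the redundant query immediately
stats = pstats.Stats(profiler, stream=io.StringIO())
stats.sort_stats("cumulative").print_stats(5)
```

The redundant call shows up as an outsized call count in the hot paths, which is exactly the kind of thing a trace surfaces that code review misses.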
We also took a long, hard look at optimizing the browser cookies we’re sending down to clients (and that clients are dutifully sending back to us in each HTTP request). You’d be surprised how big an impact cookies can have on performance. We were able to remove our ASP.NET forms authentication cookie entirely, and cut the length of our standard cookie key in half. I also removed a number of cookies the /login page was storing that weren’t really necessary. In my testing our typical cookie is about 360 bytes now, compared with over 500 bytes before. Over time, these old unnecessary cookies will fall away naturally, but you may want to clear your domain cookies manually if you want the fastest possible Stack Overflow family browsing experience.
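The cost of a cookie is just its serialized Cookie header, resent with every single request to the domain, so the arithmetic is easy to check. A rough sketch (the cookie names and values below are made up for illustration, not our actual cookies):

```python
# hypothetical cookies, roughly the shape of a shortened auth cookie
cookies = {
    "usr": "t=abc123&s=def456",
    "prefs": "theme=dark",
}
header = "Cookie: " + "; ".join(f"{k}={v}" for k, v in cookies.items())
overhead = len(header.encode("ascii"))

# a page that triggers 30 requests to the same domain uploads this
# header 30 times -- pure upstream overhead on every page view
per_page = overhead * 30
```

Upstream bandwidth is usually the scarcest direction for clients, which is why shaving even a hundred bytes off a cookie pays for itself quickly.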
This next change isn’t as new, but it’s still worth mentioning. A few weeks ago, we turned up the HTTP GZIP compression level for dynamic content from the default of 0 to 4. That’s ever-so-slightly slower, but it offers an additional 10% reduction in page size. The tradeoff between CPU time and file size for this setting is documented in exhaustive detail by Scott Forsyth, and the “sweet spot” is definitely 4.
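The level-versus-size tradeoff is easy to reproduce. A minimal sketch with Python’s gzip module (its compresslevel maps only loosely onto the IIS setting, and the sample payload is invented):

```python
import gzip

# repetitive markup compresses well, like typical dynamic HTML
html = b"<html><body>" + b"<p>Hello, Stack Overflow!</p>" * 500 + b"</body></html>"

# 1 = fastest, 9 = smallest output
sizes = {level: len(gzip.compress(html, compresslevel=level))
         for level in (1, 4, 9)}
# every level is a large win over the raw payload, but the gains
# shrink toward the high end while CPU cost keeps climbing
```

That diminishing-returns curve is why a middle setting tends to be the sweet spot: most of the size win at a fraction of the CPU cost of the maximum level.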
We’ve been long-time users of YSlow, and more recently Google Page Speed. Some of the recommendations these tools make are only sensible if you are Google or Yahoo (a very rare and select club of the ‘gee, that’s a nice problem to have’ variety) — but many of them are indeed essential no matter how big your website is.
When the browser makes a request for a static image and sends cookies together with the request, the server doesn’t have any use for those cookies. So they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests. Create a subdomain and host all your static components there.
If your domain is www.example.org, you can host your static components on static.example.org. However, if you’ve already set cookies on the top-level domain example.org as opposed to www.example.org, then all the requests to static.example.org will include those cookies. In this case, you can buy a whole new domain, host your static components there, and keep this domain cookie-free. Yahoo! uses yimg.com, YouTube uses ytimg.com, Amazon uses images-amazon.com and so on.
Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache the components that are requested with cookies. On a related note, if you wonder if you should use example.org or www.example.org for your home page, consider the cookie impact. Omitting www leaves you no choice but to write cookies to *.example.org, so for performance reasons it’s best to use the www subdomain and write the cookies to that subdomain.
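Python’s standard library happens to expose the cookie domain-matching rule directly, so the scoping argument above can be verified in a couple of lines (using the example.org placeholders from the text):

```python
from http.cookiejar import domain_match

# a cookie set on the bare domain gets the domain ".example.org",
# so it rides along on requests to every subdomain -- static included
assert domain_match("static.example.org", ".example.org")

# a cookie scoped to the www subdomain stays off the static host
assert not domain_match("static.example.org", "www.example.org")
```

This is exactly why starting from www leaves your options open: cookies written to the www subdomain can never leak onto a static subdomain, while cookies on the bare domain always will.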
We registered the domain sstatic.net for this purpose a month ago, and I’m pleased to announce that all the static resources for the Stack Overflow family of websites are now hosted at sstatic.net. This domain is of course cookieless and optimized for serving static content with the lowest possible overhead (and, as before, a far-future expires header, so zero requests are made to the server for cached static elements).
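A far-future expires header is just a timestamp well past any plausible cache lifetime. Here’s a sketch of what such a response header set looks like, using Python’s stdlib (the ten-year window is an arbitrary illustration, not necessarily what sstatic.net is configured to send):

```python
import time
from email.utils import formatdate

TEN_YEARS = 10 * 365 * 24 * 3600  # seconds

headers = {
    # modern caches honor max-age; Expires covers older HTTP/1.0 caches
    "Cache-Control": f"public, max-age={TEN_YEARS}",
    "Expires": formatdate(time.time() + TEN_YEARS, usegmt=True),
}
# once cached, the browser makes zero requests for the asset until it
# expires; the versioned query string (?v=4143) busts the cache on deploy
```

The versioned URL is the important companion trick: with an effectively infinite lifetime, the only way to push a new copy of a static file is to change its URL.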
Here’s a sample GET request under the new configuration; note the complete absence of a Cookie header.
<b>GET</b> /so/js/master.js?v=4143 HTTP/1.1
<b>Host:</b> sstatic.net
<b>User-Agent:</b> Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729)
<b>Accept:</b> */*
<b>Accept-Language:</b> en-us,en;q=0.5
<b>Accept-Encoding:</b> gzip,deflate
<b>Accept-Charset:</b> ISO-8859-1,utf-8;q=0.7,*;q=0.7
<b>Keep-Alive:</b> 300
<b>Connection:</b> keep-alive
<b>Referer:</b> http://stackoverflow.com/questions/1252349
<b>Pragma:</b> no-cache
<b>Cache-Control:</b> no-cache
And the response from sstatic.net:
Using another server for your static content is also a rudimentary form of load balancing; we’ve shaved off hundreds of thousands of requests from our primary servers and delegated them to another server explicitly optimized for and dedicated to that task. Web browsers also tend to “parallelize” their load patterns for the page when they see resources coming from different domains — or a different subdomain, at least.
Anyway, we believe that performance is a feature, and we’re serious about the Stack Overflow family of sites being as fast as we can make them. We continue to revisit our performance every couple of months and try to improve it a little more each time.