Performance is a Feature

There's a new free CDN called cloudflare.com.
You should give it a try.

Brandon,

ASP.NET at its core is very good. It performs well and the tooling is excellent when you use Visual Studio. So with some practice you can be very productive with it, and crank out quality stuff.

ASP.NET Web Forms has had issues over the years though, essentially promoting a lot of bad practices on the web. The whole viewstate/postback methodology bled into many built-in controls. And that in turn led to usability problems with form submissions, and many websites have suffered. I know, because I built Web Forms for 8 years.

ASP.NET MVC, however, promotes a completely different way of thinking that is more web-friendly in general… friendly URLs, no more viewstate/postback mess, and so on.

I wrote an article comparing the two in greater detail a while back…
http://swortham.blogspot.com/2009/10/when-to-use-aspnet-web-forms-and-when.html

Jeff,

What does the “check redirects” section of the code do? While it’s only 50 ms, it looks like the 2nd slowest thing in the list. Is that by chance the MVC routing engine at work? And are you by chance using regular expressions in your routing? I just wonder if there’s room for improvement there.

Steve

Jeff, I thank you for writing about this, but especially for the most effective use of the word “performant” I’ve seen in a blog on this subject for a long time.

http://weblogs.asp.net/jgalloway/archive/2007/05/10/performant-isn-t-a-word.aspx

Jeff,

A remarkable post! It’s hard to find people who stick with improving the performance of their websites.

I’ve never heard of the MVC Mini Profiler (that’s because I’m more of a Java person than a .NET one), and I was really curious about it.

Isn’t it too intrusive a profiler? I mean, it’s quite common for that kind of application to consume so much of the process that it can itself become a bottleneck.

Is it really necessary to change the source of the specified pages to check their performance? Isn’t it possible to just add it to the framework as a listener or something like that?

Keep pushing!
[]s

@www.google.com/accounts/o8/id?id=AItOawn9emGrf3fSSNT7Br5k7STGP_tPLpRKIpo

(lol why is that your ID)

No, I was adding to the point that 100 ms for a homepage becomes a lot more as you go deeper and the database queries get more complex. It can, and DOES in real-world applications, bring this lag up to as much as 500–750 ms, and then when you push that over the wire, it’s additive to client-side rendering lag.

So you have people staring at a white goddamn screen for over a full second. Users do notice, and you are wrong. Not everyone has a machine that can run 3 VMs of Crysis simultaneously, especially in office environments. That’s great if you can save money on the backend too, good on you, but the USER EXPERIENCE, and how even small lags cause people to use alternatives that don’t suck, is the crux of Jeff’s post, and I very much agree. Statistics show that people are not as stupid as you think. Google didn’t become #1 because of their cute holiday banners; it’s because they were leaner and faster than the rest.

TL;DR: Optimization has many positives; we both feel strongly about different “ends” of those positives, so to speak.

FYI, www.webpagetest.org allows you to select a location you want to test from. There are 28 locations to choose from worldwide. So you can get an idea of how your site is performing in different parts of the world.

Completely agree. Site “snappiness” is very important. When I was at my parents’ (4 or 5 years ago) they refused to get DSL, and I just had a 56k modem (yes, dialup). Aside from the “Yahoo guidelines”, most sites were full of crap: Flash, huge JS files with only a few functions used, enormous images cropped via HTML. Now I have DSL and don’t even realise it, but a lot of websites are just clogging bandwidth (either by size or by requests). Good performance on the server side is just the icing on the cake. You’ve done a good job at StackOverflow; I could even browse it a few months ago when I only had an unreliable wifi connection at 2 kb/s.

Cheers,

Ruben @mostlymaths.net

Jeff,

You inspired me to do a little testing of my own. I’ve been moving stuff over to Amazon CloudFront since I made a big website release yesterday. One thing that’s been troubling me is their lack of native gzip support. That, and the fact that they’re still a little slow in some parts of the world, motivated me to look around.

I found a good alternative that’s also very cheap – RackSpace Cloud Files. They use the Akamai network. I documented my findings, including my performance tests, here…

http://blog.bucketsoft.com/2011/06/amazon-cloudfront-vs-rackspace-cloud.html
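The usual workaround for a CDN with no native gzip support is to compress assets yourself before upload and serve them with a `Content-Encoding: gzip` header. A rough sketch of the compression step (the upload command is hypothetical and only shown as a comment; file names are examples):

```shell
# Create a sample stylesheet and pre-compress it at the highest level.
printf 'body { margin: 0; padding: 0; }\n' > site.css
gzip -9 -c site.css > site.css.gz

# Hypothetical upload step (the tool, bucket, and flags are illustrative):
#   some-uploader put site.css.gz --content-encoding gzip --content-type text/css

# Sanity check: the compressed copy round-trips back to the original bytes.
gzip -dc site.css.gz | cmp -s - site.css && echo 'round-trip ok'
```

The trade-off is that every client must accept gzip, or you have to store and route to both compressed and uncompressed variants yourself.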

As somebody who got the PageSpeed rank to go from 60 to 96 (currently 90 due to some vendor who is having issues), I can share some of the highlights with you guys.
Our home page, https://www.atmcash.com, was not using a CDN and did not have gzip, sprites, or any sort of JS optimization.
We started using lazy loaders and a CDN, and, very importantly, outsourced our DNS. Many people do not think that DNS can have a huge impact, but it does! Use sprites; they are gold!
Make sure you combine as much CSS and JS as possible, especially on your landing pages.
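The combining step described above can be sketched in shell (file names and contents here are made up for illustration):

```shell
# Three small example stylesheets (contents are placeholders).
printf 'body{margin:0}\n'   > reset.css
printf 'h1{color:#333}\n'   > typography.css
printf '.nav{float:left}\n' > layout.css

# Concatenate them so the page makes one CSS request instead of three.
# Order matters: later rules win over earlier ones at equal specificity.
cat reset.css typography.css layout.css > combined.css
```

A real build would also minify the result and fingerprint the file name for cache busting, but the request-count win comes from the concatenation itself.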
If you have any specific questions on how to optimize your webpage, feel free to message me. @EGBlue

The one other thing that is useful about performance analysis is that it often highlights errors or bad coding that would otherwise have been missed.

I have just read a blog post on the Windows Team Blog about performance in Hotmail, and I find it funny that just after the video they wrote: “We believe performance is a feature […]” :wink: They must be reading your blog!

http://windowsteamblog.com/windows_live/b/windowslive/archive/2011/06/30/instant-email-how-we-made-hotmail-10x-faster.aspx

To all those who donā€™t see the big deal in shaving 100 or 50 msec off page load time:

see:
The Economic Value of Rapid Response Time
http://www.vm.ibm.com/devpages/jelliott/evrrt.html

and the two articles that preceded it
"Factors Affecting Programmer Productivity During Application Development" IBM Systems Journal 23(1): 19-35 (1984)

"Interactive User Productivity" IBM Systems Journal 20(4): 407-423 (1981)

http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/t/Thadhani:Arvind_J=.html

To summarize: the quicker the response time to the human, the better the human can stay focused, keep on track, and exploit short term memory, rather than returning to the task at hand from a restful or wandering state of mind.

In one text-mode application, during a lengthy calculation I put up a progress bar of *'s as it worked and then erased them when the calculation completed. As PCs got faster and I tuned my code, the recalculations dropped from typically 40 seconds, to 20, to 3, to less than 0.1 second. At that point I started getting calls from users that the “process” key had stopped working.

However, if the user held down the process key, the row of *'s would flicker, indicating that it was working, displaying *'s and erasing them, within the typematic repeat rate of the keyboard.

I added a “processing complete” text message to “solve” the “problem.”
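The behavior described above can be sketched as a tiny shell function (a toy illustration, not the original code):

```shell
# show_progress N: print one '*' per work unit, then overwrite the row
# with a completion message, as the text-mode app above did.
show_progress() {
    n=$1
    i=0
    while [ "$i" -lt "$n" ]; do
        printf '*'          # one mark per unit of work
        i=$((i + 1))
    done
    printf '\rprocessing complete\n'   # \r returns to column 0
}

show_progress 5
```

On a terminal the carriage return makes the message overwrite the stars; in a pipe or log file both survive, which is why the flicker was only visible interactively.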

I would have thought that the following HTML performance tip might no longer be necessary, but it is:

Use explicit width tags in HTML table definitions. e.g. WIDTH="50%"
This is especially true if the table will be very long. Otherwise, the HTML renderer does not know how to render the table until all the content has been received.
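For instance, a sketch of that tip (the widths here are arbitrary examples):

```html
<!-- With explicit widths the browser can lay out the table immediately,
     instead of waiting for every row to arrive before sizing columns. -->
<table width="100%">
  <tr>
    <th width="50%">Item</th>
    <th width="50%">Description</th>
  </tr>
</table>
```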


I always try to optimize the frontend with the YSlow plugin on every site I build. I’ve never had to implement the CDN rule, but the other optimizations get done (and more besides, of course!).

Contrast these “extreme” performance-enhancing measures with the HIDEOUS performance-hampering built-in delays and unreadable captchas of the many file-sharing / piracy sites.

Offer an illegal download of some console game and you can make people put up with all sorts of nastiness.

Hi there,
I gather you may flay me in public given the nature of your blog, but here goes.
I am not a programmer. I have worked in the online industry for some time, though, with systems built on SQL using classic ASP (very old now but still working fine! :), VB, VBScript, jQuery, etc., but we have 2 challenges:

  1. How to quickly upgrade, or even transfer to something totally newer and probably different(?)
  2. What would you use to build a totally new Twitter-based service that searches and stores tweets and does stuff with the Twitter reg ID @ @ info + fbook friends etc?
    I gather it would now probably be on MySQL (?), but written in what (for web + mobile)? Ruby, Python, PHP, C++, …??!
    Yeah, I know, pathetic, but you see, I’m not a programmer. I have always trusted great programmers and want to work with some more on this project, but what sort should I be looking for? (That’s why I would love a recommendation on the language etc. to focus my search.)
    Hope upon hope you can help??
    Mac :slight_smile:

Hi Jeff,

Came across this after a quick Google search to send to someone and I noticed the link to the new hardware on the serverfault blog was broken.

I looked through the ServerFault archives and was wondering if this may be the current link:
http://blog.serverfault.com/2011/03/11/new-hardware-for-stack-overflow-database/

It came prior to your post and addresses new hardware for the database servers.

Just wanted to drop a line and say 1) thanks for the post and 2) uugh internet link rot, amirite?

Hi Guys, Any pointers to a list like this for Mobile apps?

Then StackOverflow should start getting on the HTTP/2 bandwagon as well. :slight_smile:
It’s still using HTTP/1.1 for the most part. Some of the CDNs have offloaded to H2. One thing that can be done is to preload the all.css, stub.js, and jQuery libraries (it can’t be server push because it’s a different server).
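Preloading those shared assets could look something like this in the page head (the CDN host and paths are illustrative, not Stack Overflow’s actual markup):

```html
<!-- Hint the browser to fetch render-critical assets early, even though
     they live on a different host than the page itself. -->
<link rel="preload" href="https://cdn.example.com/all.css" as="style">
<link rel="preload" href="https://cdn.example.com/stub.js" as="script">
<link rel="preload" href="https://cdn.example.com/jquery.min.js" as="script">
```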