ASP.NET Web Performance Trace Debugging with Log4net

I’ve recently spent a fair bit of time investigating performance and pinpointing errors in web applications where performance is a priority, which, to be fair, really should be all web applications out there. Over the course of my performance adventures, I’ve returned to something I don’t see many people use to the extent they could (or should): the ASP.NET Trace.

 

For those not familiar with it, the trace shows information about the current page request: just about everything conceivable about the request itself plus server information, and, particularly relevant for performance, specific load times for each step of the page lifecycle and a view of the control hierarchy – including render size and viewstate size.

 

I’ve put together a quick and very simple sample to illustrate what you can do with this, and the load time trace information on my default page looks a little something like this.

  

This table shows the load time for each of the steps, and by the looks of things, something is happening in the Page Load that takes an unusually long time!

 

To get detailed information about which specific portions of the page spend the most time delaying it from rendering, I start by adding a simple log4net trace appender entry to my web.config file and configuring it on application start.
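The actual config isn’t shown here, but wiring log4net into the ASP.NET trace essentially boils down to declaring the built-in AspNetTraceAppender in web.config and loading the configuration at application start. A minimal sketch, where the appender name and layout pattern are my own choices:

    <configSections>
      <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
    </configSections>

    <log4net>
      <!-- Forwards log4net messages to the ASP.NET trace output -->
      <appender name="AspNetTraceAppender" type="log4net.Appender.AspNetTraceAppender">
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%logger - %message" />
        </layout>
      </appender>
      <root>
        <level value="DEBUG" />
        <appender-ref ref="AspNetTraceAppender" />
      </root>
    </log4net>

And in global.asax:

    protected void Application_Start(object sender, EventArgs e)
    {
        // Reads the log4net section from web.config
        log4net.Config.XmlConfigurator.Configure();
    }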

 

My Page Load consists of 3 simple calls. I want to figure out which of them are the most painful ones, so I wrap each of them with debug messages for testing, like this.
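The code itself isn’t reproduced here, but the wrapping is nothing more than a log.Debug call before and after each method. A rough sketch, assuming a page class called _Default and using the method names that show up in the trace further down:

    using log4net;

    public partial class _Default : System.Web.UI.Page
    {
        private static readonly ILog log = LogManager.GetLogger(typeof(_Default));

        protected void Page_Load(object sender, EventArgs e)
        {
            // QuickStuff, SomeStuff and LotsOfStuff are the three calls being measured
            log.Debug("Starting QuickStuff");
            QuickStuff();
            log.Debug("Finished QuickStuff");

            log.Debug("Starting SomeStuff");
            SomeStuff();
            log.Debug("Finished SomeStuff");

            log.Debug("Starting LotsOfStuff");
            LotsOfStuff();
            log.Debug("Finished LotsOfStuff");
        }
    }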

  

Keep in mind, this is just a quick sample; in reality, with complex systems, you’d most likely want to place such messages inside the functions themselves, and probably give them slightly more relevant information. Even better is if you already have decent debug logging set up across your solution, in which case you get a lot of this debugging for free!

 

I recompile my solution, check the trace again, and find that the log4net messages have now joined the other trace messages, adding a few more lines in between the page load steps, as shown here. It’s important to note that the log4net messages now “break up” the step between Begin Load and End Load, as the trace counter simply measures from the last event, which log4net is now hooked into.

 

This gives me an overview to work with: QuickStuff takes roughly 0.1 seconds to finish, SomeStuff roughly 2 seconds, and LotsOfStuff is the winner at 5 seconds. Using these debug messages I can drill further down with more detailed messages to find the exact source of the "problem", and thus know exactly where I need to optimize code and/or queries to make the page load quicker.

 

By default, the ASP.NET Trace is disabled, and a lot of people tend to keep it that way, either by choice or by accident. To turn the feature on, you can add Trace=”true” to the page declaration of each individual page, but this creates some issues, as you really don’t want to accidentally push the trace out to a production site! I’ve found that a more generic, “on demand” way to enable the trace is to add a querystring check in global.asax. Although, don’t put this in production either! :)
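For reference, the per-page version is just an attribute in the page directive, in addition to whatever attributes the page already has:

    <%@ Page Language="C#" Trace="true" %>

As for the querystring approach, the original snippet isn’t included here, but a minimal sketch of such a check, assuming Application_BeginRequest and a trace=1 querystring value, could look like this:

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Enable the ASP.NET trace for this request only when ?trace=1 is appended to the url
        if (HttpContext.Current.Request.QueryString["trace"] == "1")
        {
            HttpContext.Current.Trace.IsEnabled = true;
        }
    }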

 

Doing something like this in global.asax will make the trace appear when you append ?trace=1 to whichever URL you are working on, provided it’s not overridden in the page declaration. It’s worth noting that working with AJAX will sometimes mess up your trace; I tend to disable it temporarily if that’s the case – however, if I can’t work around it, I add a FileAppender to the log4net configuration and check the delays between the entries there manually.
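If I do end up going the FileAppender route, that’s just another appender entry in the log4net configuration; a sketch with millisecond timestamps in the pattern (file path and pattern are my own choices), so the delays between entries are easy to read off:

    <appender name="DebugFileAppender" type="log4net.Appender.FileAppender">
      <file value="App_Data\trace-debug.log" />
      <appendToFile value="true" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date{HH:mm:ss,fff} %logger - %message%newline" />
      </layout>
    </appender>

Plus an appender-ref pointing at it under the root element, so messages actually get routed to the file.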

 

The last part I’d like to mention is the “page size” problem when it comes to performance. An issue which is more or less specific to ASP.NET WebForms is the added overhead of the viewstate. The Control Tree information in the trace lets you see how much of the page is viewstate and how much is actual rendering.

 

I’ve adjusted my sample webpage to have two labels, one of which has EnableViewState=”false”. Both have had their Text properties set to Hello World! repeated 100,000 times for over-the-top emphasis.
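The markup and code-behind aren’t shown here, but a sketch of the setup could be as simple as this (control IDs are my own placeholders):

    <asp:Label ID="LabelWithViewState" runat="server" />
    <asp:Label ID="LabelWithoutViewState" runat="server" EnableViewState="false" />

And in Page_Load:

    // StringBuilder lives in System.Text
    var text = new StringBuilder();
    for (var i = 0; i < 100000; i++)
    {
        text.Append("Hello World!");
    }

    // Setting Text here means the first label also stores the string in viewstate
    LabelWithViewState.Text = text.ToString();
    LabelWithoutViewState.Text = text.ToString();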

 

Here you can see the exact control hierarchy of all the controls and text on the page: the total render size is 4 MB, with each of the labels containing 1.2 MB of text, and the label with viewstate enabled carrying another 1.6 MB of viewstate on top of that.

 

Although this is a very far-out example, it’s important to note that the viewstate of the label here is about 33% larger than the actual rendered label, and the client has to download both, increasing the total download by roughly 133% compared to the non-viewstate label. Also keep in mind that on postbacks, the client has to post all of that viewstate back to the server, so if you find yourself posting huge amounts of data for relatively simple operations and you’re experiencing performance issues, you might want to refactor or restructure your page.

 

A few obvious things I feel are worth pointing out as a result of this: if you have WebForm pages which don’t use postbacks at all, keep viewstate off! You can disable viewstate at the page level in the page declaration with EnableViewState=”false”, and the same goes for any individual controls you might have.
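That is, something along these lines in the page directive or on the control itself (the control name is just a placeholder):

    <%@ Page Language="C#" EnableViewState="false" %>

    <asp:Label ID="SomeLabel" runat="server" EnableViewState="false" />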

 

Furthermore, when binding complex objects, it’s ideal for performance to make sure you only bind objects with the properties you really need. If you’re using O/RMs in particular, try to make wrapper classes of your objects that are as simple as possible, to ensure you don’t keep pushing data you don’t need to the client when the page is rendered, and back to your server on a potential postback.
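As a sketch of the idea, with made-up names for illustration: rather than binding full O/RM entities to, say, a repeater, project them into a small wrapper class that only carries what the page actually renders.

    // Hypothetical O/RM entity carrying far more data than the list needs
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Address { get; set; }
        public byte[] ProfileImage { get; set; }
    }

    // Lightweight wrapper with only the properties the page binds to
    public class CustomerListItem
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Project before binding (requires System.Linq), so the extra data never
    // reaches the rendered page or the viewstate
    CustomerRepeater.DataSource = customers
        .Select(c => new CustomerListItem { Id = c.Id, Name = c.Name })
        .ToList();
    CustomerRepeater.DataBind();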