ASP.NET Web Performance Trace Debugging with Log4net

I’ve recently spent a fair bit of time investigating performance and pinpointing errors in web applications where performance is a priority – which, to be fair, really should be every web application out there. Over the course of my performance adventures, I’ve returned to something I rarely see people use to the extent they could and should: the ASP.NET Trace.

 

For those not familiar with it, the trace shows information about the current page request – pretty much everything conceivable about the request itself, plus server information. Most relevant for performance, it provides load times for each step of the request and a view of the control hierarchy, including render size and viewstate size.

 

I’ve made a quick and very simple sample to illustrate what you can do with this, and the load time trace information on my default page looks a little something like this.

  

This table shows the load times for each of the steps, and by the looks of things, something is happening in the Page Load that takes an unusually long time!

 

To get detailed information about which specific portions of the page spend the most time delaying it from rendering, I start by adding a simple log4net configuration with a trace appender to my web.config file, and configuring it on application start.
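A minimal sketch of what that could look like in web.config – the pattern layout and the DEBUG root level are just illustrative choices, the type strings are the standard log4net ones:

    <configSections>
      <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
    </configSections>

    <log4net>
      <appender name="AspNetTraceAppender" type="log4net.Appender.AspNetTraceAppender">
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%logger - %message" />
        </layout>
      </appender>
      <root>
        <level value="DEBUG" />
        <appender-ref ref="AspNetTraceAppender" />
      </root>
    </log4net>

The “configuring it on application start” part is then a single call to log4net.Config.XmlConfigurator.Configure() from the Application_Start event in global.asax.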

 

My Page Load consists of three simple calls. I want to figure out which of them are the most painful, so I wrap each of them with debug messages for testing, like this.
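A sketch of the pattern, assuming a static ILog field on the page class (the message texts are of course up to you):

    private static readonly log4net.ILog log =
        log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    protected void Page_Load(object sender, EventArgs e)
    {
        log.Debug("Starting QuickStuff");
        QuickStuff();
        log.Debug("Finished QuickStuff");

        log.Debug("Starting SomeStuff");
        SomeStuff();
        log.Debug("Finished SomeStuff");

        log.Debug("Starting LotsOfStuff");
        LotsOfStuff();
        log.Debug("Finished LotsOfStuff");
    }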

  

Keep in mind, this is just a quick sample; in reality, with complex systems, you’d most likely want to place such messages inside the functions, and probably give them slightly more relevant information. Even better, if you already have a decent debug setup across your solution, you get a lot of free debugging here!

 

I recompile my solution, check the trace again, and find that the log4net messages have now joined the other trace messages, adding a few more lines in between the page load events, as shown here. It’s important to note that the log4net messages now “break up” the step between Begin Load and End Load, as the trace counter simply measures from the last event – which log4net is now hooked into.

 

This gives me an overview to work with: QuickStuff takes roughly 0.1 seconds to finish, SomeStuff roughly 2 seconds, and LotsOfStuff is the winner with 5 seconds. Using these debug messages I can drill further down with more detailed messages to find the exact source of the “problem”, and thus know exactly where I need to optimize code and/or queries to make the page load quicker.

 

By default, the ASP.NET Trace is disabled, and a lot of people tend to keep it that way, either by choice or by accident. To turn on this feature, you can add Trace=”true” to the page declaration of each individual page, but this does create some issues, as you really don’t want to accidentally push the trace out to a production site! I’ve found that a more generic, “on demand” way to enable the trace is to add a querystring check in global.asax. Although, don’t put this in production either! :)
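One way to sketch that querystring check – here in Application_BeginRequest, only reacting to ?trace=1:

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Turn the trace on for this request only when ?trace=1 is appended to the URL.
        if (HttpContext.Current.Request.QueryString["trace"] == "1")
        {
            HttpContext.Current.Trace.IsEnabled = true;
        }
    }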

 

Doing something like this in global.asax will make the trace appear when you append ?trace=1 to whichever URL you are working on, provided it’s not overridden in the page declaration. It’s worth noting that working with Ajax will sometimes mess up your trace; I tend to disable it temporarily when that’s the case. If I can’t work around it, I add a FileAppender to the log4net configuration and check the delays between the entries there manually.

 

The last part I’d like to mention is the “page size” problem when it comes to performance. An issue more or less specific to ASP.NET WebForms is the overhead added by the viewstate. Using the Control Tree information in the trace lets you see how much of the page is viewstate, and how much is actual rendering.

 

I’ve adjusted my sample web page to have two labels, one of which has EnableViewState=”false”. Both have had their Text properties set to Hello World! repeated 100,000 times for over-the-top emphasis.
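The code-behind for that is nothing fancy – a sketch with hypothetical label IDs, where StaticLabel is the one with EnableViewState=”false” in the markup:

    protected void Page_Load(object sender, EventArgs e)
    {
        var text = new System.Text.StringBuilder();
        for (int i = 0; i < 100000; i++)
        {
            text.Append("Hello World!");
        }

        NormalLabel.Text = text.ToString();   // viewstate enabled (the default)
        StaticLabel.Text = text.ToString();   // EnableViewState="false" in the markup
    }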

 

Here you can see the exact control hierarchy of all the controls and text on the page. The total render size is 4 MB, with each of the labels holding 1.2 MB of text, and the label with viewstate enabled carrying another 1.6 MB of viewstate on top of that.

 

Although this is a very far-out example, it’s important to note that the viewstate of the label here is about 33% larger than the rendered label itself, and the client has to download both, increasing the total download compared to a non-viewstate label by roughly 133%. Also keep in mind that on postbacks, the client has to post all of that viewstate back to the server, so if huge amounts of data are being posted for relatively simple operations and you’re experiencing performance issues, you might want to refactor or restructure your page.

 

A few obvious things are worth pointing out as a result of this: if you have WebForms pages which don’t use postbacks at all, keep viewstate off! You can disable viewstate at the page level in the page declaration with EnableViewState=”false”, and the same goes for any individual controls you might have.
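In markup that amounts to something like this (the remaining page directive attributes are left out of the sketch):

    <%@ Page Language="C#" EnableViewState="false" %>

    <asp:Label ID="StaticLabel" runat="server" EnableViewState="false" />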

 

Furthermore, when binding complex objects, it’s ideal for performance to make sure you only bind objects with the properties you really need. If you’re using O/RMs in particular, try to bind the simplest possible wrapper classes around your objects, so that you don’t keep pushing data you don’t need at all – both to the client when the page is rendered, and back to your server on a potential postback.
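A minimal sketch of that idea – the entity, repository and grid names are hypothetical, and System.Linq is assumed to be imported:

    // Slim wrapper holding only what the page actually renders.
    public class CustomerRow
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // In the code-behind: bind the wrapper instead of the full O/RM entity.
    CustomerGrid.DataSource = customerRepository.GetAll()
        .Select(c => new CustomerRow { Id = c.Id, Name = c.Name })
        .ToList();
    CustomerGrid.DataBind();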

Logging with log4net

I’ve decided to write up a small post regarding logging in C#/ASP.NET applications, as a fair few people over the years have had questions and trouble regarding how and when they should log information, and what they should include in their logs.

 

First off, the disclaimer! These are my internal rules and thoughts based on my experiences and my personal preferences when it comes to logging. There are most likely other (and hopefully better) ways of implementing and maintaining a healthy level of logging. In this post I’m just trying to write down some of my basic guidelines and the thoughts behind them.

 

The logging implementation of choice for all my applications has always been log4net by Apache Logging Services. The simplicity, power and flexibility of the framework cover all the requirements I’d want from such a framework nicely, without me having to put any significant effort into it.

 

Setup and Installation of log4net is documented thoroughly at the official homepage http://logging.apache.org/log4net/index.html – I’ll skip that part and head to how to actually use logging in various applications.

 

Logging Goals

The main objective of logging at a debug level should be to ensure that you have enough information to troubleshoot virtually any problem happening in your production systems, even if there isn’t an exception present. Having enough information to accurately reproduce a bug or scenario in a controlled test environment will drastically reduce the time it takes to fix things, as opposed to having to look at the problem and guess what went wrong.

 

Logging at an exception level should be aimed at making sure that critical exceptions are distributed to the right people as soon as possible, for a quick response to whatever needs fixing. A prerequisite for this to work is to have decent exception handling present in the system.

 

What this logging should *not* be, is a tool to use for business-related tasks. If you need to use logs generated by log4net as the main source to report anything or solve any business-related queries that aren’t a result of bugs or flaws in the system, you’re much better off writing custom logic and storing the data some place accessible through your reporting or support systems.

 

log4net Thresholds

Every message generated by log4net has its own threshold value. These are, in order from lowest to highest: Debug, Info, Warn, Error, Fatal. A typical setup will route different levels to different handlers – common usage tends to be logging everything to a local text file, while sending error and fatal messages by mail. Setting a threshold in the configuration will let through all messages at that threshold and above.
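As an illustrative sketch of that routing, with the appender names being arbitrary:

    <root>
      <level value="DEBUG" />
      <appender-ref ref="RollingFileAppender" />
      <appender-ref ref="SmtpAppender" />
    </root>

    <appender name="SmtpAppender" type="log4net.Appender.SmtpAppender">
      <!-- Only Error and Fatal messages get past this appender's threshold. -->
      <threshold value="ERROR" />
      <!-- The mail settings (to/from/smtpHost and so on) are left out of this sketch. -->
    </appender>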

 

Here’s a relatively short overview of how I use the various thresholds, with examples. It’s important to note that the intended use of the thresholds must reflect the appenders used. If you want developers to be notified immediately when the system dies horribly, you want to make sure that only those messages are logged at the highest level, and that the instant-notification appender only catches messages at that level.

 

DEBUG

This is the lowest level, and it’s primarily used for what the name says: debug information. These should be the messages with the diagnostic information needed to recreate what is happening during a given operation. They will also be the messages you’ll most likely end up having the most of in your system. As such, it’s important not to go overboard with the number of these messages; all of them should have a good reason for being included.

 

If I have a function which will add a customer to my system, I would start the function with a debug call saying which parameters were passed into the function. If, at a later stage in the function, there happens to be some complex logic related to which type of customer I’m going to add, with separate creation paths based on the result, I will add another debug message with the reasoning and result of the selection, to show which code path was taken. At the end of the function, where the customer is saved to the database, another debug message should be fired off on successfully creating the customer, including the primary key of the Customer table, to confirm that everything worked out.
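A sketch of that pattern – the customer types, repository and creation methods are made up for illustration, and log is again a static ILog field:

    public void AddCustomer(string name, string email, CustomerType type)
    {
        log.DebugFormat("AddCustomer called with name={0}, email={1}, type={2}", name, email, type);

        Customer customer;
        if (type == CustomerType.Corporate)
        {
            log.Debug("Customer evaluated as Corporate, taking the corporate creation path.");
            customer = CreateCorporateCustomer(name, email);
        }
        else
        {
            log.Debug("Customer evaluated as Private, taking the private creation path.");
            customer = CreatePrivateCustomer(name, email);
        }

        customerRepository.Save(customer);
        log.DebugFormat("Customer created successfully with Id={0}", customer.Id);
    }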

 

In the event that there is something wrong with a customer entered here, a developer diagnosing the problem can bring up the logs and trace it easily by searching for the primary key of the Customer table and checking backwards through what was done during creation, as well as getting the exact parameters used in the function. If the parameters passed or the path chosen can’t immediately explain the problem, you can easily recreate the same customer in a testing environment by calling the same function with the same parameters.

 

If you manage to cover all relatively complex and vital functions in this manner you should be off to a great start in being able to track down problems.

 

A special mention here should go to external calls such as web services, MSMQ and so on – you really want to log the incoming source data there, in addition to everything else, to ensure that you can recreate anything that might be faulty outside of your code. Always make sure you can prove that others are messing up!

 

For data access / external interfaces there’s also the option of implementing very generic debug messages using object-oriented approaches; you need to weigh the benefits of this against the amount of debug messages you will receive, and plan accordingly.

 

INFO

I don’t really use this level a whole lot. Currently I only use it for saying when applications or services are starting and stopping, as the only other reason I see for this level is to use it for “important debug messages” – but I haven’t seen the need for that just yet; the debug messages are still debug messages!

 

WARN

This is the first of the thresholds more commonly used for exceptions. I’ve found myself using it more and more for exceptions that happen without any real business implications, but which should still be looked at “at one point”™ or “when we have time”™. In addition, I also use it in cases where something probably isn’t dangerous, but should be looked at if it happens a lot.

 

An example of using this in an ASP.NET application could be handling a RequestValidation exception at a generic level, which typically comes from someone accidentally entering HTML or script in some form before submitting the page. This isn’t really something to be too worried about, but in the case of an actual attack, it’s worth having in the logs when you skim through them at the Warn level.
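A sketch of that generic handling in global.asax – log is assumed to be a static ILog field, and the redirect target is hypothetical:

    void Application_Error(object sender, EventArgs e)
    {
        var exception = Server.GetLastError();

        // Markup or script posted into a form field trips request validation -
        // worth a note at Warn, but hardly an Error.
        if (exception != null && exception.GetBaseException() is HttpRequestValidationException)
        {
            log.Warn("Request validation rejected input on " + Request.RawUrl, exception);
            Server.ClearError();
            Response.Redirect("~/InvalidInput.aspx", false);
        }
    }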

 

It’s important to only add Warn messages when you actually might want to look at them. If you’re parsing some optional variable which might fail most of the time, and using a try/catch to ensure that everything goes smoothly regardless, you don’t want to log that at all.

 

A slight note here: the level is WARN, not WARNING. If you happen to add WARNING as a threshold, log4net will be confused and revert to no threshold, giving you all the Debug and Info messages as well.
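In other words, what you want in the appender configuration is:

    <threshold value="WARN" />

and not value=”WARNING”.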

 

ERROR

The Error level is the main way to handle exceptions. A rule of thumb here is that errors which will have some sort of business impact if ignored, or system errors which will need to be fixed, should be put in this category.

 

A typical example of this for an ASP.NET application would be to add a log4net call to the Application_Error event, for exceptions that weren’t handled better elsewhere, where the user typically ends up on a custom error page. This is behavior from the system which should be fixed; as such, an error message is warranted.

 

Make sure that you include the actual exception in the error-logging call, along with an informative error message containing as much information as you can get hold of. Seeing how these calls usually happen from within catch blocks, be mindful that you gather this information defensively, as you don’t want to accidentally trigger a null-reference exception while doing a logging call.
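Continuing the Application_Error sketch from the WARN section, the catch-all Error logging could look something like this – again assuming a static ILog field named log, and with the extra context gathered defensively:

    void Application_Error(object sender, EventArgs e)
    {
        var exception = Server.GetLastError();
        if (exception == null)
        {
            return;
        }

        // Gather extra context defensively so the logging call itself can never throw.
        string url = "(unknown url)";
        string user = "(anonymous)";
        try
        {
            url = Request.RawUrl;
            if (User != null && User.Identity.IsAuthenticated)
            {
                user = User.Identity.Name;
            }
        }
        catch
        {
            // Request/User may not be available for every kind of failure - log what we have.
        }

        // Passing the exception object along keeps the full stack trace in the log entry.
        log.Error(string.Format("Unhandled exception for user '{0}' on '{1}'", user, url), exception);
    }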

 

Keep in mind that as long as you include the exception, you will maintain a stack-trace in the message, so you don’t need to include detailed information which is already included in a standard stack-trace.

 

FATAL

The highest level is the one you ideally don’t want to see or use at all. I reserve this for critical errors resulting in the business or systems in general not being able to operate at all, or other errors requiring immediate attention.

 

I feel it’s extremely important to distinguish between the Fatal and Error levels in business terms. If you’re an e-commerce site and you get errors while trying to save a new customer, it’s not quite on the same level as being completely unable to connect to the database containing all customer information. Receiving a “Shutdown in Progress” from the SQL Server would be quite Fatal, whereas “String or binary data would be truncated” is sad, but you probably won’t get too many developers cutting their lunch short to fix it.

 

In short, use this for exceptions indicating that the system as a whole is threatened, business-critical services stopping or failing to start, or other situations where you would consider permanently relocating to Mexico if the problem isn’t detected within 5 minutes and resolved within 30.

 

Threshold Summary

It’s very important to know the intended use of each level and follow the guidelines set by your company as best you can. When you’re writing code, keep in mind that debug messages will help immensely when you discover an issue in production. Even if you feel it’s nice, safe code executing, you never know what the next person touching your code will do, and properly placed debug messages across the application will help both of you when you try to figure out what “he” obviously did wrong.

 

When you do exception handling, think of all the various exceptions that could end up in your catch block, catch them individually, and log each one according to the implications of that specific exception going unhandled. If you know you won’t need to worry about the exception, don’t log it as an Error!
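A sketch of that line of thinking – the service, batch and the specific exception types are just examples:

    try
    {
        orderImportService.ImportBatch(batch);
    }
    catch (FileNotFoundException ex)
    {
        // The nightly job retries this on its own - annoying, but not an Error.
        log.Warn("Order batch file was missing, leaving it for the next run.", ex);
    }
    catch (SqlException ex)
    {
        // Orders are lost if this is swallowed silently - clear business impact.
        log.Error("Database failure while importing the order batch.", ex);
        throw;
    }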

 

log4net Appenders

How you actually deal with logging in your code will vary greatly depending on the application, setup, infrastructure and other factors. It’s also important to point out that you can have multiple appenders set up with the same criteria, so you can send the messages to several places if you want to. And with a bit of work it’s very simple to create your own custom appenders, if you want that SmsMessageAppender for the fatal errors, for instance.

 

Because applications differ so much, I don’t really have one overall recommendation for how to do things, just a few hints and tips on using the various common appenders.

 

The default appenders provided by log4net are listed at http://logging.apache.org/log4net/release/sdk/log4net.Appender.html, with examples of how to configure them at http://logging.apache.org/log4net/release/config-examples.html – I recommend browsing through the latter to see which ones seem most reasonable.

 

The commonly used appenders include FileAppender, RollingFileAppender, SmtpAppender, AdoNetAppender and, in debug environments, ColoredConsoleAppender or AspNetTraceAppender, depending on the application.

 

FileAppenders

The FileAppender and RollingFileAppender are probably amongst the most common appenders used for Debug messages. They simply add messages to a file, with the option to roll over on a given size if you specify a RollingFileAppender.
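A typical RollingFileAppender configuration could look something like this – the path, sizes and pattern are just example values:

    <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="App_Data\application.log" />
      <appendToFile value="true" />
      <rollingStyle value="Size" />
      <maximumFileSize value="5MB" />
      <maxSizeRollBackups value="10" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
      </layout>
    </appender>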

 

When you use these, you need to make sure that the application running log4net has write access to the path/folder you specify as the log directory – or that the IIS/ASP.NET process does, if it’s a web application. It’s also important to make sure you don’t put the log file somewhere accessible from the outside if you decide to use this on your web server.

 

It’s also common to add these appenders in test/development environments, as they’re quite helpful alongside a tail tool such as BareTail. Adding color-coding to BareTail on a second monitor and pointing it at your log file is an excellent way to see what’s happening behind the scenes when you’re doing things from the front-end.

 

It’s also nice to note that the RollingFileAppender has options which will let you roll over to new log files based on dates or hours, so you’re able to maintain a separate log-file per day/hour to simplify searches through the files.

 

SmtpAppender

This appender is probably the most used Error-threshold appender of them all. Sending an e-mail whenever something hits the threshold is quite useful and a decent way to notify developers that something is wrong.

 

A few notes on the BufferSize parameter: here it specifies how many messages from *before* the actual error should be included. This means that when you get an error message, the mail will also include the debug messages that happened leading up to the error – which makes it even more important to write decent, descriptive debug messages if you intend to use the SmtpAppender.
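A sketch of an SmtpAppender set up that way – the addresses and host are placeholders; with lossy set to true, the buffered messages are only sent once the evaluator sees an Error or worse:

    <appender name="SmtpAppender" type="log4net.Appender.SmtpAppender">
      <to value="devteam@example.com" />
      <from value="logging@example.com" />
      <subject value="Error in production" />
      <smtpHost value="mail.example.com" />
      <bufferSize value="20" />
      <lossy value="true" />
      <evaluator type="log4net.Core.LevelEvaluator">
        <threshold value="ERROR" />
      </evaluator>
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %-5level %logger - %message%newline" />
      </layout>
    </appender>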

 

AdoNetAppender

This is the appender I’ve seen most people have problems using. It logs the messages at or above the given threshold to a database, nothing more, but it’s the most configuration-heavy appender out there.

 

There are two main causes of problems here. The first is the specific configuration of every field you need to insert into: if a type or size is off, you won’t see anything enter the database at all – and you won’t get any sort of notification of this unless you have turned on log4net’s internal debugging and specified an output for it. I recommend checking out “How do I enable log4net internal debugging?” in the FAQ at http://logging.apache.org/log4net/release/faq.html, writing the output to a file, and then checking that log for anything wrong with the setup.
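For reference, turning on that internal debugging amounts to something like this in the config file – the output path is just an example:

    <appSettings>
      <add key="log4net.Internal.Debug" value="true" />
    </appSettings>

    <system.diagnostics>
      <trace autoflush="true">
        <listeners>
          <add name="log4netDebug"
               type="System.Diagnostics.TextWriterTraceListener"
               initializeData="C:\logs\log4net-internal.txt" />
        </listeners>
      </trace>
    </system.diagnostics>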

 

The second problem is mainly due to the fact that BufferSize doesn’t mean the same thing as it does for the SmtpAppender. For the AdoNetAppender, it specifies how many messages the buffer must hold before the batch is actually written to the database. If you’re testing that the appender works at all, you want to set this to 1 so that messages are written as they come.

 

There are some obvious drawbacks when it comes to using this appender, namely that you rely on a database being up in order to write entries, and it’s much too easy and tempting to use the production database to store the logging data. Depending on the amount of logging data you expect, you should be very careful when deciding whether or not to use this appender, and extremely careful with the threshold you decide to set for it. Unless you really need the option to run SQL queries against the messages you have stored away, or need to process them further, other appenders will most likely cover your needs better.

 

Other Appenders

There’s a variety of other appenders for the different scenarios you might encounter. When I’m debugging services as console applications, I always include the ColoredConsoleAppender to give me an overview of what’s happening on the fly.

 

For ASP.NET pages during development, you can simply enable the trace and include the AspNetTraceAppender to add the log4net messages to the ASP.NET trace as well. If you followed the debug pattern described under the thresholds, all your debug messages will serve their purpose nicely when you double-check that the input values you provided resemble what’s actually being passed into the lower layers.

 

The other appenders also have their uses, but are less common. The Remoting/UDP/network appenders come in handy if you have a centralized logging server, for instance, and you could most likely anger your administrator quite a bit by adding an EventLogAppender with a Debug threshold to the web.config of your production web server!

 

Custom Appenders

If you have some specific task you want done with the error messages that somehow isn’t covered here, you can easily write your own appender by creating a class that inherits from the AppenderSkeleton class. This really isn’t as painful as it might sound, so if you have something fairly custom in mind, I recommend trying it out.

 

One use could be if you have your own task management system: as errors come in, an appender could automatically transform each error message into a task to be resolved.
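A minimal sketch of such an appender – TaskTracker here stands in for whatever your own task system’s API happens to look like:

    using log4net.Appender;
    using log4net.Core;

    public class TaskTrackerAppender : AppenderSkeleton
    {
        protected override void Append(LoggingEvent loggingEvent)
        {
            // RenderLoggingEvent runs the configured layout over the event.
            string message = RenderLoggingEvent(loggingEvent);

            // Hand the message over to the (hypothetical) task management system.
            TaskTracker.CreateTask("Error logged: " + loggingEvent.LoggerName, message);
        }
    }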

 

Appender Summary

When deciding which logging approach and appenders to go for, you want to gather information regarding how/where the applications will be set up, what the infrastructure is like and how the overall logging is implemented in the applications.

 

As mentioned earlier, a simple web application can go for a RollingFileAppender at the Debug threshold plus an SmtpAppender at the Error threshold. If things get a bit more complex and you have load balancing, multiple servers, multiple services and multiple applications, a good approach could be a centralized application server receiving all the log messages and processing them further.

 

If you have an extremely business-critical service that needs to be running at all times, you could go with a custom-coded appender which, at the Fatal threshold, will not only send email to the developers, but also send SMSes to certain people at night – or, if it’s during work hours, perform some custom task which maxes out the volume on a PC in the office and starts playing “Crazy Train”, to ensure that everyone knows there’s tomfoolery afoot!

 

Summary

Having proper logging in place, with a clearly defined logging strategy in your applications, will improve internal testing during development as well as your ability to troubleshoot issues in production systems whenever bugs manage to sneak in.

 

Setting the right standards for when to use the available log thresholds, alongside the right appenders and proper exception handling, will ensure that any errors occurring in production environments can be found and diagnosed quickly from the logs.

 

In other news, I realize I wrote a bit more than I intended to – hope someone found it slightly useful at least!