You know that they are awful, so I won’t elaborate on that point, but what about memory leaks in the .NET Framework?
We had an issue at work with our IIS servers going crazy and shutting down after a few days of operation. The sysadmin investigated and attributed it to excessive memory usage, with w3wp.exe reaching memory consumption close to the theoretical maximum for a 32-bit process. The problem was shelved for a future investigation since there was no time to resolve it then.
At the same time, I was trying to solve some performance issues on my own in the company’s flagship product. I tried using Visual Studio’s profiler (we have the Ultimate edition, purchased as part of our Microsoft partnership), but the results were disappointing. While I could profile the (web) application’s CPU usage, there was no way I could get per-method timings; no matter how many settings I changed and how many blog and forum posts I followed, I couldn’t make it work (something about an .axd).
I proceeded to download a trial of JetBrains’ performance profiler and I was back in business (it makes me wonder why Microsoft makes things so complicated at times; enterprise doesn’t necessarily have to mean complicated). It’s a good product, intuitive and easy to use. I’d totally recommend it for all your profiling needs (and keep in mind that this is not a paid post; it’s just my personal opinion).
The results were eye-opening. There were two major bottlenecks in the application. One of them had to do with an expensive operation in the Data layer, which put it outside my jurisdiction (I don’t have database dev duties on this project), but I informed the DB developer responsible and the matter was solved.
The other one was trickier. You see, the application bases most of its templating logic on XSL transformations. I wouldn’t agree with this practice today, but I realise that at the time it might have been the only reasonable solution (keep in mind that this project was initially developed in classic ASP). The idea is to keep a collection of XSL templates which are rendered using XML data from the BLL.
The problem stemmed from the fact that an instance of XslCompiledTransform (the class that performs the actual transformations) was created at least once per request, then left for the garbage collector to deal with after the page life cycle had ended. But…
… XslCompiledTransform was introduced in .NET 2.0 to replace the (now deprecated) XslTransform. The way it works is that you call its Load() method, passing an XSL URI/file/content as a parameter, and it dynamically generates an assembly that performs the actual transformation. Which in turn means that a new assembly gets loaded into the current application domain every time someone loads a page. Which also means those assemblies can never be unloaded, since .NET does not allow unloading a single assembly from an AppDomain; you must unload the whole AppDomain.
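To make the failure mode concrete, here’s a minimal sketch of the kind of per-request pattern that triggers it (the class and method names are mine, not the actual code’s):

```csharp
using System.IO;
using System.Xml;
using System.Xml.Xsl;

public static class TemplateRenderer
{
    // Called once per page request. Every call to Load() compiles the
    // stylesheet again, generating a fresh assembly in the current
    // AppDomain -- and individual assemblies can never be unloaded.
    public static string Render(string xslPath, XmlDocument data)
    {
        var transform = new XslCompiledTransform();
        transform.Load(xslPath); // new dynamic assembly, every single request

        using (var writer = new StringWriter())
        {
            transform.Transform(data, null, writer);
            return writer.ToString();
        }
    }
}
```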
Not only did that cause the memory leaks (I’m afraid to imagine how many thousands of assemblies must have been generated during the application’s lifetime before IIS’s health checks kicked in and killed it), it also caused severe performance issues, since generating an assembly is never cheap, be it with CodeDOM, expression trees or even Reflection.Emit.
The solution was rather simple: cache it. By caching the resulting XslCompiledTransform in memory and reusing it, we saw roughly a 50% improvement in page load times and severely reduced CPU usage on the web servers. I created a proxy method which, given an XSL file path, creates the XSL transform if needed and otherwise returns it from memory. It also supports enabling debugging on demand and invalidates cached entries when the underlying file changes.
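Here’s a minimal sketch of what such a proxy could look like (the names are hypothetical, and I’m assuming .NET 4’s ConcurrentDictionary is available; on older runtimes a plain Dictionary behind a lock would do):

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Xml.Xsl;

public static class XslTransformCache
{
    private sealed class Entry
    {
        public XslCompiledTransform Transform;
        public DateTime LastWriteTimeUtc;
    }

    private static readonly ConcurrentDictionary<string, Entry> Cache =
        new ConcurrentDictionary<string, Entry>(StringComparer.OrdinalIgnoreCase);

    // Returns the cached transform for an XSL file, compiling it only on
    // the first request or after the file has changed on disk. A compiled
    // XslCompiledTransform is safe to share across threads once loaded.
    public static XslCompiledTransform Get(string xslPath, bool enableDebug = false)
    {
        DateTime lastWrite = File.GetLastWriteTimeUtc(xslPath);

        Entry entry;
        if (Cache.TryGetValue(xslPath, out entry) &&
            entry.LastWriteTimeUtc == lastWrite)
        {
            return entry.Transform; // cache hit: no new assembly generated
        }

        // Cache miss (or the file changed): compile once and keep it.
        // Two threads racing here just do the work twice; last one wins.
        var transform = new XslCompiledTransform(enableDebug);
        transform.Load(xslPath);

        Cache[xslPath] = new Entry { Transform = transform, LastWriteTimeUtc = lastWrite };
        return transform;
    }
}
```

Note that invalidation still compiles a fresh transform (the old assembly stays loaded until the AppDomain recycles), but that now happens only when a template actually changes, not on every page view.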
And there you have it. Memory and processor usage went way down. Then I remembered that we’re talking about XSL here, which feels like eating dirt ice cream. Actually, that would feel way better…