being new – my new blog

October 23, 2008

I have decided to move blogs, change the direction of my posts, and update more regularly.

You can find me at www.beingnew.net – please come and visit and see what you think.

Separation Of Concerns and MVC

May 4, 2008

A common idea in software engineering, and a defining principle at the root of object-oriented development, is the Separation Of Concerns principle – it seeks to encapsulate parts of an application into distinct areas of concern/responsibility.

The primary goal is to create functional parts that, when created or modified, do not affect other areas of the system.

The new MVC framework for ASP.NET seeks to address some of the inherent problems of the WebForms arena (where concerns are all piled together in pages and controls) by offering an alternative that separates a request’s areas of concern along the MVC pattern, encapsulating each area of responsibility.

Separating presentation from content is at the heart of MVC, and the ASP.NET version looks to provide a robust and flexible way of achieving this whilst providing an interface-driven, unit-testable platform to work with.
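
To make the separation concrete, here is a small framework-agnostic sketch (illustrative only – the repository and view interfaces below are hypothetical stand-ins, not the ASP.NET MVC API). The controller depends only on interfaces, so presentation and data access can change, or be replaced by fakes in unit tests, without touching it.

using System.Collections.Generic;

// Hypothetical interfaces standing in for the model and the view.
public interface IProductRepository
{
    IList<string> GetProductNames();
}

public interface IProductView
{
    void Render(IList<string> productNames);
}

// The controller co-ordinates model and view without knowing their concrete
// types, which is what makes it unit testable in isolation.
public class ProductController
{
    private readonly IProductRepository repository;
    private readonly IProductView view;

    public ProductController(IProductRepository repository, IProductView view)
    {
        this.repository = repository;
        this.view = view;
    }

    public void List()
    {
        view.Render(repository.GetProductNames());
    }
}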

I’m working with the Preview 2 release of the MVC framework and will be posting about MVC in the near future as we get to grips with it in the new production applications we are building.

We are looking forward to developing with it, and a large number of respected bloggers and developers seem to be doing the same – the ASP.NET tide seems to be turning towards custom development on lightweight but sound foundations, and the platform is much better for it.

Feature bombs

April 4, 2008

A devious plot by the CIA to sell bug-ridden hardware and software to the Soviet Union contributed to the end of the cold war in dramatic fashion:

In January 1982, President Ronald Reagan approved a CIA plan to sabotage the economy of the Soviet Union through covert transfers of technology that contained hidden malfunctions, including software that later triggered a huge explosion in a Siberian natural gas pipeline

This blast was so big it could be seen by U.S. satellites, and it shook the Soviets in more than explosive terms once they learned that they had been buying tampered goods:

In time the Soviets came to understand that they had been stealing bogus technology, but now what were they to do? By implication, every cell of the Soviet leviathan might be infected. They had no way of knowing which equipment was sound, which was bogus. All was suspect, which was the intended endgame for the entire operation

This modern-day Trojan horse got me thinking about software projects and their applications – when you inherit an application you make many assumptions, and one of them concerns the quality of the code involved.
Everyone codes in a different way and some inherited applications are better than others, however sometimes something strikes you as really bad.
Really, really bad.

In that instance do you assume it’s isolated to that one area or do you, like the Soviets in 1982, think the worst of all areas of the application?
If it’s a large project, do you know whether you can make the changes you’ve been directed to make and trust the rest of the application to hold up under the pressure?

I’ve found that features are a big part of the problem where they have been tacked onto an existing application or parts of the code have been mutilated to incorporate them.

Instead of pipeline hardware bombs these are feature bombs silently applying pressure to the application joints and waiting for the next code change to simply explode the application and be seen by your satellites – the client / your boss / the business owner (delete as appropriate).

I’m sure feature bombs are not the only kind – does anyone have any funny, pertinent or downright crazy examples?

Our Server Application Unavailable?

February 20, 2008

Recently we had a really strange error – one of those hard-to-pinpoint, drive-you-crazy kinds of problems that has you running around in circles.

A test website with a few pages and web services running on the 2.0 framework was working very well until we upgraded it with some new functionality – an HttpHandler that runs in a few other sites.
When we called a URL with a specific extension (an RSS feed), something like “/feed/rss.xml”, we suddenly got the dreaded “Server Application Unavailable” error:

“The web application you are attempting to access on this web server is currently unavailable. Please hit the “Refresh” button in your web browser to retry your request.
Administrator Note: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. “

How can we get such a disastrous error message for just one URL?
The event log was no help as the framework did not report any issues – it just bombed out.

We checked the <httpHandlers> section of the web.config that maps the “*rss/xml” extension to a custom handler – this was correct. We checked the settings on the server and again everything seemed fine: IIS had the “.xml” extension configured to point to aspnet_isapi.dll.
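
For context, the handler involved is a standard IHttpHandler implementation along these lines – an illustrative sketch only, with a made-up class name and output rather than our actual feed code.

using System.Web;

public class RssFeedHandler : IHttpHandler
{
    // The handler keeps no per-request state, so one instance can be reused.
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/rss+xml";
        // Real feed generation would go here.
        context.Response.Write("<rss version=\"2.0\"><channel></channel></rss>");
    }
}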

After running around in ever-decreasing circles, and just when we thought we might go mad, we hit our event horizon and finally found that this aspnet_isapi.dll association was the issue.

This test site was running as a virtual directory under a test web site; in fact a number of sites operate in this way for quick development testing, and the IIS configuration for the “.xml” extension was set at the parent test web site level (so all virtual directories inherit it).

Here lay our problem – the IIS extension configuration pointed to the 1.1 framework aspnet_isapi.dll and not the 2.0 framework that the site was currently running under. D’oh!

Obvious in hindsight, but sometimes don’t these bugbears just eat up your afternoon?

Don’t assume it’s performant

January 18, 2008

When working as a contractor you inevitably inherit previous applications and databases, with the remit of maintaining and upgrading them with a list of changes.

Sometimes you find something that just doesn’t look right, and this week a colleague and I tackled just such a problem.
When running SQL Server Profiler on a server for another reason, we noticed that a certain stored procedure that effectively returned the information for one row was taking anywhere from 1.3 to 5 seconds to run – and this was a vital “data entity” that was constantly requested on the site.

Looking into the stored procedure we saw the SQL involved was something like:

SELECT t.A, t.B, t.C, t.D, ... , t.Q
, (SELECT COUNT(*) FROM t2 WITH (NOLOCK) WHERE Field = t.ID) AS ACount
, (SELECT COUNT(*) FROM t3 WITH (NOLOCK) WHERE Field = t.ID) AS OtherCount
FROM t
WHERE t.ID = @ID

The extra bit of information being queried from the other tables looked like it could be causing issues, and by requesting an estimated execution plan we saw that there was a processing bottleneck occurring on a clustered index scan of one of those tables:

(Screenshot: estimated execution plan showing the clustered index scan bottleneck)

Checking that secondary table for indexes we found that there was only one on the primary key – there wasn’t an index for the “Field” that was being queried in the above SQL.

We introduced a non-unique, non-clustered index on that field and did the appropriate testing.
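
For reference, the fix amounted to a single statement along these lines – a sketch with placeholder index, table and column names, since the real schema is obscured above.

-- Non-unique, non-clustered index on the column used by the correlated COUNT(*) subqueries
CREATE NONCLUSTERED INDEX IX_t2_Field
    ON t2 (Field);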

Once the changes were committed to the live environment we saw a dramatic change in performance:

(Screenshot: SQL Server Profiler trace showing the change in CPU usage and duration)

The change occurs halfway down the image – the CPU usage and timings have virtually disappeared and our query is now in the milliseconds.
Needless to say, the site’s speed is now sizzling!

The moral of this post?

Don’t assume that what you inherit is performant in any way. In this example the lack of a simple index on that secondary table was introducing a large-scale slowdown on the site, and as we know, best practices are not always followed!

My new mantra: Profiler is my friend on new projects.

Context switching on multiple projects

December 20, 2007

Recently I had a discussion with a client about project estimation and late delivery on recent projects, and I advised them that faults in both areas could perhaps be attributed to the cost of context switching caused by working on multiple projects at any one time.

I had recently been studying Team System engineering processes, and in the book Software Engineering with Microsoft Visual Studio Team System by Sam Guckenheimer and Juan J. Perez there is a chapter about value-up processes.
In one section of that chapter the book illustrates a famous proposal by Gerald Weinberg in his book Quality Software Management: Systems Thinking:

…he proposed a rule of thumb to compute the waste caused by project switching:

No. Simultaneous Projects    Percent of Time on Project    Loss to Context Switching
1                            100%                          0%
2                            40%                           20%
3                            20%                           40%
4                            10%                           60%
5                            5%                            75%

There is no doubt that as a developer working on multiple projects you tend to leak time resources – re-establishing the current state, where you were at, what needs doing.
But the statistics quoted show that this loss is more significant than you might initially think when you go past two projects; it becomes a very big drain indeed.

I believe that one of the integral parts of programming is the fact that you have a lot of information in your head at any one time while you shape your code – wiping this clean as if clearing a whiteboard, and then effectively having to re-draw it all again at a later date, is undoubtedly costly.
A good, well-organised developer can minimise this risk so that “writing it up again” takes minimal time, but the cost cannot be entirely removed.

In his blog post The Multi-Tasking Myth from last year, Jeff Atwood (Coding Horror) goes further, illustrating other forms of “distraction” context switching such as emails and phone calls (there is even a lovely graph!), and he summarises with:

We typically overestimate how much we’ll actually get done, and multi-tasking exaggerates our own internal biases even more. Whenever possible, avoid interruptions and avoid working on more than one project at the same time.

The short of it is that, if you are able, stick to one project at a time and spend as much continuous time on it as possible.
It keeps you far more productive.

Intellisense problem after ReSharper uninstall

December 18, 2007

After uninstalling JetBrains’ ReSharper tool I discovered that IntelliSense wasn’t working, and after a little digging around I found that it was turned off in the Visual Studio options.
The ReSharper tool does not reinstate those options when it uninstalls (I am assuming it turns them off when installed to disable native VS IntelliSense and provide its own).

I activated it again very simply via
Tools -> Options -> Text Editor -> C#
Under the “Statement Completion” option I ticked the two boxes: “Auto list members” and “Parameter Information”.

Back in business!
I’m about to install DevExpress’ CodeRush and I’m told it is very light and a single install supports all VS versions, including 2008.
It will be interesting to compare this to what I thought was quite a clunky and performance-draining add-on (ReSharper) that was sometimes very intrusive with auto-completion. I can see it was a very good product; it just wasn’t for me and didn’t suit my style, hence the uninstall.

How to schedule a Team Build

December 13, 2007

Team builds in TFS 2005 do not have a built-in automated scheduler.
Yes, that’s right, you read correctly – the tool billed as a continuous integration platform does not provide an easy way to schedule build types outside of the VS environment – that’s crazy!

However, there is a command line tool “TFSBuild.exe” that you can use to start a build.
Therefore you can manually create a Windows scheduled task to kick the build off at set times, and here is a quick step-by-step guide if you are having problems with this:

  1. In the source control folder for the team build type, create a .bat file
  2. Insert the following in the .bat file and check it in:
    TFSBuild start http://192.168.1.x:8080 ProjectName “Build Type Name”
  3. Log onto the TFS box and get the latest version of the team build type from source control
  4. Go to Start -> Programs -> Accessories -> System Tools -> Scheduled Tasks
  5. Set up a new scheduled task to run the .bat file
  6. Ensure that the “Start in” path is set to “C:\Program Files\Microsoft Visual Studio 8\Common7\IDE”

In TFS 2008 this has been addressed, and options to schedule builds are available when you right-click the build type in the Builds menu.

AppDomain Events and HttpModules

November 9, 2007

In a previous post I discussed the cause of an error: A process serving application pool terminated unexpectedly.
I realised that I had a need to catch unhandled 2.0 exceptions and started looking at the AppDomain class.

An Application Domain serves as a boundary so that the runtime can isolate applications from one another, and using AppDomain.CurrentDomain you can retrieve the AppDomain instance that a thread is running under.

One of the events of the AppDomain class is the UnhandledException event and it is this that I could utilise to catch any unhandled exception no matter what thread it runs under.

Therefore, if I create a new HttpModule, I can provide a handler for this event and record these exceptions when it is raised.

Pooled HttpApplications

One mistake I didn’t want to make though was one that others had made – simply hooking up an event handler in the Init() method of the HttpModule as usual without taking into account multiple application instances…

The ASP.NET pipeline keeps a pool of HttpApplication objects ready to serve requests and each one of these objects has a collection of HttpModules (and thus each HttpModule provides event handlers for its own HttpApplication).

Therefore, because any one AppDomain has a number of HttpApplication/HttpModule instances, I need to ensure that I only hook up the AppDomain event once.

I can achieve this by making the HttpModule thread/multiple instance aware using static members:

using System;
using System.Threading;
using System.Web;

public class UnhandledModule : IHttpModule
{
    #region static members - note, must be thread safe

    static int unhandledExceptionCount = 0;
    static object lockObject = new object();
    static bool initialized = false;

    #endregion

    public void Init(HttpApplication context)
    {
        // We only want one HttpModule instance (per AppDomain) to handle the event.
        if (!initialized)
        {
            // Take the lock so no other instance can affect the static variable.
            lock (lockObject)
            {
                if (!initialized)
                {
                    AppDomain.CurrentDomain.UnhandledException +=
                        new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);

                    initialized = true;
                }
            } // now the lock is released and the static variable is true
        }
    }

    void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
    {
        // Keep a running count and record the exception; swap the Trace call
        // for whatever logging the application uses.
        Interlocked.Increment(ref unhandledExceptionCount);
        System.Diagnostics.Trace.WriteLine(e.ExceptionObject);
    }

    public void Dispose()
    {
        // Nothing to clean up per instance; the event subscription is AppDomain-wide.
    }
}

So with the above code I have ensured that only one HttpModule instance will provide an event handler for the unhandled exception event.
I can be confident that this will not happen multiple times and bloat the memory of the application.

However I have concerns…

The idea for this was obtained from an MSDN article describing the 2.0 unhandled exceptions behaviour change.

What I am concerned about is this: because multiple HttpApplication instances are kept in a pool and managed, if any are killed off by the runtime because they are deemed unnecessary (requests become less frequent, for instance), how do I know that the single instance that provided the event handler is still alive?

We’ve tested and installed the “UnhandledModule” as described above, yet our application still errors without catching the exception. My current guess is that it is caused by no event handler existing anymore.

I’ve posed this question on the ASP.NET forums and am still figuring out an answer – do the HttpApplication instances all stay alive?
And if not, should I have code in the Dispose() method of the one instance that has the event handler, so it can release the static initialized variable?
Hopefully if I can’t find it, someone out there knows the answer.
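
For reference, the kind of Dispose-based reset I have in mind would look something like the fragment below – an unverified sketch of the idea, not a confirmed fix. It would replace the empty Dispose() in the class above and assumes an extra per-instance flag (here called subscribedByThisInstance) that is set to true inside Init() when that particular instance subscribes.

// Unverified idea: if the instance that hooked the event is disposed, unhook
// the handler and reset the static flag so a live instance can re-subscribe.
bool subscribedByThisInstance = false;   // set to true in Init() when we subscribe

public void Dispose()
{
    if (subscribedByThisInstance)
    {
        lock (lockObject)
        {
            AppDomain.CurrentDomain.UnhandledException -= CurrentDomain_UnhandledException;
            initialized = false;
            subscribedByThisInstance = false;
        }
    }
}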

A process serving application pool terminated unexpectedly

November 8, 2007

This error message appears on a third-party ASP.NET application that we have installed, and each time it happens the application seems to restart.
What’s more, we have inserted a custom HttpModule into the application that serves as a global event handler… yet this does not catch this particular error.

What is going on?

There are actually two errors we see in the System event log:

A process serving application pool ‘xxxx’ terminated unexpectedly. The process id was ‘yyyy’. The process exit code was ‘0x0’.

A process serving application pool ‘xxxx’ suffered a fatal communication error with the World Wide Web Publishing Service. The process id was ‘yyyy’. The data field contains the error number.

Both these errors refer to unhandled exceptions in .NET 2.0, where an exception is thrown outside of an ASP.NET request – on a worker thread, for example.
However, this never used to happen in previous frameworks…

In 1.0 and 1.1, any exception that occurred outside the context of a request would cause that thread to die… and that was it. The global error handler would not catch it; the exception simply died away.
In some applications this was a very bad thing – any open resources on that thread could cause memory leaks and all manner of other unintended consequences.

As such, in 2.0 the default behaviour is for the application to quit – hence these messages in the event log.

This is where we can understand why our HttpModule doesn’t report the error…

Because the exception is outside the context of an ASP.NET request, and therefore not part of the HttpApplication/HttpModule pipeline, any custom error handlers running from application events do not catch it.
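
To picture the kind of code involved, here is a small illustrative sketch (hypothetical – not taken from the third-party application) of work queued onto a thread-pool thread from within a page or service; any exception it throws has no request context to report into.

using System;
using System.Threading;

// Illustrative only: work kicked off outside the request pipeline.
public class FireAndForgetExample
{
    public static void QueueBackgroundWork()
    {
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            // An exception thrown here is not attached to any HttpContext, so
            // Application_Error and custom HttpModules never see it.  Under
            // .NET 2.0 it takes down the whole worker process, producing the
            // event log entries quoted above.
            throw new InvalidOperationException("Unhandled on a worker thread");
        });
    }
}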

So how can we catch this error and log it so we can identify the real issue behind the event log error?

I’ve been experimenting with the AppDomain events, and in my next post I will discuss how I’ve used them to trap these errors and identify where to start in resolving the problem.