Reporting when Logging only the start and end of an outage condition

by JBrooks 3. June 2009 09:33

In a previous post I showed a system that logs only the start and end of an outage condition.  To report on the outages we first need to look at the table where the exceptions get logged:

    CREATE TABLE [dbo].[Exceptions](
        [ExTime] [datetime] NULL CONSTRAINT [DF_Exceptions_ExTime]  DEFAULT (getdate()),
        [Message] [varchar](8000) NULL,
        [StackTrace] [varchar](8000) NULL
      ) ON [PRIMARY]

And a sample of the data:

        SELECT TOP 12 ExTime, Substring(Message,1,60) Message
        FROM Exceptions

                  ExTime Message
5/5/09 13:21 Outage start of error, ErrorGuid: 595f890d-c337-49
5/5/09 13:28 Outage end of error, ErrorGuid: 595f890d-c337-493f
5/11/09 18:08 dbo.processRequestAll|Thread was being aborted.
5/12/09 14:13 The actual number of records in insert did not match the exp
5/15/09 13:00 Data Feed FAILED at 5/15/2009 1:00:04 PM
5/15/09 13:00 Data Feed Error occurred at 5/15/2009 1:00:04 PM
5/22/09 18:10 dbo.saveXML|The DELETE statement conflicted 
5/26/09 11:14 The actual number of records in insert did not match the exp
6/3/09 6:39 Outage start of error, ErrorGuid: c1c430f0-9a8f-4f
6/3/09 6:41 Outage end of error, ErrorGuid: c1c430f0-9a8f-4fca
6/3/09 12:36 Outage start of error, ErrorGuid: 06de0545-d152-44
6/3/09 12:39 Outage end of error, ErrorGuid: 06de0545-d152-4438


Now, to do the reporting, we create a temp table with just the records we are interested in:

           select ExTime,
           case when charindex('start of',Message) > 0 then 'S'
           else 'E' end a,
           substring(Message,charindex('Guid',Message)+5,40) g  into #t
           from Exceptions
           where Message like 'Outage%'

The second column (badly named "a") is just a flag that says whether the row marks the Start or the End of an outage.  The third column (named "g") is the Guid for the outage event.
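The charindex/substring parsing above can be sketched outside SQL as well; here is a minimal Python equivalent (the message format is assumed from the sample data above):

```python
def parse_outage(message):
    """Split an outage log message into a start/end flag and its ErrorGuid,
    mirroring the charindex/substring logic in the SQL above."""
    flag = 'S' if 'start of' in message else 'E'
    # SQL's charindex('Guid', Message) + 5 skips past "Guid:" to the value;
    # strip() handles the space after the colon.
    guid = message[message.index('Guid') + 5:].strip()[:40]
    return flag, guid

parse_outage('Outage start of error, ErrorGuid: 595f890d-c337-49')
# ('S', '595f890d-c337-49')
```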

Now create another temp table where the guid is summarized:

        select g, max(case when a = 'S' then ExTime end) startTime,
        max(case when a = 'E' then ExTime end) endTime into #t2
        from #t
        group by g
        having count(1) = 2

Now we can report on this.

           select startTime,
           CASE DATEPART(weekday,startTime)
           WHEN 1 THEN 'Sunday'
           WHEN 2 THEN 'Monday'
           WHEN 3 THEN 'Tuesday'
           WHEN 4 THEN 'Wednesday'
           WHEN 5 THEN 'Thursday'
           WHEN 6 THEN 'Friday'
           WHEN 7 THEN 'Saturday'
           END Day,
           round((datediff(ss,startTime,endTime)+0.0)/60,2) Minutes
           from  #t2 t
           where not exists
           (     select 1
                 from #t2 tin
                 where t.startTime between tin.startTime and tin.endTime
                 and t.g <> tin.g)
          and startTime > '05/13/09'
          order by startTime

  This gives us the following report, showing the start time of each outage and how many minutes it lasted:


               StartTime Day     Minutes
5/27/09 14:38 Wednesday 9.53
5/28/09 10:06 Thursday 9.68
5/28/09 13:21 Thursday 4.52
5/28/09 23:18 Thursday 9.64
5/30/09 13:21 Saturday 3.85
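The whole pipeline (pivot the start/end rows per guid, drop unpaired guids, skip outages nested inside another outage, compute minutes) can be sketched in Python; this is a rough equivalent of the three queries above, run against made-up sample events:

```python
from datetime import datetime

def outage_report(events):
    """events: list of (time, 'S' or 'E', guid) tuples, like temp table #t.
    Returns [(startTime, minutes)] like the final report query."""
    # Summarize per guid, keeping only guids with both a start and an end
    # (the HAVING COUNT(1) = 2 step).
    raw = {}
    for t, flag, g in events:
        raw.setdefault(g, {})[flag] = t
    spans = {g: (d['S'], d['E']) for g, d in raw.items()
             if 'S' in d and 'E' in d}

    report = []
    for g, (start, end) in spans.items():
        # The NOT EXISTS step: skip an outage whose start time falls
        # inside some other outage's window.
        nested = any(s <= start <= e
                     for g2, (s, e) in spans.items() if g2 != g)
        if not nested:
            minutes = round((end - start).total_seconds() / 60, 2)
            report.append((start, minutes))
    return sorted(report)

ts = lambda s: datetime.strptime(s, '%m/%d/%y %H:%M')
events = [(ts('5/5/09 13:21'), 'S', 'guid-a'),
          (ts('5/5/09 13:28'), 'E', 'guid-a'),
          (ts('5/5/09 13:22'), 'S', 'guid-b'),  # starts inside guid-a's outage
          (ts('5/5/09 13:25'), 'E', 'guid-b'),
          (ts('6/3/09 6:39'), 'S', 'guid-c')]   # no matching end row
print(outage_report(events))  # one row: the 5/5 13:21 outage, 7.0 minutes
```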





ASP.Net | Development


SET ANSI_NULLS ON

by JBrooks 3. June 2009 09:10

We found an issue with some of our older scripts having SET ANSI_NULLS OFF, and this causes problems.  ANSI_NULLS is a database setting that determines how NULL comparisons are handled. With ANSI_NULLS ON, two NULL values are never equal to each other, because each is an unknown value. With ANSI_NULLS OFF, two separate NULL values evaluate as equal.

So if I have:

        Select *
        from A
        inner join B
        on A.Name = B.Name

With ANSI_NULLS OFF, every row where Name is null in A will match every row in B where Name is null.   I can’t see when you would ever want this.  And SET ANSI_NULLS OFF is deprecated; Microsoft has said a future version of SQL Server will treat it as an error.
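The ANSI_NULLS ON behavior can be demonstrated with any standard-SQL engine; here is a small sketch using Python's sqlite3, which always treats NULL = NULL as unknown:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE A (Name TEXT);
    CREATE TABLE B (Name TEXT);
    INSERT INTO A VALUES ('x'), (NULL);
    INSERT INTO B VALUES ('x'), (NULL);
""")
# Under ANSI NULL semantics, NULL = NULL is unknown, so the NULL rows
# in A and B never join to each other.
rows = con.execute(
    "SELECT * FROM A INNER JOIN B ON A.Name = B.Name").fetchall()
print(rows)  # [('x', 'x')] -- only the non-NULL pair matches
```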

The second part of this problem is that for stored procedures, SQL Server captures the SET ANSI_NULLS setting at the time the stored procedure is created. Whenever the stored procedure is subsequently executed, that original setting is restored and takes effect.

The third part of this problem is that when we combine our SQL scripts into one big text file, if one of the scripts has SET ANSI_NULLS OFF, then every subsequent stored proc and SQL statement is affected unless it sets the option itself.

So going forward I'm going to make sure my stored proc scripts start with:

    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO

This is generated for you automatically when you right-click a stored proc and select “Modify” or “Script Stored Procedure as…”


Development | SQL

Moving AppSettings to a Database Table.

by JBrooks 25. February 2009 11:38

In one of our ASP.Net applications we wanted to change where the AppSettings were stored from a config file to a database table.   Since this is a very large application (over 670 .cs files), the first requirement was that the majority of the changes be done with a search and replace.   In the end a reference like:

    ConfigurationManager.AppSettings["AppVersion"]

Would be replaced by:

    cApp.AppSettings["AppVersion"]

We had app settings in web.config, and the environment-specific settings were in a WebEnvironment.config file that was referenced by the web.config file.  So the appSettings section in our web.config looked like:

    <appSettings file="WebEnvironment.config">
        <add key="AppVersion" value="13.9"/>
    </appSettings>


The problem with this is that we have to do a release to change one of the settings.  Whenever we changed a setting in WebEnvironment.config, we had to give the deployment team a different file for each environment and have them rename the correct file depending on where they were deploying to.   So they had a WebEnvironment.configTEST, WebEnvironment.configQA, WebEnvironment.configPROD and WebEnvironment.configDR.  This is a little awkward and prone to errors.

We also liked the fact that if the app settings lived in a table then we could give ourselves a maintenance page where we could change the values without doing a deployment.

First I needed the table to hold the app settings.  This was simple enough:
    create table dbo.Config
    (Environment varchar(4),
    [Key] varchar(255),
    Value varchar(8000),
    Comment varchar(255),
    LastUpdated datetime default getdate())

Second, a stored proc to retrieve the values for a given environment:

    create proc dbo.GetConfig(@Environment varchar(4))
    as
    select [Key], Value
    from Config with(nolock)
    where Environment = @Environment
    union all
    select [Key], Value
    from Config with(nolock)
    where Environment = 'All'
    and [Key] not in
            (select env.[Key]
            from Config env with(nolock)
            where env.Environment = @Environment)
    order by [Key]
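The proc's override logic (environment-specific rows win over the 'All' defaults) can be sketched with Python's sqlite3; the table and data mirror the examples in this post, though the T-SQL specifics (varchar sizes, nolock) are dropped:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE Config (Environment TEXT, [Key] TEXT, Value TEXT);
    INSERT INTO Config VALUES ('All',  'AppVersion', '13.9');
    INSERT INTO Config VALUES ('All',  'key1',       'value1');
    INSERT INTO Config VALUES ('Prod', 'key1',       'value1prod');
""")

def get_config(env):
    # Environment-specific rows, plus 'All' rows not overridden
    # for this environment -- same shape as dbo.GetConfig.
    return dict(con.execute("""
        SELECT [Key], Value FROM Config WHERE Environment = ?
        UNION ALL
        SELECT [Key], Value FROM Config
        WHERE Environment = 'All'
          AND [Key] NOT IN (SELECT [Key] FROM Config WHERE Environment = ?)
    """, (env, env)).fetchall())

print(get_config('Prod'))  # key1 comes from the Prod row; AppVersion falls through from 'All'
```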


Now insert some sample data:
insert into Config
(Environment, [Key], Value, Comment)
values ('Dev','AppVersion','13.9', 'Version of the application')

insert into Config
(Environment, [Key], Value, Comment)
values ('Dev','key1','value1', 'Test data')

insert into Config
(Environment, [Key], Value, Comment)
values ('Prod','key1','value1prod', 'Test data')

Now I need my new class to hold the data; it will only be loaded once.

using System.Data;
using System.Collections.Generic;

public class cConfig
{
    public Dictionary<string, string> AppSettings = new Dictionary<string, string>();

    public cConfig(string environment)
    {
        // cApp.DAL is our data access layer; this just calls the stored proc and returns a table.
        foreach (DataRow dr in cApp.DAL.CommonData.GetConfig(environment).Rows)
            AppSettings.Add(dr["Key"].ToString(), dr["Value"].ToString());
    }
}


Now we make the data available globally within the application, defined in such a way that the cConfig object is only instantiated once, in a thread-safe way.

using System;
using System.Collections.Generic;
using System.Configuration;

public sealed class cApp
{
    // This is the only time System.Configuration.ConfigurationManager.AppSettings is called.
    // The appSetting ApplicationEnv is in machine.config and will be one of the values “Dev”, “Test”, “QA”, “Prod” or “DR”.
    static readonly cConfig _config = new cConfig(ConfigurationManager.AppSettings["ApplicationEnv"]);

    public static Dictionary<string, string> AppSettings
    {
        get { return _config.AppSettings; }
    }
}
Now I can use this anywhere in my code, for example:

    string version = cApp.AppSettings["AppVersion"];


Development | SQL

Visual Source Safe to Subversion Conversion

by JBrooks 24. February 2009 11:12
Our experience with the Visual Source Safe to Subversion conversion.

Visual Source Safe
In general, Visual Source Safe is known to have problems when the database gets large, when there are a lot of users, or when it is used over a network.

On Microsoft’s site they have the following warnings:

Many factors can cause a SourceSafe database to become corrupted. These factors include the following:
•    Power loss
•    System hangs (stops responding)
•    User terminates a long process
•    Bad network connections
•    Running out of disk space
•    Network problems
•    Operating system problems

Visual Source Safe Best Practices and Our Database:

1. Best Practice: Visual SourceSafe database should not exceed 3 to 5 GB    
Our Database: The VSS database that our source code is in is currently 25 GB.  

2. Best Practice: Use the Analyze tool shipped with Visual SourceSafe to detect and repair problems in the database structure… you can discover small problems and fix them before they become worse.   You should run it as frequently as is practical—once a week is recommended, or at a minimum, once a month.
Our Database: Analyze hasn’t been run on our database since 2004 (if ever).
During the VSS to Subversion conversion Analyze was run on a copy of the VSS database and it showed many files had corruption in their history.  We could never get it to run completely on the full copy of the database – we had to delete large sections and then run Analyze.
3. Best Practice:  Limit the number of users to increase performance and limit corruption.
Our Database:  Our files are in a single VSS database that is also used by many other groups.  It currently has 198 users defined and over 50 are still active.

After We Moved to a Remote Site - Our Servers are Now 2 Miles Away.
Each of the causes of corruption listed on the Microsoft site was much more likely to occur in the new environment we were in after moving to the remote site.

Here is what we were experiencing after we moved to the remote site:
1.    When opening a project in VSS we would consistently get “Error reading from file” messages.
2.    A one minute “get latest” operation would now take over 20 minutes.
3.    Adding files to VSS over the network from the remote site would fail.


Many different solutions were evaluated including AccuRev, CA – AllFusion Harvest, Microsoft Visual Studio Team System, Perforce, CVS, ClearCase, SourceGear, AnthillPro and others.   It is outside of the scope of this post to review the details of this evaluation, but it is worth mentioning that the final two contenders were Microsoft Visual Studio Team System and Subversion.

After the addition of some other supporting tools, nothing fit our requirements and environment as well as Subversion.  It is open source, one member of our team had used it in the past and some other areas in our company have been using it for a long time.

For a quick background on Subversion and a list of features, start with this link:

If someone wants to do more research this link is a good starting point:


We placed the requirement on ourselves that in converting from VSS to SVN all of the history of all of our files must be moved to SVN, not just the latest versions.  Had this not been a requirement then the conversion would have been little more than doing a “get latest” on VSS and then importing this to a SVN repository.

So we looked at a few tools that would convert all of the history.  None of them allowed you to take an archive of only the VSS directories that you were interested in and perform the conversion on that.  (VSS does something funny internally with the dates on archiving.)    So the conversion had to be done on a copy of the VSS (25 GB) database.

All of the conversion tools also required the source VSS database be clean of corruption before the conversion.  This turned out to be much harder and time consuming than you might think.  Running the VSS Analyze tool on a copy of the database showed hundreds of corruptions and would not run to completion without blue screening the computer it was running on.

To get around this we reduced the copy of the database by deleting the directories that we didn’t want to convert.  Unfortunately VSS reports each corruption during the deleting process, causing hundreds of message boxes that the user must mindlessly click through for the process to continue.

Once that point was reached, we used the tool VSS2SVN to create dump files that were imported into Subversion.

Our current environment consists of the following:
1. Subversion (SVN)  - source code repository (FSFS file format).  
2. Apache for Windows 2.2.9 – Subversion server.
3. TortoiseSVN - Windows client to Subversion.
4. AnkhSvn - Visual Studio plug-in.  
5. SharpSvn - library used for coding our own tools that work with Subversion.  (Example is a tool called “MakeRelease.exe” that reads a list of file names, opens those files from Subversion, and combines all of them into a big SQL file for a release.)


There was a learning curve that the developers had to go through after the conversion to Subversion.  This was driven by the different versioning models.  VSS has the Lock-Update-Unlock model of source control that does not allow for concurrent updates of a single file by multiple developers.    

In Subversion’s versioning model no files are exclusively locked (unless forced) and multiple developers can work on a single file at the same time. Once changes are completed, conflicts are identified and said developers manually merge their changes before a commit is allowed.  If there are no conflicts, updates are automatically merged.


Here is a quick introduction to Subversion (SVN) that compares it to VSS:

Here is a quick video showing you basic usage:

Here is another link that show the details of using TortoiseSVN

And a link talking about using SVN with Visual Studio:

This is SVN’s home page:

This is TortoiseSVN home page:

Free Subversion book online, very complete.

How to install Subversion with Apache server.

Vss2Svn conversion program:



Where to put the connection string

by JBrooks 24. February 2009 08:10
Machine.config is used for machine-specific items, thus the name. It never gets moved from machine to machine, so that is where you put your connection strings.  Doing this, you will never have the horror of realizing your QA team has been hammering your production data because someone forgot to account for the web.config when it was moved down from prod to QA to back out a bad release (or whatever).  And all your web.configs will be the same and easy to promote when they do change.

You can also add a value to the machine.config so that you can tell which environment you are in at runtime.

So in my machine.config I added:
    <add key="ADEDevelopmentEnvironment" value="Yes"/>

Then in my code I have:
If Not String.IsNullOrEmpty(ConfigurationManager.AppSettings("ADEDevelopmentEnvironment")) Then
    Me.Label1.Text = "You are STILL in Dev?!!!"
Else
    Me.Label1.Text = "You are on production"
End If


ASP.Net | Development

Connection Pool Misconceptions

by JBrooks 12. February 2009 08:38
Below is the SQL to show how many connections are being used by my application.

declare @Database varchar(30),
    @LoginName varchar(30)

set @Database = 'MyDB'
set @LoginName = 'MyLoginName'

select max(blocked) hasBlocking,
    count(1) [Count],
    sum(open_tran) open_trans,
    max(waittime) maxWaitTime
from master.dbo.sysprocesses (nolock)
where db_name(dbid) = @Database and convert(sysname, rtrim(loginame)) = @LoginName

We needed this because there was a problem where our application would use up 100 connections (the default pool size) and then bomb out.  The root cause was two misconceptions that a developer had.

First misconception:
Calling Close() on a DataReader closes the Connection.  It doesn't: it only releases the Connection object from the DataReader.  You need to call Close() on the Connection object itself.  You can verify this by checking that the Connection object's State is still "Open" after calling Close() on the DataReader.

Second misconception:
You should call Close or Dispose on a Connection, a DataReader, or any other managed object in the Finalize method of your class.  You should not: in a finalizer, you should only release unmanaged resources that your class owns directly. If your class does not own any unmanaged resources, do not include a Finalize method in your class definition at all. For more information, see:

So if these two misconceptions are cleared up in your mind, you are a lot less likely to run into this connection pool problem.



ASP.Net | Development

Checking All CheckBoxes in a GridView Using Client-Side Script

by JBrooks 11. February 2009 11:48

Given a GridView like the one below, I want to add the ability to check the check box in the header and have that event check all of the checkboxes in each row.


I did this by adding 12 lines of JavaScript and one attribute to my header template.   Below is the markup followed by the code-behind. The parts I added to this simple page are in bold.

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Test check all CheckBoxes</title>
    <script type="text/javascript" language="javascript">
        function changeAllCheckBoxes(sender) {
            var gridViewRows = GetParentElementByTagName(sender, "TABLE").rows;
            for (var i = 1; i < gridViewRows.length; ++i) {
                gridViewRows[i].cells[0].childNodes[0].checked = sender.checked;
            }
        }

        function GetParentElementByTagName(element, tagName) {
            while (element.tagName != tagName)
                element = element.parentNode;
            return element;
        }
    </script>
</head>
<body>
    <form id="form1" runat="server">
    <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
        Caption="Pick Some Technologies" BackColor="White" BorderColor="#CC9966"
        BorderStyle="None" BorderWidth="1px" CellPadding="4">
        <Columns>
            <asp:TemplateField HeaderText="Include" SortExpression="Include">
                <HeaderTemplate>
                    <asp:CheckBox ID="CheckBox1" runat="server" onclick="changeAllCheckBoxes(this)" />
                </HeaderTemplate>
                <ItemTemplate>
                    <asp:CheckBox ID="CheckBox1" runat="server" />
                </ItemTemplate>
                <ItemStyle HorizontalAlign="Center" />
            </asp:TemplateField>
            <asp:BoundField DataField="Name" HeaderText="Technology" />
        </Columns>
        <HeaderStyle BackColor="#990000" Font-Bold="True" ForeColor="#FFFFCC" />
    </asp:GridView>
    </form>
</body>
</html>

using System;
using System.Data;

public partial class test : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            DataTable dt = new DataTable();

            dt.Columns.Add("Include", typeof(Boolean));
            dt.Columns.Add("Name", typeof(String));
            dt.Rows.Add(new Object[] { false, "C#" });
            dt.Rows.Add(new Object[] { false, "ASP.Net" });
            dt.Rows.Add(new Object[] { false, "SQL Server" });
            dt.Rows.Add(new Object[] { false, "VB.Net" });

            GridView1.DataSource = dt;
            GridView1.DataBind();
        }
    }
}




ASP.Net | Development | GridView
