
SET ANSI_NULLS OFF

by JBrooks 3. June 2009 09:10

We found an issue with some of the older scripts having SET ANSI_NULLS OFF, and this causes problems.  ANSI_NULLS is a setting that determines how NULL comparisons are handled.  Under ANSI_NULLS ON, two NULL values will never be equal to each other, because each of the two separate values is unknown.  With ANSI_NULLS OFF, two separate NULL values will evaluate as equal.


So if I have:

        Select *
        from A
        inner join B
        on A.name = B.name

With ANSI_NULLS OFF, every row where NAME is null in A will match every row in B where NAME is null.  I can't see when you would ever want this.  And SET ANSI_NULLS OFF is deprecated as of SQL Server 2008; Microsoft warns that a future version will treat the setting as always ON, and setting it to OFF will then cause an error.
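To see the ANSI behavior concretely, here is a minimal sketch using Python's sqlite3 (SQLite always uses the ANSI_NULLS ON semantics, so it can demonstrate the ON case only):

```python
import sqlite3

# SQLite always uses ANSI NULL semantics (the ANSI_NULLS ON behavior):
# NULL = NULL evaluates to unknown, so NULL names never join to each other.
con = sqlite3.connect(":memory:")
con.executescript("""
    create table A(name text);
    create table B(name text);
    insert into A values ('x'), (NULL);
    insert into B values ('x'), (NULL);
""")

rows = con.execute(
    "select * from A inner join B on A.name = B.name").fetchall()
print(rows)  # only ('x', 'x') -- the NULL rows do not match each other
```

Under ANSI_NULLS OFF, the same join would also pair up the two NULL rows.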

The second part of this problem is that for stored procedures, SQL Server uses the SET ANSI_NULLS setting value from the initial creation time of the stored procedure. Whenever the stored procedure is subsequently executed, the setting of SET ANSI_NULLS is restored to its originally used value and takes effect.   

The third part of this problem is that when we combine our SQL scripts into one big text file, if one of the scripts has SET ANSI_NULLS OFF, then every subsequent stored proc and SQL statement is affected unless it defines the setting itself.


So going forward I'm going to make sure my stored proc scripts start with:
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO


This is generated for you automatically when you right click a stored proc and select “Modify” or “Script Stored Proc As…”

Tags:

Development | SQL

Log only the start and end of an outage condition

by JBrooks 22. April 2009 06:25

We have an ASP.Net page that auto-refreshes every 8 seconds using   <meta http-equiv="refresh" content="8">

This page would pull data from a linked server and whenever that link or the target server were down it would fill up our error log with an entry every 8 seconds until it came back up.   This is not very useful, so we wanted an entry for when the exceptions started and then one more entry for when it cleared.   And I wanted a way of matching off each set of start / stop events. 

This is how I did it.  First I added the following property to my page:

private bool inErrorState
{
    get { return Request.Cookies["myPageError"] != null; }
    set
    {
        if (value != inErrorState)
        {
            if (value)  // first error.
            {
                string errorGuid = Guid.NewGuid().ToString();
                cApp.db.LogException("myPageError start of error, ErrorGuid: " + errorGuid);
                Response.Cookies["myPageError"].Value = errorGuid;
                Response.Cookies["myPageError"].Expires = DateTime.Now.AddDays(100);
            }
            else // coming out of error.
            {
                if (Request.Cookies["myPageError"] != null)
                {
                    cApp.db.LogException("myPageError end of error, ErrorGuid: " + Request.Cookies["myPageError"].Value);
                    Response.Cookies.Set(Request.Cookies["myPageError"]);
                    Response.Cookies["myPageError"].Expires = DateTime.Now.AddDays(-1);
                }
            }
        }
    }
}

 

So when the error first occurs the errorGuid is created, logged and then saved in the cookie. When it comes out of the error condition the errorGuid is again logged, but with different text, and then the cookie is cleared.
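The pattern is language-agnostic. Here is a minimal Python sketch of the same edge-triggered logging, with a hypothetical log callback standing in for cApp.db.LogException and a plain variable standing in for the cookie:

```python
import uuid

class OutageTracker:
    """Log only the transitions into and out of an error condition,
    pairing the start and end entries with a shared GUID."""
    def __init__(self, log):
        self.log = log          # any callable taking a message string
        self.error_guid = None  # None means "not currently in error"

    def set_error_state(self, in_error):
        if in_error and self.error_guid is None:            # first error
            self.error_guid = str(uuid.uuid4())
            self.log("start of error, ErrorGuid: " + self.error_guid)
        elif not in_error and self.error_guid is not None:  # error cleared
            self.log("end of error, ErrorGuid: " + self.error_guid)
            self.error_guid = None

messages = []
tracker = OutageTracker(messages.append)
for ok in [True, False, False, False, True, True]:   # one poll per refresh
    tracker.set_error_state(not ok)
print(len(messages))  # 2 -- one start entry and one matching end entry
```

Six polls with a three-poll outage produce exactly two log entries, matched by GUID.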

To use this property I just needed to set it to false when the databinding was successful, and to true in the catch block when an exception is thrown.  Like so:

  try
  {
      myGrid.DataBind();
      inErrorState = false;
  }
  catch (Exception ex)
  {
      …
      inErrorState = true;
  }

This is all it took to just capture the start of an error condition and the matching clear of that condition.

Tags:

ASP.Net

Moving AppSettings to a Database Table.

by JBrooks 25. February 2009 11:38

In one of our ASP.Net applications we wanted to change where the AppSettings were stored, from a config file to a database table.   Since this is a very large application (over 670 .cs files), the first requirement was that the majority of the changes could be done with a search and replace.   In the end the change looked like this:

    System.Configuration.ConfigurationSettings.AppSettings["AppVersion"]

Would be replaced by:

    cApp.AppSettings["AppVersion"];

BACKGROUND    
We had app settings in web.config, and the environment-specific settings were in a WebEnvironment.config file that was referenced by the web.config file.  So the appSettings section in our web.config looked like:
    <appSettings file="WebEnvironment.config">
        <add key="AppVersion" value="13.9"/>

    </appSettings>

The problem with this is that we had to do a release to change one of the settings.  Whenever we changed a setting in WebEnvironment.config, we had to give the deployment team a different file for each environment and have them rename the correct file depending on where they were deploying.   So they had WebEnvironment.configTEST, WebEnvironment.configQA, WebEnvironment.configPROD and WebEnvironment.configDR; this was awkward and prone to errors.

We also liked the fact that if the app settings lived in a table then we could give ourselves a maintenance page where we could change the values without doing a deployment.

TABLE
First I needed the table to hold the app setting.  This was simple enough:
    create table dbo.Config
    (Environment varchar(4),
    [Key] varchar(255),
    Value varchar(8000),
    Comment varchar(255),
    LastUpdated datetime default getdate())
    go

STORED PROC
Second, a stored proc to retrieve the values depending on the environment:


    create proc dbo.GetConfig(@Environment varchar(4))
    as
    begin

    select [Key], Value
    from dbo.Config with(nolock)
    where Environment = @Environment
    union all
    select [Key], Value
    from dbo.Config with(nolock)
    where Environment = 'All'
    and [Key] not in
            (select env.[Key]
            from dbo.Config env with(nolock)
            where env.Environment = @Environment
            )
    order by [Key]

    end
    go
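The UNION ALL / NOT IN above makes environment-specific rows win over the 'All' defaults. A small Python sketch of that precedence logic (the row data here is made up for illustration):

```python
def get_config(rows, environment):
    """rows is a list of (environment, key, value) tuples.
    Environment-specific values override the 'All' defaults,
    mirroring the UNION ALL / NOT IN in dbo.GetConfig."""
    settings = {k: v for env, k, v in rows if env == 'All'}
    settings.update({k: v for env, k, v in rows if env == environment})
    return settings

rows = [('All',  'AppVersion', '13.9'),
        ('All',  'key1',       'value1'),
        ('Prod', 'key1',       'value1prod')]

print(get_config(rows, 'Prod'))  # {'AppVersion': '13.9', 'key1': 'value1prod'}
print(get_config(rows, 'Dev'))   # {'AppVersion': '13.9', 'key1': 'value1'}
```

Prod gets its override for key1; any environment without an override falls back to the 'All' value.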

SAMPLE DATA
Now insert some sample data:
insert into Config
(Environment, [Key], Value, Comment)
values ('Dev','AppVersion','13.9', 'Version of the application')

insert into Config
(Environment, [Key], Value, Comment)
values ('Dev','key1','value1', 'Test data')

insert into Config
(Environment, [Key], Value, Comment)
values ('Prod','key1','value1prod', 'Test data')

CONFIG CLASS
Now I need the new class that will hold the data; it loads the settings only once.

using System.Data;
using System.Collections.Generic;

public class cConfig
{
    public Dictionary<string, string> AppSettings = new Dictionary<string,string>();

    public cConfig(string environment)
    {
        // the cApp.DAL is our data access layer and this just calls the stored proc and returns a table.
        foreach (DataRow dr in cApp.DAL.CommonData.GetConfig(environment).Rows)
        {
            AppSettings.Add(dr["Key"].ToString(), dr["Value"].ToString());
        }

    }

}

CLASS VARIABLE
Now we make the data available globally within the application, and define it in such a way that the cConfig object is instantiated only once, in a thread-safe way.

using System;
using System.Collections.Generic;
using System.Configuration;

  public sealed class cApp
    {
        // This is the only time System.Configuration.ConfigurationManager.AppSettings is called.
        // The appSetting ApplicationEnv is in machine.config and will be one of the values “Dev”, “Test”, “QA”, “Prod” or “DR”
        static readonly cConfig _config = new cConfig(System.Configuration.ConfigurationManager.AppSettings["ApplicationEnv"]);

        public static Dictionary<string, string> AppSettings
        {
            get {
                return _config.AppSettings;
            }
        }
....


Now I can use this anywhere in my code by using the following:
        cApp.AppSettings["MySetting"];

Tags:

Development | SQL

Visual Source Safe to Subversion Conversion

by JBrooks 24. February 2009 11:12
Our experience with the Visual Source Safe to Subversion conversion.

BACKGROUND
Visual Source Safe
In general Visual Source Safe is known to have problems when the database gets large, if there are a lot of users, or if it is used over a network.

On Microsoft’s site they have the following warnings:
(http://support.microsoft.com/kb/q133054/)

Many factors can cause a SourceSafe database to become corrupted. These factors include the following:
•    Power loss
•    System hangs (stops responding)
•    User terminates a long process
•    Bad network connections
•    Running out of disk space
•    Network problems
•    Operating system problems

Visual Source Safe Best Practices and Our Database:
(http://msdn.microsoft.com/en-us/library/bb509342(VS.80).aspx)

1. Best Practice: Visual SourceSafe database should not exceed 3 to 5 GB    
Our Database: The VSS database that our source code is in is currently 25 GB.  

2. Best Practice: Use the Analyze tool shipped with Visual SourceSafe to detect and repair problems in the database structure… you can discover small problems and fix them before they become worse.   You should run it as frequently as is practical—once a week is recommended, or at a minimum, once a month.
Our Database: Analyze hasn’t been run on our database since 2004 (if ever).
During the VSS to Subversion conversion Analyze was run on a copy of the VSS database and it showed many files had corruption in their history.  We could never get it to run completely on the full copy of the database – we had to delete large sections and then run Analyze.
 
3. Best Practice:  Limit the number of users to increase performance and limit corruption.
Our Database:  Our files are in a single VSS database that is also used by many other groups.  It currently has 198 users defined and over 50 are still active.


After We Moved to a Remote Site - Our Servers are Now 2 Miles Away.
Each of the causes of corruption listed on the Microsoft site were much more likely to occur given the new environment that we were now in after moving to the remote site.

Here is what we were experiencing when we moved to the remote site:
1.    When opening a project in VSS we would consistently get “Error reading from file” messages.
2.    A one minute “get latest” operation would now take over 20 minutes.
3.    Adding files to VSS over the network from the remote site would fail.


EVALUATION OF ALTERNATIVES


Many different solutions were evaluated including AccuRev, CA – AllFusion Harvest, Microsoft Visual Studio Team System, Perforce, CVS, ClearCase, SourceGear, AnthillPro and others.   It is outside of the scope of this post to review the details of this evaluation, but it is worth mentioning that the final two contenders were Microsoft Visual Studio Team System and Subversion.

After the addition of some other supporting tools, nothing fit our requirements and environment as well as Subversion.  It is open source, one member of our team had used it in the past and some other areas in our company have been using it for a long time.

For a quick background on Subversion and a list of features, start with this link:
http://en.wikipedia.org/wiki/Subversion_%28software%29

If someone wants to do more research this link is a good starting point: http://en.wikipedia.org/wiki/Comparison_of_revision_control_software


CONVERSION

We placed the requirement on ourselves that in converting from VSS to SVN all of the history of all of our files must be moved to SVN, not just the latest versions.  Had this not been a requirement then the conversion would have been little more than doing a “get latest” on VSS and then importing this to a SVN repository.

So we looked at a few tools that would convert all of the history.  None of them allowed you to take an archive of only the VSS directories that you were interested in and perform the conversion on that.  (VSS does something funny internally with the dates on archiving.)    So the conversion had to be done on a copy of the VSS (25 GB) database.

All of the conversion tools also required the source VSS database be clean of corruption before the conversion.  This turned out to be much harder and time consuming than you might think.  Running the VSS Analyze tool on a copy of the database showed hundreds of corruptions and would not run to completion without blue screening the computer it was running on.

To get around this we reduced the copy of the database by deleting the directories that we didn't want to convert.  Unfortunately VSS reports each corruption during the delete process, causing hundreds of message boxes that the user must mindlessly click through for the process to continue.

Once that point was reached, we used the tool VSS2SVN to create dump files that were imported into Subversion.
 
FINAL SETUP

Our current environment consists of the following:
1. Subversion (SVN)  - source code repository (FSFS file format).  
2. Apache for Windows 2.2.9 – Subversion server.
3. TortoiseSVN - Windows client to Subversion.
4. AnkhSvn - Visual Studio plug-in.  
5. SharpSvn - library used for coding our own tools that work with Subversion.  (Example is a tool called “MakeRelease.exe” that reads a list of file names, opens those files from Subversion, and combines all of them into a big SQL file for a release.)

LEARNING CURVE

There was a learning curve that the developers had to go through after the conversion to Subversion.  This was driven by the different versioning models.  VSS has the Lock-Update-Unlock model of source control that does not allow for concurrent updates of a single file by multiple developers.    

In Subversion’s versioning model no files are exclusively locked (unless forced) and multiple developers can work on a single file at the same time. Once changes are completed, conflicts are identified and said developers manually merge their changes before a commit is allowed.  If there are no conflicts, updates are automatically merged.


RESOURCES

Here is a quick introduction to Subversion (SVN) that compares it to VSS:
http://releasemanagement.wordpress.com/2006/12/16/svn-to-vss-how-to/

Here is a quick video showing you basic usage:
http://www.youtube.com/watch?v=4sUYnEylvU0

Here is another link that show the details of using TortoiseSVN
http://www.codeproject.com/KB/books/Subversion_TortoiseSVN.aspx

And a link talking about using SVN with Visual Studio:
http://www.west-wind.com/presentations/subversion/

This is SVN’s home page:
http://subversion.tigris.org/

This is TortoiseSVN home page:
http://tortoisesvn.tigris.org/

Free Subversion book online, very complete.
http://svnbook.red-bean.com/

How to install Subversion with Apache server.
http://svn.spears.at/#2

Vss2Svn conversion program:
http://www.pumacode.org/projects/vss2svn



Tags:

Development

Unexplained SQL Server Timeouts and Intermittent Blocking

by JBrooks 24. February 2009 10:00

We had a problem where our users would timeout for apparently no reason.  I monitored the SQL Server for a while and found that every once in a while there would be a lot of blocking going on.  So I need to find the cause of this and fix it.

If there was blocking going on, then there must have been exclusive locks somewhere in the chain of stored proc calls…. Right?

I walked through the full list of stored procs that were called, and all of the subsequent stored procs, functions and views.  Sometimes this hierarchy was deep and even recursive.

I was looking for any UPDATE or INSERT statements…. There weren't any (except on temporary tables, which only had the scope of the stored proc, so they didn't count.)

On further research I found the locking is caused by the following:

A.     If you use a SELECT INTO to create your temp table, then SQL Server places locks on system objects.  The following was in our getUserPrivileges proc:

                --get all permissions for the specified user
                select   permissionLocationId,
                    permissionId,
                    siteNodeHierarchyPermissionId,
                    contactDescr as contactName,
                    l.locationId, description, siteNodeId, roleId
                into #tmpPLoc
                from vw_PermissionLocationUsers vplu
                    inner join vw_ContactAllTypes vcat on vplu.contactId = vcat.contactId
                    inner join Location l on vplu.locationId = l.locationId
                where  isSelected = 1 and
                    contactStatusId = 1 and
                    vplu.contactId = @contactId

The getUserPrivileges proc is called with every page request (it is in the base pages.)   It was not cached like you might expect.   It doesn't look like it, but the SQL above references 23 tables in the FROM or JOIN clauses.  None of these tables have the "with(nolock)" hint, so it is taking longer than it should.   If I remove the WHERE clause to get an idea of the number of rows involved, it returns 159,710 rows and takes 3 to 5 seconds to run (after hours, with no one else on the server.)

So if this stored proc can only be run one-at-a-time because of the lock, and it is being called once per page, and it holds the locks on the system tables for the duration of the select and temp table creation, you can see how it might be affecting the performance of the whole application.

  The fix for this would be:
1.    Use session level caching so this is only called once per session.
2.    Replace the SELECT INTO with code that creates the table using standard Transact-SQL DDL statements, and then use INSERT INTO to populate the table.
3.    Put “with(nolock)”  on everything involved with this call.

B.    If the stored proc getUserPrivileges didn’t have enough problems for you, then let me add:  it probably gets recompiled on each call.   So SQL Server acquires a COMPILE lock on each call.  
The reason it gets recompiled is because the temp table gets created and then a lot of rows are deleted from it (if a @locationId or @permissionLocationId are passed in).    This will cause the stored proc to be recompiled on the SELECT that follows (yes, in the middle of running the stored proc.)     In other procs I’ve noticed a DECLARE CURSOR statement whose SELECT statement references a temporary table – this will force a recompile too.

For more info on recompilation see:
http://support.microsoft.com/kb/243586/en-us

The fix for this would be:
1.    Again, hit this stored proc far fewer times by using caching.
2.    Have the @locationId or @permissionLocationId filtering applied in the WHERE clause while the table is being created.
3.    Replace the temp tables with table variables – they result in fewer recompilations.

If things don't work like you expect them to, you can spend a lot of time staring at something without ever figuring out what is wrong.

Tags:

SQL

Time Zones and Performance

by JBrooks 24. February 2009 08:48

In one of the systems I work on, we need to track the different time zones that our users are in, and in doing so we must account for daylight saving time.
I see code like this used in many places:

select   ...
dbo.fn_convertFromUTC(@StartUTC, @timeZone) as startLocal,
dbo.fn_convertFromUTC(@EndUTC, @timeZone) as endLocal
from      ...

and thought this might hurt performance, since the conversion function fires twice for each row in the result set.  I came up with a function that takes 3 parameters, @start and @end (both in UTC time) and @timeZone, and returns a table like the one below.  You would then join to this table to do the conversion.  This allows for the same functionality, but the function is only fired once or twice per result set.


startTime          endTime   offset  isHr2
3/1/07 7:00     3/11/07 6:59    -5    0
3/11/07 7:00    11/4/07 6:59    -4    0
11/4/07 7:00    11/4/07 7:59    -5    1
11/4/07 8:00    11/5/07 9:00    -5    0

--Here is some test code that shows how this would be used:

declare @startViewUTC DateTime, @endViewUTC DateTime,  @timeZone char(4)
set @startViewUTC = '3/1/2007 7:00am'
set @endViewUTC = '11/5/2007 9:00am'
set @timeZone = 'EPT'  -- Eastern Prevailing Time


select ai.startTime,
ai.endTime,
dateadd(hh, tzStart.offset,ai.startTime) as startLocal,
dateadd(hh, tzEnd.offset,ai.endTime) as endLocal,
tzStart.isHr2, tzStart.offset, tzEnd.isHr2, tzEnd.offset
from      dbo.fn_getAllIntervals(@startViewUTC, @endViewUTC, 3) ai
inner join dbo.fn_getTimeZoneOffsets(@startViewUTC, @endViewUTC, @timeZone)  tzStart
on ai.startTime between tzStart.startTime and tzStart.endTime
inner join dbo.fn_getTimeZoneOffsets(@startViewUTC, @endViewUTC, @timeZone)  tzEnd
on ai.endTime between tzEnd.startTime and tzEnd.endTime


--And here are the functions used to return the time zone offset table:

-- First we need to have a function that returns the Daylight Saving Start DateTime for a given year.

 

CREATE FUNCTION dbo.fn_getDSTStart (
    @year int
)
RETURNS datetime
AS
BEGIN
    DECLARE @dstStart datetime

    IF @year < 2007
        SET @dstStart = DateAdd(mi, 120, DateAdd(dd, (8 - DatePart(dw, Cast(@year AS varchar) + '0401')) % 7, Cast(@year AS varchar) + '0401'))
    ELSE
        SET @dstStart = DateAdd(mi, 120, DateAdd(dd, (8 - DatePart(dw, Cast(@year AS varchar) + '0308')) % 7,Cast(@year AS varchar) + '0308'))

    RETURN @dstStart
END

CREATE FUNCTION dbo.fn_getDSTEnd (
    @year int
)
RETURNS datetime
AS
BEGIN
    DECLARE @dstEnd datetime

    IF @year < 2007
        SET @dstEnd = DateAdd(mi, 120, DateAdd(dd, 1 - DatePart(dw, Cast(@year AS varchar) + '1031'), Cast(@year AS varchar) + '1031'))
    ELSE
        SET @dstEnd = DateAdd(mi, 120, DateAdd(dd, (8 - DatePart(dw, Cast(@year AS varchar) + '1101')) % 7,Cast(@year AS varchar) + '1101'))

    RETURN @dstEnd
END

GO
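As a cross-check on the date arithmetic, here is an equivalent sketch in Python (US rules only, matching fn_getDSTStart; an illustration, not the production function):

```python
from datetime import date, timedelta

def dst_start(year):
    """US daylight saving start: first Sunday in April before 2007,
    second Sunday in March from 2007 on (same rule as fn_getDSTStart)."""
    anchor = date(year, 4, 1) if year < 2007 else date(year, 3, 8)
    # weekday(): Monday=0 .. Sunday=6, so this many days reach the next Sunday
    return anchor + timedelta(days=(6 - anchor.weekday()) % 7)

print(dst_start(2007))  # 2007-03-11
print(dst_start(2006))  # 2006-04-02
```

The anchor date for the post-2007 rule is March 8 because the second Sunday of a month always falls between the 8th and the 14th.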


CREATE FUNCTION dbo.fn_getTimeZoneOffsets (
 @start datetime,
 @end datetime,
 @timeZone varchar(4)
)
RETURNS @timeZoneOffsets TABLE
 (startTime datetime,
  endTime datetime,
  offset int,
   isHr2 bit)
AS
BEGIN
declare @defaultOffset int

  --obtain the Standard Time, GMT Offset (this is not a complete list of timezones.)
  Select @defaultOffset =
        case @timeZone
      when 'GMT' then 0
      when 'AST' then -4
      when 'ADT' then -3
      when 'EST' then -5
      when 'EDT' then -4
      when 'CST' then -6
      when 'CDT' then -5
      when 'MST' then -7
      when 'MDT' then -6
      when 'PST' then -8
      when 'PDT' then -7
      when 'AKST' then -9
      when 'AKDT' then -8
      when 'EPT' then -5
      when 'CPT' then -6
      when 'MPT' then -7
      when 'PPT' then -8
        else 0
        end
 
 IF @timeZone IN ('CPT', 'EPT', 'MPT', 'PPT')
  BEGIN
  DECLARE @offsetTime int,
    @dstStart datetime,
    @dstEnd datetime,
    @yearStart datetime,
    @yearEnd datetime,
    @prevYearEnd datetime,
    @startYear int,
    @endYear int,
    @Year int
    
  set @startYear = year(@start)
  set @endYear = year(@end)
  set @Year = @startYear
  -- keep in mind these dates are all in GMT.
  --break the year into each part that has a different value
  while @Year <= @endYear
  begin
   set @dstStart = dateadd(hh,@defaultOffset*-1,dbo.fn_getDSTStart(@Year))
   set @dstEnd = dateadd(hh,@defaultOffset*-1,dbo.fn_getDSTEnd(@Year))
   set @yearStart = cast('01/01/'+cast(@Year as char(4)) as DateTime)

   set @yearEnd = cast('12/31/'+cast(@Year as char(4))+' 23:59:59.000' as DateTime)
   set @yearStart = case when @yearStart < @start then @start else @yearStart end
   set @yearEnd = case when @yearEnd < @end then @yearEnd else @end end


    if(@prevYearEnd is null)
    begin
       insert into @timeZoneOffsets
       Select @yearStart, dateadd(ss,-1,@dstStart), @defaultOffset, 0 isHr2
    end
    else
    begin
      update @timeZoneOffsets
       set endTime=dateadd(ss,-1,@dstStart)
       where endTime=@prevYearEnd
    end

   set @prevYearEnd = @yearEnd

   insert into @timeZoneOffsets
   Select @dstStart, dateadd(ss,-1,@dstEnd), @defaultOffset + 1, 0 isHr2
 
   insert into @timeZoneOffsets
   Select @dstEnd, dateadd(ss,-1,dateadd(hh,1,@dstEnd)), @defaultOffset, 1 isHr2
 
   insert into @timeZoneOffsets
   Select dateadd(hh,1,@dstEnd), @yearEnd, @defaultOffset, 0 isHr2
 
   set @Year = @Year + 1
 
  end

  delete from @timeZoneOffsets
  where endTime < @start
  or startTime > @end

    update @timeZoneOffsets
    set startTime = @start
    where startTime = (select min(startTime) from @timeZoneOffsets)

    update @timeZoneOffsets
    set endTime = @end
    where endTime = (select max(endTime) from @timeZoneOffsets)

 
   END
 ELSE
  insert into @timeZoneOffsets     
  Select @start, @end, @defaultOffset, 0 isHr2

RETURN
END

GO

Tags:

SQL

Where to put the connection string

by JBrooks 24. February 2009 08:10
Machine.config is used for machine-specific items, thus the name. It never gets moved from machine to machine, so that is where you put your connection strings.  Doing this you will never have the horror of realizing your QA team has been hammering against your production data because someone forgot to account for the web.config when it was moved DOWN from prod to QA to back out a bad release (or whatever).  And all your web.configs will be the same and easy to promote when they do change.

You can also add a value to the machine.config so that you can tell which environment you are in at runtime.

So in my machine.config I added:
  <appSettings>
    <add key="ADEDevelopmentEnvironment" value="Yes"/>
  </appSettings>

Then in my code I have:
If Not String.IsNullOrEmpty(ConfigurationManager.AppSettings("ADEDevelopmentEnvironment")) Then
            Me.Label1.Text = "You are STILL in Dev?!!!"
Else
            Me.Label1.Text = "You are on production"
End If

Tags:

ASP.Net | Development

GridView Sort Column Arrow Performance

by JBrooks 23. February 2009 16:11

I see code like the following a lot:

protected void GridView1_RowCreated(object sender, GridViewRowEventArgs e)
{
     if (e.Row.RowType == DataControlRowType.Header)
            if (String.Empty != this.GridView1.SortExpression)  
                     AddSortImage(e.Row);             
           
}

The problem with this is that it fires for each row in the GridView slowing performance.  A better way is the following which only fires once per GridView binding:

protected void GridView1_DataBound(object sender, EventArgs e)
{
     if (String.Empty != this.GridView1.SortExpression)
            AddSortImage(GridView1.HeaderRow);
}

The supporting methods are:

void AddSortImage(GridViewRow headerRow)
{
     int iCol = GetSortColumnIndex();
     if (-1 == iCol)
           return;

     // Create the sorting image based on the sort direction.
     Image sortImage = new Image();
     if (SortDirection.Ascending == this.GridView1.SortDirection)
     {
           sortImage.ImageUrl = @"~\Images\BlackDownArrow.gif";
           sortImage.AlternateText = "Ascending Order";
     }
     else
     {
           sortImage.ImageUrl = @"~\Images\BlackUpArrow.gif";
           sortImage.AlternateText = "Descending Order";
     }
     // Add the image to the appropriate header cell.
     headerRow.Cells[iCol].Controls.Add(new LiteralControl("&nbsp;"));
     headerRow.Cells[iCol].Controls.Add(sortImage);
}

public int GetSortColumnIndex()
{
      // Iterate through the Columns collection to determine the index
      // of the column being sorted. 
      foreach (DataControlField field in GridView1.Columns)
       {
             if (field.SortExpression == this.GridView1.SortExpression)
               {
                  return this.GridView1.Columns.IndexOf(field);
                }
         }
      return -1;
}

Tags:

ASP.Net | GridView

Connection Pool Misconceptions

by JBrooks 12. February 2009 08:38
Below is the SQL that shows how many connections are being used by my application.

declare @Database varchar(30),
    @LoginName varchar(30)

set @Database = 'MyDB'
set @LoginName = 'MyLoginName'

select max(blocked) hasBlocking,
    count(1) [Count],
    sum(open_tran) open_trans,
    max(waittime) maxWaitTime
from master.dbo.sysprocesses (nolock)
where db_name(dbid) = @Database and convert(sysname, rtrim(loginame)) = @LoginName
        
We needed this because we had a problem where our application would use up 100 connections (the default pool size) and then bomb out.  The root cause was two misconceptions that a developer had.

First misconception:
Calling Close() on a DataReader closes the Connection.  It does not: it only releases the Connection object from the DataReader.  You need to call Close() on the Connection object itself.  You can verify this by checking that the Connection object's State is still "Open" after calling Close() on the DataReader.
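The same distinction exists in other database APIs. For example, in Python's sqlite3 closing a cursor (the rough analogue of a DataReader) leaves the connection open:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("select 1")
cur.close()                 # closes only the cursor, not the connection

# The connection is still open and usable...
still_open = con.execute("select 2").fetchone()[0]
print(still_open)  # 2

con.close()                 # ...until it is closed explicitly
```

Only the explicit close on the connection object releases it; closing the reader/cursor alone does not.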

Second misconception:
Do not call Close or Dispose on a Connection, a DataReader, or any other managed object in the Finalize method of your class. In a finalizer, you should only release unmanaged resources that your class owns directly. If your class does not own any unmanaged resources, do not include a Finalize method in your class definition. For more information, see:
http://msdn2.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.close(VS.85).aspx

So if these 2 misconceptions are cleared up in your mind you are a lot less likely to run into this connection pool problem.

 

Tags:

ASP.Net | Development

Checking All CheckBoxes in a GridView Using Client-Side Script

by JBrooks 11. February 2009 11:48

Given a GridView like the one below, I want to add the ability to check the check box in the header and have that event check all of the checkboxes in each row.

 

I did this by adding 12 lines of JavaScript code and adding 1 attribute to my header template.   Below is the markup followed by the code-behind. The parts I added to this simple page are in bold.

<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Test check all CheckBoxes</title>
       <script type="text/javascript" language="javascript">
        function changeAllCheckBoxes(sender) {
            var gridViewRows = GetParentElementByTagName(sender, "TABLE").rows;
            for (var i = 1; i < gridViewRows.length; ++i) {
                gridViewRows[i].cells[0].childNodes[0].checked = sender.checked;
            }
        }

        function GetParentElementByTagName(element, tagName) {
            var element = element;
            while (element.tagName != tagName)
                element = element.parentNode;
            return element;
        }

    </script>
</head>
<body>
    <form id="form1" runat="server">
    <div>
   
    </div>
    <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
        Caption="Pick Some Technologies" BackColor="White" BorderColor="#CC9966"
        BorderStyle="None" BorderWidth="1px" CellPadding="4">
    <Columns>
            <asp:TemplateField HeaderText="Include" SortExpression="Include">
                <HeaderTemplate>
                    <asp:CheckBox ID="CheckBox1" runat="server" onclick="changeAllCheckBoxes(this)" />
                </HeaderTemplate>
                <ItemTemplate>
                    <asp:CheckBox ID="CheckBox1" runat="server"  />
                </ItemTemplate>
                <ItemStyle HorizontalAlign="Center" />
            </asp:TemplateField>
            <asp:BoundField DataField="Name" HeaderText="Technology" />
        </Columns>
        <HeaderStyle BackColor="#990000" Font-Bold="True" ForeColor="#FFFFCC" />
    </asp:GridView>
    </form>
</body>
</html>

using System;
using System.Data;

public partial class test : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            DataTable dt = new DataTable();

            dt.Columns.Add("Include", typeof(Boolean));
            dt.Columns.Add("Name", typeof(String));
            dt.Rows.Add(new Object[] { false, "C#" });
            dt.Rows.Add(new Object[] { false, "ASP.Net" });
            dt.Rows.Add(new Object[] { false, "SQL Server" });
            dt.Rows.Add(new Object[] { false, "VB.Net" });

            GridView1.DataSource = dt;
            GridView1.DataBind();

        }
    }
}
Tags:

ASP.Net | Development | GridView