SQL

Database Continuous Integration – DeployTest

by JBrooks 17. February 2017 17:47

Part 3 of 4 - Directories and Configuration


Below are the default directories I use. These could all be under C:\DeployTest or something similar.

Directory       Type of Files
Archive         Completed files are moved here once processed
DatabaseFiles   SQL Server database data files
DBBackup        Recent backups of production for restoring
DeployUtils     PowerShell support utilities
InBasket        Incoming files needing to be processed
InCmds          Incoming commands used to trigger processing
Logs            Log files
Working         Process working files


I didn't want to hard-code the directory paths because I might make changes like moving the Archive directory to a shared drive such as "\\MyShare\MyProject\Archives". I will also have multiple projects using my PowerShell code, so having config files makes everything easier to work with.

The config file for the directories above looks like:

<?xml version="1.0" encoding="UTF-8" ?>
<C>
  <DirSettings>
    <BaseDir>C:\DeployTest\</BaseDir>
    <InBasketDir Parent="BaseDir">InBasket\</InBasketDir>
    <WorkingDir Parent="BaseDir">Working\</WorkingDir>
    <DBSourceDir Parent="BaseDir">DBBackup\</DBSourceDir>
    <ArchiveDir Parent="BaseDir">Archive\</ArchiveDir>
    <DBFilesDir Parent="BaseDir">DatabaseFiles\</DBFilesDir>
    <InCmdDir Parent="BaseDir">InCmds\</InCmdDir>
    <LogDir Parent="BaseDir">Logs\</LogDir>
  </DirSettings>
  <FileSettings>
    <InFileMask Parent="InBasketDir">Northwind_Update_*.sql</InFileMask>
    <DBSourceFileMask Parent="DBSourceDir">Northwind_*.bak</DBSourceFileMask>
    <BuildResultsFileFormat Parent="LogDir">BuildResults_{0}.log</BuildResultsFileFormat>
    <LogFileFormat Parent="LogDir">TestDeploy_{0}.log</LogFileFormat>
  </FileSettings>
  <DBSettings>
    <Server>.</Server>
    <Database>Northwind</Database>
    <BackupDB>0</BackupDB>
  </DBSettings>
  <EmailSettings>
    <SendEmails>1</SendEmails>
    <SMTPServer>smtp.gmail.com</SMTPServer>
    <SMTPPort>587</SMTPPort>
    <UseSsl>1</UseSsl>
    <SMTPUserName>JimInDE@GMail.com</SMTPUserName>
    <SMTPPassword>Someday13!</SMTPPassword>
    <From>JBrooks@DigitalTools.com</From>
    <To>JBrooks@DigitalTools.com</To>
  </EmailSettings>
</C>

Typically you would have one of these per project, named something like NorthwindDeployTest.xml. Notice that there is a "BaseDir" element and that the other directories are relative paths under it; they each have the attribute Parent="BaseDir". This attribute can be removed if you want to use a directory that is not under the BaseDir.

So for our example where archives are moved to a shared drive, you would change:

<ArchiveDir Parent="BaseDir">Archive\</ArchiveDir>

To:

<ArchiveDir>\\MyShare\MyProject\Archives\</ArchiveDir>

Now all we need is a PowerShell script to read this XML config file and process it. One of the files under DeployUtils, Load-DeployConfig.ps1, does exactly that.

Function Load
{
    param
    (
        [string] $ConfigFileName = 'C:\Work\Deploy\TestDeployConfig.xml',
        [string] $LogFileName
    )

    Write-Host "this is it: $LogFileName"

    # Do not make $Config private, it is used by Log()
    Log "Loading Config File $ConfigFileName" -LogFileName $LogFileName
    [xml]$Config = Get-Content $ConfigFileName

    Log 'Adding LogFileName to Config' -LogFileName $LogFileName
    $logFileNode = $Config.SelectSingleNode('//C/FileSettings/LogFileName')
    if(!$logFileNode)
    {
        # [void] keeps the appended node out of the function's return value.
        [void]$Config.C.FileSettings.AppendChild($Config.CreateElement("LogFileName"))
    }
    $Config.C.FileSettings.LogFileName = $LogFileName
    Write-Host "Loaded: $($Config.C.FileSettings.LogFileName)"

    # From here on the optional -LogFileName is not needed since Log()
    # defaults to $Config.C.FileSettings.LogFileName
    Log ""
    Log "Process Config File"

    # Fill in the parent for all of the elements with a Parent attribute
    foreach($setting in $Config.SelectNodes("//*[@Parent]"))
    {
        Log "$($setting.InnerText) ==>"
        $parent = $Config.C.DirSettings.SelectSingleNode("//" + $setting.Parent)
        if($parent -ne $null)
        {
            $setting.InnerText = $parent.InnerText + $setting.InnerText
        }
        Log "`t`t$($setting.InnerText)"
    }
    Log ""

    return $Config
}

In your main PowerShell script you would use this to load the config like:

$Config = Load -ConfigFileName $ConfigFile -LogFileName $logFileName

Then you can use the variables like:

Move-Item -Path "$SQLFile" -Destination $Config.C.DirSettings.ArchiveDir.InnerText
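
The other settings come out the same way. For example, here is a small sketch (the $dt date stamp is just for illustration) that builds a log file name from the LogFileFormat entry and reads the database settings:

$dt = Get-Date -Format yyyyMMdd_HHmmss

# LogFileFormat was resolved to "C:\DeployTest\Logs\TestDeploy_{0}.log" by Load,
# so -f just fills in the date stamp. Elements that have attributes need .InnerText.
$logFileName = $Config.C.FileSettings.LogFileFormat.InnerText -f $dt

# Simple elements without attributes come back as plain strings.
$server   = $Config.C.DBSettings.Server
$database = $Config.C.DBSettings.Database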

Tags:

Development | OneScript | PowerShell | Continuous Integration | SQL

Database Continuous Integration – DeployTest

by JBrooks 12. February 2017 17:36

Part 2 of 4 - Commit Hook

I’ve created a simple Subversion post-commit hook by creating a batch file called post-commit.bat and saving it to my …\SVN\hooks directory. This batch file has only one line, which calls a PowerShell script:

%SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe -noprofile -executionpolicy bypass -file C:\DeployTest\DeployUtils\CreateOneScriptCMD.ps1 "%1" "%2"

(This should all be on a single line.) If you run this to test, you need to “Run as Administrator.”

The arguments passed to the batch file are just passed on to the PowerShell script. %1 is the path to the Subversion repository and %2 is the revision number just created by the commit.

The CreateOneScriptCMD.ps1 PowerShell script looks at the directories just committed and creates a text file with the appropriate OneScript command arguments. I then have a scheduled process that runs every few minutes; it sees the new file and does the real work. I did it this disconnected way so the developer doesn’t have to wait for the full process to complete when committing.
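
That watcher can be as simple as a PowerShell script run by Task Scheduler. Below is a minimal sketch of the idea (the name Process-InCmds.ps1 and the paths are my assumptions here; the actual hand-off to OneScriptCMD is covered in the next step):

# Process-InCmds.ps1 - run every few minutes by Task Scheduler.
$inCmdDir   = 'C:\DeployTest\InCmds'
$archiveDir = 'C:\DeployTest\Archive'

# Pick up any pending command files, oldest first.
foreach ($cmdFile in Get-ChildItem -Path $inCmdDir -Filter 'OneScriptCmd_*.txt' |
                     Sort-Object LastWriteTime)
{
    $cmdArgs = Get-Content $cmdFile.FullName -First 1

    # Hand $cmdArgs to OneScriptCMD here to build the release script.

    # Archive the command file so it is only processed once.
    Move-Item -Path $cmdFile.FullName -Destination $archiveDir
}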

Below is the CreateOneScriptCMD.ps1 script:

param
(
    [Parameter(Mandatory=$true)]
    [string]$REPOS,
    [Parameter(Mandatory=$true)]
    [string]$REV
)

Set-Location $PSScriptRoot
cls
$VerbosePreference = 'Continue'

# Hash table of repository paths to the arguments found in OneScript Client
$Private:mapPathToArgs = @{"NorthwindDB/branches/" = "-s 1 -p 3 -fb -ra -b {0}" }

$Private:index = -1
$Private:branchPath = $null
$Private:author = $null
$Private:args = $null

foreach($Private:dir in svnlook dirs-changed $REPOS --revision $REV)
{
    Write-Verbose "Checking $dir"

    foreach($Private:key in $mapPathToArgs.keys)
    {
        $index = $dir.indexOf($key)
        Write-Verbose "index = $index"

        if($index -gt -1)
        {
            Write-Verbose "matched $key"
            # $author = svnlook author $REPOS --revision $REV
            # Write-Verbose "Author = $author"
            Write-Verbose "Matched $dir"

            # Pull the branch name out of the committed path.
            $index = $index + $key.Length
            Write-Verbose "New index $index"
            $len = $dir.indexOf("/", $index) - $index
            Write-Verbose "Len = $len"
            $branchPath = $dir.Substring($index, $len)
            Write-Verbose "Branch is $branchPath"

            $args = $mapPathToArgs[$key] -f $branchPath
            break
        }
    }

    if($args) { break }
}

if($args)
{
    $Private:dt = Get-Date -Format yyyyMMdd_HHmmss
    $Private:cmdFileName = "..\InCmds\OneScriptCmd_$dt.txt"
    $args | Out-File $cmdFileName -Append
    # $author | Out-File $cmdFileName -Append
}

This will generate a file named something like: C:\DeployTest\InCmds\OneScriptCmd_20170212_215709.txt

Its content will be a single line containing the arguments for OneScriptCMD to use when creating the release script in the next step. It will look something like:

-s 1 -p 3 -fb -ra -b "3.2"

These are the same arguments created from within the OneScript Client Windows program. 

Tags:

Continuous Integration | Development | OneScript | PowerShell | SQL

Database Continuous Integration – DeployTest

by JBrooks 5. February 2017 06:23

Part 1 of 4 – Overview

For the next few blog posts I’m going to walk through having our new product OneScript create a SQL release script automatically after a Subversion commit, and then having a process that will automatically do the following:

  1. Restore a recent backup of our production database to our Deployment Test SQL Server instance.
  2. Apply the release script that OneScript just generated to the database.
  3. Determine if applying the script had any errors.
  4. Send an email to the developers with “Success” or “Failed” in the subject line, with the process log as the body of the email and the results of applying the SQL release script as an attachment.

This will tell the developers almost immediately if the SQL they just checked in causes an error in the release script.
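
Stitched together, the process has roughly the following shape. This is only a sketch (the paths and names are invented, and it assumes the SqlServer module's Invoke-Sqlcmd cmdlet); the real scripts are built up over the next parts:

Import-Module SqlServer

$server = '.'
$db     = 'Northwind'

# 1. Restore a recent production backup over the test database.
Invoke-Sqlcmd -ServerInstance $server -Query @"
ALTER DATABASE [$db] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE [$db] FROM DISK = N'C:\DeployTest\DBBackup\Northwind_Latest.bak' WITH REPLACE;
"@

# 2. Apply the release script OneScript generated, and 3. check for errors.
try {
    Invoke-Sqlcmd -ServerInstance $server -Database $db `
        -InputFile 'C:\DeployTest\Working\Northwind_Release.sql' -ErrorAction Stop
    $subject = 'Success'
}
catch {
    $subject = 'Failed'
}

# 4. Email the developers: result in the subject, log as the body, results attached.
Send-MailMessage -SmtpServer 'smtp.gmail.com' -Port 587 -UseSsl `
    -From 'JBrooks@DigitalTools.com' -To 'JBrooks@DigitalTools.com' `
    -Subject $subject -Body (Get-Content 'C:\DeployTest\Logs\TestDeploy.log' -Raw) `
    -Attachments 'C:\DeployTest\Logs\BuildResults.log'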

You can download all of the completed code from here.

Most of this is done with PowerShell. If you don’t know PowerShell, I highly encourage you to learn it. It is easy to learn, and the next version of Windows will not have the command prompt anymore - only PowerShell. I recommend the book below.

Learn Windows PowerShell in a Month of Lunches

Tags:

Development | Continuous Integration | OneScript | SQL

Database Continuous Integration

by JBrooks 27. January 2017 16:23

 

We are pleased to announce the release of our product OneScript! It automates the building of a SQL release script from individual SQL change scripts. This allows you to include database changes as part of your continuous integration process.

Who is this for? Developers who work on applications where the contents of the database need to be preserved. It is simple to use and integrates seamlessly into the development process; developers continue to use the tools and practices they already know.


It works by following these steps:

  1. Developers script out each database change during a development cycle and check them into source control (Examples: SVN – Subversion, TFS – Team Foundation Server.)
  2. The change scripts are each checked into different folders based on the type of database object that they change. Example folder names are: Tables, Views, Store Procs, Static Data, etc.
  3. Within OneScript (one-time setup) each folder name is assigned a sort order. So something like Tables = 100, Functions = 200, Views = 300, Store Procs = 400, etc.
  4. For a release, a filter is defined. The logic usually takes the form of “scripts changed since a given date/time” or “include every change since the branch was created” (if you use branches).
  5. Optionally, at the individual file level you can override its normal sort order or filter.
  6. When it does a build it will filter, sort and combine all of the individual change scripts into a single release script. You can do a build from the command-line!


We built OneScript because we had tried many other approaches, including:

  • Keep a single big change script in source control that the developers keep adding to for a release.
  • Give change script files a name that starts with the next available number (0044_, 0045_, etc.) so they are combined in order into a single release script at build time.
  • Do a “diff” between Development and Production databases and then script out the differences. Then hand edit it to remove anything not wanted. Separately, add our static data scripts.
  • Email change scripts to the tech lead.


Each one of these approaches suffered from one or more of the following problems:

  • Quickly becomes unmanageable.
  • Last one in wins for changes to the same object.
  • Conflicts are hidden until release time.
  • Undoing a change is a manual and tedious process.
  • No history or auditability of changes.
  • Doesn’t lend itself to automation.

To learn more, visit our site: http://www.OneScript.com

 

Tags:

Development | SQL | OneScript

Generating Indexes for Foreign Keys

by jbrooks 18. September 2014 09:03

Surprisingly, SQL Server doesn’t automatically create indexes just because you have a foreign key. So at the end of a development project I just used the script below to generate the SQL needed.

select 'create index indx_' + t.name + '_' + c.name +
       ' on ' + t.name + '(' + c.name + ');'
from sys.foreign_key_columns as fk
inner join sys.tables as t
    on fk.parent_object_id = t.object_id
inner join sys.columns as c
    on fk.parent_object_id = c.object_id
    and fk.parent_column_id = c.column_id
where not exists (select 1
                  from sys.index_columns ic
                  where ic.object_id = t.object_id
                  and ic.column_id = c.column_id)
order by 1

This returns the commands that I can then run to create the indexes.

create index indx_ActivityLog_activityTypeId on ActivityLog(activityTypeId);
create index indx_ActivityLog_userId on ActivityLog(userId);
create index indx_Areas_managerId on Areas(managerId);
create index indx_Categories_areaId on Categories(areaId);
create index indx_CriteriaItems_criteriaColorId on CriteriaItems(criteriaColorId);
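
If you want to generate and run these in one pass, the same query can be driven from PowerShell (a sketch assuming the SqlServer module; note the added "as cmd" alias so the column can be referenced):

Import-Module SqlServer

# The generator query from above, with a column alias added.
$generator = @'
select 'create index indx_' + t.name + '_' + c.name +
       ' on ' + t.name + '(' + c.name + ');' as cmd
from sys.foreign_key_columns as fk
inner join sys.tables as t on fk.parent_object_id = t.object_id
inner join sys.columns as c on fk.parent_object_id = c.object_id
                           and fk.parent_column_id = c.column_id
where not exists (select 1 from sys.index_columns ic
                  where ic.object_id = t.object_id
                    and ic.column_id = c.column_id)
'@

# Generate the create-index commands, then execute each one.
foreach ($row in Invoke-Sqlcmd -ServerInstance '.' -Database 'MyDb' -Query $generator)
{
    Invoke-Sqlcmd -ServerInstance '.' -Database 'MyDb' -Query $row.cmd
}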

Tags:

SQL | Development

Logging Exceptions and the Commands that Caused Them.

by jbrooks 14. December 2010 12:12

For a long time I’ve been logging all of my exceptions to a table that looks like:

[ExceptionTable - schema image]

Now for exceptions that were generated by the database I want to also log the exact database call that caused the exception.  It will make debugging go a lot faster.  For this I have a new table:

[ExceptionCmdTable - schema image]

This will get the exceptionId from the first table and the string that caused the exception, something like:

saveSomething @id=100, @UserName='Jakey',  @IPAddress='100.100.100.100',  @editBy='JBrooks'

So the first part of logging the exception is two methods on my base class.

 
public void LogException(Exception ex)
{
    LogException(ex, null);
}
 
 
public void LogException(Exception ex, SqlCommand objBadCmd)
{
    int id = 0;
 
    if (this._WritingErrorLog == true)
    {
        // don't want to get in an infinite loop.
        return;
    }
    else
    {
        this._WritingErrorLog = true;
    }
 
 
    try
    {
 
        SqlCommand objCmd = this.GetNewCmd("dbo.insertException");
 
        string message = "";
 
        // Get the userId of the user that got the error.
        if (System.Web.HttpContext.Current.Request.Cookies["UID"] != null)
            message = System.Web.HttpContext.Current.Request.Cookies["UID"].Value + "|";
 
        // Add the command that gave the error.
        if (objBadCmd != null && !string.IsNullOrEmpty(objBadCmd.CommandText))
            message += objBadCmd.CommandText + "|";
 
 
        message += ex.ToString();
 
        if (message.Length > 8000)
            message = message.Substring(0, 7999);
 
 
        objCmd.Parameters.Add("@Message", SqlDbType.VarChar, 8000).Value = message;
 
        objCmd.Parameters.Add("@StackTrace", SqlDbType.VarChar, 8000).Value =
            ((ex.StackTrace == null) ? "no stack trace." : ex.StackTrace);
 
        //insert the exception and get the new ID.
        id = this.ExecuteScalarInt(objCmd);
 
        if (id > 0 && objBadCmd != null)
            insertExceptionCmd(id, objBadCmd);
 
    }
    catch (Exception ExLogException)
    {
        this.LogExceptionToFile(ex);
        this.LogExceptionToFile(ExLogException);
    }
 
    this._WritingErrorLog = false;
 
}

Now the part that generates the command string is another method that gets called by the method above:

 
private string getCmdString(SqlCommand objBadCmd)
{
 
    if (objBadCmd == null)
        return string.Empty;
 
    if (objBadCmd.Parameters == null || objBadCmd.Parameters.Count == 0)
        return objBadCmd.CommandText;
 
    StringBuilder sb = new StringBuilder();
 
    try
    {
        sb.Append(objBadCmd.CommandText+" ");
 
        for (int i = 0; i < objBadCmd.Parameters.Count; i++)
        {
            SqlParameter p = objBadCmd.Parameters[i];
 
            if (i > 0)
                sb.Append(", ");
 
            sb.Append(p.ParameterName + "=");
 
            if (p.Value == null)
                sb.Append("null");
            else
            {
                switch (objBadCmd.Parameters[i].SqlDbType)
                {
                    case SqlDbType.BigInt:
                    case SqlDbType.Int:
                    case SqlDbType.Float:
                    case SqlDbType.Decimal:
                    case SqlDbType.SmallInt:
                    case SqlDbType.Money:
                    case SqlDbType.Image:
                    case SqlDbType.Real:
                    case SqlDbType.SmallMoney:
                    case SqlDbType.TinyInt:
                    case SqlDbType.DateTimeOffset:
                        sb.Append(p.Value.ToString());
                        break;
 
                    default:
                        sb.Append("'" + p.Value.ToString() + "'");
                        break;
 
                }
            }
        }
    }
    catch (Exception ex)
    {
        // Guard the Substring: the exception text may be shorter than 8000 characters.
        string err = ex.ToString();
        return "getCmdString created its own error processing:  " + 
            sb.ToString() + "      +++++" + err.Substring(0, Math.Min(err.Length, 8000));
    }
 
    return sb.ToString();
}

So now I have complete logging of exceptions in my applications, and it is a simple matter to make a page where I can view them or an RSS feed where they are sent to me.

Tags:

Development | SQL | ASP.Net

Change Connection String Password Programmatically

by jbrooks 6. December 2010 13:07

On one of our projects we have a requirement that we change our passwords every 90 days. So I created a page where I can just click a button; it will generate a password, change the password on the database for the user in the connection string, and then change the password in the web.config.

The page just has a button that says “Change Password” and a label called lblMessage to show the results.

The code behind just calls the class that does all of the work and then shows a message:

 
 
const string CONNSTRINGNAME = "SampleAppConnString";
const string WEBCONFIGFILE = "~/Web.Config";
 
protected void btnChangePassword_Click(object sender, EventArgs e)
{
    if (cConnStringPasswordChanger.ChangePassword(
            HttpContext.Current.Server.MapPath(WEBCONFIGFILE), CONNSTRINGNAME))
        this.lblMessage.Text = "Password has been changed";
    else
        this.lblMessage.Text = "Password has NOT been changed";
}

Now for the class that does all of the work:

 
public static class cConnStringPasswordChanger
{
 
    public static bool ChangePassword(string webConfig, string ConnStringName)
    {
        return ChangePassword(webConfig, ConnStringName, 
                Guid.NewGuid().ToString().Replace("-", "").Substring(0, 9));
    }
 
 
    // 1. Open the web.config.
    // 2. Change the connString to have the new password.
    // 3. Change the password on the database.
    // 4. Commit the web.config change.
    public static bool ChangePassword(string webConfig, string ConnStringName, 
        string newPassword)
    {
 
        bool changed = false;
        string oldConnString;
 
        FileInfo fi = new FileInfo(webConfig);
 
        if (fi.IsReadOnly)
        {
            File.SetAttributes(webConfig, FileAttributes.Normal);
        }
 
        fi = null;
 
        XmlDocument cfgDoc = new XmlDocument();
        cfgDoc.Load(webConfig);
 
        XmlNode connNode = cfgDoc.SelectSingleNode("//connectionStrings");
 
        XmlNode myNode = connNode.SelectSingleNode("//add[@name='" + ConnStringName + "']");
        oldConnString = myNode.Attributes["connectionString"].Value;
 
        string oldPassword = getValue(oldConnString, "Password");
 
        string newConnString = oldConnString.Replace(oldPassword, newPassword);
 
        myNode.Attributes["connectionString"].Value = newConnString;
 
        string userId = getValue(newConnString, "User Id");
 
        // Change the password on the database before touching the file;
        // creating the XmlTextWriter below truncates web.config.
        changePassword(oldConnString, userId, oldPassword, newPassword);
 
        XmlTextWriter writer = new XmlTextWriter(webConfig, null);
        writer.Formatting = Formatting.Indented;
 
        try
        {
            cfgDoc.WriteTo(writer);
            writer.Flush();
            writer.Close();
            changed = true;
        }
        catch
        {
            // error saving web.config change, so change it back on the database.
            changePassword(newConnString, userId, newPassword, oldPassword);
            throw;
        }
 
        writer = null;
        cfgDoc = null;
 
        return changed;
    }
 
    // This function is passed a connection string like:
    // "data source=.\SQLEXPRESS;Initial Catalog=MyDb;User Id=McUser;Password=c99c0472e;"
    // and a partName like "User Id" or "Password".
    // it returns the value for that partName.
    private static string getValue(string connString, string partName)
    {
        int partStart = connString.ToLower().IndexOf(partName.ToLower());
        int partEndSemi; 
        int partEndQuote;
            
 
        if (partStart > -1)
            partStart += partName.Length + 1;
        else
            throw new Exception(partName + " not found in connection string");
 
        partEndSemi = connString.Substring(partStart).IndexOf(";");
        partEndQuote = connString.Substring(partStart).IndexOf("\"");
            
 
        if (partEndQuote == -1)
            partEndQuote = connString.Length - partStart - 1;
 
        if (partEndSemi == -1)
            partEndSemi = connString.Length - partStart - 1;
                        
        return connString.Substring(partStart, Math.Min(partEndQuote, partEndSemi));
    }
 
    // Call the database to change the password.
    private static bool changePassword(string connString, string loginName, 
        string oldPassword, string newPassword)
    {
        bool changed = false;
 
        using (SqlConnection connection = new SqlConnection(connString))
        {
            SqlCommand command = new SqlCommand("dbo.sp_password", connection);
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@loginame", SqlDbType.NVarChar, 128).Value = loginName;
            command.Parameters.Add("@old", SqlDbType.NVarChar, 128).Value = oldPassword;
            command.Parameters.Add("@new", SqlDbType.NVarChar, 128).Value = newPassword;
            command.Connection.Open();
            command.ExecuteNonQuery();
            changed = true;
        }
 
 
        return changed;
 
    }
}
 

Tags:

ASP.Net | Development | SQL

Uploading XML Files to SQL Server

by jbrooks 18. November 2010 13:14

I had a large number of XML files that each contained hourly prices for a lot of different locations.  I needed to get all of the prices for a single location.  I did this by loading all of the files into a table and then querying the table.

First, I needed to change the rights on my database to allow me to run the xp_cmdshell stored procedure.

 
USE MASTER
GO
 
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
 
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE WITH OVERRIDE
GO
 
USE MyTestDB
GO
 

Then I needed to make the temporary table that would hold my file names and load the file names from the directory.

 
CREATE TABLE #DIR (fileName varchar(100))
 
INSERT INTO #DIR
EXEC master..xp_CmdShell 'DIR C:\RTHourly\*.xml /B'
 

A second table is needed to hold the content of my XML files; it has one row for each file. Note the XML data type for the last column, called priceData.

 
create table xmls
(id int identity,
fileName varchar(250),
priceData xml)
 

Now we are ready to loop through each file name found in the #DIR table and load its contents into the xmls table.

 
DECLARE @fileName AS varchar(250)
DECLARE @fullPathName AS varchar(250)
 
DECLARE file_cursor CURSOR FOR
SELECT fileName
FROM #DIR
WHERE fileName like '%.xml'
 
OPEN file_cursor
FETCH NEXT FROM file_cursor INTO @fileName;
 
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @fileName
 
    set @fullPathName = 'C:\RTHourly\'+ @fileName
    
 
    -- this must be dynamic SQL because the file name has to be a literal string for OPENROWSET
 
    exec (
    'INSERT INTO xmls(fileName, priceData)
    SELECT '''+@fileName+''' AS fileName, A.*
    FROM OPENROWSET( BULK '''+@fullPathName+''',SINGLE_CLOB)
    AS A')
 
    FETCH NEXT FROM file_cursor INTO @fileName;
END
 
CLOSE file_cursor;
DEALLOCATE file_cursor;
 

 

At this point all of the files are loaded so I just need to get the data out for my location “AZ”. 

 
select priceData.value('(//PriceGroup/Prices/Price[@location = "AZ"]/Detail/@utcTimepoint)[1]',
                       'varchar(150)') as time,
       priceData.value('(//PriceGroup/Prices/Price[@location = "AZ"]/Detail/@lmp)[1]',
                       'real') as price,
       fileName
from xmls
order by 1
 

 

Now let’s turn off the security hole that we created when we started this little project.

 
USE MASTER
GO
 
EXEC sp_configure 'xp_cmdshell', 0
RECONFIGURE WITH OVERRIDE
GO
 
 
EXEC sp_configure 'show advanced options', 0;
GO
RECONFIGURE;
GO
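
As an aside, if enabling xp_cmdshell isn't allowed in your environment, the whole load can be driven from the client side instead. Here is a sketch in PowerShell (the connection string and path are assumptions) that inserts each file into the xmls table with a parameterized command:

# Load every XML file into the xmls table without xp_cmdshell or OPENROWSET.
$connStr = 'Server=.;Database=MyTestDB;Integrated Security=True'
$conn = New-Object System.Data.SqlClient.SqlConnection -ArgumentList $connStr
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = 'INSERT INTO xmls(fileName, priceData) VALUES (@name, @xml)'
[void]$cmd.Parameters.Add('@name', [System.Data.SqlDbType]::VarChar, 250)
[void]$cmd.Parameters.Add('@xml',  [System.Data.SqlDbType]::Xml)

foreach ($f in Get-ChildItem 'C:\RTHourly\*.xml')
{
    $cmd.Parameters['@name'].Value = $f.Name
    $cmd.Parameters['@xml'].Value  = Get-Content $f.FullName -Raw
    [void]$cmd.ExecuteNonQuery()
}

$conn.Close()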
 

Tags:

Development | SQL

Passing IDs to Stored Procs Using XML

by jbrooks 15. October 2010 10:58

In the past we would use a varchar to pass a comma-delimited list of IDs to a stored proc, then build a big varchar string of the full SQL and run it as dynamic SQL. Starting with SQL Server 2005, you can pass the stored proc XML and do your select without using dynamic SQL.

The stored proc would look something like:

 
create proc [dbo].[getLocationTypes](
    @locationIds XML,
    @typeIds XML = null)
as
begin
    set nocount on

    SELECT locationId, typeId
    FROM xrefLocationTypes
    WHERE locationId IN (SELECT Item.value('.', 'int')
                         FROM @locationIds.nodes('IDList/ID') AS x(Item))
    -- a null @typeIds means "all types", so only filter when it is supplied
    AND (@typeIds IS NULL
         OR typeId IN (SELECT Item.value('.', 'int')
                       FROM @typeIds.nodes('IDList/ID') AS x(Item)))
    ORDER BY 1, 2
end
 
 
 
And then you would call this like:
 
 
EXECUTE dbo.getLocationTypes '<IDList><ID>1</ID><ID>3</ID></IDList>', 
'<IDList><ID>200</ID><ID>300</ID><ID>400</ID></IDList>'    
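
From the application side the XML is easy to build. For example, calling the proc from PowerShell (a sketch; the server, database, and the little helper function are just for illustration):

# Turn an array of IDs into the <IDList> shape the proc expects.
function ConvertTo-IdList([int[]] $ids)
{
    '<IDList>' + (($ids | ForEach-Object { "<ID>$_</ID>" }) -join '') + '</IDList>'
}

$locationIds = ConvertTo-IdList 1, 3
$typeIds     = ConvertTo-IdList 200, 300, 400

Invoke-Sqlcmd -ServerInstance '.' -Database 'MyDb' `
    -Query "EXECUTE dbo.getLocationTypes '$locationIds', '$typeIds'"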
 

Tags:

SQL | Development

Delete Spam Users from YetAnotherForum

by JBrooks 5. September 2010 14:21

 

I use YetAnotherForum (YAF) for my forums here. Like others, I get hundreds of spam users created each month. I wrote the following SQL to delete any user that hasn’t visited in the last day, has never posted or sent a message, isn’t the last user on a forum, and isn’t in the Active list.
 
-- first delete the UserGroup rows.
DELETE FROM dbo.yaf_UserGroup
WHERE UserID in
(
SELECT u.UserID
FROM dbo.yaf_User u
WHERE u.LastVisit < getdate() -1
AND u.NumPosts = 0)
AND UserID NOT IN
(
SELECT a.UserID
FROM dbo.yaf_Active a)
AND UserID NOT IN
(
SELECT f.LastUserID
FROM dbo.yaf_Forum f
WHERE f.LastUserID IS NOT NULL)
AND UserID NOT IN
(
SELECT pm.FromUserID
FROM dbo.yaf_PMessage pm)
AND UserID NOT IN
(
SELECT m.UserId
FROM dbo.yaf_Message m)
 
--
-- now delete the User rows.
--

DELETE FROM dbo.yaf_User
WHERE LastVisit < getdate() -1
AND NumPosts = 0
AND UserID NOT IN
(
SELECT a.UserID
FROM dbo.yaf_Active a)
AND UserID NOT IN
(
SELECT f.LastUserID
FROM dbo.yaf_Forum f
WHERE f.LastUserID IS NOT NULL)
AND UserID NOT IN
(
SELECT pm.FromUserID
FROM dbo.yaf_PMessage pm)
AND UserID NOT IN
(SELECT m.UserId
FROM dbo.yaf_Message m)

Tags:

SQL
