T-SQL


I came across an interesting little enhancement which I’d like to share with you.  I’m sure you’re all familiar with the following error message:

Msg 2601, Level 14, State 1, Line 4
Cannot insert duplicate key row in object ‘dbo.#t’ with unique index ‘PK_Unique’.

That’s right, the “hey, you’re inserting junk, stop that right now!” message.

However, earlier this week I encountered a variation of that error in our ETL logs:

Msg 2601, Level 14, State 1, Line 3
Cannot insert duplicate key row in object ‘dbo.#t’ with unique index ‘PK_Unique’. The duplicate key value is (1, 2).

I’m sure I don’t need to explain how useful that extra sentence can be, right?  So, then I started digging because obviously I want that error to always include the offending values.

Recently our database back-end servers have been replaced with new machines running SQL Server 2008 R2 SP1.  And indeed, as of R2 SP1, this error message has gotten an upgrade!

If you want to check for yourself, the following code snippet can be used to generate the error:

create table #t (PK1 int, PK2 int);
create unique index PK_Unique on #t(PK1, PK2);
insert into #t values (1, 2),(1, 2);

Now, message 2601 is not the only one complaining about duplicate keys.  Another example can be generated using the following query:

create table #t (PK1 int unique, PK2 int unique);
insert into #t values (1, 2),(1, 2);

Executing that snippet on SQL Server 2008 R2 SP1 results in this error:

Msg 2627, Level 14, State 1, Line 1

Violation of UNIQUE KEY constraint ‘UQ__#t________C5776555123EB7A3’. Cannot insert duplicate key in object ‘dbo.#t’. The duplicate key value is (2).

Do you notice the difference?  The message did indeed get an upgrade, but it cannot report a simultaneous violation on multiple columns, while msg 2601 can.  Hopefully they can get that fixed by SQL Server 2012 RTM!

Out of curiosity I decided to get a closer look at other possible variations on this error and dug into the sys.messages table using this query:

select * from sys.messages
where text like ('%duplicate key%');

Results of that query:

All error messages containing "duplicate key"
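By the way, if your instance has several message languages installed, sys.messages returns one row per language per message.  A sketch of how to narrow it down to just the English versions (assuming the standard language_id of 1033 for English):

```sql
select message_id, severity, text
from sys.messages
where text like '%duplicate key%'
  and language_id = 1033; -- 1033 = English (US)
```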

So, there’s one more to investigate: 1505.  Generating that error is not complicated.  We actually already have all the statements, we just need to run them in the right order, like so:

create table #t (PK1 int, PK2 int);
insert into #t values (1, 2),(1, 2);
create unique index PK_Unique on #t(PK1, PK2);

And what do we get?

Msg 1505, Level 16, State 1, Line 1

The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name ‘dbo.#t__________________________________________________________________________________________________________________00000000001D’ and the index name ‘PK_Unique’. The duplicate key value is (1, 2).

Nice, message 1505 also supports multiple columns, that makes two out of three!

Oh, and don’t forget to clean up (yes, I even do that for temporary tables).

drop table #t;

Additional Info

Some further digging around on the internet brought me to the following suggestion on Microsoft Connect: Duplicate Key Values by Anton Plotnikov.  Looks like our enhanced statements are the result of that request.

There’s another suggestion as well: Tweak To Duplicate Key Message by Louis Davidson.  Louis suggests adding all offending values to the message as well, but in a different situation than when dealing with multiple unique columns.  You can create one insert statement that inserts more than one record with offending values, and that’s the situation he’s referring to.  If you feel that’s important, get over to Connect and cast your vote!  I do think the number of values would need to be limited in that case, otherwise we might end up with really long error messages.

One last link, also by Louis Davidson.  He also wrote a blog post about the error message when he found out about it in Denali.

Have fun!

Valentino.


In this post I’m actually just going to repeat what I answered to a question on Experts Exchange.

The OP was having an issue getting a record count of all tables in his database.  It worked in Management Studio, using the undocumented but well-known sp_MSforeachtable stored procedure.

exec sp_MSforeachtable 'select ''?'' TableName, count(*) Cnt from ?';

However, the OP needed the list of record counts inside an Integration Services package.  And this didn’t work out as expected, because sp_MSforeachtable uses a temporary table, something that SSIS does not like.

So I proposed to use a script similar to the following, possibly in a stored procedure.

/*
DESCRIPTION: Returns a list of record counts for each table in the database
AUTHOR:         Valentino Vranken
CREATED:     2011/05/21
VERSION:     1.0
COPIED FROM: http://blog.hoegaerden.be
*/
declare @tempTable table
(
    TableSchema nvarchar(256),
    TableName nvarchar(256),
    Cnt bigint
);

declare @sql nvarchar(4000);
declare @tableSchema nvarchar(256);
declare @tableName nvarchar(256);
declare @cnt bigint;

declare tableCursor cursor for
    select TABLE_SCHEMA, TABLE_NAME from INFORMATION_SCHEMA.TABLES
    where TABLE_TYPE = 'BASE TABLE';

open tableCursor;

fetch next from tableCursor into @tableSchema, @tableName;

while @@FETCH_STATUS = 0
begin
    set @sql = 'select @cnt = COUNT(*) from [' + @tableSchema + '].[' + @tableName + ']';

    exec sp_executesql @sql, N'@cnt bigint output', @cnt = @cnt output;

    insert into @tempTable select @tableSchema, @tableName, @cnt;

    fetch next from tableCursor into @tableSchema, @tableName;
end;

close tableCursor;
deallocate tableCursor;

select * from @tempTable;

Yeah, I know, cursors are bad… But hey, at least it works!  Just don’t schedule it for execution every five seconds.
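For completeness: if approximate numbers are good enough, there’s also a set-based alternative that reads the row counts straight from the metadata instead of scanning each table.  This is a sketch, not part of the original answer, and keep in mind the counts in sys.partitions are maintained by the engine and may be slightly off:

```sql
-- Approximate record counts from metadata (no table scans)
select s.name as TableSchema, t.name as TableName, SUM(p.rows) as Cnt
from sys.tables t
    inner join sys.schemas s on s.schema_id = t.schema_id
    inner join sys.partitions p on p.object_id = t.object_id
where p.index_id in (0, 1)  -- 0 = heap, 1 = clustered index
group by s.name, t.name
order by s.name, t.name;
```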

I seem to be repeating myself.  I wrote something similar when I posted the script about finding empty columns many months ago.

The most important part of the script is the following:

select TABLE_SCHEMA, TABLE_NAME from INFORMATION_SCHEMA.TABLES
where TABLE_TYPE = 'BASE TABLE';

This uses one of the information schema views to retrieve the list of tables in the database.  The INFORMATION_SCHEMA.TABLES view contains both tables and views, so the where clause is needed to filter out the views.  INFORMATION_SCHEMA.TABLES is just one of many interesting views available in SQL Server, and should be used in preference to the sys tables.
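To see for yourself why that filter is needed, you can check which TABLE_TYPE values the view returns; on most databases you’ll find both ‘BASE TABLE’ and ‘VIEW’:

```sql
select TABLE_TYPE, COUNT(*) as NumberOfObjects
from INFORMATION_SCHEMA.TABLES
group by TABLE_TYPE;
```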

What does the output look like?  Here’s part of the output when executed on the AdventureWorks2008R2 database:

Record count of all tables in the AdventureWorks2008R2 database (although not all are shown, you get the idea)

Have fun!

Valentino.


With the holidays I haven’t been able to write much.  So I’ll make up for it with this 3,000+ word article.  If you’re reading this early in the morning, you’d better get a double espresso first ;-)

In this article I will demonstrate a method that can be used to calculate aggregations over a certain period of time in the past, or LastXMonths aggregations as I’m calling them throughout the article.  I’ll be using T-SQL, SQL Server Integration Services and a relational database as source.  More specifically I will be using the Merge Join data transformation in SSIS, and Common Table Expressions in T-SQL.

Version-wise I’m using SQL Server 2008 R2, but this method should work as of SQL Server 2005.  Furthermore I’m using the Contoso DWH, available for download at the Microsoft Download Center.  (In case you’re wondering, it’s the .BAK file.)

You can download the finished SSIS package from my Skydrive.  (The file is called MergeJoin.dtsx.)

The Scenario

Let’s say we’ve got a relational database containing some sales figures.  Management has asked for sales-related data to be available somewhere for easy analysis.  Ideally a cube would be built for that purpose but as budgets are currently tight, a temporary solution needs to be provided meanwhile.  So it’s been decided that an additional table will be created, populated with the exact data as required by management.  This table should contain all details (number of items and amount of the sale) about products sold, grouped by the date of the sale, the zip code of the place where the sale occurred and the category of the product.

Furthermore, each record should contain the sum of all sales of the last month for the zip code and product category of each particular record.  Two additional aggregations should calculate the sales for the last three months and last six months.

A Simple Example

To make sure we’re all on the same track on the requirements, here’s a small example to illustrate the expected outcome.

Small example displaying the expected outcome of the process

I’ve omitted the SalesAmount numbers for readability.  The records are ordered chronologically, oldest first.  As you can see, the bottom record shows 16 as the value for Last6MSalesQuantity.  This is the sum of the SalesQuantity of the current record and the SalesQuantity of the previous record, which happens to fall within the six-month timespan preceding the bottom record’s SaleDate.  The two other records do not fall within that timespan and are thus not included in the sum for the bottom record’s Last6MSalesQuantity.

Fetching The Data Into A Table

Our scenario requires that the sales figures are calculated and put into a new table.  Let’s first start with creating the queries to fetch the data.

Step 1: The Daily Numbers

The easiest part is the daily sales numbers.  These can be retrieved fairly easily from the Contoso data warehouse, just by using a GROUP BY clause as shown in the following query.

--daily sales
select DD.Datekey, DS.ZipCode, DPC.ProductCategoryName,
    SUM(FS.SalesAmount) SalesAmount_SUM,
    SUM(FS.SalesQuantity) SalesQuantity_SUM
from dbo.FactSales FS
    inner join dbo.DimStore DS on DS.StoreKey = FS.StoreKey
    inner join dbo.DimProduct DP on DP.ProductKey = FS.ProductKey
    inner join dbo.DimProductSubcategory DPS
        on DPS.ProductSubcategoryKey = DP.ProductSubcategoryKey
    inner join dbo.DimProductCategory DPC
        on DPC.ProductCategoryKey = DPS.ProductCategoryKey
    inner join dbo.DimDate DD on DD.Datekey = FS.DateKey
group by DD.Datekey, DS.ZipCode, DPC.ProductCategoryName
order by DD.Datekey asc, DS.ZipCode asc, DPC.ProductCategoryName asc;

Part of the result of that query looks like this:

Result of the daily sales query

Nothing special to mention so far so let’s continue to the next step.

Step 2: The Monthly Numbers

In this step, we’ll use the query from step 1 as base for the full query.  I’ll first show you the query and then provide you with some explanation of what’s going on.

--LastMonth
declare @numberOfMonths tinyint = 1;
with DailySalesData as
(
    select DD.Datekey, DS.ZipCode, DPC.ProductCategoryName,
        SUM(FS.SalesAmount) SalesAmount_SUM,
        SUM(FS.SalesQuantity) SalesQuantity_SUM
    from dbo.FactSales FS
        inner join dbo.DimStore DS on DS.StoreKey = FS.StoreKey
        inner join dbo.DimProduct DP on DP.ProductKey = FS.ProductKey
        inner join dbo.DimProductSubcategory DPS
            on DPS.ProductSubcategoryKey = DP.ProductSubcategoryKey
        inner join dbo.DimProductCategory DPC
            on DPC.ProductCategoryKey = DPS.ProductCategoryKey
        inner join dbo.DimDate DD on DD.Datekey = FS.DateKey
    group by DD.Datekey, DS.ZipCode, DPC.ProductCategoryName
),
UniqueRecordsPerDay as
(
    select Datekey, ZipCode, ProductCategoryName
    from DailySalesData
    group by Datekey, ZipCode, ProductCategoryName
)
select UR.Datekey, DSD.ZipCode, DSD.ProductCategoryName,
    SUM(DSD.SalesAmount_SUM) SalesAmount_SUM,
    SUM(DSD.SalesQuantity_SUM) SalesQuantity_SUM
from DailySalesData DSD
    inner join UniqueRecordsPerDay UR
            on UR.ProductCategoryName = DSD.ProductCategoryName
        and UR.ZipCode = DSD.ZipCode
        and DSD.Datekey
            between DATEADD(month, -@numberOfMonths, UR.Datekey + 1)
            and UR.Datekey
group by UR.Datekey, DSD.ZipCode, DSD.ProductCategoryName
order by UR.Datekey asc, DSD.ZipCode asc, DSD.ProductCategoryName asc;

The query uses a variable called @numberOfMonths.  This will allow us to use the same query for the totals of last month, as well as for the Last3M and the Last6M numbers.  All that’s needed is changing the variable to 3 or 6.

But how does the query get to the results?  To start, it uses two CTEs (Common Table Expressions).  The first one is called DailySalesData.  And the query for that CTE should look familiar to you by now: it’s the one from step 1, without the ORDER BY clause.

The second CTE is called UniqueRecordsPerDay and gives us one record for each unique date, zip code and product category as found in the Contoso data.  The DateKey, ZipCode and ProductCategoryName fields are our key grouping fields.  And this CTE is actually the key to calculating the monthly aggregated data, as I’ll explain next.

What the main query does is the following.  It selects the data from the DailySalesData CTE and joins that with the unique records per day recordset.  All grouping key fields need to be included in the join.  However, as you can see, to add the DateKey into the join I’m not just using the equals operator but the BETWEEN keyword instead.  I’ve also used the DATEADD function to subtract the number of months as specified through the @numberOfMonths variable.  That statement is saying: “give me all records starting from DateKey, going back @numberOfMonths”.  The query again groups by the key fields to be able to sum the records up.

This construction ensures that the SalesAmount_SUM and SalesQuantity_SUM fields represent the sum for the record’s zip code and product category and for the period as indicated by the @numberOfMonths variable.
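To see the window boundaries in isolation, here’s a small sketch with a hypothetical DateKey:

```sql
-- With @numberOfMonths = 1 and a DateKey of 31 Jan 2011,
-- the BETWEEN window runs from 1 Jan 2011 up to 31 Jan 2011.
declare @numberOfMonths tinyint = 1;
declare @dateKey datetime = '20110131';
select DATEADD(month, -@numberOfMonths, @dateKey + 1) as WindowStart, -- 2011-01-01
       @dateKey as WindowEnd;                                         -- 2011-01-31
```

Adding one day before subtracting the months is what prevents the first day of the window from being counted twice across consecutive windows.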

Step 3: Merging It All Together Into One Table

Now that we know how to retrieve the data, we still need to get it into a table.  One option would be to use the INSERT statement on the daily records, followed by UPDATE statements to populate the monthly (1, 3, 6) aggregated columns.  However, I’m a BI guy so let’s use an SSIS package to get to the result (plus it allows me to illustrate the Merge Join data flow transformation :-) ).

So open up the BIDS and create a new package.  Drop a Data Flow Task into the Control Flow and add a Connection Manager connecting to your Contoso DWH.  Then switch to the Data Flow page.

Nothing special so far I believe.  Next we need to set up four Data Flow Sources: one for the daily figures, one for the monthly, one for the 3M and one for the 6M data.

Setting Up The Data Sources

Throw in an OLE DB Source component, configure it to use your connection manager and copy/paste the first query above into the command textbox.  Again nothing special, right?

However, the Merge Join component expects its incoming data to be sorted.  That’s why I’ve included the ORDER BY clause in the queries above.  But that’s not all.  Connecting our data source to a Merge Join transformation without any additional change will result in an error such as the following:

Validation error. Data Flow Task Merge Join [457]: The input is not sorted. The “input “Merge Join Left Input” (458)” must be sorted.

To avoid this error, we need to explicitly inform our data flow that the data is actually ordered, and we need to give it all the details: on what fields has the data been ordered and in what order!  And that needs to be done through the Advanced Editor.

So, right-click the OLE DB Source and select Show Advanced Editor.

Right-click OLE DB Source to open up the Advanced Editor

In the Advanced Editor, navigate to the last tab called Input and Output Properties and select the “OLE DB Source Output” node in the tree structure on the left.  Doing that will show the properties for the selected output and one of those properties is called IsSorted.  By default it is set to False.  Set it to True.

Tip: double-clicking the label of the property will swap its value to the other value.  This is mostly useful when you need to change several options, but even here it saves a couple of clicks.  It’s all about optimization. :-)

Advanced Editor on OLE DB Source: the IsSorted property

At this moment the component knows that the incoming data is sorted, but it still doesn’t know on what fields.  To specify that, open up the OLE DB Source Output node, followed by the Output Columns node.  You’ll now see the list of fields.  As specified in the query, the data is ordered firstly on DateKey, secondly on ZipCode and thirdly on ProductCategoryName.

Select DateKey to see its properties.

Advanced Editor of OLE DB Source showing the SortKeyPosition property

The property in which we’re interested here is called SortKeyPosition.  By default it is set to zero.  When the incoming data is sorted,  this property should reflect in what order the data is sorted, starting with one for the first field.  So in our case here the value should be set to 1.

Set the SortKeyPosition property for ZipCode to 2 and for ProductCategoryName to 3.

That’s one of the four OLE DB sources set up.  The other three will be easier as we can start from the first one.  So, copy and paste the source component, open it by double-clicking it and replace the query with our second query from earlier, the one returning the monthly figures.  Oh, and give it a decent name, but I’m sure you knew that.

Create the third source component in the same way, but change the value for the @numberOfMonths variable to 3.  And again the same process for source number four, changing the variable’s value to 6.

Here’s what we have so far:

Four OLE DB sources set up - waiting to be merged

Merging The Sources Into One Flow

Next up is merging the incoming flows.  Drag a Merge Join data flow transformation under the Daily Sales source and connect the source to the Merge Join.  That will open the following Input Output Selection screen.

Input Output Selection window

A Merge Join expects two inputs: one is called the Left Input and the other is called the Right Input.  Select Merge Join Left Input as value for the Input dropdown.

Close the popup window and connect the second source (with the monthly data) as well to the Merge Join.  There’s only one input remaining so this one is automatically the right input – no popup window is shown.

Next we need to configure the Merge Join so that it merges the data as expected.  Open the Merge Join Transformation Editor by double-clicking the component.

Merge Join Transformation Editor

By default the Join type dropdown is set to Inner join.  In our situation that’s good enough: even when only one record exists for a certain zip code and product category on a given day, the monthly data for that record is simply the sum of that one record.  In other words, each incoming flow always has at least one record to combine with the other, so no rows get lost.

As you can see, because both incoming flows are ordered in the same way, it automatically knows on which fields to put the join.

By default, no output fields are created as the white bottom half of the screenshot indicates.

Now I’ll show you a screenshot of the expected setup:

Merge Join Transformation Editor set up as expected

There are several ways to specify the output fields.  The first method is by using the dropdown in the Input column.  Selecting a value there will populate a dropdown in the column called Input Column (djeez, that was one column too much).  Here’s what that method looks like:

Specifying the output fields by using the dropdowns

Selecting a value in the second column will then give you a default value for the Output Alias.  This default can be freely modified.  As you may have guessed, this is not my preferred method – way too many comboboxes.

Another method of specifying the output fields is by using the checkboxes in front of the fields in the top part of the window.  I believe the larger screenshot above says it all.  Just check the fields that you need and then change their default Output Alias to whatever suits you.   In my example here I only needed to modify the alias for the last two records.

With our first Merge Join set up, only two are remaining.  So drag in a second Merge Join from the Toolbox, connect the output of the first join as Left Input on the second join and add the output of the third OLE DB source as Right Input.

Interesting to note here is that the output of the Merge Join is sorted in the same manner as its inputs.  One way of verifying this is by right-clicking the connector between the two joins and choosing Edit.

Right-click data flow connector and select Edit to open up Data Flow Path Editor

That opens up the Data Flow Path Editor.

Tip: double-clicking the connector will also open the editor!

Examine the Metadata of the Data Flow Path to verify the sort order

As you can see in the above screenshot, the metadata page shows a list of the available fields with some properties, such as the Sort Key Position.  Now if that doesn’t look familiar?! :-)

So far, the second Merge Join has been added and connected but it hasn’t been configured yet.  The process is very similar to the way we’ve set up the first join.  Just select all fields from the left input by checking all the checkboxes and select the two SUM fields from the right input.  Don’t forget to give those SUM fields a clear name.

Two joins done, one remaining.  Just drag one in and connect it with the second join plus the last remaining OLE DB source.  I won’t go into further details here, it’s exactly the same as I just explained for the second join.

Here’s what the Data Flow should look like:

The Data Flow with all the Merge Joins connected

And here’s what the third Merge Join should look like:

The third Merge Join as set up for the example

An Error That You May Encounter

When using sorted data flows and the Merge Join component, you may encounter the following error message:

An error that you may encounter while using the Merge Join component

And now in words for the search engines:

The component has detected potential metadata corruption during validation.

Error at Data Flow Task [SSIS.Pipeline]: The IsSorted property of output “Merge Join Output” (91) is set to TRUE, but the absolute values of the non-zero output column SortKeyPositions do not form a monotonically increasing sequence, starting at one.

Yeah right, you had to read that twice, didn’t you?  And the best is yet to come:

Due to limitations of the Advanced Editor dialog box, this component cannot be edited using this dialog box.

So there’s a problem with your Merge Join but you cannot use the Advanced Editor to fix it.  Hmm, and you call that the ADVANCED editor?  Is there anything more advanced perhaps?  Well, actually, there is.  It’s called the Properties pane.  With the Merge Join selected, one of the properties there is called NumKeyColumns.  That property reflects how many columns the incoming data is sorted on.  And currently it contains the wrong value.  Changing it to the correct number of columns will remove the error.

Properties pane displaying the Merge Join's properties, including NumKeyColumns

In case you’re wondering when you might encounter this particular problem, here’s how you can simulate it.  (Don’t forget to make a copy of the package before messing around with it.)

With the package as it currently is, remove the ZipCode field from the first two sources by unchecking it in the Columns page of the OLE DB Source Editor.

The sources are now complaining so open up their Advanced Editor and correct the SortKeyPosition of the ProductCategoryName field: it should become 2 instead of 3 because ZipCode was 2 and has been removed.

Now try to open the first Merge Join.  The first time it will complain about invalid references so delete those.  With the references deleted, if you now try to open the Merge Join editor, you’ll see the error we’re discussing here.  To fix it, change the NumKeyColumns property of the Merge Join to 2 instead of 3.

Adding The Destination Table

Now there’s only one step remaining: adding a destination for our merged data.  So, throw in an OLE DB Destination and connect it with the output of the last Merge Join:

An OLE DB Destination connected to the join that merges it all together

I’ll just use a quick and dirty way of creating a new table in the database.  Open up the OLE DB Destination Editor by double-clicking it and select a Connection Manager in the dropdown.  Now click the New button next to the Name of the table or the view dropdown.

That opens up the Create Table window, with a CREATE TABLE query pre-generated for you for free.  Isn’t that nice?  Change the name of the table to something nice (at least remove those spaces, yuk!!) and click OK.

The Create Table window

The new table is created at the moment that the OK button gets clicked.

Right, so are we there?  Well, almost.  As you can see now in the next screenshot, the BIDS does not want us to click the OK button just yet.

The OLE DB Destination Editor with the Mappings still missing

To resolve that warning, just open the Mappings page.  As the names of the input columns are matching exactly with the names of the fields in the destination table, everything will be automagically configured at this moment.  So now you can close the window with the OK button.

And that’s it!  Everything is set up to populate the new table with the aggregated figures, as requested by management.  To give it a run, right-click your package in the Solution Explorer and guess what… select Execute Package!  If everything has been configured as expected, you should get some green boxes soon.  And some data in the table, like this:

The final result: sales figures aggregated over different periods in time

Conclusion

In this article I’ve demonstrated a way to aggregate data over different periods in time, using T-SQL and Integration Services.  Obviously this method does not replace the flexibility that one gets when analyzing data stored in an OLAP cube, but it can be a practical method when you quickly need to provide aggregated data for management.

Have fun!

Valentino.

References

Merge Join Data Flow Transformation

Common Table Expressions (CTEs)

DATEADD() function


In this article I will show you a couple of different T-SQL queries to fetch aggregated data.  The main purpose is to illustrate how the OVER clause can be used to aggregate data.

For the examples I will use data from the AdventureWorks2008R2 database, available at CodePlex.

The Data

The AdventureWorks 2008 R2 database contains a view called Sales.vSalesPerson.  This is the data with which I’ll be working in the examples below.  Here’s what it looks like:

My Working Data

I’ve hidden some fields so that all the relevant ones are in view.

The Scenario

Your manager has asked you to create one query, to be executed against the Sales.vSalesPerson view, that returns a list of:

  • all employees (FirstName, LastName, JobTitle, CountryRegionName, StateProvinceName, City),
  • their sales of last year (SalesLastYear),
  • the sum of the sales of last year for their country,
  • the average of the sales of last year compared to all employees with the same type of phone (PhoneNumberType)
  • the overall average and sum of the sales of last year.

Using Derived Tables

No problem you say, coming right up.  So you start building your query, retrieving all fields as requested.

After quite some typing, here’s what your query looks like:

select S.FirstName, S.LastName, S.JobTitle, S.PhoneNumberType, S.CountryRegionName,
    S.StateProvinceName, S.City, S.SalesLastYear,
    GeographicSales.SalesLastYearGeographic_SUM,
    SalesByPhoneType.SalesLastYearByPhoneNumberType_AVG,
    SalesSUM.SalesLastYear_AVG, SalesSUM.SalesLastYear_SUM
from Sales.vSalesPerson S
--Derived Table 1: the overall aggregates
cross join (
    select SUM(SalesLastYear) SalesLastYear_SUM, AVG(SalesLastYear) SalesLastYear_AVG
    from Sales.vSalesPerson
) SalesSUM
--Derived Table 2: the aggregate on Country level
inner join (
    select CountryRegionName, SUM(SalesLastYear) SalesLastYearGeographic_SUM
    from Sales.vSalesPerson
    group by CountryRegionName
) GeographicSales on GeographicSales.CountryRegionName = S.CountryRegionName
--Derived Table 3: the aggregate on phone type
inner join (
    select PhoneNumberType, AVG(SalesLastYear) SalesLastYearByPhoneNumberType_AVG
    from Sales.vSalesPerson
    group by PhoneNumberType
) SalesByPhoneType on SalesByPhoneType.PhoneNumberType = S.PhoneNumberType;

The main query is retrieving all fields as requested.  Further down there are three derived table queries, each one retrieving aggregates on a different level.

The first derived table is retrieving the overall aggregates.  These are cross-joined with every record in our main query so for each record the totals will be the same, which is what we want.

The second derived table retrieves the aggregates on Country level, including the CountryRegionName.  This is done using the conventional GROUP BY method.  The CountryRegionName is the key on which the derived table is joined to the main table.

The third derived table uses a similar system, this time for the aggregate on phone type.

And here’s the query’s output:

Output of the query using subqueries

Happy with this result, you go up to the cafeteria to finally have lunch with your colleagues (who left 15 minutes earlier but you wanted to get your query finished first).

Using The OVER Clause

During lunch you explain to your peers what kind of funny request you got from management and told them how you solved it.

Then one of them speaks up and says: “Want to know how you can avoid all that typing?  Use the OVER clause!  I’ll show you when we are back at our desks.”

After lunch, here’s what your colleague helps to produce:

select S.FirstName, S.LastName, S.JobTitle, S.PhoneNumberType, S.CountryRegionName,
    S.StateProvinceName, S.City, S.SalesLastYear,
    SUM(SalesLastYear) OVER (PARTITION BY CountryRegionName)
        SalesLastYearGeographic_SUM,
    AVG(SalesLastYear) OVER (PARTITION BY PhoneNumberType)
        SalesLastYearByPhoneNumberType_AVG,
    SalesSUM.SalesLastYear_AVG, SalesSUM.SalesLastYear_SUM
from Sales.vSalesPerson S
--Derived Table 1: the overall aggregates
cross join (
    select SUM(SalesLastYear) SalesLastYear_SUM, AVG(SalesLastYear) SalesLastYear_AVG
    from Sales.vSalesPerson
) SalesSUM;

As you can see, derived tables 2 and 3 are gone.  They have been replaced with the OVER clause, in combination with PARTITION BY.  What the OVER clause says is: “partition the dataset by the fields specified in the PARTITION BY and apply the aggregation to each partition”.  Another term for this is an aggregate window function.

As you like the approach, you ask your co-worker how you can get rid of that cross join.  He doesn’t really know but then another colleague who overheard your conversation says: “On this blog the other day I read that you can use the OVER clause and partition by anything you want.  As long as it’s a constant, it will work!”.

So you give that a try and you end up with the following final query:

select FirstName, LastName, JobTitle, PhoneNumberType, CountryRegionName,
    StateProvinceName, City, SalesLastYear,
    SUM(SalesLastYear) OVER (PARTITION BY CountryRegionName)
        SalesLastYearGeographic_SUM,
    AVG(SalesLastYear) OVER (PARTITION BY PhoneNumberType)
        SalesLastYearByPhoneNumberType_AVG,
    AVG(SalesLastYear) OVER (PARTITION BY 'duh') SalesLastYear_AVG,
    SUM(SalesLastYear) OVER (PARTITION BY 1) SalesLastYear_SUM
from Sales.vSalesPerson;

As illustrated in the example, you can use any constant value to calculate overall aggregates over the whole dataset using the OVER clause.

You happily thank your colleagues and tell them that next time you’ll be able to join them for lunch on time.

About a week later you’re explaining to one of your friends how you’ve gotten to know the OVER clause. After hearing how you use it to aggregate over the whole dataset, he smiles and says: “I know how you can simplify it even more! Don’t partition at all!”

Taking a closer look, it turns out that PARTITION BY is actually optional:

Ranking Window Functions:

< OVER_CLAUSE > ::= OVER ( [ PARTITION BY value_expression , ... [ n ] ] <ORDER BY_Clause> )

Aggregate Window Functions:

< OVER_CLAUSE > ::= OVER ( [ PARTITION BY value_expression , ... [ n ] ] )

See those square brackets? They mean it’s optional.

So here is the real final query:

select FirstName, LastName, JobTitle, PhoneNumberType, CountryRegionName,
    StateProvinceName, City, SalesLastYear,
    SUM(SalesLastYear) OVER (PARTITION BY CountryRegionName)
        SalesLastYearGeographic_SUM,
    AVG(SalesLastYear) OVER (PARTITION BY PhoneNumberType)
        SalesLastYearByPhoneNumberType_AVG,
    AVG(SalesLastYear) OVER () SalesLastYear_AVG,
    SUM(SalesLastYear) OVER () SalesLastYear_SUM
from Sales.vSalesPerson;

Conclusion

When you compare the final query with the first one, tell me, which one would you prefer to maintain?  Do you prefer to have lunch with your peers or to arrive late and miss all the fun?

Have fun!

Valentino.

References

OVER Clause (Transact-SQL)

Join Fundamentals

SELECT (Transact-SQL)


A while ago I posted a query to create a list of all the Integration Services packages deployed to the MSDB.  I am now using that query to take it a step further.

If you’ve been using SSIS for a while you’ve probably noticed that Management Studio refuses to delete Integration Services folders that are not empty.  It will first politely ask if you’re sure you want to delete the folder on which you’ve just selected the “Delete” option through the right-click menu.

Right-click pop-up menu on SSIS folder

I am sure I want to delete this non-empty SSIS folder

So you click the Yes button.  But then it shows you the following message:

SSIS folder ‘FolderWithSubfolders’ contains packages and/or other folders. You must drop these first. (Microsoft SQL Server Native Client 10.0)

Graphically it looks like this:

Object Explorer pop-up: you can't delete SSIS folders that contain packages or other folders

And this message can get really annoying if you’ve got a main folder with, say, five subfolders, each containing 20-30 packages.  To get rid of the main folder you first need to delete each package separately, then delete the five subfolders, and only then can you delete the main folder itself.  And all of that through the right-click pop-up menu, because you can’t just select the object in the Object Explorer and hit the Delete key on the keyboard: it doesn’t trigger any action on SSIS objects…

So, I wasn’t planning on doing such a job manually and came up with the following stored procedure.

It’s probably a bit long, but don’t run away just yet: I’ll explain what’s going on below the code, and there are some comments in the code as well.

/*
DESCRIPTION: Deletes all folders and packages under, and including, specified folder.
WRITTEN BY:  Valentino Vranken
CREATED:     2010-02-28
VERSION:     1.0
USAGE:
  -- mind the forward slash
  EXEC dbo.SSIS_RecursiveDeleteFolder '/FolderWithSubfolders'
  -- to delete a subfolder
  EXEC dbo.SSIS_RecursiveDeleteFolder '/FolderWithSubfolders/ASubfolderWithPackages'

COPIED FROM: http://blog.hoegaerden.be

Note 1: folder names are not case-sensitive
Note 2: uses system tables and (undocumented) stored procedures located in MSDB.
Note 3: this code was written for SQL Server 2008. For 2005:
  o sysssispackagefolders -> sysdtspackagefolders90
  o sysssispackages -> sysdtspackages90
  o sp_ssis_deletefolder -> sp_dts_deletefolder
  o sp_ssis_deletepackage -> sp_dts_deletepackage
*/
CREATE PROCEDURE dbo.SSIS_RecursiveDeleteFolder
    @Folder varchar(2000)
AS
BEGIN
    set nocount on;

    declare @foldersToDelete table
    (
        folderid uniqueidentifier,
        Lvl int
    );

    declare @packagesToDelete table
    (
        PackageName sysname,
        folderid uniqueidentifier,
        Lvl int
    );

    --retrieve list of folders to be deleted
    with ChildFolders
    as
    (
        select PARENT.parentfolderid, PARENT.folderid, PARENT.foldername,
            cast('' as sysname) as RootFolder,
            cast(PARENT.foldername as varchar(max)) as FullPath,
            0 as Lvl
        from msdb.dbo.sysssispackagefolders PARENT
        where PARENT.parentfolderid is null
        UNION ALL
        select CHILD.parentfolderid, CHILD.folderid, CHILD.foldername,
            case ChildFolders.Lvl
                when 0 then CHILD.foldername
                else ChildFolders.RootFolder
            end as RootFolder,
            cast(ChildFolders.FullPath + '/' + CHILD.foldername as varchar(max))
                as FullPath,
            ChildFolders.Lvl + 1 as Lvl
        from msdb.dbo.sysssispackagefolders CHILD
            inner join ChildFolders on ChildFolders.folderid = CHILD.parentfolderid
    )
    insert into @foldersToDelete
    select F.folderid, F.Lvl
    from ChildFolders F
    where F.FullPath like @Folder + '%';

    --retrieve list of packages to be deleted
    with ChildFolders
    as
    (
        select PARENT.parentfolderid, PARENT.folderid, PARENT.foldername,
            cast('' as sysname) as RootFolder,
            cast(PARENT.foldername as varchar(max)) as FullPath,
            0 as Lvl
        from msdb.dbo.sysssispackagefolders PARENT
        where PARENT.parentfolderid is null
        UNION ALL
        select CHILD.parentfolderid, CHILD.folderid, CHILD.foldername,
            case ChildFolders.Lvl
                when 0 then CHILD.foldername
                else ChildFolders.RootFolder
            end as RootFolder,
            cast(ChildFolders.FullPath + '/' + CHILD.foldername as varchar(max))
                as FullPath,
            ChildFolders.Lvl + 1 as Lvl
        from msdb.dbo.sysssispackagefolders CHILD
            inner join ChildFolders on ChildFolders.folderid = CHILD.parentfolderid
    )
    insert into @packagesToDelete
    select P.name, F.folderid, F.Lvl
    from ChildFolders F
        inner join msdb.dbo.sysssispackages P on P.folderid = F.folderid
    where F.FullPath like @Folder + '%';

    --use cursor to loop over objects to be deleted
    declare objectsToDelete_cursor cursor
    for
        select P.folderid, P.Lvl, P.PackageName, 'P' as ObjectType
        from @packagesToDelete P
        UNION ALL
        select F.folderid, F.Lvl, null, 'F'
        from @foldersToDelete F
        order by Lvl desc, ObjectType desc;

    open objectsToDelete_cursor;

    declare @folderid uniqueidentifier;
    declare @lvl int;
    declare @packageName sysname;
    declare @objectType char(1);

    fetch next from objectsToDelete_cursor
    into @folderid, @lvl, @packageName, @objectType;

    while @@FETCH_STATUS = 0
    begin
        if @objectType = 'F'
        begin
            print 'exec msdb.dbo.sp_ssis_deletefolder '
                + cast(@folderid as varchar(max));
            exec msdb.dbo.sp_ssis_deletefolder @folderid;
        end
        else
        begin
            print 'exec msdb.dbo.sp_ssis_deletepackage '
                + @packageName + ', ' + cast(@folderid as varchar(max));
            exec msdb.dbo.sp_ssis_deletepackage @packageName, @folderid;
        end

        fetch next from objectsToDelete_cursor
        into @folderid, @lvl, @packageName, @objectType;
    end;

    close objectsToDelete_cursor;
    deallocate objectsToDelete_cursor;
END

Before trying to dismantle this stored procedure, I recommend reading my previous article on retrieving the list of packages.  That already explains half of the code, if not 75%.

Our mission is to find a way to recursively delete packages and folders contained in a specified folder.  To be able to loop over those objects in the correct order (from the deepest level up until the level of the folder specified), the SP creates two table variables: one to hold all folders under the specified folder (@foldersToDelete) and one to hold the packages under the specified folder, including all subfolders (@packagesToDelete).

Based on those two lists, I create a cursor that joins them together, taking their level and object type into consideration.  That’s important because we first need to delete the packages in the deepest folders, followed by the folder containing them, and then move one level up and do the same.
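To see why the cursor’s ORDER BY Lvl desc, ObjectType desc does the trick, here’s a small standalone sketch with made-up names and levels.  Because 'P' sorts after 'F', sorting ObjectType descending puts the packages before the folders at each level:

-- Illustration only: hypothetical objects with their nesting level
declare @objects table (Name varchar(20), Lvl int, ObjectType char(1));
insert into @objects values
    ('DeepPackage', 2, 'P'), ('DeepFolder', 2, 'F'),
    ('SubPackage',  1, 'P'), ('SubFolder',  1, 'F'),
    ('MainFolder',  0, 'F');

select Name, Lvl, ObjectType
from @objects
order by Lvl desc, ObjectType desc;
-- Result order: DeepPackage, DeepFolder, SubPackage, SubFolder, MainFolder

So the deepest package is deleted first and the top-level folder last, which is exactly what the SSIS catalog requires.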

We then use the cursor to loop over the packages and folders, calling one of two undocumented system stored procedures (one per object type) to delete each package or folder.  These system SPs are located in MSDB.  Here’s how they are defined:

ALTER PROCEDURE [dbo].[sp_ssis_deletefolder]
  @folderid uniqueidentifier
AS

ALTER PROCEDURE [dbo].[sp_ssis_deletepackage]
  @name sysname,
  @folderid uniqueidentifier
AS

As you can see, the parameters for these procedures are not complicated.  Both of them expect a uniqueidentifier identifying the folder.  That’s not a problem: these IDs are stored in the msdb.dbo.sysssispackagefolders table and are retrieved by the queries that build our list of to-be-deleted objects.

Furthermore, the sp_ssis_deletepackage SP expects the name of the package to be deleted.  Not a problem either, those names are obtained from the msdb.dbo.sysssispackages table.
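If you ever need to call these undocumented SPs directly, the pattern looks like this.  The folder GUID below is the one from my test output further down and the package name is just an example; substitute your own values:

-- Illustration: deleting one package and then its (now empty) folder
declare @folderid uniqueidentifier = '7F38288D-4370-40A8-80E3-E92283033E4C';
exec msdb.dbo.sp_ssis_deletepackage N'Package', @folderid;
exec msdb.dbo.sp_ssis_deletefolder @folderid;

Remember these are undocumented and unsupported, so they may change between SQL Server versions; that’s also why the comment header of my SP lists the 2005 equivalents.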

Note for SQL Server 2005 users: this code was written for SQL Server 2008.  The system stored procedures and system tables exist in 2005 as well, but they have different names.  See the comment header of my SP for more details.

So, let’s give it a little test.  The following screenshot shows the setup.  What I will do is use the stored procedure to delete the FolderWithSubfolders folder.  If you’ve been paying close attention: that is the same folder which I tried to delete manually through Management Studio’s right-click menu (see the first screenshot above).

Overview of my deployed folders and packages

After creating the SP, I ran the following command:

EXEC dbo.SSIS_RecursiveDeleteFolder '/FolderWithSubfolders'

And that gave me the following output in the Messages pane:

exec msdb.dbo.sp_ssis_deletepackage AnotherPackage, 7F38288D-4370-40A8-80E3-E92283033E4C

exec msdb.dbo.sp_ssis_deletepackage Package, 7F38288D-4370-40A8-80E3-E92283033E4C

exec msdb.dbo.sp_ssis_deletefolder 4102ED59-ED75-4D93-BBAE-0A162447BF02

exec msdb.dbo.sp_ssis_deletefolder 7F38288D-4370-40A8-80E3-E92283033E4C

exec msdb.dbo.sp_ssis_deletefolder C156B436-8C78-4BF9-99F9-5ABFAB10C405

I have deliberately put a couple of print commands in the stored procedure to dump the commands that are actually being executed.  This gives us a good idea of what’s going on.

That’s it for now folks.  Thank you for reading this, and if you found it useful or you’ve got some questions about it: post a comment!

Have fun!

Valentino.



© 2008-2017 BI: Beer Intelligence? All Rights Reserved