Calculating LastXMonths Aggregations Using T-SQL and SSIS
With the holidays I haven’t been able to write much, so I’ll make up for it with this 3,000+ word article. If you’re reading this early in the morning, you’d better get a double espresso first.
In this article I will demonstrate a method that can be used to calculate aggregations over a certain period of time in the past, or LastXMonths aggregations as I’m calling them throughout the article. I’ll be using T-SQL, SQL Server Integration Services and a relational database as source. More specifically I will be using the Merge Join data transformation in SSIS, and Common Table Expressions in T-SQL.
Version-wise I’m using SQL Server 2008 R2, but this method should work as of SQL Server 2005. Furthermore I’m using the Contoso DWH, available for download at the Microsoft Download Center. (In case you’re wondering, it’s the .BAK file.)
You can download the finished SSIS package from my Skydrive. (The file is called MergeJoin.dtsx.)
Let’s say we’ve got a relational database containing some sales figures. Management has asked for sales-related data to be available somewhere for easy analysis. Ideally a cube would be built for that purpose but as budgets are currently tight, a temporary solution needs to be provided meanwhile. So it’s been decided that an additional table will be created, populated with the exact data as required by management. This table should contain all details (number of items and amount of the sale) about products sold, grouped by the date of the sale, the zip code of the place where the sale occurred and the category of the product.
Furthermore, each record should contain the sum of all sales of the last month for the zip code and product category of each particular record. Two additional aggregations should calculate the sales for the last three months and last six months.
A Simple Example
To make sure we’re all on the same page regarding the requirements, here’s a small example to illustrate the expected outcome.
I’ve omitted the SalesAmount numbers for readability. The records are ordered chronologically, oldest first. As you can see, the bottom record shows 16 as the value for Last6MSalesQuantity. This is the sum of the SalesQuantity of that record and the SalesQuantity of the previous record, which falls within the six-month timespan ending at the bottom record’s SaleDate. The two other records fall outside that six-month timespan and are thus not included in the sum for the bottom record’s Last6MSalesQuantity.
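To make the trailing-window logic concrete, here’s a small Python sketch of the same calculation. The dates and quantities are made up for illustration (they are not the Contoso data), and the month arithmetic is deliberately naive, ignoring month-length edge cases:

```python
from datetime import date

# Hypothetical sample rows: (sale_date, sales_quantity), oldest first.
rows = [
    (date(2007, 1, 15), 5),
    (date(2007, 2, 20), 7),
    (date(2007, 9, 10), 9),
    (date(2007, 10, 5), 7),
]

def months_back(d, months):
    """Return the (exclusive) start of the window ending at d, `months` months back."""
    total = d.year * 12 + (d.month - 1) - months
    return date(total // 12, total % 12 + 1, d.day)  # naive; ignores day-31/Feb cases

def last_x_months(rows, months):
    """For each row, sum quantities of all rows within the trailing window."""
    result = []
    for sale_date, _ in rows:
        start = months_back(sale_date, months)
        total = sum(q for d, q in rows if start < d <= sale_date)
        result.append((sale_date, total))
    return result

print(last_x_months(rows, 6))
```

For the last row (2007-10-05) only the 2007-09-10 row falls inside the six-month window, so its Last6M total is 9 + 7 = 16, matching the example above.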
Fetching The Data Into A Table
Our scenario requires that the sales figures are calculated and put into a new table. Let’s first start with creating the queries to fetch the data.
Step 1: The Daily Numbers
The easiest part is the daily sales numbers. These can be retrieved fairly easily from the Contoso data warehouse, just by using a GROUP BY clause, as shown in the following query.
--daily sales
select  DD.Datekey,
        DS.ZipCode,
        DPC.ProductCategoryName,
        SUM(FS.SalesAmount) SalesAmount_SUM,
        SUM(FS.SalesQuantity) SalesQuantity_SUM
from dbo.FactSales FS
inner join dbo.DimStore DS on DS.StoreKey = FS.StoreKey
inner join dbo.DimProduct DP on DP.ProductKey = FS.ProductKey
inner join dbo.DimProductSubcategory DPS
    on DPS.ProductSubcategoryKey = DP.ProductSubcategoryKey
inner join dbo.DimProductCategory DPC
    on DPC.ProductCategoryKey = DPS.ProductCategoryKey
inner join dbo.DimDate DD on DD.Datekey = FS.DateKey
group by DD.Datekey, DS.ZipCode, DPC.ProductCategoryName
order by DD.Datekey asc, DS.ZipCode asc, DPC.ProductCategoryName asc;
Part of the result of that query looks like this:
Nothing special to mention so far so let’s continue to the next step.
Step 2: The Monthly Numbers
In this step, we’ll use the query from step 1 as base for the full query. I’ll first show you the query and then provide you with some explanation of what’s going on.
--LastMonth
declare @numberOfMonths tinyint = 1;

with DailySalesData as
(
    select  DD.Datekey,
            DS.ZipCode,
            DPC.ProductCategoryName,
            SUM(FS.SalesAmount) SalesAmount_SUM,
            SUM(FS.SalesQuantity) SalesQuantity_SUM
    from dbo.FactSales FS
    inner join dbo.DimStore DS on DS.StoreKey = FS.StoreKey
    inner join dbo.DimProduct DP on DP.ProductKey = FS.ProductKey
    inner join dbo.DimProductSubcategory DPS
        on DPS.ProductSubcategoryKey = DP.ProductSubcategoryKey
    inner join dbo.DimProductCategory DPC
        on DPC.ProductCategoryKey = DPS.ProductCategoryKey
    inner join dbo.DimDate DD on DD.Datekey = FS.DateKey
    group by DD.Datekey, DS.ZipCode, DPC.ProductCategoryName
),
UniqueRecordsPerDay as
(
    select Datekey, ZipCode, ProductCategoryName
    from DailySalesData
    group by Datekey, ZipCode, ProductCategoryName
)
select  UR.Datekey,
        DSD.ZipCode,
        DSD.ProductCategoryName,
        SUM(DSD.SalesAmount_SUM) SalesAmount_SUM,
        SUM(DSD.SalesQuantity_SUM) SalesQuantity_SUM
from DailySalesData DSD
inner join UniqueRecordsPerDay UR
    on UR.ProductCategoryName = DSD.ProductCategoryName
    and UR.ZipCode = DSD.ZipCode
    and DSD.Datekey between DATEADD(month, -@numberOfMonths, UR.Datekey + 1) and UR.Datekey
group by UR.Datekey, DSD.ZipCode, DSD.ProductCategoryName
order by UR.Datekey asc, DSD.ZipCode asc, DSD.ProductCategoryName asc;
The query uses a variable called @numberOfMonths. This will allow us to use the same query for the totals of last month, as well as for the Last3M and the Last6M numbers. All that’s needed is changing the variable to 3 or 6.
But how does the query get to the results? To start, it uses two CTEs (Common Table Expressions). The first one is called DailySalesData. And the query for that CTE should look familiar to you by now: it’s the one from step 1, without the ORDER BY clause.
The second CTE is called UniqueRecordsPerDay and gives us one record for each unique date, zip code and product category as found in the Contoso data. The DateKey, ZipCode and ProductCategoryName fields are our key grouping fields. And this CTE is actually the key to calculating the monthly aggregated data, as I’ll explain next.
What the main query does is the following. It selects the data from the DailySalesData CTE and joins it with the unique-records-per-day recordset. All grouping key fields need to be included in the join. However, as you can see, for the DateKey I’m not using the equals operator but the BETWEEN keyword instead. I’ve also used the DATEADD function to subtract the number of months specified by the @numberOfMonths variable. That condition says: “give me all records starting from DateKey, going back @numberOfMonths months”. The query again groups by the key fields so the detail records can be summed up.
This construction ensures that the SalesAmount_SUM and SalesQuantity_SUM fields represent the sum for the record’s zip code and product category and for the period as indicated by the @numberOfMonths variable.
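The shape of that query (aggregate to the daily grain first, then self-join over a trailing date window) can be mimicked in plain Python. Everything here is illustrative, not the Contoso schema, and for simplicity this sketch uses a fixed number of days for the window rather than T-SQL’s DATEADD month arithmetic:

```python
from collections import defaultdict
from datetime import date, timedelta

# Illustrative fact rows: (sale_date, zip_code, category, amount, quantity).
facts = [
    (date(2007, 5, 1), "2000", "Audio", 100.0, 2),
    (date(2007, 5, 1), "2000", "Audio", 50.0, 1),
    (date(2007, 5, 20), "2000", "Audio", 80.0, 3),
]

# Step 1: the "DailySalesData" CTE -- one row per day/zip/category.
daily = defaultdict(lambda: [0.0, 0])
for d, zip_code, cat, amount, qty in facts:
    daily[(d, zip_code, cat)][0] += amount
    daily[(d, zip_code, cat)][1] += qty

# Step 2: the windowed self-join -- for each daily key, sum all daily rows
# of the same zip/category whose date falls in the trailing window.
def monthly(daily, days_back):
    out = {}
    for (d, zip_code, cat) in daily:
        window_start = d - timedelta(days=days_back)
        amt = sum(v[0] for (d2, z2, c2), v in daily.items()
                  if z2 == zip_code and c2 == cat and window_start < d2 <= d)
        qty = sum(v[1] for (d2, z2, c2), v in daily.items()
                  if z2 == zip_code and c2 == cat and window_start < d2 <= d)
        out[(d, zip_code, cat)] = (amt, qty)
    return out

print(monthly(daily, 30))
```

The row for 2007-05-20 picks up both its own daily total and the 2007-05-01 total, because the latter falls inside the trailing 30-day window for the same zip code and category.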
Step 3: Merging It All Together Into One Table
Now that we know how to retrieve the data, we still need to get it into a table. One option would be to use an INSERT statement for the daily records, followed by UPDATE statements to populate the monthly (1, 3, 6) aggregated columns. However, I’m a BI guy so let’s use an SSIS package to get to the result (plus it allows me to illustrate the Merge Join data flow transformation).
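For comparison, that INSERT-then-UPDATE alternative could look roughly like this. I’m sketching it against an in-memory SQLite database with made-up column names, purely to show the shape of the approach; the real thing would be T-SQL against the new SQL Server table, with DATEADD instead of SQLite’s date function:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SalesAggregated (
        SaleDate TEXT, ZipCode TEXT, Category TEXT,
        SalesQuantity INTEGER, Last1MSalesQuantity INTEGER
    );
    -- Step 1: INSERT the daily grain.
    INSERT INTO SalesAggregated (SaleDate, ZipCode, Category, SalesQuantity)
    VALUES ('2007-05-01', '2000', 'Audio', 3),
           ('2007-05-20', '2000', 'Audio', 5);
    -- Step 2: UPDATE each row with its trailing-window total
    -- via a correlated subquery over the same table.
    UPDATE SalesAggregated
    SET Last1MSalesQuantity = (
        SELECT SUM(s2.SalesQuantity)
        FROM SalesAggregated s2
        WHERE s2.ZipCode = SalesAggregated.ZipCode
          AND s2.Category = SalesAggregated.Category
          AND s2.SaleDate > date(SalesAggregated.SaleDate, '-1 month')
          AND s2.SaleDate <= SalesAggregated.SaleDate
    );
""")
print(conn.execute(
    "SELECT SaleDate, Last1MSalesQuantity FROM SalesAggregated ORDER BY SaleDate"
).fetchall())
```

It works, but every extra window (Last3M, Last6M) means another full-table UPDATE, which is part of why the single data flow below is appealing.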
So open up the BIDS and create a new package. Drop a Data Flow Task into the Control Flow and add a Connection Manager connecting to your Contoso DWH. Then switch to the Data Flow page.
Nothing special so far I believe. Next we need to set up four Data Flow Sources: one for the daily figures, one for the monthly, one for the 3M and one for the 6M data.
Setting Up The Data Sources
Throw in an OLE DB Source component, configure it to use your connection manager and copy/paste the first query above into the command textbox. Again nothing special, right?
However, the Merge Join component expects its incoming data to be sorted. That’s why I’ve included the ORDER BY clause in the queries above. But that’s not all. Connecting our data source to a Merge Join transformation without any additional change will result in an error such as the following:
Validation error. Data Flow Task Merge Join : The input is not sorted. The “input “Merge Join Left Input” (458)” must be sorted.
To avoid this error, we need to explicitly inform our data flow that the data is actually ordered, and we need to give it all the details: on what fields has the data been ordered and in what order! And that needs to be done through the Advanced Editor.
So, right-click the OLE DB Source and select Show Advanced Editor.
In the Advanced Editor, navigate to the last tab called Input and Output Properties and select the “OLE DB Source Output” node in the tree structure on the left. Doing that will show the properties for the selected output and one of those properties is called IsSorted. By default it is set to False. Set it to True.
Tip: double-clicking the label of the property will swap its value to the other value. This can be useful in cases where you need to change several options, but even here it saves a couple of clicks. It’s all about optimization.
At this moment the component knows that the incoming data is sorted, but it still doesn’t know on what fields. To specify that, open up the OLE DB Source Output node, followed by the Output Columns node. You’ll now see the list of fields. As specified in the query, the data is ordered firstly on DateKey, secondly on ZipCode and thirdly on ProductCategoryName.
Select DateKey to see its properties.
The property in which we’re interested here is called SortKeyPosition. By default it is set to zero. When the incoming data is sorted, this property should reflect in what order the data is sorted, starting with one for the first field. So in our case here the value should be set to 1.
Set the SortKeyPosition property for ZipCode to 2 and for ProductCategoryName to 3.
That’s one of the four OLE DB sources set up. The other three will be easier since we can start from the first one. So, copy and paste the source component, open it by double-clicking it and replace the query with our second query from earlier, the one returning the monthly figures. Oh, and give it a decent name, but I’m sure you knew that.
Create the third source component in the same way, but change the value for the @numberOfMonths variable to 3. And again the same process for source number four, changing the variable’s value to 6.
Here’s what we have so far:
Merging The Sources Into One Flow
Next up is merging the incoming flows. Drag a Merge Join data flow transformation under the Daily Sales source and connect the source to the Merge Join. That will open the following Input Output Selection screen.
A Merge Join expects two inputs: one is called the Left Input and the other is called the Right Input. Select Merge Join Left Input as value for the Input dropdown.
Close the popup window and connect the second source (with the monthly data) as well to the Merge Join. There’s only one input remaining so this one is automatically the right input – no popup window is shown.
Next we need to configure the Merge Join so that it merges the data as expected. Open the Merge Join Transformation Editor by double-clicking the component.
By default the Join type dropdown is set to Inner join. In our situation that’s good enough: if only one record exists for a certain zip code and product category on a given day, the monthly data for that record will be the sum of just that one record, but in any case there’s always at least one record on each incoming flow to combine, so an inner join never drops rows here.
As you can see, because both incoming flows are ordered in the same way, it automatically knows on which fields to put the join.
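Because both inputs arrive sorted on the same keys, the Merge Join can pair rows by walking the two streams once, in lockstep; that is exactly why the sorting requirement exists. Here’s a minimal single-key sketch of the general sorted-merge-join algorithm in Python (my own illustration, not SSIS internals):

```python
def merge_join(left, right):
    """Inner-join two lists of (key, value) pairs, both sorted ascending by key.

    Walks both streams once; a run of equal keys on either side is
    cross-paired with the matching run on the other side.
    """
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk < rk:
            i += 1          # left key too small: advance left stream
        elif lk > rk:
            j += 1          # right key too small: advance right stream
        else:
            # Collect the run of equal keys on each side, then pair them up.
            i2 = i
            while i2 < len(left) and left[i2][0] == lk:
                i2 += 1
            j2 = j
            while j2 < len(right) and right[j2][0] == rk:
                j2 += 1
            for li in range(i, i2):
                for rj in range(j, j2):
                    out.append((lk, left[li][1], right[rj][1]))
            i, j = i2, j2
    return out

# Daily quantities joined with monthly totals, both sorted on the key.
left = [("2000-Audio", 10), ("2000-TV", 4)]
right = [("2000-Audio", 25), ("2000-TV", 12), ("2010-Audio", 7)]
print(merge_join(left, right))  # → [('2000-Audio', 10, 25), ('2000-TV', 4, 12)]
```

If either input were unsorted, this single pass could skip past matching keys, which is why the component refuses unsorted inputs instead of silently producing wrong results.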
By default, no output fields are created as the white bottom half of the screenshot indicates.
Now I’ll show you a screenshot of the expected setup:
There are several ways to specify the output fields. The first method is by using the dropdown in the Input column. Selecting a value there will populate a dropdown in the column called Input Column (djeez, that was one column too much). Here’s what that method looks like:
Selecting a value in the second column will then give you a default value for the Output Alias. This default can be freely modified. As you may have guessed, this is not my preferred method – way too many comboboxes.
Another method of specifying the output fields is by using the checkboxes in front of the fields in the top part of the window. I believe the larger screenshot above says it all. Just check the fields that you need and then change their default Output Alias to whatever suits you. In my example here I only needed to modify the alias for the last two records.
With our first Merge Join set up, only two are remaining. So drag in a second Merge Join from the Toolbox, connect the output of the first join as Left Input on the second join and add the output of the third OLE DB source as Right Input.
It’s interesting to note that the output of the Merge Join is sorted in the same manner as its inputs. One way to verify this is by right-clicking the connector between the two joins and choosing Edit.
That opens up the Data Flow Path Editor.
Tip: double-clicking the connector will also open the editor!
As you can see in the above screenshot, the Metadata page shows a list of the available fields with some of their properties, such as the Sort Key Position. Now doesn’t that look familiar?
So far, the second Merge Join has been added and connected but it hasn’t been configured yet. The process is very similar to the way we’ve set up the first join. Just select all fields from the left input by checking all the checkboxes and select the two SUM fields from the right input. Don’t forget to give those SUM fields a clear name.
Two joins done, one remaining. Just drag one in and connect it with the second join plus the last remaining OLE DB source. I won’t go into further details here, it’s exactly the same as I just explained for the second join.
Here’s what the Data Flow should look like:
An Error That You May Encounter
When using sorted data flows and the Merge Join component, you may encounter the following error message:
And now in words for the search engines:
The component has detected potential metadata corruption during validation.
Error at Data Flow Task [SSIS.Pipeline]: The IsSorted property of output “Merge Join Output” (91) is set to TRUE, but the absolute values of the non-zero output column SortKeyPositions do not form a monotonically increasing sequence, starting at one.
Yeah right, you had to read that twice, didn’t you? And the best is yet to come:
Due to limitations of the Advanced Editor dialog box, this component cannot be edited using this dialog box.
So there’s a problem with your Merge Join but you cannot use the Advanced Editor to fix it. Hmm, and they call that the ADVANCED editor? Is there anything more advanced perhaps? Well, actually, there is: the Properties pane. With the Merge Join selected, one of the properties there is called NumKeyColumns. That property reflects how many columns the incoming data is sorted on, and it currently contains the wrong value. Changing it to the correct number of columns will remove the error.
In case you’re wondering when you might encounter this particular problem, here’s how you can simulate it. (Don’t forget to make a copy of the package before messing around with it.)
With the package as it currently is, remove the ZipCode field from the first two sources by unchecking it in the Columns page of the OLE DB Source Editor.
The sources are now complaining, so open their Advanced Editor and correct the SortKeyPosition of the ProductCategoryName field: it should become 2 instead of 3, because ZipCode was 2 and has been removed.
Now try to open the first Merge Join. The first time it will complain about invalid references so delete those. With the references deleted, if you now try to open the Merge Join editor, you’ll see the error we’re discussing here. To fix it, change the NumKeyColumns property of the Merge Join to 2 instead of 3.
Adding The Destination Table
Now there’s only one step remaining: adding a destination for our merged data. So, throw in an OLE DB Destination and connect it with the output of the last Merge Join:
I’ll just use a quick and dirty way of creating a new table in the database. Open up the OLE DB Destination Editor by double-clicking it and select a Connection Manager in the dropdown. Now click the New button next to the Name of the table or the view dropdown.
That opens up the Create Table window, with a CREATE TABLE query pre-generated for you for free. Isn’t that nice? Change the name of the table to something nice (at least remove those spaces, yuk!!) and click OK.
The new table is created at the moment that the OK button gets clicked.
Right, so are we there? Well, almost. As you can see now in the next screenshot, the BIDS does not want us to click the OK button just yet.
To resolve that warning, just open the Mappings page. As the names of the input columns are matching exactly with the names of the fields in the destination table, everything will be automagically configured at this moment. So now you can close the window with the OK button.
And that’s it! Everything is set up to populate the new table with the aggregated figures, as requested by management. To give it a run, right-click your package in the Solution Explorer and guess what… select Execute Package! If everything has been configured as expected, you should get some green boxes soon. And some data in the table, like this:
In this article I’ve demonstrated a way to aggregate data over different periods in time, using T-SQL and Integration Services. Obviously this method does not replace the flexibility that one gets when analyzing data stored in an OLAP cube, but it can be a practical method when you quickly need to provide aggregated data for management.