data


SQL Server Day 2010


For the third successive year the Belgian SQL Server User Group (SQLUG.BE vzw) and Microsoft are teaming up to organize the yearly Belgian SQL Server Day.

After last year’s successful event, SQL Server Day 2010 will again be the biggest event focused exclusively on Microsoft SQL Server in Belgium and Luxembourg.  Join us for sessions on SQL Azure, SQL Server 2008 R2 and the Microsoft Business Intelligence platform, and to connect with your peers.

This time the event will happen at San Marco Village in Schelle (Antwerp) and the date of the event is Thursday, December 2nd, 2010.  Book those calendars now!

Why would you want to be there?  Well, because there are going to be great speakers!  We’ve got some Belgian speakers and three (that’s right, 3) speakers from abroad!  Have you ever heard of Donald Farmer?  If you have, you’ll know that you want to be there.  If you have already seen him, you definitely know that you’ll want to be there.  And if you don’t know him: be there and you’ll see what I mean!

And he’s just one of a superb list.  The other speakers are Thomas Kejser, Chris Webb, Nico Jacobs, Wesley Backelant, Dirk Gubbels, Karel Coenye, Nico Verbaenen and Werner Geuens.

Furthermore, the full-day event will cost you zero euro.  So, what more do you want?  Free food??  Well, even that won’t be a problem!

One more thing: register now!

And as usual, have fun!

Valentino.


Ask The Experts, Now!

The Belgian SQL Server User Group is teaming up with Microsoft to organize an “Ask The Experts” event.

The purpose of this event is to give you, the SQL Server user, an opportunity to ask Microsoft those things which you’ve always wanted to know but never could find on the internet.

The questions are currently being gathered (that explains the “Now” in my post’s title).  This gives you some time to think about what you’d like to ask beforehand.  On the day of the event, these questions will then be answered by a panel of four SQL Server specialists.

So, ever wanted to ask Microsoft a question related to SQL Server?  Now’s the time to do it!  Send your questions to asktheexperts@sqlug.be.

Who’s The Panel?

The panel consists of three Microsoft employees and one SQL Server MVP.  Here are the full details:

> Wesley Backelant <

In his role as a Technology Advisor at Microsoft, Wesley is responsible for helping customers understand the capabilities of SQL Server and the Microsoft Business Intelligence stack. Before joining Microsoft, Wesley was a Database Architect working on some of the largest implementations of SQL Server in Belgium. Wesley started his professional career in the SQL Server 6.5 timeframe and remained true to his passion ever since. Wesley is active on Twitter where he handles topics related to his favorite technology.

> Frederik Vandeputte <

Frederik Vandeputte is a Senior Consultant and partner at Kohera, the Microsoft SQL Server/Business Intelligence Competence Center of the Cronos Group. Frederik has been working with SQL Server since version 6.5. In his free time Frederik collects Microsoft certifications. His collection includes MCTS, MCSA, MCSE, MCDBA, MCITP and MCT, ranging from Windows 2000 and SQL Server 2000 up to SQL Server 2008. Frederik is one of the co-founders and the President of the Belgian SQL Server User Group (SQLUG.BE). In January 2008, Frederik became the first Belgian MVP on SQL Server. Follow Frederik on his website and Twitter.

> Dirk Gubbels <

Dirk Gubbels is a senior consultant at Microsoft, and has been working with SQL Server since version 4.2. As one of the few Microsoft Certified Database Architects, he has been involved in the most demanding SQL Server-based applications in Belgium and all over the EMEA region. His main focus areas are Design, Performance and Availability for both OLTP and Business Intelligence environments.

> Gunther Beersaerts <

Gunther Beersaerts joined Microsoft in 1998 (on the launch day of SQL Server 7.0) as a Technical Marketer for MSDN/TechNet road shows and has held a number of technical roles during his career, including Systems Engineer, ATS and TSP roles covering a broad set of Microsoft Application Platform solutions. Over the past few years, Gunther has been active in technical roles for Databases and Business Intelligence platforms in EPG Belgium & Luxembourg. He then became a Strategist in the CATM (Customer Advocacy and Technology Management) organization, which is a key connection between the Microsoft Development teams and Customers/Partners. In this role, Gunther focuses on the Microsoft Data Platform, including SQL Server and Business Intelligence solutions. Prior to Microsoft, he was a developer and messaging engineer at a large financial institution in Brussels.

When?

Wednesday, September 29th 2010, starting at 1900.

Where?

The Microsoft offices in Zaventem.

Don’t forget to register here!

May the inspiration be with you while coming up with some questions :-)

See you there!

Valentino.


12 + 12 = 24

For the second time this year, the SQL PASS folks are organizing another 24 Hours of PASS, a 24-hour free virtual training event.  Each session takes one hour, so there are 24 presentations in total.  Instead of putting them right after each other, this time they’ve decided to split the event in half: 12 hours on the first day and another 12 hours on the second.


When?

Wednesday, September 15th and Thursday, September 16th.  Each day the sessions start at 1200 GMT, so for us Belgians we need to add two hours for our local time.

Is it an interesting idea to split the event in two?  During the day I have to work, so I usually enroll for some sessions during the evening, starting earliest at 1800 and ending around midnight.  Looking at the schedule for the Fall event, it means that I’ll be able to attend sessions during two evenings instead of one.  Except, Wednesday evening I go swimming with our oldest daughter.  Ah well, better luck next time.

So, what are you waiting for?  Register here, it’s free and it’s interesting!  (Well, it’s probably not interesting for everyone on this planet, but it should be interesting to you – otherwise you wouldn’t be reading my blog either :-) )

If you require more info: check out the agenda!

Have fun!

Valentino.


With the holidays I haven’t been able to write much.  So I’ll make up for it with this 3,000+ word article.  If you’re reading this early in the morning, you’d better get a double espresso first ;-)

In this article I will demonstrate a method that can be used to calculate aggregations over a certain period of time in the past, or LastXMonths aggregations as I’m calling them throughout the article.  I’ll be using T-SQL, SQL Server Integration Services and a relational database as source.  More specifically I will be using the Merge Join data transformation in SSIS, and Common Table Expressions in T-SQL.

Version-wise I’m using SQL Server 2008 R2, but this method should work as of SQL Server 2005.  Furthermore I’m using the Contoso DWH, available for download at the Microsoft Download Center.  (In case you’re wondering, it’s the .BAK file.)
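
In case you’ve never restored a .BAK file before: a plain RESTORE DATABASE statement does the trick.  Here’s a minimal sketch; the paths and logical file names below are just my assumptions, so check what’s actually inside the backup first with RESTORE FILELISTONLY.

--find out which logical file names are inside the backup
RESTORE FILELISTONLY FROM DISK = N'C:\Temp\ContosoRetailDW.bak';

--restore the database, adjusting the paths and names to your environment
RESTORE DATABASE ContosoRetailDW
FROM DISK = N'C:\Temp\ContosoRetailDW.bak'
WITH MOVE N'ContosoRetailDW' TO N'C:\Data\ContosoRetailDW.mdf',
    MOVE N'ContosoRetailDW_log' TO N'C:\Data\ContosoRetailDW.ldf',
    STATS = 10;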

You can download the finished SSIS package from my Skydrive.  (The file is called MergeJoin.dtsx.)

The Scenario

Let’s say we’ve got a relational database containing some sales figures.  Management has asked for sales-related data to be available somewhere for easy analysis.  Ideally a cube would be built for that purpose but as budgets are currently tight, a temporary solution needs to be provided meanwhile.  So it’s been decided that an additional table will be created, populated with the exact data as required by management.  This table should contain all details (number of items and amount of the sale) about products sold, grouped by the date of the sale, the zip code of the place where the sale occurred and the category of the product.

Furthermore, each record should contain the sum of all sales of the last month for the zip code and product category of each particular record.  Two additional aggregations should calculate the sales for the last three months and last six months.
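
To keep those requirements concrete, here’s a sketch of what the destination table could look like.  The table name and data types are my assumptions for illustration purposes only; further down we’ll let the OLE DB Destination generate the actual table for us.

--a possible structure for the requested table (illustration only)
CREATE TABLE dbo.SalesAggregated
(
    DateKey datetime NOT NULL,
    ZipCode nvarchar(20) NOT NULL,
    ProductCategoryName nvarchar(100) NOT NULL,
    SalesAmount_SUM money NULL,
    SalesQuantity_SUM int NULL,
    LastMonthSalesAmount money NULL,
    LastMonthSalesQuantity int NULL,
    Last3MSalesAmount money NULL,
    Last3MSalesQuantity int NULL,
    Last6MSalesAmount money NULL,
    Last6MSalesQuantity int NULL
);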

A Simple Example

To make sure we’re all on the same track on the requirements, here’s a small example to illustrate the expected outcome.

Small example displaying the expected outcome of the process

I’ve omitted the SalesAmount numbers for readability reasons.  The records are ordered chronologically, with the oldest first.  As you can see, the bottom record shows 16 as value for Last6MSalesQuantity.  This is the sum of the SalesQuantity of the current record and the SalesQuantity of the previous record, which happens to fall within the six-month timespan counting back from the bottom record’s SaleDate.  The two other records do not fall within that timespan and are thus not included in the sum for the Last6MSalesQuantity of the bottom record.

Fetching The Data Into A Table

Our scenario requires that the sales figures are calculated and put into a new table.  Let’s first start with creating the queries to fetch the data.

Step 1: The Daily Numbers

The easiest part is the daily sales numbers.  These can be retrieved fairly easily from the Contoso data warehouse, just by using a GROUP BY clause as shown in the following query.

--daily sales
select DD.Datekey, DS.ZipCode, DPC.ProductCategoryName,
    SUM(FS.SalesAmount) SalesAmount_SUM,
    SUM(FS.SalesQuantity) SalesQuantity_SUM
from dbo.FactSales FS
    inner join dbo.DimStore DS on DS.StoreKey = FS.StoreKey
    inner join dbo.DimProduct DP on DP.ProductKey = FS.ProductKey
    inner join dbo.DimProductSubcategory DPS
        on DPS.ProductSubcategoryKey = DP.ProductSubcategoryKey
    inner join dbo.DimProductCategory DPC
        on DPC.ProductCategoryKey = DPS.ProductCategoryKey
    inner join dbo.DimDate DD on DD.Datekey = FS.DateKey
group by DD.Datekey, DS.ZipCode, DPC.ProductCategoryName
order by DD.Datekey asc, DS.ZipCode asc, DPC.ProductCategoryName asc;

Part of the result of that query looks like this:

Result of the daily sales query

Nothing special to mention so far so let’s continue to the next step.

Step 2: The Monthly Numbers

In this step, we’ll use the query from step 1 as base for the full query.  I’ll first show you the query and then provide you with some explanation of what’s going on.

--LastMonth
declare @numberOfMonths tinyint = 1;
with DailySalesData as
(
    select DD.Datekey, DS.ZipCode, DPC.ProductCategoryName,
        SUM(FS.SalesAmount) SalesAmount_SUM,
        SUM(FS.SalesQuantity) SalesQuantity_SUM
    from dbo.FactSales FS
        inner join dbo.DimStore DS on DS.StoreKey = FS.StoreKey
        inner join dbo.DimProduct DP on DP.ProductKey = FS.ProductKey
        inner join dbo.DimProductSubcategory DPS
            on DPS.ProductSubcategoryKey = DP.ProductSubcategoryKey
        inner join dbo.DimProductCategory DPC
            on DPC.ProductCategoryKey = DPS.ProductCategoryKey
        inner join dbo.DimDate DD on DD.Datekey = FS.DateKey
    group by DD.Datekey, DS.ZipCode, DPC.ProductCategoryName
),
UniqueRecordsPerDay as
(
    select Datekey, ZipCode, ProductCategoryName
    from DailySalesData
    group by Datekey, ZipCode, ProductCategoryName
)
select UR.Datekey, DSD.ZipCode, DSD.ProductCategoryName,
    SUM(DSD.SalesAmount_SUM) SalesAmount_SUM,
    SUM(DSD.SalesQuantity_SUM) SalesQuantity_SUM
from DailySalesData DSD
    inner join UniqueRecordsPerDay UR
            on UR.ProductCategoryName = DSD.ProductCategoryName
        and UR.ZipCode = DSD.ZipCode
        and DSD.Datekey
            between DATEADD(month, -@numberOfMonths, UR.Datekey + 1)
            and UR.Datekey
group by UR.Datekey, DSD.ZipCode, DSD.ProductCategoryName
order by UR.Datekey asc, DSD.ZipCode asc, DSD.ProductCategoryName asc;

The query uses a variable called @numberOfMonths.  This will allow us to use the same query for the totals of last month, as well as for the Last3M and the Last6M numbers.  All that’s needed is changing the variable to 3 or 6.

But how does the query get to the results?  To start, it uses two CTEs (Common Table Expressions).  The first one is called DailySalesData.  And the query for that CTE should look familiar to you by now: it’s the one from step 1, without the ORDER BY clause.

The second CTE is called UniqueRecordsPerDay and gives us one record for each unique date, zip code and product category as found in the Contoso data.  The DateKey, ZipCode and ProductCategoryName fields are our key grouping fields.  And this CTE is actually the key to calculating the monthly aggregated data, as I’ll explain next.

What the main query does is the following.  It selects the data from the DailySalesData CTE and joins that with the unique records per day recordset.  All grouping key fields need to be included in the join.  However, as you can see, to add the DateKey into the join I’m not just using the equals operator but the BETWEEN keyword instead.  I’ve also used the DATEADD function to subtract the number of months as specified through the @numberOfMonths variable.  That statement is saying: “give me all records starting from DateKey, going back @numberOfMonths”.  The query again groups by the key fields to be able to sum the records up.

This construction ensures that the SalesAmount_SUM and SalesQuantity_SUM fields represent the sum for the record’s zip code and product category and for the period as indicated by the @numberOfMonths variable.
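
To see what that BETWEEN range actually boils down to, here’s a tiny standalone snippet (the date is just a made-up example, it’s not coming from the Contoso data):

--illustration: the range for one month back, ending on the current day
declare @numberOfMonths tinyint = 1;
declare @currentDay datetime = '20090630';
select DATEADD(month, -@numberOfMonths, @currentDay + 1) as RangeStart, --2009-06-01
    @currentDay as RangeEnd; --2009-06-30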

Step 3: Merging It All Together Into One Table

Now that we know how to retrieve the data, we still need to get it into a table.  One option would be to use the INSERT statement on the daily records, followed by UPDATE statements to populate the monthly (1, 3, 6) aggregated columns.  However, I’m a BI guy so let’s use an SSIS package to get to the result (plus it allows me to illustrate the Merge Join data flow transformation :-) ).
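
For completeness, here’s a rough sketch of what that INSERT/UPDATE route could look like, before we move on to the package.  I’m using the hypothetical dbo.SalesAggregated table and column names from the sketch earlier, and only the LastMonth columns are shown; the 3M and 6M ones follow the same pattern with -3 and -6.

--step A: insert the daily figures (the query from step 1, without the ORDER BY)
insert into dbo.SalesAggregated (DateKey, ZipCode, ProductCategoryName,
    SalesAmount_SUM, SalesQuantity_SUM)
select DD.Datekey, DS.ZipCode, DPC.ProductCategoryName,
    SUM(FS.SalesAmount), SUM(FS.SalesQuantity)
from dbo.FactSales FS
    inner join dbo.DimStore DS on DS.StoreKey = FS.StoreKey
    inner join dbo.DimProduct DP on DP.ProductKey = FS.ProductKey
    inner join dbo.DimProductSubcategory DPS
        on DPS.ProductSubcategoryKey = DP.ProductSubcategoryKey
    inner join dbo.DimProductCategory DPC
        on DPC.ProductCategoryKey = DPS.ProductCategoryKey
    inner join dbo.DimDate DD on DD.Datekey = FS.DateKey
group by DD.Datekey, DS.ZipCode, DPC.ProductCategoryName;

--step B: fill the LastMonth columns by summing the daily rows over the period
update SA set
    LastMonthSalesAmount = M.SalesAmount_SUM,
    LastMonthSalesQuantity = M.SalesQuantity_SUM
from dbo.SalesAggregated SA
    cross apply
    (
        select SUM(H.SalesAmount_SUM) SalesAmount_SUM,
            SUM(H.SalesQuantity_SUM) SalesQuantity_SUM
        from dbo.SalesAggregated H
        where H.ZipCode = SA.ZipCode
            and H.ProductCategoryName = SA.ProductCategoryName
            and H.DateKey between DATEADD(month, -1, SA.DateKey + 1) and SA.DateKey
    ) M;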

So open up the BIDS and create a new package.  Drop a Data Flow Task into the Control Flow and add a Connection Manager connecting to your Contoso DWH.  Then switch to the Data Flow page.

Nothing special so far I believe.  Next we need to set up four Data Flow Sources: one for the daily figures, one for the monthly, one for the 3M and one for the 6M data.

Setting Up The Data Sources

Throw in an OLE DB Source component, configure it to use your connection manager and copy/paste the first query above into the command textbox.  Again nothing special, right?

However, the Merge Join component expects its incoming data to be sorted.  That’s why I’ve included the ORDER BY clause in the queries above.  But that’s not all.  Connecting our data source to a Merge Join transformation without any additional change will result in an error such as the following:

Validation error. Data Flow Task Merge Join [457]: The input is not sorted. The “input “Merge Join Left Input” (458)” must be sorted.

To avoid this error, we need to explicitly inform our data flow that the data is actually ordered, and we need to give it all the details: on what fields has the data been ordered and in what order!  And that needs to be done through the Advanced Editor.

So, right-click the OLE DB Source and select Show Advanced Editor.

Right-click OLE DB Source to open up the Advanced Editor

In the Advanced Editor, navigate to the last tab called Input and Output Properties and select the “OLE DB Source Output” node in the tree structure on the left.  Doing that will show the properties for the selected output and one of those properties is called IsSorted.  By default it is set to False.  Set it to True.

Tip: double-clicking the label of the property will swap its value to the other value.  This can be useful in cases where you need to change several options, but even here it saves a couple of clicks.  It’s all about optimization. :-)

Advanced Editor on OLE DB Source: the IsSorted property

At this moment the component knows that the incoming data is sorted, but it still doesn’t know on what fields.  To specify that, open up the OLE DB Source Output node, followed by the Output Columns node.  You’ll now see the list of fields.  As specified in the query, the data is ordered firstly on DateKey, secondly on ZipCode and thirdly on ProductCategoryName.

Select DateKey to see its properties.

Advanced Editor of OLE DB Source showing the SortKeyPosition property

The property in which we’re interested here is called SortKeyPosition.  By default it is set to zero.  When the incoming data is sorted,  this property should reflect in what order the data is sorted, starting with one for the first field.  So in our case here the value should be set to 1.

Set the SortKeyPosition property for ZipCode to 2 and for ProductCategoryName to 3.

That’s one of the four OLE DB sources set up.  The other three will be easier as we can start from the first one.  So, copy and paste the source component, open it up by double-clicking it and replace the query with our second query from earlier, the one returning the monthly figures.  Oh, and give it a decent name, but I’m sure you knew that.

Create the third source component in the same way, but change the value for the @numberOfMonths variable to 3.  And again the same process for source number four, changing the variable’s value to 6.

Here’s what we have so far:

Four OLE DB sources set up - waiting to be merged

Merging The Sources Into One Flow

Next up is merging the incoming flows.  Drag a Merge Join data flow transformation under the Daily Sales source and connect the source to the Merge Join.  That will open the following Input Output Selection screen.

Input Output Selection window

A Merge Join expects two inputs: one is called the Left Input and the other is called the Right Input.  Select Merge Join Left Input as value for the Input dropdown.

Close the popup window and connect the second source (with the monthly data) as well to the Merge Join.  There’s only one input remaining so this one is automatically the right input – no popup window is shown.

Next we need to configure the Merge Join so that it merges the data as expected.  Open the Merge Join Transformation Editor by double-clicking the component.

Merge Join Transformation Editor

By default the Join type dropdown is set to Inner join.  In our situation that’s good enough: if only one record exists for a certain zip code and product category on a given day, the monthly data for that record will simply be the sum of that one record, but in any case there’s always at least one record in each incoming flow to be combined with the other.

As you can see, because both incoming flows are ordered in the same way, it automatically knows on which fields to put the join.

By default, no output fields are created as the white bottom half of the screenshot indicates.

Now I’ll show you a screenshot of the expected setup:

Merge Join Transformation Editor set up as expected

There are several ways to specify the output fields.  The first method is by using the dropdown in the Input column.  Selecting a value there will populate a dropdown in the column called Input Column (djeez, that was one column too much).  Here’s what that method looks like:

Specifying the output fields by using the dropdowns

Selecting a value in the second column will then give you a default value for the Output Alias.  This default can be freely modified.  As you may have guessed, this is not my preferred method – way too many comboboxes.

Another method of specifying the output fields is by using the checkboxes in front of the fields in the top part of the window.  I believe the larger screenshot above says it all.  Just check the fields that you need and then change their default Output Alias to whatever suits you.  In my example here I only needed to modify the alias for the last two fields.

With our first Merge Join set up, only two are remaining.  So drag in a second Merge Join from the Toolbox, connect the output of the first join as Left Input on the second join and add the output of the third OLE DB source as Right Input.

Interesting to note here is that the output of the Merge Join is sorted in the same manner as its inputs.  One way of verifying this is by right-clicking the connector between the two joins and choosing Edit.

Right-click data flow connector and select Edit to open up Data Flow Path Editor

That opens up the Data Flow Path Editor.

Tip: double-clicking the connector will also open the editor!

Examine the Metadata of the Data Flow Path to verify the sort order

As you can see in the above screenshot, the metadata page shows a list of the available fields with some properties, such as the Sort Key Position.  Now if that doesn’t look familiar?! :-)

So far, the second Merge Join has been added and connected but it hasn’t been configured yet.  The process is very similar to the way we’ve set up the first join.  Just select all fields from the left input by checking all the checkboxes and select the two SUM fields from the right input.  Don’t forget to give those SUM fields a clear name.

Two joins done, one remaining.  Just drag one in and connect it with the second join plus the last remaining OLE DB source.  I won’t go into further details here, it’s exactly the same as I just explained for the second join.

Here’s what the Data Flow should look like:

The Data Flow with all the Merge Joins connected

And here’s what the third Merge Join should look like:

The third Merge Join as set up for the example

An Error That You May Encounter

When using sorted data flows and the Merge Join component, you may encounter the following error message:

An error that you may encounter while using the Merge Join component

And now in words for the search engines:

The component has detected potential metadata corruption during validation.

Error at Data Flow Task [SSIS.Pipeline]: The IsSorted property of output “Merge Join Output” (91) is set to TRUE, but the absolute values of the non-zero output column SortKeyPositions do not form a monotonically increasing sequence, starting at one.

Yeah right, you had to read that twice, didn’t you?  And the best is yet to come:

Due to limitations of the Advanced Editor dialog box, this component cannot be edited using this dialog box.

So there’s a problem with your Merge Join but you cannot use the Advanced Editor to fix it, hmm, and you call that the ADVANCED editor?  Is there anything more advanced perhaps?  Well, actually, there is.  It’s called the Properties pane.  With the Merge Join selected, one of the properties there is called NumKeyColumns.  That property reflects how many columns the incoming data is sorted on.  And currently it contains the wrong value.  Changing its value to the correct number of columns will remove the error.

Properties pane displaying the Merge Join's properties, including NumKeyColumns

In case you’re wondering when you might encounter this particular problem, here’s how you can simulate it.  (Don’t forget to make a copy of the package before messing around with it.)

With the package as it currently is, remove the ZipCode field from the first two sources by unchecking it in the Columns page of the OLE DB Source Editor.

The sources are now complaining so open up their Advanced Editor and correct the SortKeyPosition of the ProductCategoryName field: it should become 2 instead of 3 because ZipCode was 2 and has been removed.

Now try to open the first Merge Join.  The first time it will complain about invalid references so delete those.  With the references deleted, if you now try to open the Merge Join editor, you’ll see the error we’re discussing here.  To fix it, change the NumKeyColumns property of the Merge Join to 2 instead of 3.

Adding The Destination Table

Now there’s only one step remaining: adding a destination for our merged data.  So, throw in an OLE DB Destination and connect it with the output of the last Merge Join:

An OLE DB Destination connected to the join that merges it all together

I’ll just use a quick and dirty way of creating a new table in the database.  Open up the OLE DB Destination Editor by double-clicking it and select a Connection Manager in the dropdown.  Now click the New button next to the Name of the table or the view dropdown.

That opens up the Create Table window, with a CREATE TABLE query pre-generated for you for free.  Isn’t that nice?  Change the name of the table to something nice (at least remove those spaces, yuk!!) and click OK.

The Create Table window

The new table is created at the moment that the OK button gets clicked.

Right, so are we there?  Well, almost.  As you can see now in the next screenshot, the BIDS does not want us to click the OK button just yet.

The OLE DB Destination Editor with the Mappings still missing

To resolve that warning, just open the Mappings page.  As the names of the input columns are matching exactly with the names of the fields in the destination table, everything will be automagically configured at this moment.  So now you can close the window with the OK button.

And that’s it!  Everything is set up to populate the new table with the aggregated figures, as requested by management.  To give it a run, right-click your package in the Solution Explorer and guess what… select Execute Package!  If everything has been configured as expected, you should get some green boxes soon.  And some data in the table, like this:

The final result: sales figures aggregated over different periods in time

Conclusion

In this article I’ve demonstrated a way to aggregate data over different periods in time, using T-SQL and Integration Services.  Obviously this method does not replace the flexibility that one gets when analyzing data stored in an OLAP cube, but it can be a practical method when you quickly need to provide aggregated data for management.

Have fun!

Valentino.

References

Merge Join Data Flow Transformation

Common Table Expressions (CTEs)

DATEADD() function


On the forums I now and then encounter questions regarding images on SSRS reports.  Instead of re-inventing the wheel each time, I decided to write an article about the subject.  So in this article I’ll be discussing and demonstrating several different ways of how images can be put on a report.

I’m using SQL Server Reporting Services 2008 R2 CTP, more precisely version 10.50.1352.12, but the methods explained here will work on any SSRS 2008.  Furthermore I’m using the AdventureWorks2008R2 database, available at CodePlex.

The resulting report, including image files, can be downloaded from my Skydrive.

The Scenario

The marketing department has requested a product catalogue.  This catalogue should contain all products produced by our two daughter companies: The Canyon Peak and Great Falls Soft.  The catalogue should be grouped on company, with the next company’s products starting on a new page.

Further requirements are:

  1. Each page needs an image in its header, with even pages displaying a different image than odd pages.
  2. Each company has a logo.  The logo should be displayed in the company’s header.
  3. Each product has a logo.  The logo should be displayed as part of the product details.

A design document containing the expected layout, including all image material, has been provided.

The Data

The following query provides us with all the data needed to produce the report:

SELECT 'The Canyon Peak' as Company, 'TheCanyonPeak_logo.png' CompanyLogo,
    'The Canyon Peak company specializes in all kinds of bikes, such as touring and road bikes.' CompanyDescription,
    P.Name as Product, PS.Name as Subcategory, PC.Name as Category,
    PP.LargePhoto, P.ListPrice, P.Weight, P.Size,
    P.SizeUnitMeasureCode, P.WeightUnitMeasureCode
FROM Production.Product AS P
    INNER JOIN Production.ProductSubcategory AS PS
        ON PS.ProductSubcategoryID = P.ProductSubcategoryID
    INNER JOIN Production.ProductCategory AS PC
        ON PC.ProductCategoryID = PS.ProductCategoryID
    LEFT OUTER JOIN Production.ProductProductPhoto PPP
        ON PPP.ProductID = P.ProductID
    LEFT OUTER JOIN Production.ProductPhoto PP
        ON PPP.ProductPhotoID = PP.ProductPhotoID
WHERE PC.Name = 'Bikes' --The Canyon Peak sells bikes
    and PP.ProductPhotoID > 1 --I don't want NO IMAGE AVAILABLE
UNION ALL
SELECT 'Great Falls Soft' as Company, 'GreatFallsSoft_logo.png' CompanyLogo,
    'Great Falls Soft uses only the softest tissues available for those sporting clothes.  And on top of that, they''re waterproof.' CompanyDescription,
    P.Name as Product, PS.Name as Subcategory, PC.Name as Category,
    PP.LargePhoto, P.ListPrice, P.Weight, P.Size,
    P.SizeUnitMeasureCode, P.WeightUnitMeasureCode
FROM Production.Product AS P
    INNER JOIN Production.ProductSubcategory AS PS
        ON PS.ProductSubcategoryID = P.ProductSubcategoryID
    INNER JOIN Production.ProductCategory AS PC
        ON PC.ProductCategoryID = PS.ProductCategoryID
    LEFT OUTER JOIN Production.ProductProductPhoto PPP
        ON PPP.ProductID = P.ProductID
    LEFT OUTER JOIN Production.ProductPhoto PP
        ON PPP.ProductPhotoID = PP.ProductPhotoID
WHERE PC.Name = 'Clothing' --Great Falls Soft sells clothes, waterstopping soft clothes
    and PP.ProductPhotoID > 1 --I don't want NO IMAGE AVAILABLE
ORDER BY Category asc, Subcategory asc, Product asc;

I’m not going into the details of this query.  Let’s just say that I’m manipulating data from the database in combination with some hardcoded data to get usable data for our example.  I’ve added some comments to make it clear what the query is doing.  If you have a look at its output, you’ll see that it produces a list of products with some additional fields.

Results of the query

Different Ways Of Adding Images

To get started, open up an SSRS solution, add a new report, add a data source connecting to your AdventureWorks2008R2 database, and add a dataset using the above query.

Embedding Images In Your Report

The first way of adding images to a report that we’ll take a look at is by embedding them inside the report.  Looking at the scenario requirements described earlier, this is requirement 1.

Let’s add a header to the report.  In the BIDS menu, select Report > Add Page Header.

Adding a header to a report

If you don’t see the Report menu item, you probably have not selected your report.  Click your report in the Design view to select it.

From the Toolbox, drag the Image report item onto the header portion of the report.  Doing that will show a pop-up window, the Image Properties.  By default, the Select the image source combobox is set to Embedded.  Good, that’s what we need at this point.  What we now need to do is import an image into the report, using the Import button.

Clicking the Import button shows a common file Open dialog.  Our marketing department has given me two images for use in the header: Cloudy_banner.png and AnotherCloudy_banner.png.  Let’s select the first one.

Adding an image to a report by using the Import button on the Image Properties window

If you don’t see any images, have a look at that filter dropdown as highlighted in the screenshot above.  By default this is set to JPEG files.

Here’s the result in the Image Properties:

Image Properties with an image selected

On the Size page, select Clip instead of Fit proportional.  This is a setting that you’ll need to look at case per case.  For our header images, Clip is the most suitable option.

Image Properties: set Display to Clip

Close the Image Properties window and enlarge the image placeholder so that it occupies the whole header area:

Image added to report header

As you can see, we now have an image in the header.  But we haven’t fully implemented the requirement yet.  The even pages should display a different image than the odd ones.

To be able to do that, we’ll first add the second banner image to the report.  In the Report Data pane, locate the Images node and open it up.  You’ll notice that the image that we inserted earlier can be found here.

The Images node in the Report Data pane shows all embedded images

Right-click the Images node and select Add Image.

Right-click Images node to add an embedded image to the report

That opens up the familiar file Open dialog which was used to add the first image.  So I’m now selecting the file called AnotherCloudy_banner.png, after changing the default filter to PNG.  After clicking OK, the image gets added under the Images node.

Second banner image added to the report

With the second image added, all that remains to be done is tell the header that it should pick different images depending on the page number.

Right-click the image in the header and select Image Properties.  On the General page, when you click the dropdown of the setting called Use this image, you’ll notice that there are two values now.  These are the same values as displayed in the Report Data pane.  And these are the values to be used in the expression that we’ll create to rotate the images depending on page number.

Click the fx button next to the dropdown and enter the following expression:

=IIF(Globals!PageNumber Mod 2 = 0, "Cloudy_banner", "AnotherCloudy_banner")

This is a fairly simple expression, using the Mod operator and the IIF() function.  When the page number can be divided by two, which means it’s an even page, Cloudy_banner is displayed.  Otherwise the other banner is displayed.

That’s it, the report header is finished.  When you have a look at the report in Preview, it should now show the second banner on the first page – this is an odd page.

To conclude this chapter I’d like to mention that this method is usually not the preferred one.  A disadvantage here is that the images are stored inside the report RDL and thus cannot be modified without altering the report itself.

Here’s the evidence:

 <EmbeddedImages>
    <EmbeddedImage Name="Cloudy_banner">
      <MIMEType>image/png</MIMEType>
      <ImageData>iVBORw0KGgoAAAANSUhEUgAABVsAAABaCAIAAA...

To have a look at the RDL yourself, just right-click the report in the Solution Explorer and select View Code.

On to requirement number two!

Displaying Images Through A URL

At the moment, the report body is still empty, so drag a Table onto it.  Put the Table in the upper-left corner, remove one of the columns so that two remain, remove the Header row and make it a bit wider.

Now set the DataSetName property of the Tablix to the name of your dataset, in my case that’s dsProducts.

The report should display the data grouped on company, so right-click on the line that says Details in the Row Groups window part at the bottom of the Design View.  Select Add Group > Parent Group.

Right-click the Details line in Row Groups to add a new parent group

Group by Company and add a group header:

Tablix grouping

Remove the extra first column that just got generated:

Remove unwanted column

We’ve now got an empty tablix with two columns, a Details row and a Company header row.  In our dataset, one of the fields is called CompanyDescription.  Hover the mouse pointer over the textbox in the top-right, click the small icon that appears and choose the CompanyDescription field from the dropdown.

Click the small icon to get a list of fields

To add the company’s logo, drag an Image from the Toolbox pane into the textbox on the left of the company description.  Doing this opens up the by now familiar Image Properties dialog.

Give it a good name, such as CompanyLogo, and select External as image source.

Click the fx button next to the Use this image box and enter an expression such as this one:

="file:C:\vavr\test\" + Fields!CompanyLogo.Value

When using External as image source, the image expression should result in a valid URL, any valid URL.  In my example the files are located in a local folder called c:\vavr\test.  Keep in mind that, when you deploy the report to a server, the images should be located in that same folder, this time located on the server.

The Image Properties configured to display an External image
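
A small tip which is not part of the requirements: instead of hardcoding the folder you could put it in a report parameter, say a hypothetical ImageFolder parameter, so that the path can differ between your machine and the server.  The expression would then look like this:

="file:" & Parameters!ImageFolder.Value & "\" & Fields!CompanyLogo.Value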

By default the image gets displayed using the Fit Proportional setting.  You can verify this in the Size page of the Image Properties.  We want the image to get fully displayed while maintaining the aspect ratio, so leave the setting as it is.  Close the image properties dialog.

Vertically enlarge the first row in our tablix to an acceptable size.  In my case the marketing department specified to use a height of 1.5 inches for the company logo.  With the image selected, locate the Size > Height property and set it to “1,5in”.  Note that the decimal separator used here depends on your local settings.

Now have a look at the report in Preview:

The report with company logos added

Note that I’ve removed the borders of all textboxes by setting their BorderStyle property to None.

With the logo images implemented we have fulfilled requirement two.  On to number three.

Retrieving Images From The Database

In this last requirement we’ll have a look at displaying images that are retrieved from the database, also known as data-bound images.

The retrieving part is actually already implemented.  In our dataset there’s a field called LargePhoto; that one contains a picture of the product.

Let’s add some product details and a picture in that remaining blank row.  To get full control over layout I want to make the detail part of the tablix a freestyle part.  First merge the two cells together by selecting both of them, then right-click and choose Merge Cells.

Merging two cells together in a tablix

Now select a Rectangle in the Toolbox pane and drop it into the merged area.  To add fields such as Subcategory and Product you can just select them from the Report Data pane and drop them inside the rectangle.  I’m also adding some additional labels and fields, as shown in the next screenshot.

The product details in Design view

As you can see I’ve modified the fonts a bit.  The rendered version:

The rendered product details

This is the expression used for displaying the weight:

=IIF(
    IsNothing(Fields!Weight.Value),
    "unknown",
    Fields!Weight.Value & " " & Fields!WeightUnitMeasureCode.Value
)

And here’s the expression for the size field:

=Fields!Size.Value & " " & Fields!SizeUnitMeasureCode.Value

For the layout of the price field I’ve just entered C in the Format property of the textbox.

With the textual product details completed, all that remains to be done is adding the product image.

From the Toolbox pane, drag an Image into the remaining whitespace in the rectangle, next to the product details.  (You did keep some space available, right?)

Again we get the familiar Image Properties popup.  Give it a good name, like ProductImage, and select the image source that we haven’t used yet, Database.  In the Use this field dropdown, select LargePhoto, and select image/gif as MIME type.

Note: the images are stored as GIF.  You can verify this by running a select on the Production.ProductPhoto table.  Looking at the LargePhotoFileName field we see that the extension is .gif.
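
If you want to check for yourself, a quick query such as the one below will do (assuming the standard AdventureWorks2008R2 schema):

--verify the file format of the stored product photos
SELECT TOP 10 ProductPhotoID, LargePhotoFileName
FROM Production.ProductPhoto
WHERE ProductPhotoID > 1; --skip the NO IMAGE AVAILABLE placeholder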

There’s one textbox on the General page that’s still blank.  That one is called Tooltip.  Click the fx button next to it and enter the following formula:

=Fields!Product.Value

Click sufficient OK buttons until the properties dialog is gone, then resize the image placeholder so that it occupies the remaining whitespace.

Here’s what the result looks like in preview:

The final report, with a tooltip on the product image

When hovering the mouse pointer above the product image, you’ll get a nice tooltip.

Conclusion

In this article I have illustrated the three possible methods of adding an image to your Reporting Services report.

Have fun!

Valentino.

References

BOL: Adding Images to a Report


