April 2011


This week I was asked twice, by two different colleagues (luckily), if I knew how to change the order of the parameters in a report in SQL Server Reporting Services.  I still remember the first time I needed this after my upgrade to 2008: I too spent some time looking for it.  Apparently it's still hard to find, so I've now decided to write a quick post about it.  I promise you, once you know it you'll say "how silly".

The following screenshot should say it all:

Use the arrows in the Report Data pane to move parameters up or down

That’s right, those two small arrow buttons in the Report Data pane can be used to move the parameters up or down.  They may be difficult to spot but once you’ve found them you’ll never forget.  Why are they difficult to spot?

Move Up/Down buttons greyed out

For several reasons:

  • they are greyed out when no parameter is selected
  • they may not be visible when your Report Data pane is too small

How silly huh?

Have fun!

Valentino.

References

How to: Change the Order of a Report Parameter (Reporting Services)


In my previous article I showed you how the XML Source component can be used to load XML files into a SQL Server database, using fairly simple XML structures.  In this follow-up article I will demonstrate how to tackle the complex XML issue.

The Complex XML Example

You probably know that SSRS reports (RDL files) are actually XML files.  And they're not the easiest types of XML files around.  To humans they are still readable, but the structure can be quite complex.  So there we've got our example: an RDL.  More specifically, I'll be using the RDL that's available for download in one of my earlier articles.

The Goal

Every good example has got a goal.  Our goal today is to retrieve a list of datasets and fields as defined in the RDL.  Shouldn’t be too difficult, right?

Using The XML Source Component

Let’s try to get this done through the XML Source component with which we’re very familiar by now.  You know the drill: drag an XML Source into your Data Flow, open it up and configure the XML and XSD locations.

Note: to be able to do this I cheated a bit by manually manipulating the RDL a little.  More precisely, I removed all the namespace references from the <Report> tag and stripped the "rd:" prefixes further down the XML.

With both files configured, let’s have a look at the Columns page:

The XML Source component handling a really complex XML file

Look at that massive list of output flows!  In total I've gotten 45 of them, all for free!  Even if you're up to the task of creating 45 output tables, do you really want to find out how to get them joined back together?  To avoid creating that bunch of tables you could consider the Merge Join component instead… 45 times in your data flow.  Didn't think so!

Sure, it would run fine if you manage to get it all constructed.  But in my opinion this is just too silly to try out because there’s an interesting alternative.

And that alternative is XSLT – eXtensible Stylesheet Language Transformations.

Using XSLT

With XSLT you describe what you want to retrieve from the XML document and what it should look like.  In this example we’ll be retrieving the list of datasets and their fields, in CSV format.  CSV stands for Comma-Separated Values, although I prefer the term “Character-Separated Values” as the separator is not always a comma.

To be able to write correct XSLT, you need to know what the XML structure looks like.  Here are the first 31 lines of the sample RDL file mentioned earlier.

<?xml version="1.0" encoding="utf-8"?>
<Report>
  <AutoRefresh>0</AutoRefresh>
  <InitialPageName>A Very Unique Name</InitialPageName>
  <DataSources>
    <DataSource Name="srcContosoDW">
      <DataSourceReference>ContosoDW</DataSourceReference>
      <SecurityType>None</SecurityType>
      <DataSourceID>b7a3d32c-e95d-4acf-bb99-9d60755303ea</DataSourceID>
    </DataSource>
  </DataSources>
  <DataSets>
    <DataSet Name="dsProductList">
      <Query>
        <DataSourceName>srcContosoDW</DataSourceName>
        <CommandText>select DPC.ProductCategoryName, DPS.ProductSubcategoryName, DP.ProductName
from dbo.DimProduct DP
inner join dbo.DimProductSubcategory DPS
    on DPS.ProductSubcategoryKey = DP.ProductSubcategoryKey
inner join dbo.DimProductCategory DPC
    on DPC.ProductCategoryKey = DPS.ProductCategoryKey;</CommandText>
      </Query>
      <Fields>
        <Field Name="ProductCategoryName">
          <DataField>ProductCategoryName</DataField>
          <TypeName>System.String</TypeName>
        </Field>
        <Field Name="ProductSubcategoryName">
          <DataField>ProductSubcategoryName</DataField>
          <TypeName>System.String</TypeName>
        </Field>

As you can see, the main node is called Report.  Nested under Report we’ve got DataSets, which can have several DataSet elements.  Each DataSet has a set of Fields with one or more Field elements.  Using that information we come to the following XSLT.

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:fn="http://www.w3.org/2005/xpath-functions">
  <xsl:output method="text" version="1.0" encoding="UTF-8" indent="no"/>
  <xsl:template match="/">
    <xsl:text>DataSource;DataSet;Field</xsl:text>
    <xsl:text>&#13;&#10;</xsl:text>

    <xsl:for-each select="Report/DataSets/DataSet/Fields/Field">
      <xsl:text>"</xsl:text>
      <xsl:value-of select="../../Query/DataSourceName"/>
      <xsl:text>";"</xsl:text>
      <xsl:value-of select="../../@Name"/>
      <xsl:text>";"</xsl:text>
      <xsl:value-of select="@Name"/>
      <xsl:text>"</xsl:text>
      <xsl:text>&#13;&#10;</xsl:text>
    </xsl:for-each>

  </xsl:template>
</xsl:stylesheet>

So, what is the XSLT describing?  On line three, we say that the output should be text in UTF-8 encoding.  The “template match” on the fourth line takes the whole XML document into consideration, hence the forward slash.  Then on line five we start writing output through the xsl:text tag.  This is our header line.  As you can see we’re using the semi-colon as column separator in the CSV output.  Line six adds a CRLF (carriage-return + line feed) to the output.

Then the fun part starts.  If you have experience with XPath, the way XSLT walks through the XML document should look familiar to you.

The xsl:for-each tag loops over all the Fields in all the DataSets in the document.

Using the xsl:value-of tag, we can fetch values out of the XML.  The first value being retrieved is the name of the data source that the dataset is using.  (I've added the retrieval of the data source to demonstrate how element values are retrieved.)  The path to the DataSourceName element is Report/DataSets/DataSet/Query/DataSourceName, so we use the double-dot syntax to navigate two levels up in the XML tree.  The value of the element itself is retrieved by just using its name, as demonstrated in the XSLT above.

The next value-of tag retrieves the Name attribute of the DataSet, hence the two levels up, and the final value-of fetches the Name attribute of the Field element.
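If you'd like to sanity-check the XSLT's logic outside of SSIS, the same extraction can be sketched with Python's standard library.  This is purely an illustrative stand-in (using a trimmed-down inline RDL fragment), not part of the SSIS solution:

```python
import xml.etree.ElementTree as ET

# A trimmed-down RDL fragment, with namespaces removed just like our file.
rdl = """<Report>
  <DataSets>
    <DataSet Name="dsProductList">
      <Query>
        <DataSourceName>srcContosoDW</DataSourceName>
      </Query>
      <Fields>
        <Field Name="ProductCategoryName"/>
        <Field Name="ProductName"/>
      </Fields>
    </DataSet>
  </DataSets>
</Report>"""

root = ET.fromstring(rdl)
lines = ["DataSource;DataSet;Field"]
# Equivalent of the xsl:for-each over Report/DataSets/DataSet/Fields/Field:
for dataset in root.findall("DataSets/DataSet"):
    source = dataset.findtext("Query/DataSourceName")
    for field in dataset.findall("Fields/Field"):
        lines.append('"%s";"%s";"%s"' % (source, dataset.get("Name"), field.get("Name")))
print("\r\n".join(lines))
```

Each iteration reads one Field, then climbs back up to the DataSet for its Name attribute and its Query/DataSourceName, exactly as the ../../ paths do in the XSLT.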

Now that the XSLT is clear for everyone, how do we apply it to our XML document?  Here comes the time for SSIS once more!

Open up the BIDS with the Control Flow of an SSIS package active and throw in an XML Task component.

The XML Task, one of the Control Flow Items in Integration Services

Double-click the component to open up the XML Task Editor.  This is what it looks like by default:

XML Task Editor: default settings

As this is an all-round XML task that can handle several XML-related tasks, the first setting that we need to modify is called OperationType.  That’s not too complicated because it comes with a dropdown and XSLT is one of the possible values.

The different operation types supported by the XML Task

With XSLT selected, the editor transforms into the following:

The XML Task Editor with XSLT as OperationType

Now we need to configure where the task can find our XML file, through the Source property.  Click the Source textbox to make the dropdown appear and select <New File connection…>.

You can create a new File Connection through the XML Task Editor

In the File Connection Manager Editor, leave the Usage type at Existing file and select the RDL.

Next up we’re going to specify where the task can find the XSLT that needs to be applied to the XML.  That can be done through the Second Operand settings.  As SecondOperandType, select File Connection.  Use the dropdown of the SecondOperand property to create a second new file connection that points to your XSLT file.

With that set up as well, only one step remains.  The task still doesn’t know where the output should be saved.  Or that it actually should get saved.  So first switch the SaveOperationResult property to True.  As you can see, DestinationType is already set to File Connection, that’s what we need.  Use the dropdown of the Destination property to create a third new file connection.  This time however, Usage Type should be set to Create File.  Specify path and filename for the output file and click OK to close the File Connection Manager Editor.

This is what our XML Task now looks like in the editor:

The XML Task Editor with all input and output files specified, as expected for our XSLT experiment

As shown above, I’ve called the output file DatasetInfo.csv.

One more property that can be interesting is the OverwriteDestination property.  Setting it to True can ease the testing of your package if you need to execute it multiple times.  Which you’ll probably want when your XSLT is not giving the expected output.  Don’t forget to set it to False afterwards (depending on what behavior you actually expect from your package).

Okay, now close the XML Task Editor and execute the package.  If you haven’t made any mistakes, the task should color green and you should have an extra file on your hard drive somewhere.  Here’s what the content of my DatasetInfo.csv looks like:

DataSource;DataSet;Field
"srcContosoDW";"dsProductList";"ProductCategoryName"
"srcContosoDW";"dsProductList";"ProductSubcategoryName"
"srcContosoDW";"dsProductList";"ProductName"
"srcContosoDW";"dsProductList";"ProductCategoryColor"
"srcContosoDW";"dsProductList";"EasterEgg"

Look at that, a list of fields, all part of the dsProductList dataset.

“Hang on, wasn’t this article going to demonstrate how to get complex XML files imported into our database?  And now you’re writing the data to a file?!”

Well yeah, you’re right.  Unfortunately the XML Task does not offer the possibility to write to a table in a database.  So to get the data imported into your database you’ll need to set up a Data Flow that imports the CSV files.  But that shouldn’t be too difficult to achieve, right?
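To give an idea of what that final import step boils down to, here's a minimal Python sketch of reading the semicolon-separated output into a database table.  SQLite stands in for SQL Server here, and the table name DatasetInfo is just an assumption for illustration; in practice you'd use a Flat File Source and an OLE DB Destination in a Data Flow:

```python
import csv
import io
import sqlite3

# The kind of CSV the XML Task produced: semicolon-separated, quoted values.
csv_text = '''DataSource;DataSet;Field
"srcContosoDW";"dsProductList";"ProductCategoryName"
"srcContosoDW";"dsProductList";"ProductSubcategoryName"
"srcContosoDW";"dsProductList";"ProductName"
'''

reader = csv.reader(io.StringIO(csv_text), delimiter=';')
header = next(reader)  # skip the header line
rows = list(reader)    # csv.reader strips the surrounding quotes for us

# SQLite stands in for the SQL Server target table in this sketch.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE DatasetInfo (DataSource TEXT, DataSet TEXT, Field TEXT)')
con.executemany('INSERT INTO DatasetInfo VALUES (?, ?, ?)', rows)
count = con.execute('SELECT COUNT(*) FROM DatasetInfo').fetchone()[0]
print(count)  # 3
```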

Mission accomplished!

Conclusion

With this article I have shown how Integration Services can be used to retrieve data out of complex XML files, without actually using the XML Source component.  I hope you’ve enjoyed reading it as much as I had while writing.  Or maybe you know another interesting method to get complex XML imported.  Feel free to post comments!

If, after reading and applying the above technique, you are struggling to get special characters such as é, è, ö or even ô imported correctly, make sure to read my follow-up article on SSIS, Flat Files and Accents.  It also gives some more insight into what the above method actually produces (code page UTF-8 is a hint).

Have fun!

Valentino.

References

XSLT (Wikipedia)

CSV (Wikipedia)

XML Task (MSDN)


SQL Server Integration Services can read XML files, that's known by every BI developer.  (If you didn't know that, don't worry, I'm aiming this article at newcomers as well.)

But how far can you go?  When does the XML Source component become unusable?  Let’s find out!

To create the examples I’m using the following SQL Server version:

Microsoft SQL Server 2008 R2 (RTM) – 10.50.1600.1 (X64)   Apr  2 2010 15:48:46   Copyright (c) Microsoft Corporation  Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: )

Basic Example

This first example is a really simple XML file containing a list of colors with their corresponding RGB code.

<colors>
  <color RGB="FF0000">Red</color>
  <color RGB="00FF00">Green</color>
  <color RGB="0000FF">Blue</color>
  <color RGB="FFFFFF">White</color>
  <color RGB="000000">Black</color>
</colors>
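Before firing up SSIS, it may help to see what a parser makes of this file.  A quick Python sketch (purely illustrative, outside SSIS) shows that each <color> element yields one row, combining the RGB attribute with the element's value:

```python
import xml.etree.ElementTree as ET

colors_xml = """<colors>
  <color RGB="FF0000">Red</color>
  <color RGB="00FF00">Green</color>
  <color RGB="0000FF">Blue</color>
</colors>"""

# One row per <color> node: the RGB attribute plus the element's text value.
rows = [(c.get("RGB"), c.text) for c in ET.fromstring(colors_xml).findall("color")]
print(rows)
```

This is exactly the shape of the rows the XML Source will hand to the data flow, as we'll see below.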

Let’s import this into a database.  Open up the BIDS, create an SSIS project and throw a Data Flow Task into the package and open it up.

The component that we’re now most interested in is the XML Source, one of the components in the Data Flow Sources category in the Toolbox.

The XML Source component

Add one of those to your Data Flow and double-click it to open up the XML Source Editor.

The Data Access Mode should be set to XML file location, which is the default setting.  The other options are XML file from variable – useful if you’ve got the file path and name of the XML file in a variable – and XML data from variable – interesting if your XML data is actually stored in a variable.

As XML Location, select the .xml file.  Our XML sample does not have an inline schema, so we can’t use that checkbox.  And we can’t click the OK button either, it’s grayed out.  The source component really expects a description of the XML structure before the editor can be closed.

The bottom of the screen even shows a warning with the following message:

XML Schema (XSD) is not specified. Select an existing XSD or click Generate XSD to create an XSD from the XML file.

So, what are you waiting for?  Click the Generate XSD button to let the XML Source Editor generate the XSD schema for us.  Real easy, right?

Remember where you save the file, and when it’s generated, select the .xsd file in the XSD location textbox.  As you can see, the OK button will become available.  But don’t click it just yet.

Here’s what the XML Source Editor now looks like:

XML Source Editor with an XML and XSD file specified

Let’s now move on to the second page of the XML Source Editor, called Columns.  When you open it, you’ll receive the following popup with a couple of warnings:

Warning gets displayed when opening the Columns page

The editor is letting us know that the columns that are being generated do not have a maximum length specified.  So it’s setting them to Unicode (DT_WSTR) with a length of 255.  Click the OK button to get rid of that message and to be able to see the generated columns.

Note: if your data elements or attributes may contain longer strings then you should have a look at modifying the length specification.  This can be done through the Advanced Editor, which is opened by right-clicking the XML Source.  The Input and Output Properties page is the one you’re after.

The Columns page, showing the columns that the XML Source generated for us

As you can see, our only attribute – RGB – is nicely put in a column with the same name.  The value of each <color> node however is not put in a column called Color.  By default, this value is put into a column called "text".  Which is a weird name for a column in an SSIS data flow if you ask me.  The good thing is that you can just rename it by changing the Output Column value.

Let’s test this out.  My favorite way is to add a Multicast component to the Data Flow, then add a Data Viewer on the connector (right-click the green arrow, select Data Viewers, click Add > OK > OK).  Now execute the package to get this result:

Testing the output of the XML Source through the Data Viewer

Mission accomplished, we’ve retrieved data from a very basic XML file!

Adding Some Complexity

Let’s move on to the second example of this article.  The difference with the previous example is that now we’ve got multiple nested structures to deal with.

The example represents a list of book reviews, including some details on the books themselves.  A book can have multiple writers and obviously multiple reviews as well.

<books>
    <book pages="300">
        <title>Microsoft SQL Server 2008 R2 Master Data Services</title>
        <category>Information Technology</category>
        <authors>
            <author>Jeremy Kashel</author>
            <author>Tim Kent</author>
            <author>Martyn Bullerwell</author>
        </authors>
        <reviews>
            <review>If you're looking for an excellent book on the new Master Data Services component of SQL Server 2008 R2, definitely check this one out!  To be released in June 2011 by Packt Publishing!</review>
        </reviews>
    </book>
    <book pages="832">
        <title>Inside Microsoft SQL Server 2008: T-SQL Querying</title>
        <category>Information Technology</category>
        <authors>
            <author>Itzik Ben-Gan</author>
            <author>Lubor Kollar</author>
            <author>Dejan Sarka</author>
            <author>Steve Kass</author>
        </authors>
        <reviews>
            <review>Every "Inside SQL Server" book can be recommended, especially when written by Itzik!</review>
        </reviews>
    </book>
    <book pages="1137">
        <title>The Lord of the Rings</title>
        <category>Fantasy</category>
        <authors>
            <author>J.R.R. Tolkien</author>
        </authors>
        <reviews>
            <review>Like fantasy?  What are you waiting for then?  It's a classic!</review>
            <review>If you liked the movie, you'll love the book.</review>
        </reviews>
    </book>
</books>
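To see why nested repeating structures call for multiple outputs, consider how you'd flatten this file by hand.  The simplified Python sketch below (hypothetical, using a plain counter where SSIS generates its own surrogate keys, and collapsing the intermediate authors/reviews levels) ends up with one row set per repeating node:

```python
import xml.etree.ElementTree as ET

books_xml = """<books>
  <book pages="300">
    <title>Master Data Services</title>
    <authors><author>Jeremy Kashel</author><author>Tim Kent</author></authors>
    <reviews><review>Check it out!</review></reviews>
  </book>
  <book pages="1137">
    <title>The Lord of the Rings</title>
    <authors><author>J.R.R. Tolkien</author></authors>
    <reviews><review>A classic!</review><review>Love the book.</review></reviews>
  </book>
</books>"""

# One list ("output") per repeating structure, linked by a generated key.
books, authors, reviews = [], [], []
for book_id, book in enumerate(ET.fromstring(books_xml).findall("book")):
    books.append((book_id, book.findtext("title"), book.get("pages")))
    for author in book.findall("authors/author"):
        authors.append((book_id, author.text))
    for review in book.findall("reviews/review"):
        reviews.append((book_id, review.text))

print(len(books), len(authors), len(reviews))  # 2 3 3
```

The XML Source does essentially the same thing, except that it also creates outputs for the intermediate <authors> and <reviews> levels, which is how we'll end up with five tables shortly.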

Configure an XML Source so that it uses the books.xml file, generate the XSD and specify its location.  I’m not going into details on that, the procedure is the same as in our first example above.

Now open up the Columns page to have a closer look at how the XML data is going to get imported.

The XML Source generates multiple outputs for the nested structures

So how does the XML Source component deal with the multiple nested structures?  It generates multiple outputs!  If you select another output from that dropdown, you get to see its fields.

To get a clear understanding of what exactly is going on, let’s connect each output with an OLE DB Destination component.  The target table can be generated based on the incoming fields by clicking the New button.  Replace the table name in the generated CREATE TABLE script with a clear one that fulfills your naming convention requirements – such as NO SPACES IN A TABLE NAME for instance – and hit the OK button.

Destination table can be generated by using the New button in the OLE DB Destination Editor

Now that the table is created, it will be automatically selected in the Name of the table or the view dropdown.  Don’t forget to visit the Mappings page so that the, well, mappings are created.  If no field names were modified in the CREATE TABLE script then all fields should be mapped automatically based on their names.

With all five destinations added, execute the package.

Each XML Source output is connected to an OLE DB Destination - executes fine

So now we’ve loaded the data from our XML file into a database, but the data is spread over five tables.  How do we retrieve that data?  Join them together!

select * from XML_book
inner join XML_authors on XML_authors.book_Id = XML_book.book_Id
inner join XML_author on XML_author.authors_Id = XML_authors.authors_Id
inner join XML_reviews on XML_reviews.book_Id = XML_book.book_Id
inner join XML_review on XML_review.reviews_Id = XML_reviews.reviews_Id;

And the result looks like this:

XML data imported into the SQL Server database
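If you'd like to play with the join logic without a SQL Server instance at hand, here's a reduced sketch using SQLite.  It's deliberately simplified: only two of the five tables, fewer columns, and hand-picked key values, but the surrogate-key join pattern is the same:

```python
import sqlite3

# Reduced versions of the generated tables (the real ones have more
# columns plus the intermediate XML_authors / XML_reviews link tables).
con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE XML_book   (book_Id INTEGER, title TEXT);
CREATE TABLE XML_author (author TEXT, book_Id INTEGER);
INSERT INTO XML_book   VALUES (2, 'The Lord of the Rings');
INSERT INTO XML_author VALUES ('J.R.R. Tolkien', 2);
""")
result = con.execute("""
select XML_book.title, XML_author.author
from XML_book
inner join XML_author on XML_author.book_Id = XML_book.book_Id
""").fetchall()
print(result)  # [('The Lord of the Rings', 'J.R.R. Tolkien')]
```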

Conclusion

We have managed to flatten the data from an XML file containing multiple nested repeating nodes, nice huh?  But do you also feel the limitation of this method?  By modern standards this was still a fairly simple XML file, and yet we already needed five tables to store the data.  Can you imagine what this will give with a really complex file?

Watch this blog for the follow-up article where I will try to import data from a really complex XML file!

Have fun!

Valentino.

References

XML Source component


© 2008-2017 BI: Beer Intelligence? All Rights Reserved