Integration Services


If you’ve read my article on using SSIS and XSLT to get XML imported into the database, you know that I cheated a little by first manually removing the namespaces from the XML document.

Well, that obviously doesn’t work smoothly when the process needs to get automated.

So here’s a method to use XSLT to remove the namespaces for you.

Removing The Namespaces

Using the XML Task as explained in my article, you can apply the XSLT that removes the namespaces as an additional step prior to the XML Task that applies the XSLT for the CSV conversion.  As the output destination, you could set up a package variable that accepts the “XML without namespaces”, or you can write to file.  Up to you to decide.

Here’s the XSLT that will remove namespaces from the XML:

<!-- remove namespaces -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:template match="@*">
    <xsl:attribute name="{local-name()}">
      <xsl:value-of select="."/>
    </xsl:attribute>
    <xsl:apply-templates/>
  </xsl:template>
  <xsl:template match="*">
    <xsl:element name="{local-name()}">
      <xsl:apply-templates select="@* | node()"/>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>
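If you’d like to verify the effect outside SSIS, here’s a minimal Python sketch (standard library only, with a made-up namespace URI for the demo) that mimics what the XSLT does: it keeps only the local names of elements and attributes.

```python
import xml.etree.ElementTree as ET

# Hypothetical namespaced input; the "b" prefix and URI are made up for the demo
xml_in = ('<b:books xmlns:b="http://example.com/books">'
          '<b:title lang="EN">Test</b:title></b:books>')

def strip_namespaces(root):
    # ElementTree stores namespaced names as "{uri}local";
    # keeping only the part after "}" mirrors local-name() in the XSLT
    for elem in root.iter():
        if '}' in elem.tag:
            elem.tag = elem.tag.split('}', 1)[1]
        for key in list(elem.attrib):
            if '}' in key:
                elem.attrib[key.split('}', 1)[1]] = elem.attrib.pop(key)
    return root

print(ET.tostring(strip_namespaces(ET.fromstring(xml_in)), encoding='unicode'))
# → <books><title lang="EN">Test</title></books>
```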

(Ref. http://blogs.msdn.com/b/kaevans/archive/2003/06/13/8679.aspx)

Removing namespaces is one thing, but you’re losing some information.  What if you’d like to keep the namespaces as part of the node name?

Replacing The Namespaces

Well, that’s possible too!  Using the XSLT below, namespaces are kept but the colons separating the namespace prefixes from the element names are replaced with underscores.  The translate() function is used to achieve this:

<!-- replace namespaces -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:template match="@*">
    <xsl:attribute name="{local-name()}">
      <xsl:value-of select="."/>
    </xsl:attribute>
    <xsl:apply-templates/>
  </xsl:template>
  <xsl:template match="*">
    <!-- keep namespace prefix as first part of node name (replace colon with underscore) -->
    <xsl:element name="{translate(name(), ':', '_')}">
      <xsl:apply-templates select="@* | node()"/>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>
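As a quick sanity check of what translate() does here: translate(name(), ':', '_') replaces every colon in the qualified node name, which is equivalent to a simple string replace.  A tiny sketch, using a hypothetical “bk” prefix:

```python
# name() in XSLT returns the qualified name, e.g. "bk:title" for a
# hypothetical "bk" prefix; translate(name(), ':', '_') yields "bk_title"
qualified_name = "bk:title"
print(qualified_name.replace(':', '_'))
# → bk_title
```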

Have fun!

Valentino.


This is a follow-up to my article on Loading Complex XML Using SSIS and XSLT.  In that article I demonstrated how you can convert complex XML into simple CSV using XSLT in SSIS.

The resulting DTSX package and input files can be downloaded from my SkyDrive through this link.

Dealing With Special Characters

If you’ve followed the instructions in my article mentioned above and you need to deal with special characters such as the é and è encountered in the French language, you probably noticed that it wouldn’t really work as expected.  In fact, in your final result you may have ended up with the special characters being replaced with other, even more special, characters.  Obviously not good.

Here’s an explanation on the reason why that happens, and also how to deal with it.

Setting The Scene

Imagine the following sample XML, representing a really huge book collection:

<books>
    <book>
        <title>The Hitchhiker's Guide to the Galaxy</title>
        <author>Douglas Adams</author>
        <language>EN</language>
        <description>The Hitchhiker's Guide to the Galaxy is a science fiction comedy series created by Douglas Adams.</description>
    </book>
    <book>
        <title>Le Trône de fer</title>
        <author>George R.R. Martin</author>
        <language>FR</language>
        <description>Le Trône de fer est une série de romans de fantasy de George R. R. Martin, dont l'écriture et la parution sont en cours. Martin a commencé à l'écrire en 1991 et le premier volume est paru en 1996. Prévue à l'origine comme une trilogie, la série compte désormais cinq volumes publiés et deux autres sont attendus.</description>
    </book>
</books>

As you can see, the second book in the list is the French version of the first book in the A Song of Ice and Fire series by George R.R. Martin and as it goes with French, there are some accents in the description of the book.

We’ll use the following XSLT to convert it to CSV:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:fn="http://www.w3.org/2005/xpath-functions">
  <xsl:output method="text" version="1.0" encoding="UTF-8" indent="no"/>
  <xsl:template match="/">
    <xsl:text>BookTitle;Author;Language;Description</xsl:text>
    <xsl:text>&#13;&#10;</xsl:text>

    <xsl:for-each select="books/book">
      <xsl:text>"</xsl:text>
      <xsl:value-of select="title"/>
      <xsl:text>";"</xsl:text>
      <xsl:value-of select="author"/>
      <xsl:text>";"</xsl:text>
      <xsl:value-of select="language"/>
      <xsl:text>";"</xsl:text>
      <xsl:value-of select="description"/>
      <xsl:text>"</xsl:text>
      <xsl:text>&#13;&#10;</xsl:text>
    </xsl:for-each>

  </xsl:template>
</xsl:stylesheet>

Using an XML Task in the Control Flow, as explained in my article, we’d get the following output:

BookTitle;Author;Language;Description
"The Hitchhiker's Guide to the Galaxy";"Douglas Adams";"EN";"The Hitchhiker's Guide to the Galaxy is a science fiction comedy series created by Douglas Adams."
"Le Trône de fer";"George R.R. Martin";"FR";"Le Trône de fer est une série de romans de fantasy de George R. R. Martin, dont l'écriture et la parution sont en cours. Martin a commencé à l'écrire en 1991 et le premier volume est paru en 1996. Prévue à l'origine comme une trilogie, la série compte désormais cinq volumes publiés et deux autres sont attendus."

So far so good, all accents are still present!

Then we’d import the file using a Flat File Source component in a Data Flow Task.  Here’s what the General page of the Flat File Connection Manager would look like:

Flat File Connection Manager: General

We’ve set the double quote as Text Qualifier and checked the Column names in the first data row checkbox.

Switching to the Columns page we’d get the following:

Flat File Connection Manager: Columns - the Preview has messed up the accents!

Hang on, that’s not right!  The Preview is not displaying our accents as expected!  Oh my, what’s going on here? Let’s call the code page detectives!

A Mismatch Investigation

Take a good look at the XSLT which we’ve used to convert the XML into CSV, especially the xsl:output line:

<xsl:output method="text" version="1.0" encoding="UTF-8" indent="no"/>

That line specifies that the text output should be encoded using the UTF-8 code page.

Now take a good look at the General page in the screenshot earlier, more precisely this part:

Code page: 1252 (ANSI - Latin I) is not what we need right now!

Indeed, code page 1252 (ANSI – Latin I), while the input is UTF-8.  That mismatch garbles certain characters, as demonstrated here.  The fix is fairly easy: just change the Code page setting to 65001 (UTF-8).
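You can reproduce the mismatch outside SSIS.  The sketch below (plain Python, no SSIS involved) encodes an accented string as UTF-8 and then decodes the bytes as code page 1252, which is exactly what the Flat File Connection Manager was doing:

```python
# "ô" is one character but two bytes in UTF-8 (0xC3 0xB4); reading those
# bytes as code page 1252 turns them into two unrelated characters
utf8_bytes = "Trône".encode("utf-8")

print(utf8_bytes.decode("cp1252"))  # wrong code page → TrÃ´ne
print(utf8_bytes.decode("utf-8"))   # correct code page → Trône
```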

Code page: 65001 (UTF-8) - much better!

If we now switch back to the Columns page we should come to the following result:

Flat File Connection Manager: Columns page preview with accents!

Ah, sure looks better doesn’t it?  All accents are present as expected.

But in case you thought that’s it, I’d advise you to think again.  Don’t worry, I’ll demonstrate what I mean.  Let’s do that by setting up a simple Data Flow.

Setting Up The Data Flow

Throw in a Flat File Source and specify our Flat File Connection Manager.  I also prefer to keep NULLs as they come in, using the Retain null values from the source as null values in the data flow checkbox.

Flat File Source: Connection Manager

If you click the Preview button you should get similar output as shown one screenshot earlier.

Now hook this up to an OLE DB Destination that writes the incoming data into a table in your favorite database:

OLE DB Destination is not happy :(

As you can see, our destination is not entirely happy with all this.  Here are the details of one of the error messages:

Validation error. Data Flow Task: Data Flow Task: The column “BookTitle” cannot be processed because more than one code page (65001 and 1252) are specified for it.

Looks like once more we’ve got a code page conflict.  And we sure do. Clicking the Data Flow connector between the Flat File source and OLE DB destination shows us the following:

Data Flow Path Editor shows that our strings are encoded using the 65001 code page.

Each of our incoming string values is encoded using the 65001 (UTF-8) code page.  But our database was created using the Latin1_General_CI_AS collation.  So we’ve indeed got a code page conflict!

Fear not, that’s easily remedied.  Add a Derived Column transformation in between the source and destination and convert each incoming string value using a cast expression such as this one:

(DT_STR, 50, 1252)BookTitle_IN

Note: whenever I need to manipulate incoming columns to create a second version of the same column, I rename the incoming column to TheColumn_IN.  The new version will be called TheColumn and preferably TheColumn is the name of the field in the destination table.  This makes it easy to distinguish all columns later down the flow.
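For illustration only, here’s a rough Python analog of that cast (the function name and its behavior are my own approximation, not SSIS code): the value gets encoded into the target code page, and a value longer than the declared length triggers a truncation error, much like the one the Flat File source can raise.

```python
def cast_dt_str(value, length, codepage="cp1252"):
    # Rough, hypothetical analog of the SSIS (DT_STR, length, codepage) cast:
    # encode to a single-byte code page and fail when the text doesn't fit
    encoded = value.encode(codepage)  # raises UnicodeEncodeError for unmappable chars
    if len(encoded) > length:
        raise ValueError("Text was truncated")
    return encoded.decode(codepage)

print(cast_dt_str("Le Trône de fer", 50))
# → Le Trône de fer
```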

Here’s what the final version of the Derived Column looks like:

Using the Derived Column transformation to cast the incoming strings into the correct code page.

Next we’ll need to open the Destination and change the mapped fields to the new ones.  Because my new columns are called exactly the same as the fields in the destination table, I can do that easily.  In the Mappings page, all I need to do is right-click the grey background in between the two tables and click Select All Mappings, hit the Delete button, right-click again and click Map Items By Matching Names:

Using Map Items By Matching Names, easy!

With the data flow finished, let’s give our package a run!

Flat File Source has got a length issue!

Ouch, our source is not happy!  A closer examination of the Output pane brings us to the following error:

Error: 0xC02020A1 at Data Flow Task, Flat File Source [16]: Data conversion failed. The data conversion for column “Description” returned status value 4 and status text “Text was truncated or one or more characters had no match in the target code page.”.

Oh right, so far we haven’t bothered looking at the actual length of the data that we’re importing.  Actually, what is the length of our data flow columns?  Well, if you’ve been paying close attention you should have noticed the number 50 several times in the screenshots and expressions above.  That’s indeed the default length for text columns when importing a flat file.

And if you scroll back up to the sample XML, you’ll notice that the content for the description is longer than 50 characters, thus causing our error!  Let’s find out how to get that solved!

Fixing The Field Length Issue

The first step in getting this fixed is opening up the Advanced page in the Flat File Connection Manager editor.

Flat File Connection Manager: using the Advanced page to change field length.

Then select the Description field and change its OutputColumnWidth property from 50 to 500.

That will cause the source to generate a warning.  Remove this warning by opening and closing the source editor.  Click the Yes button in the popup that appears.

The next step is changing the expression for the Description field in the Derived Column to this:

(DT_STR,500,1252)Description_IN

Indeed, the field length is one of the parameters in that cast.  The other numeric parameter is obviously the code page.

Having done that you’ll notice that the destination will start complaining.  Of course, you’ll need to adapt the destination table to reflect the field length increase as well.  So change the table definition and open/close the destination editor to make it happy.

Alright, let’s run the package once more!

Finally the data flow is happy with it all and has inserted two records:

That's more like it: all components colored green!

And what does our table contain?  Let’s find out:

All accents have been imported!

That’s looking good for sure!

Conclusion

In this follow-up article I have demonstrated what might go wrong when you need to deal with special characters while importing flat files, and how to solve your possible issues.  In case you missed the original article, have a look through this link.

Have fun!

Valentino.

References

Wikipedia: UTF-8


Do you like the Custom Code functionality in SSRS?

And what would you think if SSIS offered the same possibility?  Imagine, being able to write a custom function in .NET and then use it in any expression in your package, how powerful that would be!

There’s already one function I would have written today: GetFilename(string path).

If you believe such functionality to be useful, please vote on the following Connect request: Add user defined function support to the SSIS expression language

Have fun!

Valentino.


Quick post to ask a moment of your time.

Have you ever wanted a feature in BIDS to quickly identify package variables that have become obsolete?  Well, I’ve been involved in the cleanup of existing packages and I can tell you, that feature would be very handy!

After a search on the internet, it turns out that one of the planned features for the BIDS Helper contains exactly that.  The request is a bit wider than what I need, but at least “highlight unused variables” is part of it.

All I’m asking now is just a minute of your time to vote for that feature request.

Have fun, and thank you!

Valentino.


When I opened an existing SSIS project in the new SQL Server 2012 RC0, I came to an interesting discovery: an empty Toolbox pane!  Even with an SSIS package open in the designer.  Hmm, that’s funny!  So where are my SSIS components?

Take a good look at the following screenshot:

The Toolbox is no longer the SSIS Toolbox but the new SSIS Toolbox is!

That’s right, they are not in the Toolbox anymore but in the SSIS Toolbox instead.  This new toolbox is a bit different from the old one.  Besides the changed grouping of components, the most important change is that it automatically detects any custom components.  You no longer need to right-click, select Choose Items, go fetch a coffee, wait until it cools down a bit, drink it and finally … select your custom component.  No, you’ll have to find another reason to get that coffee shot.  Actually, that’s not entirely true: you still need to right-click and select Refresh Toolbox before the custom components are shown.

Another difference is that it’s split in two parts.  The bottom half of the pane now contains a description of the selected item, including a link that should lead to samples and a link to the Books Online.

The new SSIS Toolbox shows a description of the selected item

Out of curiosity I tried the Find Samples link a couple of times, but for now it doesn’t seem to deliver much content:

Not many results through Find Samples link

Okay, so one thing remains: how do you open the new SSIS Toolbox pane?  According to the Books Online it should be opened automatically when you open an existing project.  Well, apparently not all the time!

The first place I’d look is in the View menu.  But alas, SSIS Toolbox is not one of the menu items.  Not even in the Other Windows submenu.  Why oh why?!

Long story short: do you see those two buttons in the below screenshot?  They’re new!

Package designer has gotten two new buttons

The first button leads to the Variables pane, the second button will open the SSIS Toolbox.  Good to know isn’t it?!

Further investigation led me to the following: according to the Books Online, the SSIS Toolbox item should actually be located in the View > Other Windows menu.  As that is not the case and I think it’s only logical to have that pane added to the View menu as well, I’ve filed a bug on Microsoft Connect.  Go ahead and vote!

Have fun!

Valentino.

References

SSIS Toolbox



© 2008-2014 BI: Beer Intelligence? All Rights Reserved