Integration Services


Sometimes we take the silly little things we do in our daily life for granted and assume everyone else is aware of them too.  That’s where we’re wrong: as I’ve found out, they aren’t!  Not always, anyway.  That gave me the idea to start this Silly SQL blog series.  Each post will explain one little thing I do or use regularly that makes my life easier.  Here’s the first one!

Last week I noticed a co-worker making a lot of keyboard noise while implementing a Data Flow Transformation in an SSIS ETL package.  When I turned around to have a look at his screen, I saw he was working on an OLE DB Destination, nothing wrong with that.  Basically he was hitting the down arrow, followed by TAB twice, down arrow again, and so on, to match up input columns with destination columns.  This method worked because we had put the incoming fields in the same order as the columns in the destination table, and we had given them the same names using aliases in the source query.

However, for tables with over 200 fields this method is quite tiresome (and annoying for colleagues, unless they’re wearing headsets).  Nice as I am, I decided to help him out.  I asked him if I could borrow his mouse for a second and then right-clicked the grey area between the two tables:

The Map Items by Matching Names functionality

You should have seen his face when that window appeared, and even more so when I selected Map Items by Matching Names!  Apparently this functionality has been hidden really well, because in that same week I caught another co-worker in exactly the same situation.  And these are not junior profiles I’m talking about!

If you’re now thinking “Hang on, I never have to set up the matches myself?”, that may be true!  If you’re always creating new packages and the names match, then BIDS sets up the matching fields automatically when you open the Mappings page.  But we work with previously-defined templates to speed up development.  In that case BIDS does not set up the matches, so the functionality shown above really comes in handy!

A fast way to find out if all fields have been matched is to click the Input Column header:

Click Input Columns header to put unmatched items on top!

This orders the items with the unmatched ones, recognizable by <ignore>, on top!

See, the things you take for granted aren’t always obvious to others, as proven here.

That’s it for now, let’s see if I can come up with another silly thing for the next post!

Update: this post was turned into a video by the good folks at Webucator: check it out!

In the meantime: have fun!

Valentino.


As you may already know, it is possible to use the Execute SQL Task to populate a package variable with a result set.

In case you’re not that familiar with this technique yet, here’s a quick overview of the setup.  You give the task a query, set the ResultSet property to Full result set, and configure a package variable in the Result Set property window.  The package variable’s type is System.Object.

But what exactly is this mysterious System.Object and how can we explore it?  Well, that depends.  More precisely, it depends on the Connection Type you’ve chosen in the Execute SQL Task properties.

Let’s explore two possibilities: ADO.NET and OLE DB.  Our end goal is straightforward: retrieve the number of records in the result set.

The query I’m using in the Execute SQL Task is this one:

select ProductAlternateKey
from dbo.DimProduct
where Color = 'blue'

On my AdventureWorksDW2012 database it should return 28 records.

Exploring the ADO.NET result set

The first step is finding out exactly what type this result set object is.  Hook up a Script Task to your Execute SQL Task and put a breakpoint on it.  Now run your package and examine the Locals window:

Debugging the Control Flow to find the object type

Well look at that, it’s a System.Data.DataSet!  Using this knowledge it’s fairly simple to produce code that fetches the record count:

// cast the System.Object variable to a DataSet (ADO.NET connection type)
DataSet ds = (DataSet)Dts.Variables["MyResultset"].Value;
// the result set is the first (and only) table in the Tables collection
MessageBox.Show(ds.Tables[0].Rows.Count.ToString());

Note: don’t forget to add the package variable to the ReadOnlyVariables before opening the code editor.

The System.Data namespace is included by default in the using statements, no worries there.  So we can just cast the variable to a DataSet.  The DataSet object contains a DataTableCollection called Tables.  As there’s only one result set, it is located at index zero.  We travel down the object tree to finally find the Count property of the Rows DataRowCollection.

And here’s the result:

The message box shows 28 items

That’s all there is to it, easy huh?  Let’s move on to our second option, OLE DB.

Exploring the OLE DB result set

Once again we start at the beginning: debugging the Control Flow to find out what object type our mysterious System.Object is:

The OLE DB result set gives us a System.__ComObject, hmm...

Hmm, System.__ComObject, that’s … special.  Oh right, the OLE DB provider uses a COM wrapper.  How can we “unwrap” our object and introduce it to the .NET world?  Let’s see if we can find out what’s hidden behind that wrapper by using the following code:

MessageBox.Show(Microsoft.VisualBasic.Information.TypeName(Dts.Variables["MyResultset"].Value));

TypeName is a VB.NET function that returns the name of the data type of the value passed into it.  Note that calling GetType() on the variable would just give us System.__ComObject again; TypeName asks the COM object itself for its real type name.

To get this to run in a C# SSIS task you first need to add the Microsoft.VisualBasic reference:

Adding a reference to the VB.NET assembly

Executing the package results in this:

Result type: Recordset

So our result is a Recordset; I think we more or less knew this already.  What kind of Recordset?  Well, an ADO Recordset.  We know this because the following code works:

// the OleDbDataAdapter knows how to read an ADO Recordset
System.Data.OleDb.OleDbDataAdapter da = new System.Data.OleDb.OleDbDataAdapter();
DataTable dt = new DataTable();
// Fill copies the rows of the COM-wrapped Recordset into the DataTable
da.Fill(dt, Dts.Variables["MyResultset"].Value);
MessageBox.Show(dt.Rows.Count.ToString());

Basically, we use the Fill method of the OleDbDataAdapter to fill a System.Data.DataTable with the data from the ADO Recordset.  The version of the method in our example (there are several overloads) accepts two parameters:

public int Fill(
    DataTable dataTable,
    Object ADODBRecordSet
)

With the DataTable filled we’ve once again got access to a Rows DataRowCollection, exactly the same as in our ADO.NET example in fact.  Executing the package now results in exactly the same message box as shown earlier: 28 records!

Beware of pitfalls

If you mix the two methods up you’ll get funky errors such as:

System.InvalidCastException: Unable to cast COM object of type ‘System.__ComObject’ to class type ‘System.Data.DataSet’. Instances of types that represent COM components cannot be cast to types that do not represent COM components; however they can be cast to interfaces as long as the underlying COM component supports QueryInterface calls for the IID of the interface.

and also

System.ArgumentException: Object is not an ADODB.RecordSet or an ADODB.Record.

So be careful: use the right object type for your particular System.Object.
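
If a package needs to cope with both connection types, a runtime check on the variable’s type does the trick.  Here’s a minimal sketch combining the two methods shown above, assuming the variable name MyResultset from the examples:

object value = Dts.Variables["MyResultset"].Value;
if (value is DataSet)
{
    // ADO.NET connection type: the variable already holds a DataSet
    DataSet ds = (DataSet)value;
    MessageBox.Show(ds.Tables[0].Rows.Count.ToString());
}
else
{
    // OLE DB connection type: treat the value as a COM-wrapped ADO Recordset
    System.Data.OleDb.OleDbDataAdapter da = new System.Data.OleDb.OleDbDataAdapter();
    DataTable dt = new DataTable();
    da.Fill(dt, value);
    MessageBox.Show(dt.Rows.Count.ToString());
}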

Conclusion

In this article I’ve demonstrated a couple of methods that can be used to retrieve information from the mysterious System.Object result set in an SSIS package.

Have fun!

Valentino.

Additional References

Execute SQL Task

Result Sets in the Execute SQL Task

OleDbDataAdapter Class


Once again I’ve been wasting some time because of a silly bug.  This time it was due to the OLE DB Source component and the way it works with parameters.  If you are in a situation where you know your query is working fine and yet no records are going down the data flow, here’s a possible solution!

Disclaimer: this issue exists up until SQL Server 2008 R2.  Read on for details!

Update: after being advised to do so by several people, including Jamie Thomson, I’ve filed a bug at MS Connect: SSIS OLE DB Source incorrectly returns zero records in combination with parameter and comment

The Situation

I had a Data Flow with an OLE DB Source that uses one parameter, for instance:

select ProductAlternateKey, EnglishProductName
from dbo.DimProduct
--some really smart comment goes here
where Color = ?

I knew the query was working fine because when executed through SSMS and with the question mark replaced with ‘blue’, it would return 28 rows:

28 records in Management Studio

But when executed in BIDS, through either Execute Package or Execute Task, it would return zero records:

Zero records, zilch, nada, niente, none at all!

So I thought something must be going wrong with the package variable that gets passed into the source parameter, somehow.  I’m not going into detail on everything I tried in my attempts to get this working, but I can tell you that I started to get really irritated.  My colleague Koen Verbeeck (b|t) can confirm this because I called him over to my desk to help me think! (thanks btw!)

After some further tinkering with the data flow, we had our smart moment of the day and decided to launch SQL Server Profiler to see what BIDS was sending to the server!  I’m not sure if you’re aware of this, but BIDS does some metadata-related work when preparing queries.  As far as I can tell, it also tries to determine the parameter type by running the following query:

 set fmtonly on select Color from  dbo.DimProduct
--some really smart comment goes here where 1=2 set fmtonly off

When creating this statement, it seems to use the whole FROM clause of the original query, including any trailing comments.  It combines that with a SELECT statement that contains the field that gets filtered and it appends " where 1=2 set fmtonly off".

But alas, apparently it’s not aware that lines can be commented out by using a double dash.  So part of its generated statement is commented out.  What it should have done is used some CRLFs, especially in front of the WHERE clause.  But it didn’t.
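
Had it inserted line breaks, especially in front of the WHERE clause, the generated statement would have looked something like this and worked just fine:

set fmtonly on
select Color from dbo.DimProduct
--some really smart comment goes here
where 1=2
set fmtonly off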

So, because the WHERE clause and the trailing "set fmtonly off" end up inside the comment, FMTONLY remains on while the SELECT statement gets executed, resulting in zero records!

For those unfamiliar with the FMTONLY setting:

Returns only metadata to the client. Can be used to test the format of the response without actually running the query.
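
You can see the effect for yourself in SSMS; run something like the following and you get the column header back, but zero rows:

set fmtonly on;
select ProductAlternateKey from dbo.DimProduct where Color = 'blue';
set fmtonly off;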

And I can actually confirm what I’m stating here by changing the query to the following:

set fmtonly off;
select ProductAlternateKey, EnglishProductName
from dbo.DimProduct
--some really smart comment goes here
where Color = ?

28 records down the pipe!

We've got data!

But this hack is a little too dirty to put in production.  So what else can we do?  Well, use block-style comments instead and we won’t face the issue!

select ProductAlternateKey, EnglishProductName
from dbo.DimProduct
/* some even smarter comment goes here */
where Color = ?

So, as I mentioned at the start of the post, this behavior can be reproduced using SSIS versions prior to 2012.  What about 2012 then?  Here’s the result of the Data Flow using the first query mentioned above:

SSIS 2012: we've got data, even with the "faulty" query!

Alright, that works better!  Now let’s use Profiler to check what’s going on here.  This is the first statement that gets executed:

exec [sys].sp_describe_undeclared_parameters N'select ProductAlternateKey, EnglishProductName
from dbo.DimProduct
--some really smart comment goes here
where Color = @P1'

Further down, I also see this one:

exec [sys].sp_describe_first_result_set N'select ProductAlternateKey, EnglishProductName
from dbo.DimProduct
--some really smart comment goes here
where Color = @P1',N'@P1 nvarchar(15)',1

SSIS 2012 uses an entirely different approach, no longer relying on the FMTONLY setting!  Hang on, this rings a bell!  Look at what the BOL page for SET FMTONLY (2012 version) specifies:

Do not use this feature. This feature has been replaced by sp_describe_first_result_set (Transact-SQL), sp_describe_undeclared_parameters (Transact-SQL), sys.dm_exec_describe_first_result_set (Transact-SQL), and sys.dm_exec_describe_first_result_set_for_object (Transact-SQL).

Cool stuff!

Conclusion

If you’re not on SQL Server 2012 yet, be careful with comments in OLE DB Sources in the SSIS Data Flow!  Oh, and get SQL Server Profiler off its dusty shelf now and then!

Have fun!

Valentino.


This is a follow-up to my article on Loading Complex XML Using SSIS and XSLT.  In that article I demonstrated how you can convert complex XML into simple CSV using XSLT in SSIS.

The resulting DTSX package and input files can be downloaded from my SkyDrive through this link.

Dealing With Special Characters

If you’ve followed the instructions in my article mentioned above and you need to deal with special characters such as the é and è encountered in the French language, you probably noticed that it wouldn’t really work as expected.  In fact, in your final result you may have ended up with the special characters being replaced with other, even more special, characters.  Obviously not good.

Here’s an explanation of why that happens, and how to deal with it.

Setting The Scene

Imagine the following sample XML, representing a really huge book collection:

<books>
    <book>
        <title>The Hitchhiker's Guide to the Galaxy</title>
        <author>Douglas Adams</author>
        <language>EN</language>
        <description>The Hitchhiker's Guide to the Galaxy is a science fiction comedy series created by Douglas Adams.</description>
    </book>
    <book>
        <title>Le Trône de fer</title>
        <author>George R.R. Martin</author>
        <language>FR</language>
        <description>Le Trône de fer est une série de romans de fantasy de George R. R. Martin, dont l'écriture et la parution sont en cours. Martin a commencé à l'écrire en 1991 et le premier volume est paru en 1996. Prévue à l'origine comme une trilogie, la série compte désormais cinq volumes publiés et deux autres sont attendus.</description>
    </book>
</books>

As you can see, the second book in the list is the French version of the first book in the A Song of Ice and Fire series by George R.R. Martin, and, as it goes with French, there are some accents in the description of the book.

We’ll use the following XSLT to convert it to CSV:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:fn="http://www.w3.org/2005/xpath-functions">
  <xsl:output method="text" version="1.0" encoding="UTF-8" indent="no"/>
  <xsl:template match="/">
    <xsl:text>BookTitle;Author;Language;Description</xsl:text>
    <xsl:text>&#13;&#10;</xsl:text>

    <xsl:for-each select="books/book">
      <xsl:text>"</xsl:text>
      <xsl:value-of select="title"/>
      <xsl:text>";"</xsl:text>
      <xsl:value-of select="author"/>
      <xsl:text>";"</xsl:text>
      <xsl:value-of select="language"/>
      <xsl:text>";"</xsl:text>
      <xsl:value-of select="description"/>
      <xsl:text>"</xsl:text>
      <xsl:text>&#13;&#10;</xsl:text>
    </xsl:for-each>

  </xsl:template>
</xsl:stylesheet>

Using an XML Task in the Control Flow, as explained in my article, we’d get the following output:

BookTitle;Author;Language;Description
"The Hitchhiker's Guide to the Galaxy";"Douglas Adams";"EN";"The Hitchhiker's Guide to the Galaxy is a science fiction comedy series created by Douglas Adams."
"Le Trône de fer";"George R.R. Martin";"FR";"Le Trône de fer est une série de romans de fantasy de George R. R. Martin, dont l'écriture et la parution sont en cours. Martin a commencé à l'écrire en 1991 et le premier volume est paru en 1996. Prévue à l'origine comme une trilogie, la série compte désormais cinq volumes publiés et deux autres sont attendus."

So far so good, all accents are still present!

Then we’d import the file using a Flat File Source component in a Data Flow Task.  Here’s what the General page of the Flat File Connection Manager would look like:

Flat File Connection Manager: General

We’ve set double-quote as Text Qualifier and ticked the Column names in the first data row checkbox.

Switching to the Columns page we’d get the following:

Flat File Connection Manager: Columns - the Preview has messed up the accents!

Hang on, that’s not right!  The Preview is not displaying our accents as expected!  Oh my, what’s going on here? Let’s call the code page detectives!

A Mismatch Investigation

Take a good look at the XSLT which we’ve used to convert the XML into CSV, especially the xsl:output line:

<xsl:output method="text" version="1.0" encoding="UTF-8" indent="no"/>

That line specifies that the text output should be encoded using the UTF-8 code page.

Now take a good look at the General page in the screenshot earlier, more precisely this part:

Code page: 1252 (ANSI - Latin I) is not what we need right now!

Indeed, code page 1252 (ANSI – Latin I), while the input is UTF-8.  Of course that results in a mismatch for certain characters, as demonstrated here.  The fix is fairly easy: just change the Code page setting to 65001 (UTF-8).

Code page: 65001 (UTF-8) - much better!

If we now switch back to the Columns page we should come to the following result:

Flat File Connection Manager: Columns page preview with accents!

Ah, sure looks better, doesn’t it?  All accents are present as expected.

But in case you thought that’s it, I’d advise you to think again.  Don’t worry, I’ll demonstrate what I mean.  Let’s do that by setting up a simple Data Flow.

Setting Up The Data Flow

Throw in a Flat File Source and specify our Flat File Connection Manager.  I also prefer to keep NULLs as they come in, using the Retain null values from the source as null values in the data flow checkbox.

Flat File Source: Connection Manager

If you click the Preview button you should get similar output as shown one screenshot earlier.

Now hook this up to an OLE DB Destination that writes the incoming data into a table in your favorite database:

OLE DB Destination is not happy :(

As you can see, our destination is not entirely happy with all this.  Here are the details of one of the error messages:

Validation error. Data Flow Task: Data Flow Task: The column “BookTitle” cannot be processed because more than one code page (65001 and 1252) are specified for it.

Looks like once more we’ve got a code page conflict.  And we sure do. Clicking the Data Flow connector between the Flat File source and OLE DB destination shows us the following:

Data Flow Path Editor shows that our strings are encoded using the 65001 code page.

Each of our incoming string values is encoded using the 65001 (UTF-8) code page.  But our database was created using the Latin1_General_CI_AS collation, which corresponds to code page 1252.  So we’ve indeed got a code page conflict!

Fear not, that’s easily remedied.  Add a Derived Column transformation in between the source and destination and convert each incoming string value using a cast expression such as this one:

(DT_STR, 50, 1252)BookTitle_IN

Note: whenever I need to manipulate an incoming column to create a second version of it, I rename the incoming column to TheColumn_IN.  The new version is then called TheColumn, and preferably TheColumn is the name of the field in the destination table.  This makes it easy to distinguish the columns further down the flow.

Here’s what the final version of the Derived Column looks like:

Using the Derived Column transformation to cast the incoming strings into the correct code page.
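
In textual form, the full set of derived columns for this example would look something like this (all at the default 50-character width for now; we’ll widen Description in a minute):

Derived Column Name    Expression
BookTitle              (DT_STR, 50, 1252)BookTitle_IN
Author                 (DT_STR, 50, 1252)Author_IN
Language               (DT_STR, 50, 1252)Language_IN
Description            (DT_STR, 50, 1252)Description_IN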

Next we’ll need to open the destination and change the mapped fields to the new ones.  Because my new columns have exactly the same names as the fields in the destination table, that’s easy.  In the Mappings page, all I need to do is right-click the grey background between the two tables and click Select All Mappings, hit the Delete button, then right-click again and click Map Items By Matching Names:

Using Map Items By Matching Names, easy!

With the data flow finished, let’s give our package a run!

Flat File Source has got a length issue!

Ouch, our source is not happy!  A closer examination of the Output pane brings us to the following error:

Error: 0xC02020A1 at Data Flow Task, Flat File Source [16]: Data conversion failed. The data conversion for column “Description” returned status value 4 and status text “Text was truncated or one or more characters had no match in the target code page.”.

Oh right, so far we haven’t bothered looking at the actual length of the data we’re importing.  What is the length of our data flow columns, actually?  Well, if you’ve been paying close attention you should have noticed the number 50 several times in the screenshots and expressions above.  That’s indeed the default length for text columns when importing a flat file.

And if you scroll back up to the sample XML, you’ll notice that the content for the description is longer than 50 characters, thus causing our error!  Let’s find out how to get that solved!

Fixing The Field Length Issue

The first step in getting this fixed is opening up the Advanced page in the Flat File Connection Manager editor.

Flat File Connection Manager: using the Advanced page to change field length.

Then select the Description field and change its OutputColumnWidth property from 50 to 500.

That will cause the source to generate a warning.  Remove this warning by opening and closing the source editor.  Click the Yes button in the popup that appears.

The next step is changing the expression for the Description field in the Derived Column to this:

(DT_STR,500,1252)Description_IN

Indeed, the field length is one of the parameters in that cast.  The other numeric parameter is obviously the code page.

Having done that, you’ll notice that the destination starts complaining.  Of course, you’ll need to adapt the destination table to reflect the field length increase as well.  So change the table definition and open/close the destination editor to make it happy.
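
Assuming the destination table is called dbo.Book (a hypothetical name), that change boils down to something like:

ALTER TABLE dbo.Book ALTER COLUMN Description varchar(500);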

Alright, let’s run the package once more!

Finally the data flow is happy with it all and has inserted two records:

That's more like it: all components colored green!

And what does our table contain?  Let’s find out:

All accents have been imported!

That’s looking good for sure!

Conclusion

In this follow-up article I’ve demonstrated what can go wrong when you need to deal with special characters while importing flat files, and how to solve the issues you may run into.  In case you missed the original article, have a look through this link.

Have fun!

Valentino.

References

Wikipedia: UTF-8


Do you like the Custom Code functionality in SSRS?

And what would you think if SSIS offered the same possibility?  Imagine being able to write a custom function in .NET and then use it in any expression in your package: how powerful would that be!

There’s already one function I would have written today: GetFilename(string path).
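
In the meantime, extracting a file name from a path takes an expression like the sketch below (assuming a string variable User::FilePath; REVERSE, FINDSTRING and RIGHT are built-in expression functions):

RIGHT(@[User::FilePath], FINDSTRING(REVERSE(@[User::FilePath]), "\\", 1) - 1)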

If you believe such functionality to be useful, please vote on the following Connect request: Add user defined function support to the SSIS expression language

Have fun!

Valentino.

