As you can see in the image below, I have a table in SQL Server that I am filling via a flat file source. There are two columns in the destination table, SessionID and TimeCreated, that I want to populate based on the following logic: SessionID should identify the load that inserted the row, and TimeCreated should record when the row was inserted.
I don't need help with how to write the TSQL code to get this done. Instead, I would like someone to suggest a method to implement this as a Data Flow task within SSIS.
Thank you in advance for your thoughts.
Edit 11/29/2012
Since all answers so far suggested taking care of this on the SQL Server side, I wanted to show you what I had initially tried doing (see image below), but it did not work. The trigger did not fire in SQL Server after SSIS inserted the data into the destination table.
If any of you can explain why the trigger did not fire, that would be great.
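For reference, a minimal sketch of the kind of AFTER INSERT trigger described; the actual trigger is only visible in the image, and the table, key, and column names here are assumptions:

    CREATE TRIGGER dbo.trg_Destination_SetTimeCreated
    ON dbo.Destination
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Stamp only the rows that were just inserted;
        -- RowID is a hypothetical key column.
        UPDATE d
        SET    d.TimeCreated = GETDATE()
        FROM   dbo.Destination AS d
               INNER JOIN inserted AS i ON i.RowID = d.RowID;
    END;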
An OLE DB destination includes mappings between input columns and columns in the destination data source. You do not have to map input columns to all destination columns, but depending on the properties of the destination columns, errors can occur if no input columns are mapped to the destination columns.
This means that the OLE DB destination, unlike the SQL Server destination, can be run on a computer other than the bulk load target. Because Integration Services packages that use the OLE DB destination do not need to run on the SQL Server computer itself, you can scale out the ETL flow with workhorse servers.
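A likely explanation for the trigger not firing: when the OLE DB destination uses the "Table or view - fast load" data access mode, it performs a bulk insert, and bulk inserts do not fire INSERT triggers by default. In SSIS you can add FIRE_TRIGGERS to the destination's FastLoadOptions property to opt back in. The equivalent behavior in plain T-SQL looks like this (the file and table names are stand-ins):

    -- Bulk loads skip AFTER INSERT triggers unless FIRE_TRIGGERS is specified.
    BULK INSERT dbo.Destination
    FROM 'C:\data\sessions.csv'
    WITH
    (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        FIRE_TRIGGERS   -- opt back in to firing the table's insert triggers
    );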
If you are able to modify the destination table, you could make the default values for SessionID and TimeCreated do all the work for you. SessionID would be an auto-incrementing (IDENTITY) integer, while the default value for TimeCreated would be GETDATE() or SYSDATETIME(), depending on the data type.
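A minimal sketch of that table design, assuming hypothetical table and column names (only SessionID and TimeCreated matter here):

    CREATE TABLE dbo.Destination
    (
        SomeData    varchar(50) NULL,   -- stand-in for the flat file columns
        SessionID   int IDENTITY(1, 1), -- auto-incrementing integer
        TimeCreated datetime NOT NULL
            CONSTRAINT DF_Destination_TimeCreated DEFAULT (GETDATE())
    );

Note that IDENTITY hands out a new value for every row; if SessionID should instead be shared by all rows of one file or package run, the variable approach below is a better fit.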
Now, if you truly need the values to be created as part of your workflow, you can use a variable for each.
SessionID would be a package variable which is set by an Execute SQL Task. Just reference the variable in your result set and have your SQL determine the next number to use. There are potential concurrency issues with this, though.
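A sketch of the statement such an Execute SQL Task could run, with its single-row result set mapped to a package variable such as User::SessionID (an assumed name; dbo.Destination stands in for the real destination table):

    -- Derive the next session number from what is already in the table.
    SELECT ISNULL(MAX(SessionID), 0) + 1 AS NextSessionID
    FROM dbo.Destination;

Because this read and the later insert happen in separate steps, two packages running at the same time could pick the same number, which is the concurrency issue mentioned above.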
TimeCreated is easily done by creating a Derived Column in your data flow based on the system variable StartTime.
You can use a Derived Column to fill the TimeCreated column. If you want the time at which the data flow ran, use the date and time function GETDATE() in the expression. If you want a common timestamp for the whole package (all files), use the system variable @[System::StartTime].
For looping over the CSV files (as I understand the setup), you use a Foreach Loop container and increment a user variable on each iteration; that variable is then mapped in the Derived Column for SessionID, as mentioned above.