Data Services: Parameterize source and target tables

baljinder_nagra
Explorer

Can I "parameterize" source and target SQL tables in a Data Services job that could be chosen at run time depending on what environment they were running in? I.e. PRD or Dev.

I can do this with flat files (by parameterizing the file path), but I don't know how to do this with transparent tables.

Please advise.

Accepted Solutions (0)

Answers (3)


baljinder_nagra
Explorer

Ok.

Although I still don't understand the concept of aliases, because I believe that doesn't work for BW source/target tables. I have found that for my situation all I need to do is create multiple configurations in the datastore definition; then, once the job is migrated to the production repository, I make the "Default" configuration equal to the PRD one (or QA, etc.).

Thanks for the helpful direction.

Former Member

Yes, this is the purpose of multiple datastore configurations and system configurations.

Typically, the designer defines the required datastore configurations and organizes them into system configurations; a system administrator then chooses one system configuration when scheduling or starting a job.
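
For example (the datastore and configuration names below are invented for illustration), a system configuration is just a named mapping from each datastore to one of its configurations:

System configuration "SC_DEV": DS_SOURCE -> configuration "DEV", DS_TARGET -> configuration "DEV"
System configuration "SC_PRD": DS_SOURCE -> configuration "PRD", DS_TARGET -> configuration "PRD"

The administrator then only has to pick SC_DEV or SC_PRD when launching the job.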

Former Member

Hi all

I would create one repository for every environment and would not use datastore configurations, due to the risk of running jobs in DEV against PRD. Just configure the correct datastore settings once in every environment, and when you export your code from one repository to the other, exclude the datastore metadata; or, better, use a central repository for code deployment.

Datastore configurations and system configurations I use only for platform independence, e.g. my target could change from SQL Server to Oracle between DEV and TEST or PRD. Another scenario is using Data Services as a software development environment, in order to implement platform-independent code.

Cheers, Roger

baljinder_nagra
Explorer

We are using multiple repositories in sync with the SAP landscape, yes: DEV, QA, PRD. But I don't understand why you say not to use multiple datastore/system configurations. That would mean (1) I would have to remove the source and target tables once I migrated a job from one repository to the next, and (2) replace those source and target tables with reimported/new datastore configurations from the correct system environment. I think the danger of corrupting transforms and data flows is greater than the risk of running a DEV job against PRD source/target tables... maybe I have misunderstood.

Former Member

You could use this approach, where you always separate the datastore from the other objects when you move objects from one repo to another:

In the DEV repo: create the datastore (pointing to the DEV system).

Export the datastore from the DEV repo, import it into the TEST repo.

In the TEST repo: edit the datastore to point to the TEST system.

Now, each time you develop new jobs in the DEV repo, you export the objects WITHOUT the datastore. So tables etc. will be exported, but not the datastore connection information.

When you import this ATL file into your TEST repository, it will not overwrite the datastore (with the TEST connection details) but will add all the other objects that make up the job.

With this approach your connection details for DEV and TEST are clearly separated (in the DEV repo there is no connection to a TEST environment; in your TEST repo there is no connection to a DEV environment).

You need to be careful, though, that no mistakes are made when you export from DEV and that datastores are omitted. If you use the central repo to move objects from DEV to TEST, "get object with dependencies" will have the right behavior automatically.

Thanks,

Ben.

baljinder_nagra
Explorer

Thanks for the reply.

I have found the topic in the Data Services Guide and Advanced Guide, but it only appears to apply to standard SQL datastores.

I have created BW source/target datastores, and the "Create Alias" button in the Datastore Configuration Editor is grayed out. Do I have to get the physical database information for the underlying DB in BW? Can I not use this method of assigning aliases for BW source and target tables?

Please advise.

Former Member

Hi,

One approach to parameterizing the source table (for standard SQL datastores; I didn't try it for BW) is to use the SQL transform with an input parameter in the dataflow.

The SQL code should be as follows:

SELECT <list of columns> FROM [$p_source_table]

With a script in the job/workflow that calls the dataflow, you define the name of the source table and pass it to the $p_source_table parameter.
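
As a rough sketch in the Data Services script language (the global variable, owners, and table names here are invented for illustration; the dataflow's $p_source_table parameter would be bound to the global variable in the calling object's parameter mapping):

# Pick the physical table based on an assumed $g_env global variable.
$g_source_table = ifthenelse($g_env = 'PRD', 'PRDOWNER.CUSTOMER', 'DEVOWNER.CUSTOMER');

The SQL transform then substitutes that value wherever [$p_source_table] appears in its SQL text.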

Former Member

What exactly is the difference between the tables in DEV and PROD?

A common case is that the OWNER of the tables is different (but table names and column names are the same). This is something you can easily parameterize in the datastore configuration using an alias, providing a different value for each datastore configuration. You can find more details in the documentation (let me know if you have trouble finding it).
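
For illustration (the alias and owner names are invented): you would define an alias in the datastore configuration editor and map it per configuration, e.g.

Alias TBL_OWNER:
configuration "DEV" -> owner DEVUSER
configuration "PRD" -> owner PRDUSER

Tables are then imported once under TBL_OWNER, and Data Services resolves the real owner at run time from the active configuration.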

If it's really the table name that is different, then it becomes more complicated. A database view could be a solution to rename the tables. You can even create the view from within a script function in Data Services and make it point to a different table based on a variable.
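
A rough sketch of such a script, using the built-in sql() function (the datastore, view, and variable names are invented, and CREATE OR REPLACE VIEW is Oracle syntax, so adjust for your database):

# Rebuild the view so it selects from the table named in $g_table_name.
sql('DS_SOURCE', 'CREATE OR REPLACE VIEW V_MY_SOURCE AS SELECT * FROM ' || $g_table_name);

The dataflow then always reads from V_MY_SOURCE, no matter which physical table is behind it.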

Thanks,

Ben.