
LONG IN BODS

karthik1993
Participant

How do I load LONG datatypes in BODS? I tried loading the data with LONG kept as the datatype, but the job runs forever. When I removed the LONG columns and reran the job, it finished quickly. I have also heard that long_to_varchar causes performance issues. Any suggestions?

Accepted Solutions (0)

Answers (3)

winfried_rhiel
Discoverer

Well, you might've already figured it out by yourself, but I stumbled over the very same problem (DB2 --> BODS 4.2 --> MS SQL Server) and was truly frightened by the "performance" of my dataflow, so I'd like to share my solution.
My task was to load ~245K rows with 3 columns (BIGINT; VARCHAR(100), always NULL; CLOB(1048576)).

I created the default DF (Source --> Query --> Target) and was punished with >20 minutes of execution time.

I came to realize that, try as I might, I could not circumvent the "set-commit-size-to-1-because-of-LONG" behaviour. That simply works as designed. Altering the source fetch size did not have any impact at all. But here's what I could do: alter the target options!

1. Set "Target-Commitsize" to 1 (just to avoid the log-entry)
2. Set "Number of loaders" to 20
3. DF: Source --> Target

The above settings brought execution time down to ~5 minutes. 🙂

I expect these settings to vary with your network speed. For example, raising the number of loaders above 20, or the source fetch size to 2K, didn't gain me anything.
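To make the effect of the loader count concrete, here is a minimal Python sketch (not BODS code) of the same idea: when every row must be committed individually, running 20 writers in parallel lets the per-commit waits overlap instead of accumulating serially. The open_target_connection() and insert_row() helpers are hypothetical placeholders, not a real BODS API.

```python
# Sketch of the "Number of loaders" idea: N parallel writers, each
# committing per row, as BODS does when a LONG/CLOB forces commit size 1.
from concurrent.futures import ThreadPoolExecutor

NUM_LOADERS = 20  # mirrors the "Number of loaders" target option above

def load_partition(rows):
    # Each loader gets its own connection and commits after every row.
    conn = open_target_connection()   # hypothetical connection factory
    for row in rows:
        insert_row(conn, row)         # hypothetical single-row insert
        conn.commit()                 # commit size 1: one commit per row
    conn.close()

def parallel_load(all_rows):
    # Split the rows into NUM_LOADERS partitions and load them concurrently.
    partitions = [all_rows[i::NUM_LOADERS] for i in range(NUM_LOADERS)]
    with ThreadPoolExecutor(max_workers=NUM_LOADERS) as pool:
        list(pool.map(load_partition, partitions))
```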


Hi Karthik,


With LONG, both reads and writes are done row by row. See if you can avoid using LONG.

KBA 2311856 - Slow performance processing LONG datatypes - SAP Data Services

KBA 1430342 - Long type causing dataflow to be slow - Data Services
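As an illustration of the row-by-row point above, here is a hedged Python sketch using a generic DB-API connection as a stand-in (src_conn, tgt_conn, and the table names are assumptions for illustration, not BODS internals): a LONG column forces a fetch-insert-commit cycle per row, whereas without it the same load can run in batches.

```python
# Row-by-row pattern forced by a LONG column, vs. the batched pattern
# possible once the LONG is removed or converted. Generic Python DB-API;
# src_conn/tgt_conn are assumed, already-open connections.
src = src_conn.cursor()
tgt = tgt_conn.cursor()

# With a LONG column: read, insert, and commit one row at a time.
src.execute("SELECT id, name, big_long_col FROM src_tab")
for row in iter(src.fetchone, None):
    tgt.execute("INSERT INTO tgt_tab VALUES (?, ?, ?)", row)
    tgt_conn.commit()                # commit size forced to 1

# Without the LONG column: fetch and insert in batches.
src.execute("SELECT id, name FROM src_tab")
while True:
    batch = src.fetchmany(1000)      # 1000 rows per fetch
    if not batch:
        break
    tgt.executemany("INSERT INTO tgt_tab VALUES (?, ?)", batch)
tgt_conn.commit()                    # single commit for the batched load
```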

Thanks.

Akhilesh

former_member187605
Active Contributor

What are you doing with your data? Writing it out to another table? That may be the reason for the lack of speed. With LONG columns, the commit size is set to 1.
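A back-of-the-envelope check (the per-commit latency here is an assumption, not a measurement) shows why commit size 1 dominates the runtime, and it lines up with the timings reported in the first answer:

```python
# Assumed ~5 ms round trip per commit; 245K rows as in the answer above.
rows = 245_000
commit_latency_s = 0.005                # assumption for illustration
minutes = rows * commit_latency_s / 60
print(f"{minutes:.1f} minutes")         # ≈ 20.4 -- matches the ">20 minutes"
print(f"{minutes / 20:.1f} minutes")    # ≈ 1.0 with 20 parallel loaders
```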