Unnesting XML and memory tables

Post Author: Guy Jeffery

CA Forum: Data Integration

Hi,

I'm having issues with unnesting incoming XML messages within a real-time job. The XML is partially unnested into intermediate memory tables, which are then used to populate physical database tables. This works fine on a small scale: if I only have a few physical tables to populate, the job runs as planned. However, when I increase the complexity of the job, it begins to execute and then just stops and 'hangs'. There are no error messages, and the log for the job has a blue tick next to it, but the job never completes.

I've had a look through the technical help file supplied with BODI, and searched this forum, but can't find anything which covers the situation above. Is it likely to be a memory issue? Or is there some upper limit on the number of objects used within a real-time BODI job?

Has anyone experienced these issues before, and found a resolution? Any suggestions would be greatly appreciated, as I need a solution pretty urgently. Thanks in advance.

Guy

Accepted Solutions (0)

Answers (6)

Post Author: Guy Jeffery

Thanks Ben, I'll look into that.

Guy

Post Author: bhofmans

No, I don't see a generic job-design solution you could try. Please open a support case so that an engineer can investigate and make recommendations.

I'm a little surprised you see this behavior before you reach 2 GB; there might be something else happening after all. Again, support should be able to investigate this.

Post Author: Guy Jeffery

Ben,

Thanks again for the reply, and for aiding my understanding.

Unfortunately, we're using DI 11.5.1, and there's no option to upgrade. It's running on Solaris rather than Windows. As I said before, the al_engine process seems to reach a maximum of about 1.5 GB before hanging.

So I was hoping there would be something I could do around job design, i.e. the most efficient way of using the memory tables when I need to make repeated calls to them within one execution of the real-time job. Is this possible at all?

Thanks,

Guy

Post Author: bhofmans

Guy,

In real-time jobs, the whole job is executed in one al_engine process; this includes all data processing, but also the memory tables. The process is started once and stays active the whole time to handle incoming messages.

This is different for batch jobs, where each data flow is a separate process; data flows are started in series or in parallel depending on the job design, and once they have finished processing they disappear again.

So for your real-time job this means the whole processing can use a maximum of 2 GB of memory (the upper limit for 32-bit processes). Upgrading to DI 11.7 would give you more options, like using pageable cache, which uses disk space as additional memory to overcome the 2 GB limitation. But the first thing to check would be the amount of memory used - I'm just assuming here that you need more than 2 GB; you can easily check this via the Windows Task Manager. Which DI version are you using? Is this on Windows?
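
To make the difference concrete, here is a rough sketch in Python (not DI code; the data flow names and row counts are made up): in the batch model each data flow's memory is returned to the OS when its process exits, while in the real-time model every allocation lands in one long-lived address space.

```python
# Rough sketch (Python, not DI code) contrasting the two process models.
# "run_dataflow" stands in for a data flow that builds a memory table.

import multiprocessing as mp

def run_dataflow(name):
    table = [bytes(1024) for _ in range(10_000)]  # stand-in memory table
    print(f"{name}: built {len(table)} rows")

# Batch model: each data flow runs in its own short-lived process,
# so its memory is returned to the OS as soon as the process exits.
def batch_job(flows):
    for name in flows:
        p = mp.Process(target=run_dataflow, args=(name,))
        p.start()
        p.join()

# Real-time model: one long-lived process runs every data flow for
# every incoming message, so all memory tables and caches share the
# same (32-bit, 2 GB) address space for the life of the job.
def realtime_job(flows, messages):
    for _msg in messages:
        for name in flows:
            run_dataflow(name)

if __name__ == "__main__":
    batch_job(["df1", "df2", "df3"])
    realtime_job(["df1", "df2", "df3"], messages=["msg1"])
```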

- Ben.

Post Author: Guy Jeffery

Hi Ben,

Thanks for the reply!

I was wondering how BODI uses memory tables, in an effort to reduce my memory usage. Currently, I receive my nested XML and read it into a memory table. I then use this memory table as a source for around 150 physical tables. Each of the 150 tables is populated in a separate data flow, with the memory table as a source, unnested 3 levels (1 level in each of 3 separate Queries) before the table is populated.
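
To show what I mean by unnesting one level per Query, here's a rough Python equivalent of the three chained Queries (the element names below are made up, not our real schema):

```python
# Rough sketch (made-up element names) of what the three chained Queries
# do: each Query unnests one level, so three flatten a 3-deep structure.

order = {
    "order_id": 1,
    "items": [                      # level 1
        {"item_id": 10,
         "shipments": [             # level 2
             {"ship_id": 100,
              "events": [           # level 3
                  {"event": "packed"}, {"event": "shipped"}]}]}],
}

# Query 1: unnest items, carrying the parent key down
rows1 = [{"order_id": order["order_id"], **item} for item in order["items"]]

# Query 2: unnest shipments
rows2 = [{**{k: v for k, v in r.items() if k != "shipments"}, **s}
         for r in rows1 for s in r["shipments"]]

# Query 3: unnest events -> one flat row per innermost element
rows3 = [{**{k: v for k, v in r.items() if k != "events"}, **e}
         for r in rows2 for e in r["events"]]

for row in rows3:
    print(row)  # e.g. {'order_id': 1, 'item_id': 10, 'ship_id': 100, 'event': 'packed'}
```

Multiply that by roughly 150 target tables, all reading from the same memory table, and you can see why I'm worried about how the engine caches things.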

Do you have any tips for ways I might be more efficient, and hopefully resolve my problem?

I've checked my memory usage, and it seems to hit an upper limit of about 1.5 GB, rather than 2 GB.

I appreciate the helpful support!

Thanks,

Guy

Post Author: bhofmans

There is no upper limit on the number of objects that can be used, but it might be a memory issue.

You could check the memory size of the al_engine process when running into this issue. When you're getting close to 2 GB you will run out of memory and your job can stop (on Windows and Linux).
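
For example, something along these lines (a quick sketch; it assumes a Unix-like system where pgrep is available and `ps -o rss=` reports resident size in kilobytes - on Windows the Task Manager shows the same information) would let you log the engine's memory while the job runs:

```python
# Quick sketch: poll the resident memory of al_engine once a minute.
# Assumes a Unix-like system where `ps -o rss=` reports kilobytes.

import subprocess
import time

def al_engine_rss_mb():
    """Return the resident set size of each al_engine process, in MB."""
    pids = subprocess.run(["pgrep", "al_engine"],
                          capture_output=True, text=True).stdout.split()
    sizes = {}
    for pid in pids:
        rss_kb = subprocess.run(["ps", "-o", "rss=", "-p", pid],
                                capture_output=True, text=True).stdout.strip()
        if rss_kb:
            sizes[pid] = int(rss_kb) / 1024
    return sizes

if __name__ == "__main__":
    while True:
        for pid, mb in al_engine_rss_mb().items():
            print(f"al_engine pid {pid}: {mb:.0f} MB resident")
        time.sleep(60)
```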

In DI 11.7 we've made improvements to our memory handling by providing pageable cache options.