on 05-23-2012 12:13 PM
Hi,
We have an extractor which extracts attributes for network activity (0ACTIVITY_ATTR). This is a full data load; since there is a huge number of records, we have split it into multiple InfoPackages (by applying data selections).
Now the problem is that some of the InfoPackages are failing because the job is cancelled in the source system.
Example job log:
19.05.2012 13:00:00 Job started
19.05.2012 13:00:00 Step 001 started (program SBIE0001, variant &0000000035667, user ID RFCUSR)
19.05.2012 13:00:00 DATASOURCE = 0ACTIVITY_ATTR
19.05.2012 13:00:00 *************************************************************************
19.05.2012 13:00:00 * Current Values for Selected Profile Parameters *
19.05.2012 13:00:00 *************************************************************************
19.05.2012 13:00:00 * abap/heap_area_nondia......... 2000683008 *
19.05.2012 13:00:00 * abap/heap_area_total.......... 2000683008 *
19.05.2012 13:00:00 * abap/heaplimit................ 40894464 *
19.05.2012 13:00:00 * zcsa/installed_languages...... 6EDGIJL *
19.05.2012 13:00:00 * zcsa/system_language.......... E *
19.05.2012 13:00:00 * ztta/max_memreq_MB............ 256 *
19.05.2012 13:00:00 * ztta/roll_area................ 6500352 *
19.05.2012 13:00:00 * ztta/roll_extension........... 2000683008 *
19.05.2012 13:00:00 *************************************************************************
19.05.2012 13:05:40 ABAP/4 processor: TSV_TNEW_PAGE_ALLOC_FAILED
19.05.2012 13:05:40 Job cancelled
When I repeat the same InfoPackage, it is able to extract the data.
Why is this happening only with this extractor?
We have other data loads that extract more records than this one in a single stretch. Why does this memory overflow issue come up only for this extractor (even after splitting up the extraction based on data selections)?
Please help me understand this.
Thanks,
Harish
Hi,
1. Check the data packet setting in RSCUSTV6.
2. You can split the data and load it in multiple runs.
3. Check with Basis on which parameter and table the job is getting stuck, and ask them to increase the size. They can check this now or while the job is running.
4. Normally these types of errors occur in Dev/Quality systems where the memory allocation is insufficient.
Thanks and regards
Kiran
Hi,
Check the following thread; there are some memory settings that you need to alter with the help of the Basis team:
http://scn.sap.com/thread/1273642
Thanks and regards
Kiran
Hi,
I have changed the data packet size at the InfoPackage level to 200:
Scheduler menu -> DataS. Default Data Transfer -> data packet size = 200.
Still, some of my InfoPackages fail with the same error.
Please suggest.
Br,
Harish
Hi Harish,
We have faced the same issue several times when I was trying to extract data from the source system to BI. Please follow the steps below to avoid this issue in future:
1. Check the number of background processes available.
If the background jobs are fine, then check with the Basis team whether there is any activity going on in the source system that is affecting your job.
2. Check with the Basis team whether the tRFC connection is fine.
If the background processes and the tRFC connection are fine, then there is no issue with the source system.
3. Reduce the packet size in the InfoPackage selection and run the job again.
These three checks will tell you the reason for the failure; then take the corresponding action as mentioned above.
Hope it resolves your issue. Please revert back with your response. All the best.
Thanks,
Chandra.
Hi Chandra,
I have checked your points.
There are sufficient work processes available at the time of data loading, and the tRFC connection is also fine.
How do I check the packet size in the InfoPackage selection?
I have checked in transaction RSCUSTV6.
If that is correct, on what basis should I change the packet size?
Thanks,
Harish
Hi Harish,
Reduce the packet size to 5,000 and try again. It should work. In your system the packet size is 20,000, which means each packet can hold 20,000 records. When you reduce the packet size, the number of records per packet is reduced.
Please try this and let me know the result.
Regards,
Chandra.
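To illustrate Chandra's point, here is a quick sketch of how the packet size setting changes the way one load request is split up. The record counts are hypothetical (the thread does not state the 0ACTIVITY_ATTR totals); the point is only that smaller packets mean fewer records, and so less memory, per packet on the source side.

```python
import math

def packets_needed(total_records: int, packet_size: int) -> int:
    """Number of data packets a load request is split into."""
    return math.ceil(total_records / packet_size)

total = 1_000_000  # hypothetical record count for one InfoPackage selection

print(packets_needed(total, 20_000))  # default packet size -> 50 packets
print(packets_needed(total, 5_000))   # reduced packet size -> 200 packets
```

The trade-off: more, smaller packets take somewhat longer to transfer, but each packet needs far less memory while it is being built, which is why reducing the size can avoid the TSV_TNEW_PAGE_ALLOC_FAILED dump.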
Use transaction RSCUSTV6 and see the picture to reduce the packet size. Please assign points if this is useful.
Thanks,
Raghavendra.Kolli
Hi Harish,
This page allocation error can be solved by reducing the packet size in the InfoPackage settings.
It is due to a memory issue at the time of loading.
You can reduce the packet size to prevent the load from failing daily.
You can change the packet size in the display variant of the InfoPackage: go to Scheduler menu -> DataS. Default Data Transfer.
Here you can change the packet size.
Regards,
Harish.
Message was edited by: Harish Babu Tata
Hi,
Check whether sufficient processes are available at the time of running the InfoPackage.
Also check with Basis regarding the TSV_TNEW_PAGE_ALLOC_FAILED dump; it occurs due to memory issues.
Let us know your progress with the issue.
Best Regards,
Arpit
Hi Harish,
Note: please take the help of the Basis team to solve this problem.
Recommended Memory Sizing
The following recommendations for memory sizing as of R/3 Release 3.0C are the result of the experience of software developers and consultants. All recommendations are based on the assumption that the swap space is at least 3-4 times as big as the main memory. WARNING: A lack of swap space can cause the operating system to crash. Since the extended memory is not mapped to a disk file like the roll and paging areas, more virtual memory and thus more swap space is required as of R/3 Release 3.0C than for previous Releases.
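As a rough check of the 3-4x rule of thumb quoted above, here is a minimal sketch (the 8 GB host is an invented example; actual sizing depends on release, platform, and the Basis team's analysis):

```python
def recommended_swap_gb(main_memory_gb: float, factor: float = 3.0) -> float:
    """Minimum swap space per the 'swap >= 3-4x main memory' rule of thumb."""
    return main_memory_gb * factor

# e.g. a host with 8 GB of main memory should have roughly 24-32 GB of swap
low = recommended_swap_gb(8, 3.0)
high = recommended_swap_gb(8, 4.0)
print(f"recommended swap: {low:.0f}-{high:.0f} GB")
```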
Poor memory sizing and configuration causes performance problems. In the following, problem analysis is described for the four most critical problem areas:
To correct the sizing for swap space, proceed as follows:
3. If the recommendation is not fulfilled, increase the swap space. Do not decrease the swap space.
To correct the sizing for extended memory, proceed as follows:
3. If the recommendation is not fulfilled, adapt the following profile parameters to the values recommended in the section Recommended Memory Sizing:
See also:
OSS Note 23863 Memory Management 3.0
OSS Note 30628 Release 3.0A/B New Memory Management
OSS Note 33395 Memory Management in Release 3.0B
OSS Note 33576 Memory Management from Release 3.0C, Unix and NT
OSS Note 44695 Memory Management in Releases from 3.0C, AS/400
OSS Note 68544 Memory Management under Windows NT