on 11-05-2017 1:15 PM
I have developed the program as caetano.almeida suggested, so everything is green. There is a huge data overflow in the Z table at the time of the MRP run. Thank you for all the replies.
Good to see this helped in determining the (main) issue.
If this is sufficient for you (at this time), please consider closing the question.
Kind regards,
Nic T.
A data overflow in a Z table makes me think of custom code. I suggest you check which materials are taking a long time to be planned in MRP using report RMMDPERF, and then run a trace in ST12 while planning them with transaction MD03.
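As a first quick check before the ST12 trace, it can help to see whether the custom table's growth is concentrated on a few materials. A minimal ABAP sketch, assuming a hypothetical Z table `ZMRP_DATA` with a `MATNR` key field (substitute your actual table and field names):

```abap
" Hedged sketch: count rows per material in the custom table to spot
" materials driving the overflow. ZMRP_DATA / MATNR are placeholders.
SELECT matnr, COUNT(*) AS row_cnt
  FROM zmrp_data
  GROUP BY matnr
  ORDER BY row_cnt DESCENDING
  INTO TABLE @DATA(lt_counts)
  UP TO 20 ROWS.

LOOP AT lt_counts INTO DATA(ls_count).
  WRITE: / ls_count-matnr, ls_count-row_cnt.
ENDLOOP.
```

Materials that dominate this list are good candidates to plan individually in MD03 while the ST12 trace is running.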
Do you mean you have a single material that takes 14 hours in an MD02 background job?
How long does it take in the foreground? How long does it take to open MD04?
What information does the job log contain? Can you post it?
Did you already run a performance trace?
What do you want to archive? Which old documents do you think could have this impact on performance? If old documents are affecting planning runs, then I bet you can't archive them, as they are probably open and hence in a status that does not allow archiving.
Go through this great document. If you absolutely need to run MRP using mode 3, then consider increasing your parallel processing servers. The blog below should help you and point you in the right direction. Good luck!
https://blogs.sap.com/2014/10/14/analyze-and-improve-mrp-performance/