
Can the memory allocation sequence be changed permanently?

Dear All,

On Windows Server, is there any method to permanently change the memory allocation sequence for batch processes, so that heap memory is assigned first, before extended memory?

Reason:

We are aware that on a 2-processor server, some memory-access-intensive batch jobs (which do a lot of in-memory sorting) sometimes run much slower and sometimes much faster (by a factor of around 2-3). We suspect this is due to the NUMA effect of varying memory performance. This is now undergoing further verification through repeated testing (running on a server with 1 processor socket vs. a server with 2 processor sockets); we are still awaiting the results.

As heap memory should be "owned" by the worker process itself, I am guessing that the OS takes NUMA into account and prefers to assign memory "local" to the processor, as long as the requested memory size fits in that NUMA node (unlike extended memory, which is assigned for the whole instance and thus probably cannot preserve locality for every running process). We will also test this further with the RSMEMORY program.

However, even if the new sequence proves to have an effect, the RSMEMORY setting is lost every time the instance restarts. So I would like to know whether there is a way to persist the sequence.
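(For reference, a sketch of how such a change could be made persistent, under the assumption that the standard SAP memory-management profile parameters apply here: RSMEMORY settings live only in shared memory, while instance profile parameters maintained via RZ10 survive restarts. The parameter names below are standard, but the values are purely illustrative, not a tuning recommendation.)

```
# Instance profile (maintain via RZ10; takes effect after instance restart).
# Illustrative values only -- verify against SAP's memory-management notes
# for your release and platform.

# Keep the extended-memory quota for non-dialog (batch) work processes
# minimal, so batch jobs fall back to process-local heap early:
ztta/roll_extension_nondia = 1

# Allow non-dialog work processes a generous private heap (in bytes):
abap/heap_area_nondia = 2000000000
```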


2 Answers

  • Jan 08, 2015 at 10:08 AM

    I saw the description of the allocation sequence in the post below, and followed the moderator's suggestion to post the question here, as I guess my question suits this forum's topic better:

    Memory allocation sequence to work processes


  • Former Member
    Jan 08, 2015 at 11:12 AM

    Eric,

    According to SAP Note 1612283, two-socket servers should still perform very well in terms of NUMA. That is something we have already worked out in various benchmarks. Granted, the benchmarks primarily focused on system throughput, not on single-thread performance, which is probably your problem. But 2-3 times slower because of memory locality is a number I would not expect.

    Are you sure that no external effects (other processes, page-in/page-out activity, the number of objects processed by the report) are causing the differences in runtime?


    Could we please have some more information about the hardware and operating system version you are using?

    Is the system running on a hypervisor (VMware or Hyper-V)?

    Kind regards,

    Peter


    • Dear All,

      Just to update test result we performed.

      After the past month of testing, the root cause of the fluctuating runtime of our program has been found.
      It was caused by an application issue and is not related to the NUMA architecture.


      Sorry for any confusion I may have caused.

      Regards,
      Eric


      P.S.

      Just for info.

      The longer runtime occurred much more sparsely during the past month of testing, though it still happened, so we used /SDF/MON to continuously monitor the memory consumed by the job, and found that the longer-running job showed a different memory footprint at certain moments.

      Through debugging in SM50, we observed that the longer-running job loops on one code segment, while that looping does not occur in the shorter-running jobs. We thus believe that even though the job parameters did not change, some subtle change in the data environment could lead to a severe change in program behavior.

      We have thus passed the case back to the Application Team for further logic review.