Former Member
Apr 30, 2008 at 02:53 AM

Anyone using automatic storage in 2TB or larger R/3 systems?


Hi all,

We will be doing a Unicode conversion in the fall, and so we have the opportunity to re-architect our database subsystem by importing our data into a fresh filesystem and disk layout. Our current system is a mix of 3 different design methods, so we are looking to achieve both standardized administration, and also performance enhancements.

We are using a large SAN with a very large cache fronting the disks. LUNs are standardized at about 50GB each. We will most likely be using Veritas Storage Foundation for DB2, to take advantage of the CIO (Concurrent I/O) option. The target server will be Solaris 10 running DB2 9.5.

Current ideas to improve performance:

1) Veritas CIO to eliminate double disk caching

2) Separate log disk from data/index disk

3) Upgrading the SAN HBAs from one 2Gb channel to two 4Gb channels (8Gb total)
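
To be clear about what I mean by #1: besides mounting the VxFS filesystems with the "cio" option, my understanding is that DB2 9/9.5 can also bypass the filesystem cache per tablespace. A sketch of the latter (the tablespace name is just an example following SAP's <SID>#<name> convention, not our actual layout):

```sql
-- Sketch: ask DB2 itself to bypass filesystem caching for a tablespace,
-- as an alternative (or complement) to the VxFS "cio" mount option.
-- "SID#BTABD" is a placeholder SAP-style tablespace name.
ALTER TABLESPACE SID#BTABD NO FILE SYSTEM CACHING;
```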

My main concern is the automatic storage layout, and the recommended extent size (which is 2). We are currently running traditional DMS tablespaces and resizing containers as needed. We would like to move to automatic storage, but I'm worried about potential performance issues, since I'm having trouble locating customers using it in large R/3 environments and am not sure the method has proven itself yet at that scale. On the other hand, I'm willing to bet that automatic storage will lay out the storage better than we could manually.
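
For context, the kind of setup I have in mind is roughly the following (paths and tablespace name are illustrative only, assuming four filesystems as storage paths):

```sql
-- Sketch: database created with automatic storage over several
-- filesystems; paths are placeholders, not our real layout.
CREATE DATABASE SID AUTOMATIC STORAGE YES
  ON /db2/SID/sapdata1, /db2/SID/sapdata2,
     /db2/SID/sapdata3, /db2/SID/sapdata4
  DBPATH ON /db2/SID;

-- Tablespaces then need no explicit container definitions:
CREATE LARGE TABLESPACE SID#BTABD
  MANAGED BY AUTOMATIC STORAGE
  EXTENTSIZE 2
  PREFETCHSIZE AUTOMATIC;
```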

Here are a couple of things I'm looking for, even though I know this is one of those questions with no perfect answer:

1) Would it make sense to change the default extent size from 2 to something larger for our Unicode import? Or is this just going to set us up for bottlenecks later when we head into production? I doubt this has much impact on the speed of the import itself; we have many other avenues we will explore to increase import speed (parallel processing, using DB2 LOAD, etc.), but I was wondering about this value.
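
In other words, something like the following at tablespace creation time (16 is just a number I picked for illustration, and the tablespace name is a placeholder):

```sql
-- Sketch: a larger extent size than the SAP default of 2.
-- EXTENTSIZE is fixed at creation time, so this has to be
-- decided before the Unicode import, not after.
CREATE LARGE TABLESPACE SID#BTABD
  MANAGED BY AUTOMATIC STORAGE
  EXTENTSIZE 16
  PREFETCHSIZE AUTOMATIC;
```

Presumably DB2_PARALLEL_IO (set via db2set) would also need to reflect whatever stripe layout we end up with, which is part of what the referenced thread below discusses.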

2) Anybody out there currently using DB2 auto storage for their storage subsystem in a large (1 TB or greater) R/3 environment?

3) Has anyone ever set up special tablespaces and/or bufferpools for bottlenecked/heavily hit tables?
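
By that I mean something along these lines (names and sizes are purely hypothetical):

```sql
-- Sketch: a dedicated bufferpool for a handful of hot tables,
-- plus a tablespace bound to it. BP_HOT and SID#HOTD are
-- placeholder names; sizes would need tuning for real use.
CREATE BUFFERPOOL BP_HOT IMMEDIATE SIZE 250000 PAGESIZE 16 K;

CREATE LARGE TABLESPACE SID#HOTD
  MANAGED BY AUTOMATIC STORAGE
  PAGESIZE 16 K
  BUFFERPOOL BP_HOT;
```

The hot tables would then be moved into that tablespace so their pages don't compete with everything else in the default pool.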

This is a good thread I'm referencing so far. I was thinking 500GB filesystems each, to stay clear of limitations we might otherwise hit (older Veritas releases had issues at 1TB, which is why I'm going with 500GB):

Questions on LUNs, containers and DB2_PARALLEL_IO