DISK IO

BUFFERS* (pages)

KEY OLTP TUNING FIELD
Start with 20% of RAM; may be up to 50% of RAM. On a dedicated machine, why not start at 50%?
Increase the size until the gain in cache hits becomes insignificant or excess system paging occurs; use sar or vmstat to detect excess paging.
OLTP targets: 95% read and 85% write cache hits (quick check below).
A buffer pool smaller than the largest table will force light scans for DSS; use onstat -g lsc to measure light scans.
Maximize for data loading (50% or more), except HPL express mode.
More buffers can mean longer checkpoints
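A quick check of the cache-hit targets above (standard onstat options; the 95/85 figures are the OLTP targets from this note, not absolutes):

  onstat -z   # zero the statistics
  # ...let the system run through a representative busy period...
  onstat -p   # read %cached should hold at or above 95, write %cached at or above 85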

NUMAIOVPS* – Suggestions for this seem to change with the weather.

1 per database disk + 1 for each frequently accessed chunk.
If KAIO is used, allocate 1, plus 2 for each cooked chunk (get rid of any cooked chunks).
For KAIO systems, 2 for OnLine plus 1 per controller containing cooked chunks.
For systems without KAIO, 2 for OnLine plus 1 for each controller, then add as indicated.
1 per dbspace
1 per disk
1 per mirrored pair
1 per chunk.
onstat -g ioq to monitor the IO queues.
DSA spawns one read thread per dbspace (AIO or KAIO)
My suggestion is to get a system that supports KAIO and then set this to 2 (sketch below).
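A minimal ONCONFIG sketch of that suggestion (assumes KAIO is available and all chunks are raw; the comments are my reasoning, not Informix doctrine):

  # ONCONFIG fragment
  NUMAIOVPS  2        # KAIO handles the raw chunks; 2 AIO VPs cover the rest
  # verify after a restart:
  #   onstat -g iov   # I/O activity per VP
  #   onstat -g ioq   # queue lengths (see the checklist in the NOTES section)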

RA_PAGES

Most machines limit this at 30; all limit it at 32.
Dig: For systems that do not perform light scans, do not set RA_PAGES higher than 32.
Set higher for predominantly sequential DSS workloads.
If set too high it will lower the %cached read rate.
If bufwaits is unusually high, RA_PAGES may be too high or the difference between RA_PAGES and RA_THRESHOLD may be too small.

RA_THRESHOLD

Set close to RA_PAGES, e.g. RA_PAGES 32 and RA_THRESHOLD 30; if bufwaits (onstat -p) increases, reduce RA_THRESHOLD. If most machines limit RA_PAGES at 30, won't the threshold condition remain constantly true?
Ideally RA-pgsused = (ixda-RA + idx-RA + da-RA); all four are onstat -p fields (sketch below).
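A starting-point sketch using the example values above (starting values only, to be adjusted against bufwaits):

  # ONCONFIG fragment
  RA_PAGES      32    # pages fetched per read-ahead
  RA_THRESHOLD  30    # unread pages remaining that trigger the next read-ahead
  # then watch onstat -p: bufwaits, ixda-RA, idx-RA, da-RA, RA-pgsused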

DBSPACETEMP*

At least two dbspaces, each on a different drive; more if building large indices.
DSS environments should use HW striping to spread a small number of temporary dbspaces across multiple disks.
Maximum space required for an index build: non-fragmented tables (key_size + 4) * num_recs * 2; fragmented tables (key_size + 8) * num_recs * 2 (worked example below).
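A worked example with assumed figures (a non-fragmented table of 10,000,000 rows with a 16-byte key):

  (16 + 4) * 10,000,000 * 2 = 400,000,000 bytes, roughly 380 MB of temporary space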

FILLFACTOR (indices)

90 is typical; 100 for SELECT/DELETE-only tables.
A high fillfactor forces initially very compact indices and efficient caching.
50-70% for tables with heavy INSERT activity, to delay the need for node splitting (SQL sketch below).
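A SQL sketch of both cases (table, column and index names are made up):

  -- read-mostly table: pack the index pages
  CREATE INDEX ix_cust_lname ON customer(lname) FILLFACTOR 100;
  -- heavy-insert table: leave room in each node
  CREATE INDEX ix_ord_date ON orders(order_date) FILLFACTOR 60;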

MIRROR

always mirror.
A few years ago the fellow who tests this at Informix posited on USENET that HW mirroring should be used over Informix mirroring every time. This makes sense: who will have a more intimate knowledge of the devices? HW solutions are generally faster than SW ones. In order of preference I would suggest HW, then OS, then Informix mirroring.
For machines where availability is paramount one can mirror across controllers and even arrays.

IOSTATS

When set to "1" this undocumented parameter generates read and write timings in the syschktab SMI table (query sketch below). See Appendix C of the DSA Performance Tuning Training Manual.
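A minimal sketch, assuming IOSTATS populates syschktab as described (I have not verified the column names, hence the unqualified SELECT):

  # ONCONFIG fragment
  IOSTATS 1                # undocumented: per-chunk read/write timings
  # then, from the shell:
  echo "SELECT * FROM syschktab;" | dbaccess sysmaster -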

TBLSPACE_STATS

New
NOTES:

Increasing the Unix priority of the AIO processes can improve how quickly data is returned from disk.

Monitor IO with onstat -g ioq (-g iof and -g iov are also worth a look). When AIO VPs are used, the gfd queue len should be < 10 and maxlen < 25. Maxlen often breaks 25 during engine initialization, when it is unimportant, so make that distinction. onstat -D shows hot spots at the disk level, onstat -g ppf at the partition level (checklist below).
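The same commands as one plain checklist (the thresholds are the rules of thumb above, not hard limits):

  onstat -g ioq    # I/O queues: gfd len should stay < 10, maxlen < 25 (ignore init spikes)
  onstat -g iof    # I/O by chunk/file
  onstat -g iov    # I/O by VP
  onstat -D        # page reads/writes per chunk: disk-level hot spots
  onstat -g ppf    # partition profiles: partition-level hot spots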

When building an important data warehouse for my current employer, the Sun hotshot suggested placing all data in only the middle 2GB of sectors of each 4GB disk, leaving the rest unused. The highly paid Informix representative felt strongly that using only the leading 2GB of sectors would perform better. I suggested that we test, and that if the difference was within 5% the decision be made on ease of maintenance. The leading sectors proved 2% faster than the middle. I do not recall which I ended up implementing.

If your system is IO bound, determine whether it is controller bound or disk bound; the solutions are different.

Time to service a request = latency + (pg_size * num_pgs_requested / max_transfer_rate); throughput is the number of pages requested divided by that time (worked example below).
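A worked example with assumed figures (2 KB pages, 64 pages per request, 10 MB/s transfer rate, 10 ms latency):

  time       = 0.010 s + (2,048 * 64) / 10,485,760 bytes/s = 0.010 + 0.0125 = 0.0225 s
  throughput = 64 pages / 0.0225 s, roughly 2,800 pages per second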

The use of clustered indices can greatly improve sequential read performance (SQL sketch below).
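A sketch in Informix SQL (table and index names are made up):

  -- build the index and physically order the table by it
  CREATE CLUSTER INDEX ix_ord_cust ON orders(customer_num);
  -- or re-cluster an existing index after the data has churned
  ALTER INDEX ix_ord_cust TO CLUSTER;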

Informix recommends using fragmentation over HW striping unless the table is a poor candidate for fragmentation. I would like to test this statement someday (fragmentation sketch below).
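For reference, a minimal fragmentation sketch (dbspace, table and column names are assumptions):

  -- spread the rows round-robin across dbspaces that sit on separate disks
  CREATE TABLE orders (
      order_num     SERIAL,
      customer_num  INTEGER,
      order_date    DATE
  ) FRAGMENT BY ROUND ROBIN IN dbs1, dbs2, dbs3;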

I have not been able to test how kernel AIO affects the NUMCPUVPS configuration. From the DSA Performance Tuning Manual (2-97): "If your system supports kernel aio, onstat -g ath will show one kio thread per CPUVP." Should NUMCPUVPS therefore be associated with the number of disks, etc., or remain a function of the number of hardware CPUs? A quick check for KAIO follows.
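A quick way to see which case applies (assumes the thread names contain "kio", as the manual's wording suggests):

  onstat -g ath | grep -i kio    # one kio thread per CPU VP means KAIO is active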

UNIX COMMANDS:

iostat - per-device disk IO statistics

sar - system activity reporter (CPU, paging, disk)

vmstat - virtual memory and paging statistics

 

 
