Informix Engine Parameters v1.7
Tuning engines is one of my all-time favorite activities, be they internal combustion or database.
Rare is the database
engine that cannot be tuned for another 20%. Improvements of several orders
of magnitude are not uncommon for a well organized tuning session. This
is not due to any mysterious talent of mine; rather, systems go out of
tune rather quickly or were never tuned in the first place.
Below is a list of suggested
settings for Informix Engine Configuration parameters I have gathered over the
years. I keep a copy in my organizer for easy field reference.
I have collected many books
that discuss tuning Informix databases. It has been my observation that they
conflict more than they agree. In my mind there are several reasons for this:
- Changes to underlying
engine architecture obsolete recommendations with tremendous frequency.
- Unix implementations
differ wildly at granularities important to the tuner.
- The rapid pace of hardware
advances quickly overtakes whatever has been learned.
The first item above was
demonstrated in an early release of version 7. The Informix engine disk IO subsystem
changed so markedly that all previous NUMAIOVPS recommendations were incorrect.
A wonderful example of item
2 above is the efficiency of the implementation of the Unix process manager.
A series of tests I performed at Informix’s benchmark labs in Menlo Park demonstrated
configuring twice as many CPUVPS as hardware CPUs was optimal for HP/UX.
The version of Solaris that we were testing preferred the recommended 1:1 relationship.
My favorite conflicting
pronouncements are on page 6-134 of INFORMIX-OnLine Dynamic Server Performance
Tuning Training Manual (2/97). When discussing configuring NUMAIOVPS for systems
that do not use kernel IO, the page states a 1:1 relationship with controllers,
while the third sentence following states one per disk.
Note that these examples
show wide variability in what are arguably the two most significant tuning parameters.
This should leave the reader
with three important conclusions.
- It is dangerous to speak
of such things in absolute terms.
- Each important end user
environment must be measured and tuned individually, i.e. cookie cutter approaches
will cause trouble for the DBSA.
- More importantly, due
to the fluidity of this most fundamental knowledge set, DBSAs should continue
to be well compensated.
What follows is my collection
of notes for each onconfig parameter. Most are taken directly from a book. Comments
in blue/italics are from my own experience.
Where more than one suggestion
is listed they are in chronological order, oldest first.
The assumption is made that
the system under test (SUT) is a dedicated DB server. If it isn't, it should
be. Informix should be thought of as an Operating System unto itself. All
it really wants from Unix is slices of CPU time. If it must share its sandbox
with applications, other database engines, or even other Informix instances
it is operating outside the bounds of its target design.
Acknowledgement that this
level of isolation is not always possible in the real world has been addressed
recently with new onconfig parameters. Important adjustments for tuning in a
"mixed environment" are marked "§".
Any mission critical DB
should be on a dedicated machine.
"-" implies the
Sometimes the DBA does not
have the luxury of an extended tuning session. He must make a quick change to
a production box to get it back within acceptable parameters. I have noted parameters
that allow for a "big bang for the buck" with a *.
- Logical Logs + [Phys
logs] + [Temp Tables] + [Data] + [On-Arc. catalogs] + ctrl info (res. pages).
This is a basic size calculation formula. For systems
of any import most of these logical units should of course be on separate
devices.
- unique for each engine
On a dedicated server turn this on (1). The parameter is ignored if it is not
supported by the platform. In a mixed environment turning this on may make Informix "not play well
with its friends".
- #HWCPUs - 1 if #HWCPUs >= 4
- # scan threads (fragmentation issue)
should be a multiple or factor of NUMCPUVPS; determine w/ onstat -g glo, ath, rea.
- For data loading = #HWCPUs
unless HPL is used, then = #HWCPUs - CONVERTVPS (onpload config param)
I have seen systems that enjoy a 1:1 CPUVPS to HW CPUs relationship and I have
seen some that prefer 2:1.
Each of these turns on a different housekeeping mechanism. If NUMCPUVPS is set to
one it is very important to turn both of these off, i.e. 0 and 1 respectively
(yech). You will see no benefit from configuring one CPUVP unless these are
both off. This is overlooked more often than not!
I would suggest that these should really be considered as one parameter,
since when one is adjusted the other should be adjusted likewise.
If supported, this parameter turns off Unix nicing (this is good). Nicing is
the Unix mechanism that lowers a process's priority over time to ensure equality
in a mixed environment. As Informix just wants machine cycles, a dedicated
server should have this enabled.
The case for a mixed environment is not so black and white. It may be best to have
this off initially and turn it on if it is desired to give Informix more CPU time.
The CPU number to start binding to. mpstat will provide the number of CPUs and their associated
numbers. This numbering system seems whimsical.
- Number of hardware CPUs
to bind to.
- NUMCPUVPS = HW CPUs
Binding can be very beneficial for systems that are not dedicated to the DB.
Non-Informix processes may be allowed to run on the HW CPUs identified here,
but the CPUVPS will be restricted to those identified.
- 0 internal timer is faster.
I have never seen OS timing used.
There should be a 10:1 ratio between time spent in usr vs. sys for CPUVPS. (This
has become perhaps 3:1 with KAIO). If sys is too high on a system with just
a few HW CPUs try using just 1 CPUVPS.
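The usr vs. sys split per virtual processor can be watched from onstat; a minimal check, assuming the instance environment (INFORMIXDIR, INFORMIXSERVER, ONCONFIG) is already set:

    onstat -g glo    # per-VP usercpu and syscpu columns give the ratio discussed above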
If onstat -g rea shows seven or more threads waiting, adding CPUVPS can bring it down.
Load average is a marginal tool; it is a composite of system resource measurements.
I have seen slow systems with a load average of 2 and I have seen systems that seemed
fine at double digits. It is meaningful
only relative to an earlier measurement on the same machine.
- KEY OLTP TUNING FIELD
- start w/20% RAM, may
be up to 50% RAM. On a dedicated machine why not
start at 50%?
- increase size till increase
in cache hits insignificant or excess system paging occurs, use sar or vmstat
to determine excess paging.
- OLTP Target 95% read,
85% write cache hits
- buffers smaller than the
largest table for DSS will force light scans. Use onstat -g lsc to measure light scans.
- Maximize for data loading
(50% or more) (except HPL express mode)
- More buffers can mean longer checkpoints.
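To track the cache hit targets above while resizing BUFFERS, a crude watch loop over onstat -p is enough; a sketch, again assuming the instance environment is set:

    while true
    do
        onstat -p | head -8    # the profile header holds bufreads/bufwrits and their %cached columns
        sleep 60
    done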
Suggestions for this seem to change with the weather.
- 1 per db disk + 1 for
ea. chunk accessed freq.
- If KAIO is used allocate
1 + 2 for each cooked chunk (get rid of any cooked chunks anyway)
- For KAIO systems: 2 for
OnLine + 1 per controller containing cooked chunks
- For systems w/o KAIO
2 for OnLine + 1 for each controller then add as indicated.
- 1 per dbspace
- 1 per disk
- 1 per mirrored pair
- 1 per chunk.
- onstat -g ioq to
monitor IO Qs
- DSA spawns one read thread
per dbspace (AIO or KAIO). My
suggestion is to get a system that supports KAIO and then set this to 2.
- most machines limit at
30, all limit at 32.
- Dig: For systems that
do not perform light scans, do not set RA_PAGES higher than 32.
for typically sequential DSS
- if too high will lower
- if bufwaits is unusually
high, RA_PAGES may be too high, or the difference between RA_PAGES and RA_THRESHOLD
may be too small.
- Set close to RA_PAGES,
e.g. RA_PAGES 32 and RA_THRESHOLD 30; if bufwaits (onstat -p) increases, reduce RA_THRESHOLD.
If most machines limit at 30, won't the RA_THRESHOLD
remain in a constant TRUE state?
- Ideally RA-pgsused =
(ixda-RA + idx-RA + da-RA)
- at least
two each on a different drive, more if building large indices
Larger environments should use HW striping, spreading a small number of TEMPDBS across multiple devices.
- Max space required for
index build is: non-fragmented tbls (key_size+4) * num_recs *2, fragmented
(key_size+8) * num_recs *2
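Illustrative arithmetic for the formula above: a non-fragmented table with a 12-byte key and 1,000,000 rows needs about (12+4) * 1,000,000 * 2 = 32 MB of temp space for the build; fragmented, (12+8) * 1,000,000 * 2 = 40 MB.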
- 90 is typical, 100 for
SELECT/DELETE only tables
- forces initially very
compact indices & efficient caching.
- 50-70% for tbls with
high INSERTS to delay need for node splitting
A few years ago the fellow that tests this at Informix posited on USENET to
use HW mirroring over Informix mirroring every time. This makes sense:
who will have a more intimate knowledge of the devices? HW solutions are always
faster than SW. In order of preference I would suggest HW, OS, and then Informix. On
machines where availability is paramount one can mirror across controllers
and even arrays.
- when set to "1"
this undocumented parameter will generate read and write timings in the syschktab
SMI table. See Appendix C of the DSA Performance Tuning Training Manual.
Increasing the Unix priority
of AIO processes can improve the performance of data returned from disk.
Monitor IO with onstat -g ioq (iof
and iov are also worthy). When AIO is used, gfd len should be < 10, maxlen
< 25. Maxlen often breaks 25 during engine initialization
when it is unimportant, so make this distinction.
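A quick way to watch those queue lengths (onstat's -r flag repeats the report; the 60-second interval is illustrative):

    onstat -g ioq -r 60    # compare the len and maxlen columns against the thresholds above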
onstat -D will show hotspots at the disk level, onstat -g ppf at the partition level.
While tuning an important Data Warehouse for my current employer, the Sun hotshot suggested
placing all data in only the middle 2GB of sectors on each 4GB disk, leaving the
remainder unused. The highly paid Informix representative felt strongly that
using only the leading 2GB of sectors would perform better. I suggested that
we test, and if the difference was within 5% that this be decided by ease of maintenance.
The leading sectors proved 2% faster than the middle. I do not recall which
I ended up implementing.
If the system is IO bound, verify whether it is controller or disk bound; the solutions differ.
Elapsed time = (pg_size *
num_pgs_requested / max_transfer_rate) + latency (throughput is the data moved divided by this elapsed time).
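Plugging in illustrative numbers: 1,000 2KB pages from a drive with a 5 MB/sec transfer rate and 10 ms latency cost (2048 * 1000 / 5,242,880) + 0.010, about 0.4 seconds.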
The use of clustered indices
can greatly increase sequential reads.
Informix recommends using
fragmentation over HW striping unless the table is a poor candidate for fragmentation.
I would like to test this statement someday.
I have not been able to test how kernel IO affects NUMCPUVPS configuration. From the
DSA Performance Tuning Manual (2-97): "If your system supports
kernel aio, onstat -g ath will show one kio thread per CPUVP." Therefore
should NUMCPUVPS be associated with the number of disks, etc. or remain a function
of the number of hardware CPUs?
- 1 per disk if < 20 disks
- 1 per 2 disks if 20 to 100 disks
- 1 per 4 disks if > 100 disks
- UPDATE: at least one per
LRU queue pair. This has most recently been best for me.
- If onstat -F indicates
that all cleaners are active, allocate more.
Defaults to the rootdbs. Place on a separate spindle.
- = usrthreads * 5 (or
size of most freq. blob) * 4
- = usrthrds * 5 * 4
- UNLESS tblspace blobs
in DB w/o logging then usrthrds * size of freq blob pg * 4
- > 3
In all my manuals I can find no good reference for performance tuning this parameter.
I have no experience adjusting this save for eliminating anomalous behavior.
- > 200kb
- small for greatest recovery,
if tape slow, or blobpages volatile
- (connects * maxrows (in
one trans)) * 512
- size of 3 LL buffers
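As an illustration of the connects formula above: 200 connections each touching at most 20 rows in one transaction gives (200 * 20) * 512 = 2,048,000 bytes, roughly 2 MB.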
- determines freq of flushing
- adjust interval of checkpoints,
- For data loading (except
HPL express mode) and parallel index builds 3000
- a large interval will
allow the size of the physlog (i.e. amount of work accomplished) to determine when
checkpoints occur
- onstat -F will show if
writes are LRU driven or CHKPTINTVL driven
- onstat -l and -m will
determine if the checkpoint interval is driven by the physlog reaching 75% full or by this param
- max(4, NUMCPUVPS)
- if onstat -R shows #dirty pages
> LRU_MAX_DIRTY, add LRU Qs. If no change, increase CLEANERS
- UPDATE: 4 per CPUVP (this
relationship has most recently been best for me.)
- 1 per 500-750 buffers,
up to 128 (max)
- More LRUs better support
large number of users by reducing buffwaits
To monitor, onstat -g spi shows
contention for individual LRU queues.
LRU_MAX[MIN]_DIRTY - %buffers
assigned to modified queue
- lower to decrease checkpoint duration
- For data loading (except
HPL express mode) & parallel index builds 70 & 80%, allow to almost
fill before flushing
Monitor: onstat -R for queue length,
onstat -F for writes forced by this parameter via the CLEANER threads.
With LTAPEDEV set to /dev/null Informix will just mark the LL as backed up, not even going
through the motions. This should be done only on systems where logical log
recovery is not required and the user has been made aware of the implications.
LBU_PRESERVE - Preserve
last log for log backup
- If the physlog frequently
fills, decrease CHKPTINTVL
- phys logging: pages per
IO should be > 75% of buffsize; if near 100% increase PHYSBUFF size
- physical and logical
log buffers should be about 75% full when flushed.
The big bang for the buck makes checkpoints another early thing to look at.
- 1 if CPU class; assign additional
poll threads to NETVPs
- 300 for a single HWCPU, 350 for multiple
- For data loading, one
per CPUVP. Each poll thread should be on a CPU class VP (running a poll thread on a CPU VP is more efficient)
- Do not increase user
connects as this will increase work for the poll thread
- max # consumed by any
query * # concurrent users
- OLTP: big Resident (buffers), small Virtual (SHMVIRTSIZE)
- DSS: small Resident, big Virtual
- DSS may be up to 75%
RAM if paging is not induced
- DSS KEY FIELD
SHMADD (kilobytes) -
- 0 unlimited
This can be used to make Informix more polite, reserving resources for other applications.
I have also had to use this when a failing malloc panics oninit.
onstat -g seg: one of the first things I check, as consolidating shared memory
segments can be a huge bang for the buck.
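A minimal check, instance environment assumed:

    onstat -g seg    # one line per shared memory segment (class R, V, or M)
                     # many small V segments suggest raising SHMVIRTSIZE/SHMADD
                     # so fewer, larger segments are allocated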
onmode -F (used to free memory segments) can cause system failures on an active
system and should be avoided.
sar -g 3 3 will show paging activity.
To identify the shared memory segments
corresponding to a DB instance: the segment key encodes SERVERNUM, key = 0x52564801 + (SERVERNUM * 0x10000).
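A sketch of matching ipcs output to an instance, assuming SERVERNUM 1 and the key rule above:

    printf '0x%x\n' $(( 0x52564801 + 1 * 0x10000 ))    # expected first-segment key, 0x52574801
    ipcs -m    # compare against the KEY column to find the instance's segments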
- if > 0, will enhance
parallelization of index builds (which after 7.2
are always parallel to some degree)
- DSS should be 90% of SHMVIRTSIZE
Quantum = (PDQPRIORITY/100)*(DS_TOTAL_MEMORY/DS_MAX_QUERIES)
Each sort thread gets quantum / #sort threads.
A user's effective priority
= (pdqpriority/100) * (MAX_PDQPRIORITY/100), where pdqpriority is set by the
environment variable or the SET PDQPRIORITY statement.
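Worked with illustrative numbers: DS_TOTAL_MEMORY 200000 KB, DS_MAX_QUERIES 10, a user pdqpriority of 50, and MAX_PDQPRIORITY 80 give an effective priority of (50/100) * (80/100) = 40%, so the quantum is (40/100) * (200000/10) = 8000 KB for that query.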
As I have seen many systems fail due to Informix filling /tmp, the "out
of the box" default should be off.
oncheck -pr can be
used to reconstruct a lost onconfig file.
Resident Buffer Pool
(Pool is for residents' use only - I needed a mnemonic.)
Virtual Light Scan
Area (Virtually no logging - this too.)
- 28 bytes per data page
are used by the engine
- 4 byte slot entry for each row
Scans and sorts for index
builds are always parallel with 7.2. Index builds on fragmented tables add further parallelism.
UPDATE STATISTICS is the
single most important SQL statement for query perf.
- lead columns in each index
- all columns queried with
equality filters (=)
- all join columns
- 1st col to
uniquely distinguish a composite idx from another on same table and all cols
- all idx cols. Not run
through on HIGH
To speed up UPDATE STATISTICS,
set PSORT_NPROCS to 2, use DBSPACETEMP (duh), and do NOT set DBUPSPACE (it limits RAM use).
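A sketch of the above from the shell; the stores7 demo database, its customer table, and the tmpdbs* temp dbspaces are illustrative stand-ins:

    PSORT_NPROCS=2; export PSORT_NPROCS
    DBSPACETEMP=tmpdbs1,tmpdbs2; export DBSPACETEMP    # hypothetical temp dbspaces
    echo 'UPDATE STATISTICS MEDIUM FOR TABLE customer DISTRIBUTIONS ONLY;' | dbaccess stores7 -
    echo 'UPDATE STATISTICS HIGH FOR TABLE customer (customer_num);' | dbaccess stores7 -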
The following came from
a class handout. I have no idea who the original author is.
Database Tuning the Informix Dynamic Server:
- Establish performance goals.
- Measure database activity
and use of resources.
- Identify performance
problems such as excessive use of CPU, memory, or disks.
- Tune the operating system.
- Tune the OnLine Dynamic Server.
- Optimize the placement
of logs, sort space, and temporary space.
- Optimize table placement,
sizes of extents, and fragmentation
- Make sure the indices are appropriate.
- Optimize background activities
such as logging, checkpoints, and page cleaning.
- Schedule backups and
batch jobs for off-peak hours.
- Review application programs
to make sure appropriate access methods are used to retrieve data and algorithms
are efficient.
- Repeat steps 2 through
I would add 2.5: set users' expectations,
and at 4: find a seasoned Systems Administrator for the OS tuning. The application review step
could be performed much earlier in the cycle as, more often than not, the largest
performance gains come from applications tuning.
Set aside as much time as possible to dedicate to this task. Three days is a minimum.
In a development environment ensure that two weeks are dedicated to this at
the end of the project. This will invariably get squeezed to three days which,
as above, is the absolute minimum to perform a thorough job.
Calculating the maximum number
of extents for a particular table:
Max#extents <= (pagesize - ((4 * number of columns in the table)
+ (8 * number of BLOB and VARCHAR columns) + 136
+ (12 * number of indices) + (4 * number of columns in the indices))) / 8
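Illustrative arithmetic, assuming each extent entry costs 8 bytes of the partition page: a 2048-byte page, 20 columns, no BLOB/VARCHAR columns, and 2 indices over 3 total columns gives (2048 - (80 + 0 + 136 + 24 + 12)) / 8, about 224 extents.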
Fragment large tables by
Find hot tables:
SELECT t.tabname tabname,
p.pf_dskreads + p.pf_dskwrites totalops, p.pf_dskreads diskreads, p.pf_dskwrites diskwrites
FROM sysptntab p, systabnames t
WHERE t.partnum = p.partnum
ORDER BY totalops DESC;
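From the shell this can be run against the sysmaster database, where these SMI tables live:

    dbaccess sysmaster hot_tables.sql    # hot_tables.sql holding the query above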
INFORMIX-OnLine Dynamic Server Performance Tuning Training Manual (2/97) - Informix part number 502-5-403-1-9999999-1
- Liz Suto, Course Designer
Tuning Informix Dynamic
Server and Your System for Optimum Performance - Art S. Kagel
More to come...
Please forward any comments