Last Revision Date: 08-AUG-2000
Archiver Best Practices
Creation Date: 08-May-1996
Last Revision Date: 08-May-1996
Revision Number: 0
Product Version: Oracle7
Information Type: ADVISORY
Abstract: Describes archiver functionality and some of the best
practices in configuring and tuning archiver.
Archiver provides the mechanism needed to back up the changes made to the
database. The archive files are essential in providing the information
necessary to recover a database. However, as transaction rates increase, it
becomes harder to devise an efficient archiving strategy that does not impede
database performance while still accommodating your MTTR (Mean Time to
Recover) service levels. This paper describes some of the best practices to
help tune archiver and provides suggestions for preventing common archiving
problems.
REDO LOG AND ARCHIVE FILE CONFIGURATION:
Since archiver reads from the log files, the log files must also be configured
properly to tune archiving. Log files are primarily written by the log writer
(LGWR) and read by the archiver (ARCH) or any process performing recovery.
The disks are write intensive and at times read intensive, but there is
low concurrency. We suggest that log files be Oracle multiplexed or
hardware mirrored. They should never be placed on the same disks as the
archive files. Ideally, they should be located on their own set of disks,
separated from all other files. The members or mirrored files should be on
different disks and controllers to prevent any single point of failure and
to increase throughput. Due to their IO behavior and importance, log files
ideally should be on raw disks with a recommended RAID 0+1 configuration
(mirroring and fine grain striping). Striping introduces parallelism to
disk writes and thus can speed up sequential writes by increasing the
amount of data whose write completes in a quantum of time.
Archive files are always written to ufs (UNIX file systems), ideally in a
RAID 0+1 configuration. Again, fine grained striping is recommended whenever
archives are on dedicated disks. Archives should always be separated from
the log files.

The archiver loop consists of the following steps:
1) read the control file to find any unarchived logs,
2) open the online redo log members to be read,
3) allocate redo log buffers (log_archive_buffers),
4) (async) read of the online redo log (log_archive_buffer_size),
   usually an aioread call if supported by the operating system;
   uses alternate redo log members per buffer stream
5) fill the redo log buffers,
6) (async) write to the archive file (log_archive_buffer_size),
   creates the ufs archive file if not already created;
   first checks if the buffer is full or if it is the end of the log;
   makes an aiowrite call if supported by the operating system
7) update the control file with the new information,
   modify the archive log linked list and change the redo log statuses
8) start the loop again.
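The loop above can be sketched in Python. This is an illustrative model only,
not Oracle's actual implementation: the buffers are plain lists, the reads are
synchronous rather than async, and the function and parameter names are
assumptions that merely echo the init.ora parameters.

```python
# Illustrative sketch of the archiver loop: buffered reads from the online
# redo log members, rotated round robin (step 4), each buffer then written
# to the archive file (step 6). Members are modeled as identical lists of
# redo blocks; log_archive_buffers is unused here because this synchronous
# sketch keeps only one buffer in flight, whereas the real archiver keeps
# that many async IOs outstanding.

def archive_log(members, log_archive_buffers=4, log_archive_buffer_size=64):
    """Copy the redo log contents to an archive, alternating members
    per buffer stream. Each member is assumed to be an identical,
    uncorrupted copy of the log (a list of blocks)."""
    archive = []
    log_length = len(members[0])
    offset = 0
    member_idx = 0
    while offset < log_length:
        # steps 4/5: fill one buffer from the next member, round robin
        buf = members[member_idx][offset:offset + log_archive_buffer_size]
        member_idx = (member_idx + 1) % len(members)
        # step 6: append the buffer to the archive file
        archive.extend(buf)
        offset += len(buf)
    return archive

# usage: three members, each a copy of a 200-block log
log = list(range(200))
members = [log[:], log[:], log[:]]
assert archive_log(members) == log
```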
In Oracle Parallel Server, archiver can also prompt idle instances to archive
their logs to prevent any thread of redo from falling far behind. This is
often referred to as kicking an idle instance. It helps ensure that
redo logs are archived from all enabled threads so that media recovery in
a standby database environment does not fall behind.
When encountering archiver busy wait or checkpoint busy wait warnings in the
alert file, one should:
evaluate the number and size of the online redo logs
Most often, increasing the size and the number of online redo log
groups will give archiver more time to catch up.
Adding more online logs does not help a situation where the archiver
cannot keep up with LGWR. It can help if there are bursts of redo
generation since it gives ARCH more time to average its processing
rate over time.
evaluate checkpoint interval and frequency
Possible actions include adding DBWR processes, increasing
db_block_checkpoint_batch, or reducing db_block_buffers.
Turning on or allowing async IO capabilities definitely helps
alleviate most DBWR inefficiencies.
consider adding multiple archiver processes
Creating 'alter system archive log all' scripts that spawn archive
processes at some fixed interval may be required. Once spawned, these
processes will assist archiver in archiving any unarchived logs in
that thread of redo. Once archiving is complete, the temporary
processes go away.
tune archiver process
change log_archive_buffer_size (max 128 in some ports)
change log_archive_buffers (max 8 in some ports)
On many platforms, a patch needs to be applied to increase
these values. In some ports, 7.3.2 fixes this limitation.
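As an illustration, the two archiver parameters might appear in the init.ora
as follows. The values are examples only, not recommendations; the
port-specific maximums mentioned above still apply.

```
# init.ora (example values only)
log_archive_buffers = 4         # number of archiver IO buffers (max 8 on some ports)
log_archive_buffer_size = 64    # buffer size in log blocks (max 128 on some ports)
```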
check operating system support for async IO
Async reads should help tremendously.
Async writes may help if OS supports aio on file systems.
Please check with your vendor if the current version of
your operating system supports async IO to file systems (ufs).
Check for system or IO contention.
Check queue lengths, CPU waits and usage, disk/channel/controller
level bottlenecks. Please check operating system manuals for the
appropriate commands to monitor system performance. For example,
some UNIX ports can use "sar -d" and "iostat" to identify disk bottlenecks.
It is common for environments with intensive batch processing to see ARCH
fall behind LGWR. In those cases, you should review the above suggestions.
In many cases, increasing the number and/or size of log groups as well as
spawning extra archive processes is sufficient.
There are three primary goals:
* Ensure that all online redo logs are archived and backed up
* Prevent any archiver busy waits.
* Keep all archives on disk from the last database backup to reduce recovery
  time.
Ensuring that all online redo logs are archived and backed up
To accomplish this first goal, one needs to monitor the database, archiver
progress (by looking at V$LOG and archive trace files), archive destination,
and tape management procedure. You should never back up an archive log until
the ARCHIVED column in V$LOG is set to YES. Scripts can be written that log into the
database and query V$LOG to build a set of archives to write out to tape.
The tape management procedure should have checksums to ensure that the
archive file was successfully backed up. Error checking and a good reporting
tool are essential in detecting and resolving archiving and tape backup/restore
errors. In 7.3, Oracle provides a checksumming mechanism when copying blocks
from the online redo log to the archive files. This new init.ora parameter
is called log_block_checksum.
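A backup-set builder of the kind just described might look as follows in
outline. This is a hedged sketch, not a supported Oracle script: in practice
the rows would come from querying V$LOG (e.g. through SQL*Plus), and the
archive naming scheme and destination path shown here are assumptions.

```python
# Sketch of a backup-set builder: given rows shaped like V$LOG output
# (sequence number plus ARCHIVED flag), emit only the archives that are
# safe to write to tape. The 'arch_<seq>.log' naming and the destination
# path are illustrative assumptions, not Oracle's actual format.

def backup_set(v_log_rows, dest="/arch/dest1"):
    """Return archive file paths for logs whose ARCHIVED column is YES.
    A log still being written (ARCHIVED = NO) is never included."""
    safe = []
    for row in v_log_rows:
        if row["ARCHIVED"] == "YES":
            safe.append("%s/arch_%d.log" % (dest, row["SEQUENCE#"]))
    return safe

rows = [
    {"SEQUENCE#": 101, "ARCHIVED": "YES"},
    {"SEQUENCE#": 102, "ARCHIVED": "YES"},
    {"SEQUENCE#": 103, "ARCHIVED": "NO"},   # current log: skip
]
assert backup_set(rows) == ["/arch/dest1/arch_101.log",
                            "/arch/dest1/arch_102.log"]
```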
Adding multiple members to a logfile group is also advisable. If there
are multiple members, all members in a group are used to perform the
archive process. Assuming that there are three members, the first chunk
is read from one member and written to the archive while a second chunk is
read from the second member, then a third chunk from the third member, then
back to the first, and the process continues in round robin fashion.
If a corruption is found in one of the members, that member is validated
(read again), and if the corruption still exists, reading from that member
is stopped and the rest are used.
This is one of the big benefits of using multiple members instead of
mirroring at the o/s level. Oracle knows about the multiple members so it can
optimize archiving, but it does not know about the mirrors. One other
benefit of using multiple members is that a separate write is issued to
each member, so the odds of having a totally corrupted redo log are
diminished (corruption written to one mirror will usually propagate to
all other copies).
All archives from all enabled threads need to be backed up. If you have
an idle instance, it will still create archive header files that are essential
for media recovery. This applies to Oracle Parallel Server only.
Preventing any archiver busy waits
To prevent archiver busy waits, the archiver should be tuned by adjusting
log_archive_buffers and log_archive_buffer_size. The tuning tips described
earlier in the paper should be followed.
Keeping all archives on disk from the last database backup
Keeping all archives on disk from last database backup will reduce recovery
time by bypassing the time required to restore the archives from tape.
This may reduce MTTR (Mean Time to Recover) dramatically.
You may be able to achieve this by creating several archive destinations
or by having one large archive destination. For example, let's assume you
have several archive destinations. Archiver is writing to archive DEST1. When
DEST1 fills up to a certain threshold, say 80% (enough room for two more
archive files), you can switch the archive destination by issuing the
command alter system archive log to 'DEST2'. Archiver will then archive
the NEXT log in the new destination.
Tape backups can occur in DEST1 while archiver is writing to DEST2. This
reduces IO contention on those disks. Furthermore, depending on the size of
the destination, you can optimally keep a large number of archives on disk.
Before switching back to DEST1, we can purge the archives in DEST1 that
have been successfully backed up to tape.
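The threshold logic described above (switch at roughly 80% full, keeping room
for a couple more archive files) can be sketched as follows. The function
name, sizes, and thresholds are illustrative assumptions; a real script would
read disk usage with an OS tool such as df before deciding to switch.

```python
# Sketch of the destination-switch check: switch when the current archive
# destination is past a usage threshold, or when fewer than a couple of
# archive files would still fit. All units here are MB for illustration.

def should_switch(dest_capacity_mb, used_mb, archive_size_mb,
                  threshold=0.80, headroom_files=2):
    """Return True when usage exceeds the threshold or when fewer than
    'headroom_files' archives would still fit in the destination."""
    over_threshold = used_mb >= threshold * dest_capacity_mb
    room_left = (dest_capacity_mb - used_mb) // archive_size_mb
    return over_threshold or room_left < headroom_files

# usage: 1000 MB destination, 50 MB archive files
assert should_switch(1000, 850, 50) is True    # 85% used: switch to DEST2
assert should_switch(1000, 500, 50) is False   # plenty of room: stay
```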
Some sites have one very large archive destination instead of several archive
destinations. Again, scripts are created to monitor and to log in to the
database to determine which Oracle archives to back up to tape. These archives
are backed up as soon as possible. A purging algorithm is used to purge
only those files that have been successfully backed up to tape and whose
timestamp is older than the beginning of the last successful Oracle
backup. Unfortunately, there may be some additional disk contention with this
plan due to the IO concurrency from the archive process(es) and tape backups.
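The purge rule just described can be expressed as a small filter. This is a
sketch under assumed field names; a production script would derive the
backed-up flag from the tape management system's checksummed backup logs.

```python
# Sketch of the purge rule: an archive may be deleted only when it has been
# backed up to tape AND its timestamp is older than the start of the last
# successful database backup. The dict field names are assumptions.

def purgeable(archives, last_backup_start):
    """archives: list of dicts with 'name', 'backed_up' (bool), and
    'mtime' (any comparable timestamp). Returns names safe to purge."""
    return [a["name"] for a in archives
            if a["backed_up"] and a["mtime"] < last_backup_start]

archives = [
    {"name": "arch_90.log", "backed_up": True,  "mtime": 100},
    {"name": "arch_91.log", "backed_up": True,  "mtime": 250},  # newer than backup: keep
    {"name": "arch_92.log", "backed_up": False, "mtime": 50},   # not on tape: keep
]
assert purgeable(archives, last_backup_start=200) == ["arch_90.log"]
```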
Other best practices include monitoring the statuses of the log files to check
for STALE or INVALID logs. If the logs remain STALE, then you should
investigate any possible media problems and relocate or recreate members to
maintain the level of resiliency for the logs. STALE logs imply that there may
be incomplete writes in the log. Oracle considers incomplete logs as STALE;
so, you may get STALE logs after a shutdown abort or if the LGWR process
simply cannot write to that redo log member. Archiver can easily detect
missing changes in a redo log member by verifying the correctness of each
redo log block. If the archiver detects a problem, it will switch to another
member searching for a sound set of redo log blocks. The archiver will only
complain if it cannot create a "good" archive file from the composite
of all the online redo log members.
If archiver falls behind often, then one can spawn extra archiver processes.
We recommend monitoring V$LOG in order to alert and spawn extra archiver
processes whenever there are more than 2 logs that need archiving.
There is an enhancement, bug 260126, to allow for the ability to have
several archiver processes.
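The V$LOG monitoring rule above (spawn help when more than 2 logs need
archiving) reduces to a simple check. The row shape is an assumption; a real
script would query V$LOG and then run 'alter system archive log all' when
the check fires.

```python
# Sketch of the V$LOG check: count logs not yet archived and decide whether
# to spawn a helper archive process. The threshold of 2 follows the
# recommendation in the text.

def needs_extra_archiver(v_log_rows, threshold=2):
    """Return True when more than 'threshold' logs are waiting to be
    archived (ARCHIVED = 'NO')."""
    pending = sum(1 for row in v_log_rows if row["ARCHIVED"] == "NO")
    return pending > threshold

rows = [{"ARCHIVED": s} for s in ("NO", "NO", "NO", "YES")]
assert needs_extra_archiver(rows) is True   # 3 unarchived logs: spawn helper
```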
Monitoring the alert.log for archiver or checkpoint errors, the archiver and
log writer background trace files for errors, and the archive destination for
free space is essential in catching most potential archiving related problems.
Archiving is an important component in safeguarding database changes and in
recovery. The archive files should be well protected and backed up to
offsite media. However, the archiving procedure, when done correctly, should
not incur any performance degradation to the database. With the above
recommendations and best practices, we hope customers can configure a high
availability backup/recovery solution involving archive files as well as
maintain the application's performance service levels.
REFERENCES:
Cary Millsap's VLDB best practice paper
bug 282224, bug 238308, bug 260126, bug 189963