Master Note for Streams Performance Recommendations [ID 335516.1]
In this Document
Purpose
Scope and Application
Master Note for Streams Performance Recommendations
General Operational Considerations
Implications of Tables with no Primary Key Column(s)
General Recommendations
Logminer Related Configuration
Capture Related Recommendations
Propagation Recommendations
Apply Recommendations
Additional Apply Performance Tips
References
Applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.2 to 11.2.0.2 - Release: 9.2 to 11.2
Information in this document applies to any platform.
Purpose
Oracle Streams enables the sharing of data and events in a data stream, either within a database or from one database to another. This note discusses recommendations to improve the performance of Streams.
Scope and Application
The information in this note targets Oracle Support analysts and replication administrators looking to improve the performance of Streams replication. It contains key recommendations for successful implementation of Streams in Oracle database release 9.2 and above.
Customers should be on the latest patchset and should apply the relevant patches as detailed in:
- Document 437838.1 Streams Specific Patches. The patches detailed there address most of the commonly encountered performance and stability issues connected with Streams.
A complete list of 10.2 recommendations is detailed in:
- Document 418755.1 10gR2 Streams Recommended Configuration (it is anticipated that various related notes will be merged at some point).
If there are performance issues, the following notes help gather relevant information and isolate the cause:
- Document 746247.1 Troubleshooting Streams Capture when status is Paused For Flow Control
- Document 730036.1 Overview for Troubleshooting Streams Performance Issues
Master Note for Streams Performance Recommendations
General Operational Considerations
The following main areas of operation can be affected by the configuration details which follow.
Streams uses buffered queues with the aim that all LCRs are processed in memory. Spill of information from memory to disk can affect performance and can occur in three functional areas; these spill activities are termed:
- Logminer spill (source capture)
- Queue spill (capture and apply)
- Apply Spill (apply).
All Oracle versions have Queue spill. Queue spill is typically associated with LCRs not being processed in a timely fashion from the buffered queue; rather than allowing old LCRs to remain in memory for an extended period of time, they are spilled to disk, to the table aq$_<queue_table_name>_p.
Queue spill may also be associated with memory (Streams pool) space; if there is not enough memory to accommodate LCRs, they are again spilled to disk. Spill of this nature may have a variety of causes, so it is worth considering the parameter settings below.
Oracle versions >= 10gR2 additionally introduce Apply spill. Apply spill is primarily connected with the processing of large transactions, although it also takes account of long-running transactions. It is worth noting that Apply spill can operate on Queue spill.
The Capture side leaves the management of these two types of transaction to the Apply side. These transactions are written to a partitioned table at the Apply side database; the partitioned table is far easier to manipulate than the queue spill table. The apply spill threshold can be configured.
Logminer spill writes cached logminer information out to logminer spill tables. It may be necessary to reconfigure the amount of space for cached logminer information, although the default allocation is sufficient for most applications. Where there is Queue spill, there may be a need to periodically deallocate unused space or shrink the space usage of the related aq$_<queue_table_name>_p table.
Streams Capture has to checkpoint as it goes along in order to ensure that it can restart from a point in the past without reprocessing too many redo logs. Care should be taken to understand when checkpoints will take place as well as how long checkpoint information is retained. Checkpoint information is stored in a single unpartitioned table, so consideration should be given to the related parameters below, especially where there are multiple Capture processes. Checkpoint frequency and purge interval can be configured. It is inefficient to checkpoint too frequently; similarly, it is inadvisable to hold on to checkpoint information too long. Both can affect performance.
The Capture process in 10gR1 and above uses an inbuilt flow control mechanism. Flow control aims to ensure that the source of LCRs (the Capture) does not flood the buffered queue of the Apply process before the Apply process has had a chance to apply the changes. In certain cases it may be relevant to amend the flow control related parameters.
Implications of Tables with no Primary Key Column(s)
This is connected with Apply processing. Streams is suited to environments where all tables have primary keys. The Apply process should operate as quickly as possible when performing update and delete operations, and a primary key allows the row concerned to be directly referenced for these operation types. It is worth checking the section 'Substitute Key Columns' in the Oracle Streams Replication Administrator's Guide, which outlines what is required where a table does not have a primary key.
Whilst the documentation mentions that it is possible to have nulls in the substitute key, in practice an index allows multiple null values in the indexed column, so direct access to key data may not be possible. Bear this in mind: the aim is to have the substitute key columns effectively behave like a primary key (unique and not null data values). A sketch of configuring substitute key columns follows.
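Assuming hypothetical schema, table and column names, substitute key columns can be declared at the Apply side database with DBMS_APPLY_ADM.SET_KEY_COLUMNS:

begin
  -- hr.employees_nopk has no primary key; nominate a unique,
  -- NOT NULL column combination as the substitute key.
  dbms_apply_adm.set_key_columns(
    object_name => 'hr.employees_nopk',
    column_list => 'employee_id,hire_date');
end;
/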
General Recommendations
- Streams Pool Allocation
- 11g: set STREAMS_POOL_SIZE to be greater than or equal to 200 MB.
- 10gR2: set SGA_TARGET > 0 to enable autotuning of the Streams pool, and set STREAMS_POOL_SIZE to be greater than or equal to 200 MB; this sets the lower bound.
- 10gR1: use the STREAMS_POOL_SIZE init.ora parameter to configure the Streams memory allocation. Set STREAMS_POOL_SIZE to be greater than or equal to 200 MB.
- 9.2: increase the SHARED_POOL size to minimize spill-over from the buffered queue to disk.
As well as addressing how memory is allocated for Streams, in version 9.2 the Streams/logminer dictionary tables should be located outside the SYSTEM tablespace as follows:
create tablespace &tspace_name
datafile '&db_file_directory/&db_file_name' size 25m reuse autoextend on maxsize unlimited;
execute dbms_logmnr_d.set_tablespace('&tspace_name');
Note: in 10gR1 and above, the Streams data dictionary tables are created in the SYSAUX tablespace by default; there is no need to move them. STREAMS_POOL_SIZE should still be set even when using ASMM (200 MB to start with, for instance).
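A sketch of the 10gR2/11g settings (the 200 MB figure is the starting point recommended above, not a tuned value):

-- Set a 200 MB lower bound for the Streams pool; with ASMM
-- (SGA_TARGET > 0) the pool can still grow beyond this value.
alter system set streams_pool_size = 200M scope=both;
show parameter streams_pool_size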
- Initialization file hidden parameters
10gR2
_job_queue_interval=1
_spin_count=5000
10gR1
_job_queue_interval=1
_spin_count=5000
9.2
_first_spare_parameter=50
_kghdsidx_count=1
_job_queue_interval=1
_spin_count=5000
Note: in 9.2.0.8, _first_spare_parameter configures the percentage of the shared pool to be used for the Streams pool. A _kghdsidx_count setting of 1 avoids the use of subpools within the shared pool; in 9.2, multiple subpools have been associated with performance issues.
- Queue Monitor / AQ_TM_PROCESSES
From 10gR1 onwards, remove the AQ_TM_PROCESSES parameter from your database initialization parameter file. This allows the queue management background activity to be managed automatically.
Note: you should not set the AQ_TM_PROCESSES parameter explicitly to 0. Doing so effectively breaks or disables background queue management activity and cleanout of processed messages, and breaks AQ activity relating to message delay, expiration and retention.
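A sketch of removing an explicit setting so that queue management reverts to automatic control (spfile-based instance assumed):

-- Remove any explicit AQ_TM_PROCESSES value rather than setting it to 0;
-- a restart is required for the reset to take effect.
alter system reset aq_tm_processes scope=spfile sid='*';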
- Queue to Queue Propagation
When upgrading to Oracle Database Release 10.2 and above, from releases 9.2 or 10.1, drop any propagations and recreate them specifying the queue_to_queue parameter as TRUE. This is especially relevant where a RAC destination is in place as this allows propagation to fail over to an existing instance which now owns the Streams buffered queue.
- Multiple Separate Queues for Capture and Apply
In all Streams versions up to and including 11g, where bi-directional replication is in place, configure separate Capture and Apply queues to minimize the effect of Queue spill to disk. Do not have both processes sharing the same queue unless Capture and Apply reside in the same database and there is no propagation taking place. Both processes will see LCRs for both activities; if the Capture queue has spilled (Queue spill), this can affect Apply operation and slow it down.
- Avoid Complex Rules
Make sure that rules allow fast evaluation. Fast rule evaluation can be used on simple rules with object_owner, object_name and source_database_name in simple equality statements.
In 9.2, it was necessary to avoid the 'LIKE', 'NOT' and != operators as much as possible, as these operators disallowed rule evaluation optimizations. It is better (i.e., faster to evaluate) to have multiple simple rules than one complex rule.
In 10g and above, the same can be achieved by using a negative rule set to eliminate tables or objects from Streams processing.
Examples of simple rules:
(((:dml.get_object_owner() = 'FSADBO1' and
:dml.get_object_name() = 'STREAMS_TEST')) and
:dml.is_null_tag() = 'Y' and
:dml.get_source_database_name() = 'FSABCD.WORLD')
(:dml.get_object_owner() = 'POSIT' and :dml.is_null_tag() = 'Y'
and :dml.get_object_name() IN
('TB_M21_1','TB_M21_2','TB_M21_3','TB_M21_40','TB_M21_10','PN_M211B010'))
Example of complex rules:
(((:dml.get_object_owner() = 'FSADBO1' and
:dml.get_object_name() != 'STREAMS_TEST')) and
:dml.is_null_tag() = 'Y' and
:dml.get_source_database_name() = 'FSABCD.WORLD')
(:dml.get_object_owner() = 'POSIT' and :dml.is_null_tag() = 'Y'
and :dml.get_object_name() NOT IN
('TB_M21_1','TB_M21_2','TB_M21_3','TB_M21_40','TB_M21_10','PN_M211B010'))
Note: complex rules result in a SQL statement being executed with each evaluation of the rule.
Inspect the Streams Healthcheck (Document 273674.1), or use the following queries, to determine whether complex rules are involved.
-- Capture
select capture_name, owner, name from gv$rule_set r, dba_capture c
where c.rule_set_owner = r.owner and c.rule_set_name = r.name
and r.sql_executions > 0;
-- Propagation
select propagation_name, owner, name from gv$rule_set r, dba_propagation p
where p.rule_set_owner = r.owner and p.rule_set_name = r.name
and r.sql_executions > 0;
-- Apply
select apply_name, owner, name from gv$rule_set r, dba_apply a
where a.rule_set_owner = r.owner and a.rule_set_name = r.name
and r.sql_executions > 0;
- Implement a "heart beat" table
It is useful to do this for the following reasons:
- To ensure that Capture checkpointing is attempted regularly and the DBA_CAPTURE view is maintained. On systems with a lot of redo activity but little Capture activity, this ensures that a Capture process will not have to restart from an old logfile, with the undesirable consequence of reprocessing significant redo;
- To provide a simple means of understanding how up to date the changes are on your Apply side database. The heartbeat table should reflect the date and time of the update from the source, and therefore reflects the latency of the changes applied. A sketch follows.
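A minimal sketch, assuming a 10g-style scheduler job and that the heartbeat table is covered by existing Streams capture rules (all names are hypothetical):

-- At the source database: a one-row table updated once a minute.
create table strmadmin.heartbeat (site varchar2(30) primary key, ts date);
insert into strmadmin.heartbeat values ('SOURCE_DB', sysdate);
commit;

begin
  dbms_scheduler.create_job(
    job_name        => 'STRMADMIN.HEARTBEAT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'update strmadmin.heartbeat set ts = sysdate; commit;',
    repeat_interval => 'FREQ=MINUTELY',
    enabled         => TRUE);
end;
/

-- At the Apply side database: the gap between SYSDATE and the applied
-- timestamp indicates the current replication latency.
select site, ts, round((sysdate - ts) * 86400) latency_seconds
from strmadmin.heartbeat;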
- 9.2: Implement Flow Control
Manual flow control scripts should be used on Oracle 9.2 Streams source databases (i.e., databases running 9.2 Streams capture processes). Refer to Document 259609.1.
Note: from 10g onwards, flow control is automatically enabled.
- Supplemental Logging
Database-wide supplemental logging imposes a significant overhead and may affect performance; it should therefore be avoided.
Logminer Related Configuration
- _SGA_SIZE
Generally, this parameter should not be modified. The only occasions where it is valid to change the value of _SGA_SIZE for the Capture/logminer session are:
- ORA-1341 is observed; or
- Where there is logminer spill. Logminer spill is output as an advisory warning in the Streams Healthcheck report under the section 'Performance Checks'.
exec dbms_capture_adm.set_parameter('<capture_name>','_SGA_SIZE','20');
Note: the default of 10 MB is typically sufficient. Double the parameter setting until the observed issue is removed.
The majority of Capture/logminer memory issues (ORA-01280) can be resolved by increasing _SGA_SIZE to 80 or even 150 MB, which could be set as follows:
exec dbms_capture_adm.set_parameter('<capture_name>','_SGA_SIZE','80');
Capture Related Recommendations
- Capture Parallelism
Parallelism can be set on Capture; however, it is recommended that Capture is not parallelised. There is little or no benefit to be gained, so parallelism should be configured as follows:
- 11g: parallelism=1 is the recommended setting and is the default;
- 10g: parallelism=1 is the recommended setting;
- 9.2: parallelism=3 is the recommended setting
Example:
exec dbms_capture_adm.set_parameter('<capture_name>','parallelism','1');
Note: ensure that the PROCESSES initialization parameter is set appropriately when you set the parallelism capture process parameter.
- Streams Checkpoints / _CHECKPOINT_FREQUENCY
There are considerable implications surrounding checkpointing. The significant things to consider in relation to checkpointing are:
1. The frequency of checkpointing;
2. The period for which Streams should retain checkpoint information.
Checkpointing is performed by Capture (the builder process) and is connected with the number of megabytes of redo mined before a logminer checkpoint is taken.
Notes:
- Checkpoints may be requested but may not complete, for a number of valid reasons.
- Check dba_capture.required_checkpoint_scn to ensure that checkpoints are occurring and, as a consequence, that Capture does not have to restart from an old redo log.
- Checkpoints can accumulate a large amount of space in system.logmnr_restart_ckpt$. Also consider the number of sessions active on the database as well as the number of Capture processes, since both have a direct effect on the number of rows, and therefore the size, of this table.
As Oracle versions have developed over time, the rules relating to the validity of a checkpoint have been relaxed; as a result, checkpoints need not be generated as frequently as in earlier versions, since in later versions they are more likely to complete.
The frequency with which a checkpoint is requested can be adjusted using:
exec dbms_capture_adm.set_parameter('<capture_name>','_checkpoint_frequency','1000');
The above will adjust Streams checkpoints to occur after every 1000 MB of redo mined. Checkpoints record metadata and as a consequence generate redo; a higher than expected amount of redo generated could be a consequence of checkpointing occurring too frequently. Similarly, too few checkpoints will require logs to be retained on disk for longer than expected, since dba_capture.required_checkpoint_scn does not move forward as readily as it should.
The recommended and default settings for _CHECKPOINT_FREQUENCY are as follows:
Version     Recommended setting    Default setting    Modify
>= 10gR2    1000                   1000               n/a
<= 10gR1    100                    10                 Yes
Note: this value should be changed from the default setting for database versions < 10gR2.
In older versions it may have been necessary to force a checkpoint to occur at a periodic interval, as follows:
execute dbms_capture_adm.set_parameter('<capture_name>','_CHECKPOINT_FORCE','Y');
- Purging Streams Checkpoints
Eliminate unnecessary Streams/logminer checkpointing metadata:
- 10.1: alter the first_scn periodically (weekly or daily) to remove unneeded metadata for Streams Capture;
- 10.2: alter the Capture parameter CHECKPOINT_RETENTION_TIME from the default retention of 60 days to a realistic value for your database.
Note: the default retention period is typically too long a period to retain checkpoint information.
A typical setting might be to retain 7 days' worth of checkpoint metadata:
exec dbms_capture_adm.alter_capture(capture_name=>'your_capture', checkpoint_retention_time=>7);
- 11g: set the CHECKPOINT_RETENTION_TIME parameter to an appropriate value for your environment. Again, 7 days would appear to be a reasonable period of time.
Propagation Recommendations
- Propagation Job Interval
To reduce the latency between jobs, set the hidden parameter _job_queue_interval to 1. This should be done as an init.ora or spfile parameter; because it is a hidden parameter, the database must be restarted for the value to take effect. The default for _job_queue_interval is 5 seconds. A sketch follows.
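As a sketch (spfile-based instance assumed):

-- Hidden parameters must be quoted; a restart is required
-- for the new value to take effect.
alter system set "_job_queue_interval" = 1 scope=spfile;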
- Remove Unnecessary Propagation Rules
If you are configuring a propagation that takes ALL changes from the source queue to the destination queue (i.e., no selectivity requirements), you can remove the rule set from the propagation definition. This eliminates the need to do any rule evaluation and results in higher propagation throughput, as sketched below.
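A minimal sketch using DBMS_PROPAGATION_ADM.ALTER_PROPAGATION (the propagation name is hypothetical):

begin
  dbms_propagation_adm.alter_propagation(
    propagation_name => 'my_propagation',
    remove_rule_set  => TRUE);  -- propagate everything in the source queue
end;
/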
- Propagation Latency
Propagation latency is the maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued. The default propagation latency value is 60. Reduce the latency of the propagation schedule to 1 by issuing the following:
exec dbms_aqadm.alter_propagation_schedule(queue_name, destination, latency=>1);
- Queue to Queue Propagation or Queue-to-Database Link
Propagations configured prior to Oracle Database 10g Release 2 are queue-to-dblink propagations. Also, any propagation that includes a queue in a database prior to Oracle Database 10g Release 2 must be a queue-to-dblink propagation. When queue-to-dblink propagations are used, propagation will not succeed if the database link no longer connects to the owning instance of the destination queue.
When upgrading to Oracle Database Release 10.2 and above from releases 9.2 or 10.1, drop any propagations and recreate them specifying the queue_to_queue parameter as TRUE. This is especially relevant where a RAC destination is in place, as it allows propagation to fail over to the instance which now owns the Streams buffered queue.
In 11g, use queue-to-queue propagations whenever possible. A queue-to-queue propagation always has its own exclusive propagation job to propagate messages from the source queue to the destination queue. Because each propagation job has its own propagation schedule, the propagation schedule of each queue-to-queue propagation can be managed separately. A sketch of recreating a propagation follows.
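A minimal sketch (queue, propagation and database link names are hypothetical):

begin
  dbms_propagation_adm.create_propagation(
    propagation_name   => 'src_to_dest_prop',
    source_queue       => 'strmadmin.src_queue',
    destination_queue  => 'strmadmin.dest_queue',
    destination_dblink => 'dest_db.example.com',
    queue_to_queue     => TRUE);  -- dedicated job; fails over on RAC
end;
/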
- TCP Related Parameter Settings
Increase the SDU in a wide area network for better network performance. In addition, the SEND_BUF_SIZE and RECV_BUF_SIZE parameters in the listener.ora and tnsnames.ora files increase the performance of propagation on your system. These parameters increase the size of the buffer used to send or receive the propagated messages; they should only be increased after careful analysis of their overall impact on system performance. For more details, refer to Document 780733.1. A sketch follows.
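A sketch of the corresponding tnsnames.ora entry (host, service name and buffer sizes are illustrative; see Document 780733.1 for how to size them):

DEST_DB =
  (DESCRIPTION =
    (SDU = 32767)
    (SEND_BUF_SIZE = 1048576)
    (RECV_BUF_SIZE = 1048576)
    (ADDRESS = (PROTOCOL = TCP)(HOST = desthost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = dest_db))
  )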
- Queue Spill Related Space Management
Versions >= 10gR2:
To improve the performance of Streams after significant Queue spill activity, perform the following:
alter table aq$_<queue_table_name>_p enable row movement;
alter table aq$_<queue_table_name>_p shrink space;
For more explanation of the _p tables, which store the messages that spill from memory, see Document 242664.1.
These commands can be issued against queue tables, spilled tables, and IOTs in versions 10gR2 and above.
Versions <= 10gR1:
To improve the performance of Streams after significant Queue spill activity, perform the following:
alter table aq$_<queue_table_name>_p deallocate unused;
The above can be performed while Streams is running as long as there is no active spilling or dequeuing of messages from the spillover table. It is highly recommended that this activity be performed in a maintenance window (i.e., no Streams enqueue/dequeue activity). Do not use the above statement on the Index Organized Tables (IOTs) for the queue table.
To reclaim space from IOTs, do the following in a maintenance window (i.e., when Streams is not actively running):
- export the queue table;
- drop the queue table;
- import the queue table.
Apply Recommendations
It is worth mentioning Apply process configuration in relation to parallelism, as parallelism can improve performance. If parallelism is specified, this has an effect on the derived parameters _TXN_BUFFER_SIZE and _HASH_TABLE_SIZE.
The Apply coordinator fetches transactions from the Apply reader and hands them off to the Apply servers. The coordinator is able to prefetch transactions from the reader before they are allocated to servers; the number of transactions which can be prefetched corresponds to the _TXN_BUFFER_SIZE parameter value (i.e., it effectively specifies a number of transactions, not a size). If transactions are very large, prefetching them from the reader process can put pressure on memory usage and therefore impact performance. As outlined below, this is something to consider and profile if large transactions are likely in the environment. Increasing Apply parallelism increases the number of transactions which can be prefetched.
_HASH_TABLE_SIZE is used for dependency tracking. Again, as parallelism is increased, this value is correspondingly increased.
Derived values for the above are as follows:
Parameter           Version                                         Value
_TXN_BUFFER_SIZE    10.1, 10.2                                      Derived: 80 * parallelism; minimum value 80
_TXN_BUFFER_SIZE    11.1.0.6                                        80 * parallelism; minimum value 320
_TXN_BUFFER_SIZE    > 11.1.0.6                                      Auto tuned
_HASH_TABLE_SIZE    10.1, 10.2                                      Derived: 8000 * parallelism
_HASH_TABLE_SIZE    10.2.0.4 (unpublished bug 5720734), 11.1.0.6    1000000
_HASH_TABLE_SIZE    > 11.1.0.6                                      1000000
Note: in 11.1.0.7 specifically, the auto-tuned value for _TXN_BUFFER_SIZE may result in reduced Apply throughput; in these cases, set this parameter manually, starting with a minimum value of 320. The issue has been fixed in 11.2.
Additional Apply Performance Tips
Further useful pointers follow:
- Controlling Apply Spill
The number of LCRs in a transaction which triggers Apply spill can be configured. The default is 100000. This can only be amended in 10gR2 and above. Refer to Document 365648.1 for more details; a sketch follows.
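As a sketch, assuming the threshold is adjusted through the TXN_LCR_SPILL_THRESHOLD apply parameter described in Document 365648.1 (the value shown is illustrative):

-- Spill transactions to disk once they exceed 50000 LCRs.
exec dbms_apply_adm.set_parameter('<apply_name>','txn_lcr_spill_threshold','50000');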
- Handling Very Large Transactions
Set the hidden apply parameter _TXN_BUFFER_SIZE ONLY for large transactions.
For 10g, where parallelism > 1, to prevent spillover from occurring, reduce the hidden apply parameter _TXN_BUFFER_SIZE to 10. If the transactions are very large (i.e., have many row changes within a single transaction [100000s]) and parallelism > 1, consider reducing the transaction buffer size even lower (for example, _TXN_BUFFER_SIZE=2).
Note: if your environment supports only small transactions, ignore this parameter.
In 11g, the parameter _TXN_BUFFER_SIZE is auto-tuned, so it should not be set. If upgrading from 10g to 11g, _TXN_BUFFER_SIZE should be unset:
exec dbms_apply_adm.set_parameter('<apply_name>','_TXN_BUFFER_SIZE',null);
- Recommended Parameters
For 10.2 and above, set the following Apply parameters:
exec dbms_apply_adm.set_parameter('<apply name>','parallelism','4')
exec dbms_apply_adm.set_parameter('<apply name>','_dynamic_stmts','Y')
exec dbms_apply_adm.set_parameter('<apply name>','_hash_table_size','1000000')
exec dbms_apply_adm.set_parameter('<apply name>','disable_on_error','N')
Note: the default value of bufsize is as follows: bufsize = min(320, 80*parallelism).
If the size of each transaction is typically > 100000 LCRs (i.e., there are over 100,000 row changes per transaction for most transactions), then set bufsize = 10 + parallelism:
exec dbms_apply_adm.set_parameter('<apply name>','_txn_buffer_size',bufsize)
- Apply Parallelism
Increase the parallelism of the Apply process at the destination to match the concurrency of the source database activity:
- If you set parallelism > 1 on the Apply process, be sure to configure the appropriate constraints at the destination site (as well as supplemental logging at the source site) so that dependency computations can be done.
- For releases lower than 10.1.0.5, be sure to specify UNCONDITIONAL supplemental logging at the source database for ANY indexed columns at the destination database that participate in Streams.
- Test with parallelism values based on available CPU. Typical settings are 4, 8, 16 or 24; 16 and 24 are normally used for high transaction loading on high-end machines. Values between 1 and 4 are typically used on low-end machines, and mid-range systems typically configure 4 or 8 apply servers.
Note: modify the INITRANS and PCTFREE values of the SYS.STREAMS$_APPLY_PROGRESS table. The INITRANS value should be >= the Apply process parallelism. Set PCTFREE to at least 10, as sketched below.
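A sketch of the corresponding DDL (INITRANS 16 assumes an Apply parallelism of 16 or less):

alter table sys.streams$_apply_progress initrans 16 pctfree 10;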
- _DYNAMIC_STMTS
For 10gR2 and above, with a workload consisting of many UPDATE transactions that modify less than half of the columns for any given row, consider using the hidden apply parameter _DYNAMIC_STMTS. This parameter setting may reduce redo generation as well as improve Apply performance. To set this parameter, issue:
exec dbms_apply_adm.set_parameter('<apply name>','_dynamic_stmts','y');
If more than half of the columns are updated in an LCR, this parameter is ignored for that particular update LCR.
- _HASH_TABLE_SIZE
The following is applicable to all releases. In a mixed (insert/update/delete) or heavy update workload that results in a large number of dependency waits (WAIT_DEPs) on replicated tables with many constraints, consider increasing the size of the dependency hash table with the hidden parameter _HASH_TABLE_SIZE. Set the value of this parameter to a large number, for example 1 million (1000000) or 10 million (10000000):
exec dbms_apply_adm.set_parameter('<apply name>','_hash_table_size','10000000');
- Are All Changes to be Applied?
If you are configuring an Apply process to take ALL changes that are put into the queue (i.e., no selectivity or subsetting requirements), you can remove the rule set from the Apply definition. This eliminates the need to do any rule evaluation and results in higher Apply throughput, as sketched below.
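A minimal sketch using DBMS_APPLY_ADM.ALTER_APPLY (the apply name is hypothetical):

begin
  dbms_apply_adm.alter_apply(
    apply_name      => 'my_apply',
    remove_rule_set => TRUE);  -- apply everything that arrives in the queue
end;
/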
- Constraints + Supplemental Logging
Increase the parallelism of the Apply process at the destination to match the concurrency of the source database. If you set parallelism > 1 on the Apply process, be sure to configure the appropriate constraints at the destination site (as well as supplemental logging at the source site) so that dependency computations can be done.
If adding additional columns at the target database, specify DEFAULT values for those additional columns, if possible, so that a DML_HANDLER is not required for INSERTs into the table. For example, if the table at the target site includes an additional column LAST_UPDATE (i.e., the LAST_UPDATE column does not exist at the source database, only at the target database) which is the date/timestamp of the most recent update, then specify a DEFAULT value of SYSDATE for the LAST_UPDATE column. This eliminates the need to execute a DML handler for any insert into the table. Of course, UPDATEs will still require that the DML handler be invoked.
For complete details refer to the Oracle Streams Replication Administrator's Guide.
- Conflict Detection
If the target database is used in a read-only mode (i.e., no DMLs are performed on the replicated tables), consider disabling conflict detection for non-key columns on replicated tables with the DBMS_APPLY_ADM.COMPARE_OLD_VALUES procedure, as sketched below.
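A minimal sketch (the table name is hypothetical; compare => FALSE stops old-value comparison for the listed columns):

begin
  dbms_apply_adm.compare_old_values(
    object_name => 'hr.employees',
    column_list => '*',        -- all non-key columns
    operation   => 'UPDATE',
    compare     => FALSE);
end;
/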
For heavy DELETE workloads, use the DBMS_STREAMS_ADM.DELETE_COLUMN procedure for the APPLY table rule to remove the non-key columns from the DELETE LCRs, as sketched below.
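A minimal sketch (rule, table and column names are hypothetical):

begin
  dbms_streams_adm.delete_column(
    rule_name   => 'strmadmin.employees_dml_rule',
    table_name  => 'hr.employees',
    column_name => 'notes',    -- a non-key column
    value_type  => '*',        -- remove both old and new values
    step_number => 0,
    operation   => 'DELETE');
end;
/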
- Apply Parallelism and INITRANS
To minimize ITL contention issues, modify the INITRANS value for each replicated table and index so that INITRANS is equal to the Apply parallelism value or higher.
Note: INITRANS is not connected with ASSM (Automatic Segment Space Management). Ensure that INITRANS >= PARALLELISM of the Apply process for all objects that have LCRs applied to them in a Streams environment.
alter table <table name> initrans 16;
alter index <index name> rebuild initrans 16;
Note: INITRANS for LOB indexes is always 2. See Bug 8762509 INITRANS FOR A LOB INDEX IS IGNORED. This clause has been deprecated; if you specify it, Oracle ignores it. Oracle automatically generates an index for each LOB column and names and manages the LOB indexes internally.
Example:
CREATE TABLE test (a blob) initrans 4
LOB (a) STORE AS ( INDEX my_lob_index (initrans 4) );
select ini_trans from user_indexes where index_name = 'MY_LOB_INDEX';
INI_TRANS
----------
2
For replicated objects in manually managed tablespaces, PCTFREE should be at least 10:
alter table <table name> pctfree 20;
alter index <index name> rebuild pctfree 20;
- Streams and Large Transactions
If possible, decrease transaction sizes to less than 1000 LCRs. Large or long-running transactions will affect Streams: they may result in Queue spill or Apply spill. As outlined above, most of the areas which can cause issues relate to large and long-running transactions. Queue spill is more onerous than Apply spill.
- Foreign Key Constraints
Foreign key constraints impact Apply performance. If possible, remove FK constraints and any unnecessary indexes.
- Partition Tables
Consider partitioning the primary key index for the replicated tables to improve Apply processing, e.g.:
create table t1 (
my_id integer,
integer_0001 integer,
integer_0002 integer,
integer_0003 integer,
integer_0004 integer,
integer_0005 integer,
varchar_0001 varchar(4000),
varchar_0002 varchar(4000),
varchar_0003 varchar(4000),
varchar_0004 varchar(4000),
varchar_0005 varchar(4000),
CONSTRAINT t1_my_pk PRIMARY KEY (my_id)
using index (create index t1_my_pk on t1(my_id) local))
PARTITION BY HASH (my_id) PARTITIONS 5 ;
- 9.2, 10.1, 10.2: For Workloads that are INSERT ONLY with no Dependencies
Where there are no referential integrity constraints or other logical dependencies, configure the Apply process as follows (see the sketch after this list):
- _HASH_TABLE_SIZE: in an insert-only workload, eliminate the dependency calculation between transactions by setting the hidden apply parameter _HASH_TABLE_SIZE to 0;
- COMMIT_SERIALIZATION: in an insert-only workload, turn commit serialization off by setting COMMIT_SERIALIZATION to NONE;
- PARALLELISM: increase the parallelism apply parameter to allow more apply servers;
- To minimize ITL pressure issues, be sure to modify the INITRANS value for each replicated table so that INITRANS is equal to the apply parallelism value or higher. PCTFREE for replicated tables should be at least 10 (ALTER TABLE ... INITRANS 16 PCTFREE 10);
- Consider partitioning the primary key index for the replicated tables to improve Apply processing.
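A sketch of the corresponding settings (the apply name, table name and parallelism value are illustrative):

exec dbms_apply_adm.set_parameter('<apply name>','_hash_table_size','0')
exec dbms_apply_adm.set_parameter('<apply name>','commit_serialization','NONE')
exec dbms_apply_adm.set_parameter('<apply name>','parallelism','8')
alter table <table name> initrans 8 pctfree 10;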
References
NOTE:238455.1 - Streams DML Types Supported and Supported Datatypes
NOTE:259609.1 - Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk
NOTE:273674.1 - Streams Configuration Report and Health Check Script
NOTE:290605.1 - Oracle Streams STRMMON Monitoring Utility
NOTE:298877.1 - 10gR1 Streams Recommended Configuration
NOTE:313748.1 - Using Automatic Statistics Collection In A Streams Environment
NOTE:377152.1 - Best Practices for automatic statistics collection on Oracle 10g
NOTE:780733.1 - Streams Propagation Tuning with Network Parameters
http://www.oracle.com/technology/products/dataint/pdf/twp_streams_performance_11gr1.pdf
http://www.oracle.com/technology/deploy/availability/pdf/maa_wp_10gr2_streams_performance.pdf
Streams Concepts and Administration Guide
Streams Replication Administrator's Guide