Master Note for Streams Recommended Configuration [ID 418755.1]

Modified: 13-DEC-2010   Type: BULLETIN   Status: PUBLISHED
In this Document
Purpose
Scope and Application
Master Note for Streams Recommended Configuration
Configuration
1.0 Software Version
2.0 Database Parameters
Database Version 9iR2
Database Version 10gR2
3.0 Database Storage
3.1. Tablespace for Streams Administrator queues
3.2. Separate queues for capture and apply
4.0 Privileges
5.0 Source Site Configuration
5.1. Streams and Flash Recovery Area (FRA)
5.2. Archive Logging must be enabled
5.3. Supplemental logging
5.4. Implement a Heartbeat Table
5.5. Flow Control
5.6. Perform periodic maintenance
Database Version 9iR2 and 10gR1
Database Version 10gR2 and above
5.7. Capture Process Configuration
5.8. Propagation Configuration
5.9. Additional Configuration for RAC Environments for a Source Database
6.0 Target Site Configuration
6.1. Privileges
6.2. Instantiation
6.3. Conflict Resolution
6.4. Apply Process Configuration
6.5. Additional Configuration for RAC Environments for an Apply Database
OPERATION
Global Name
Apply Error Management
Backup Considerations
Batch Processing
Source Queue Growth
Streams Cleanup/Removal
Automatic Optimizer Statistics Collection
MONITORING
Dynamic Streams views
Static Streams Views
Streams Views
Capture Views:
Propagation & Queue Views
Apply Views:
Monitoring Utility STRMMON
Alert Log
Streams Healthcheck Scripts
References
Applies to:
Oracle Server - Enterprise Edition - Version 9.2.0.8 to 11.2.0.1 - Release 9.2 to 11.2
Information in this document applies to any platform.
Purpose
Oracle Streams enables the sharing of data and events in a data stream, either within a database or from one database to another. This note describes best practices for Oracle Streams configurations, for both downstream capture and upstream (local) capture, in version 9.2 and above.
Scope and Application
This note targets replication administrators implementing Streams replication in Oracle 9.2 and higher, and contains key recommendations for a successful implementation of Streams in those releases.
Master Note for Streams Recommended Configuration
To ensure a successful Streams implementation, use the following recommendations when setting up a Streams environment:
- Software Version
- Database Settings: Parameters, Storage, and Privileges
- Source Site Configuration
- Target Site Configuration
Configuration
1.0 Software Version
Oracle recommends running Streams with the latest available patchset, together with the list of recommended patches from Document 437838.1, Streams Specific Patches. Assess whether any recommended patch conflicts with existing patches on your system.
There is Streams support in both DbControl and GridControl. Use GridControl to manage multiple databases in a Streams environment.
2.0 Database Parameters
For best results in a Streams environment, set the following initialization parameters, as necessary, at each participating instance: GLOBAL_NAMES, _JOB_QUEUE_INTERVAL, SGA_TARGET, and STREAMS_POOL_SIZE.
Database Version 9iR2

_JOB_QUEUE_INTERVAL = 1
Description: Scan rate interval (in seconds) of the job queue. The default is 5.
Considerations: Improves the scan rate for propagation jobs to every second, rather than every 5 seconds.

COMPATIBLE = 9.2.0.0
Description: Specifies the release with which the Oracle server must maintain compatibility. Oracle servers with different compatibility levels can interoperate.

GLOBAL_NAMES = TRUE
Description: Specifies whether a database link is required to have the same name as the database to which it connects. The default is FALSE.
Considerations: Set to TRUE at each database participating in your Streams environment to eliminate errors resulting from incorrect database connections. This parameter setting is honored by database links.

JOB_QUEUE_PROCESSES > 2
Description: Specifies the number of Jnnn job queue processes for the database instance.

AQ_TM_PROCESSES >= 1
Description: Specifies the number of queue monitor processes for the database instance.

LOGMNR_MAX_PERSISTENT_SESSIONS >= number of capture processes
Description: Specifies the maximum number of persistent LogMiner mining sessions. The Streams capture process uses LogMiner to mine the redo logs.
Considerations: If you need to run multiple Streams capture processes on a single database, set this parameter equal to or higher than the number of planned capture processes.

LOG_PARALLELISM = 1
Description: Specifies the level of concurrency for redo allocation within the database instance.

PARALLEL_MAX_SERVERS >= 2
Default: derived from the values of CPU_COUNT, PARALLEL_ADAPTIVE_MULTI_USER, and PARALLEL_AUTOMATIC_TUNING. Range: 0 to 3599. Modifiable: yes.
Description: Specifies the maximum number of parallel execution processes and parallel recovery processes for an instance. As demand increases, Oracle increases the number of processes from the number created at instance startup up to this value.
Considerations: In a Streams environment, each capture process and apply process may use multiple parallel execution servers. Set this parameter to a value that ensures enough parallel execution servers are available. For each defined Streams process (capture or apply), increase this parameter by 2 + parallelism, where parallelism is the value of the capture or apply PARALLELISM parameter.

SHARED_POOL_SIZE
Considerations: Each capture process needs 10MB of shared pool space by default. Streams is limited to using a maximum of 10% of the shared pool; this 10% of SHARED_POOL_SIZE determines the size of the buffer queue before spillover occurs. SHARED_POOL_SIZE must be significantly larger if Streams capture is implemented, especially if there is a large workload. The typical recommendation is to double the existing SHARED_POOL_SIZE and set _FIRST_SPARE_PARAMETER to 50.

OPEN_LINKS >= 4
Description: Specifies the maximum number of concurrent open connections to remote databases in one session.

PROCESSES
Description: Specifies the maximum number of operating system user processes that can simultaneously connect to the database.
Considerations: Make sure the value of this parameter allows for all background processes, such as locks, job queue processes, and parallel execution processes. In Streams, capture and apply processes use background processes and parallel execution processes, and propagation jobs use job queue processes.

SESSIONS
Description: Specifies the maximum number of sessions that can be created in the system. Because every login requires a session, this parameter effectively determines the maximum number of concurrent users in the system.
Considerations: If you plan to run one or more capture or apply processes in a database, you may need to increase this parameter. Each background process in a database requires a session.

SGA_MAX_SIZE
Description: Specifies the maximum size of the SGA for the lifetime of the instance.
Considerations: If you plan to run multiple capture processes on a single database, you may need to increase this parameter. Note: _SGA_SIZE should only be increased if a LogMiner error indicates a need for more memory. Any memory allocated to LogMiner is used solely by LogMiner; it is not returned to the shared pool until the capture process is restarted.

TIMED_STATISTICS
Description: Specifies whether statistics related to time are collected.
Considerations: To collect elapsed time statistics in the data dictionary views related to Streams, set this parameter to TRUE. The views that include elapsed time statistics are V$STREAMS_CAPTURE, V$STREAMS_APPLY_COORDINATOR, V$STREAMS_APPLY_READER, and V$STREAMS_APPLY_SERVER.
Database Version 10gR2

_JOB_QUEUE_INTERVAL = 1
Description: Scan rate interval (in seconds) of the job queue. The default is 5.
Considerations: Improves the scan rate for propagation jobs to every second, rather than every 5 seconds.

COMPATIBLE = 10.2.0.0
Description: Specifies the release with which the Oracle server must maintain compatibility. Oracle servers with different compatibility levels can interoperate.
Considerations: To use the new Streams features introduced in Oracle Database 10g Release 2, this parameter must be set to 10.2.0.0 or higher. To use 10.2 downstream capture on logs from a 10.1 source, this parameter must be set to 10.1.0.0 at the source database and 10.2.0.0 at the downstream capture database.

GLOBAL_NAMES = TRUE
Description: Specifies whether a database link is required to have the same name as the database to which it connects. The default is FALSE.
Considerations: Set to TRUE at each database participating in your Streams environment to eliminate errors resulting from incorrect database connections. This parameter setting is honored by database links.

JOB_QUEUE_PROCESSES > 4 (and greater than the number of propagations defined)
Description: Specifies the number of Jnnn job queue processes for each instance (J000 ... J999). Job queue processes handle requests created by DBMS_JOB.
Considerations: Controls the maximum number of jobs that can run concurrently within the instance; set it to a value greater than the number of propagations configured for the database. Be sure to increase this parameter if there are any automated jobs configured for the database.

LOG_ARCHIVE_DEST_n
Description: Defines up to ten log archive destinations, where n is 1, 2, 3, ... 10.
Considerations: A specific archive log destination should be specified if this database is the source for a Streams capture process. Specify a destination other than the flash recovery area (FRA) for storing archived logs if a local capture process is enabled. To use downstream capture and copy the redo log files to the downstream database using log transport services, at least one log archive destination must point to the site running the downstream capture process. Avoid copying log files to a remote flash recovery area for downstream capture processing. See also: Oracle Data Guard Concepts and Administration.

LOG_ARCHIVE_DEST_STATE_n
Description: Specifies the availability state of the corresponding destination. The parameter suffix (1 through 10) specifies one of the ten corresponding LOG_ARCHIVE_DEST_n destination parameters.
Considerations: Enable archive logging to the specified destination for both local and downstream capture. To use downstream capture and copy the redo log files to the downstream database using log transport services, make sure the destination that corresponds to the LOG_ARCHIVE_DEST_n destination for the downstream database is set to ENABLE.

PARALLEL_MAX_SERVERS
Default: derived from the values of CPU_COUNT, PARALLEL_ADAPTIVE_MULTI_USER, and PARALLEL_AUTOMATIC_TUNING. Range: 0 to 3599. Modifiable: yes.
Description: Specifies the maximum number of parallel execution processes and parallel recovery processes for an instance. As demand increases, Oracle increases the number of processes from the number created at instance startup up to this value.
Considerations: In a Streams environment, each capture process and apply process may use multiple parallel execution servers. Set this parameter to a value that ensures enough parallel execution servers are available. For each defined Streams process (capture or apply), increase this parameter by 2 + parallelism, where parallelism is the value of the capture or apply PARALLELISM parameter.

REMOTE_ARCHIVE_ENABLE
Description: Enables or disables the sending of redo archival to remote destinations and the receipt of remotely archived redo.
Considerations: To use downstream capture and copy the redo log files to the downstream database using log transport services, this parameter must be set to TRUE at both the source database and the downstream database. It is not required for a local capture configuration.

SGA_MAX_SIZE
Description: Specifies the maximum size of the SGA for the lifetime of a database instance.
Considerations: To run multiple Streams processes on a single database, you may need to increase this parameter.

SGA_TARGET = 0
Description: Specifies the total size of all System Global Area (SGA) components. If this parameter is set to a nonzero value, the size of the Streams pool is managed by Automatic Shared Memory Management.
Considerations: For best results, size the shared pool and the Streams pool explicitly.

STREAMS_POOL_SIZE (tune)
Description: Specifies (in bytes) the size of the Streams pool. The Streams pool contains buffered queue messages and is also used for internal communications during parallel capture and apply. Refer to V$STREAMS_POOL_ADVICE to determine the correct size and avoid excessive spills.
Considerations: This parameter is modifiable; if it is reduced to zero while an instance is running, Streams processes and jobs will not run. The size of the Streams pool is affected by several factors. Minimally, set the Streams pool size to 256MB on low-activity databases or 500MB on more active OLTP configurations, then adjust it to an appropriate value using the V$STREAMS_POOL_ADVICE view to avoid excessive spill from the buffered queue to disk.
3.0 Database Storage
3.1. Tablespace for Streams Administrator queues
Create a separate tablespace for the Streams administrator schema (STRMADMIN) at each participating Streams database. This tablespace will be used for any objects created in the Streams administrator schema, including any spillover of messages from the in-memory queue.
For example:
CREATE TABLESPACE &streams_tbs_name DATAFILE '&db_file_directory/&db_file_name' SIZE 25M REUSE AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED;
ALTER USER strmadmin DEFAULT TABLESPACE &streams_tbs_name QUOTA UNLIMITED ON &streams_tbs_name;
3.2. Separate queues for capture and apply
Configure separate queues for changes that are captured locally and for receiving captured changes from each remote site. This is especially important when configuring bi-directional replication between multiple databases. For example, consider the situation where database db1.net replicates its changes to database db2.net, and db2.net replicates to db1.net. Each database maintains two queues: one for capturing the changes made locally and the other for receiving changes from the other database.
Similarly, for three databases (db1.net, db2.net, db3.net) replicating their local changes directly to each other, there will be three queues at each database. For example, at db1.net: queue1 for the capture process, and queue2 and queue3 for receiving changes from each of the other databases. The two apply processes on db1.net (apply_from_db2, apply_from_db3) apply the changes, each associated with a specific queue (queue2 or queue3).
Queue names should not exceed 24 characters in length. Queue table names should not exceed 24 characters in length. To pre-create a queue for Streams, use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package. If you use the MAINTAIN_TABLES, MAINTAIN_SCHEMAS, or MAINTAIN_GLOBAL procedures to configure Streams and do not identify specific queue names, individual queues will be created automatically.
Example: To configure a site (SITEA) that is capturing changes for distribution to another site, as well as receiving changes from that other site (SITEB), configure each queue at SITEA with a separate queue_table as follows:
dbms_streams_adm.set_up_queue(queue_table_name=>'QT_CAP_SITEA', queue_name=>'CAP_SITEA');
dbms_streams_adm.set_up_queue(queue_table_name='QT_APP_FROM_SITEB', queue_name=>'APP_FROM_SITEB');
If desired, the above set_up_queue procedure calls can include a storage_clause parameter to configure separate tablespace and storage specifications for each queue table. Typically, Logical Change Records (LCRs) are queued to an in-memory buffer and processed from memory. However, they can be spilled to disk if they remain in memory too long due to an unavailable destination or on memory pressure (Streams_Pool memory is too low). The storage clause parameter can be used to preallocate space for the queue table or specify an alternative tablespace for the queue table without changing the default tablespace for the Streams Administrator.
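For illustration, a minimal sketch of pre-creating a queue with a dedicated storage_clause, following the SITEA example above (the tablespace name streams_qtbs is hypothetical):

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table    => 'strmadmin.QT_CAP_SITEA',
    queue_name     => 'strmadmin.CAP_SITEA',
    -- preallocate space / redirect the queue table to its own tablespace
    storage_clause => 'TABLESPACE streams_qtbs');
END;
/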
4.0 Privileges
The Streams administrator (strmadmin) must be granted the following on each participating Streams database:
GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/
In order to create capture and apply processes, the Streams
Administrator must have DBA privilege. This privilege must be explicitly granted
to the Streams Administrator.
GRANT DBA to STRMADMIN;
In addition, other required privileges must be granted to the Streams Administrator schema (strmadmin) on each participating Streams database with the GRANT_ADMIN_PRIVILEGE procedure. In Oracle 10g and above, all of the above privileges (except DBA) can be granted using:
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
5.0 Source Site Configuration
The following recommendations apply to source databases, i.e., databases in which Streams capture is configured.
5.1. Streams and Flash Recovery Area (FRA)
In Oracle 10g and above, configure a separate log archive destination, independent of the Flash Recovery Area, for the Streams capture process. Archive logs in the FRA can be removed automatically under space pressure, even if the Streams capture process still requires them. Do not allow the archive logs for Streams capture to reside solely in the FRA.
5.2. Archive Logging must be enabled
Verify that each source database is running in ARCHIVELOG mode. For downstream capture sites (i.e., databases in which Streams capture is configured for another database), the database at which the source redo logs are created must have archive logging enabled.
5.3. Supplemental logging
Confirm that supplemental logging is enabled at each source site. In 9iR2, Streams apply requires unconditional logging of unique index and foreign key constraint columns, even if those columns are not modified. This is because of Bug 4198593 (Apply incorrectly requires unconditional logging of Unique and FK constraints), fixed in 9.2.0.8.
If you set the parallelism apply process parameter to a value greater than 1, then you must specify a conditional supplemental log group at the source database for all of the unique and foreign key columns in the tables for which an apply process applies changes. Supplemental logging may be required for other columns in these tables as well, depending on your configuration.
Any columns specified in rule-based transformations or used within DML handlers at the target site must be unconditionally logged at the source site.
Supplemental logging can be specified at the source either at the database level or for individual replicated tables.
In 10gR2, supplemental logging is automatically configured for tables on which primary, unique, or foreign keys are defined when the database object is prepared for Streams capture. The procedures for maintaining Streams and adding rules in the DBMS_STREAMS_ADM package automatically prepare objects for a local Streams capture. For downstream capture sites (i.e., databases in which Streams capture is configured for another database), the database at which the source redo logs are created must have supplemental logging for the database objects of interest to the downstream capture process.
All target site indexed columns, including the primary key, unique index, and foreign key columns of a replicated table or database, must be logged at the source site. Primary key columns must be logged unconditionally; unique index and foreign key columns can be logged conditionally. This supplemental logging is enabled automatically when the source table is prepared for capture with DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION.
Any columns specified in rule-based transformations or used within DML handlers at the target site must be unconditionally logged at the source site. Supplemental logging for these columns must be configured explicitly by the database administrator, using table-level SQL syntax: ALTER TABLE ... ADD SUPPLEMENTAL LOG ... (see the sketch below).
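As an illustration, a hedged sketch of adding an unconditional (ALWAYS) supplemental log group for columns referenced by a transformation or DML handler (the table and columns are hypothetical):

ALTER TABLE hr.departments
  ADD SUPPLEMENTAL LOG GROUP dept_xform_cols (department_name, manager_id) ALWAYS;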
To verify that supplemental logging has been specified at the source, either at the database level or for the individual replicated table:
Database level logging:
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM V$DATABASE;
Table level logging:
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_tables UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_schemas UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_database;
Check supplemental log groups:
SELECT log_group_name, table_name,
decode(always, 'ALWAYS', 'Unconditional', NULL, 'Conditional') always
FROM dba_log_groups;
Check columns in supplemental log groups:
SELECT log_group_name, column_name, position
FROM dba_log_group_columns WHERE table_name = 'DEPARTMENTS' AND owner = 'HR';
Refer to Document 782541.1 Streams Replication Supplemental Logging Requirements.
5.4. Implement a Heartbeat Table
To ensure that the APPLIED_SCN of the DBA_CAPTURE view is updated periodically, implement a heartbeat table. A heartbeat table is especially useful for databases that have a low activity rate. The Streams capture process requests a checkpoint after every 10MB of generated redo. During the checkpoint, the metadata for Streams is maintained if there are active transactions. Implementing a heartbeat table ensures that there are open transactions occurring regularly within the source database, enabling additional opportunities for the metadata to be updated frequently. Additionally, the heartbeat table provides quick feedback to the database administrator as to the health of the Streams replication.
To implement a heartbeat table (see the sketch after the reference below):
- Create a table at the source site that includes a date or timestamp column and the global name of the database.
- Add a rule to capture changes to this table and propagate the changes to each target destination. Make sure that the target destination will apply changes to this table as well.
- Set up an automated job to update this table at the source site periodically, for example every minute.
Refer to Document 461278.1 Example of a Streams Heartbeat Table
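A minimal sketch of such a heartbeat table, assuming a capture process named capture_ex and a queue strmadmin.streams_queue (both hypothetical; see Document 461278.1 for a complete example):

CREATE TABLE strmadmin.heartbeat (
  site VARCHAR2(128) PRIMARY KEY,
  ts   TIMESTAMP
);

-- Seed one row with this database's global name.
INSERT INTO strmadmin.heartbeat
  SELECT global_name, SYSTIMESTAMP FROM global_name;
COMMIT;

-- Capture changes to the table; matching propagation and apply rules
-- are still needed for each destination.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'strmadmin.heartbeat',
    streams_type => 'capture',
    streams_name => 'capture_ex',
    queue_name   => 'strmadmin.streams_queue');
END;
/

-- Update the row every minute so there is always a recent transaction.
DECLARE
  jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT(jobno,
    'UPDATE strmadmin.heartbeat SET ts = SYSTIMESTAMP; COMMIT;',
    SYSDATE, 'SYSDATE + 1/1440');
  COMMIT;
END;
/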
5.5. Flow Control
In Oracle 9iR2, when the threshold for memory of the buffer queue is exceeded, Streams writes the messages to disk. This is sometimes referred to as "spillover". When spillover occurs, Streams can no longer take advantage of the in-memory queue optimization. One technique to minimize this spillover is to implement a form of flow control. See the following note for the scripts and prerequisites:
Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk (Doc ID 259609.1)
In Oracle 10g and above, flow control is handled automatically by the database, so there is no need to implement it manually.
5.6. Perform periodic maintenance
Database Version 9iR2 and 10gR1
Periodically force capture to checkpoint. This checkpoint is not the same as a database checkpoint. To force capture to checkpoint, set the capture parameter _CHECKPOINT_FORCE to YES. Forcing a checkpoint ensures that the DBA_CAPTURE view columns CAPTURED_SCN and APPLIED_SCN are maintained.
Database Version 10gR2 and above
A. Confirm checkpoint retention. In Oracle 10gR2 and above, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter checkpoint_retention_time controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible SCN available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved, and the Streams metadata tables prior to this SCN can be purged, reclaiming space in the SYSAUX tablespace. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
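For example, a hedged sketch of reducing the retention to 7 days for a hypothetical capture process named capture_ex:

BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'capture_ex',
    checkpoint_retention_time => 7);  -- days of checkpoint data to retain
END;
/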
B. Dump a fresh copy of the dictionary to redo. Issue a DBMS_CAPTURE_ADM.BUILD command to dump a current copy of the data dictionary to the redo logs. Doing this reduces the number of logs to be processed if an additional capture process is created or an existing process is rebuilt.
C. Prepare database objects for instantiation. Issue DBMS_CAPTURE_ADM.PREPARE_*_INSTANTIATION, where * indicates the level (TABLE, SCHEMA, GLOBAL), for the database objects captured by Streams. This is used in conjunction with the BUILD in B above for new capture creation or rebuild purposes.
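A hedged sketch of both maintenance steps together (the HR schema is a hypothetical example; choose the PREPARE_* level that matches your rules):

SET SERVEROUTPUT ON
DECLARE
  build_scn NUMBER;
BEGIN
  -- B: dump a current copy of the data dictionary to the redo logs
  DBMS_CAPTURE_ADM.BUILD(first_scn => build_scn);
  DBMS_OUTPUT.PUT_LINE('Dictionary build SCN: ' || build_scn);
  -- C: prepare the captured objects at the matching level
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => 'HR');
END;
/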
5.7. Capture Process Configuration
A. Configuring Capture
In versions prior to 10.2, use the DBMS_STREAMS_ADM.ADD_*_RULES procedures (ADD_TABLE_RULES and ADD_SCHEMA_RULES for DML and DDL, ADD_GLOBAL_RULES for DDL only). These procedures minimize the number of steps required to configure Streams processes. Because it is possible to create rules for non-existent objects, be sure to check the spelling of each object specified in a rule carefully.
Capture requires a rule set with rules. The ADD_GLOBAL_RULES procedure cannot be used to capture DML changes for the entire database. ADD_GLOBAL_RULES can be used to capture all DDL changes for the database.
A single Streams capture can process rules for multiple tables or schemas. For best performance, rules should be simple. Rules that include NOT or LIKE clauses are not simple and will impact the performance of Streams.
Minimize the number of rules added into the process rule set. A good rule of thumb is to keep the number of rules in the rule set to less than 100. If more objects need to be included in the ruleset, consider constructing rules using the IN clause. For example, a rule for the 6 TB_M21* tables in the MYACCT schema would look like the following:
(:dml.get_object_owner() = 'MYACCT' and :dml.is_null_tag() = 'Y' and
:dml.get_object_name() IN ('TB_M21_1','TB_M21_2','TB_M21_3',
'TB_M21_40','TB_M21_10','TB_M211B010'))
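For instance, a hedged sketch of creating schema-level capture rules with ADD_SCHEMA_RULES for the MYACCT schema from the example above (the capture and queue names are hypothetical):

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name  => 'MYACCT',
    streams_type => 'capture',
    streams_name => 'capture_ex',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => TRUE);
END;
/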
In version 10.2 and above, use the DBMS_STREAMS_ADM.MAINTAIN_* procedures (where * = TABLES, SCHEMAS, GLOBAL, or TTS) to configure Streams. These procedures automate the entire configuration of the Streams processes between databases, following the Streams best practices. For local capture, the default behavior of these procedures is to implement a separate queue for capture and apply. If you are configuring a downstream capture and applying the changes within the same database, override this behavior by specifying the same queue for both the capture_queue_name and apply_queue_name.
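A hedged sketch of a schema-level configuration, assuming databases DB1.NET and DB2.NET with database links already in place and a hypothetical HR schema, using network (Data Pump over database link) instantiation:

BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names         => 'HR',
    source_database      => 'DB1.NET',
    destination_database => 'DB2.NET',
    perform_actions      => TRUE,   -- run the configuration directly rather than generating a script
    include_ddl          => TRUE,
    instantiation        => DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA_NETWORK);
END;
/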
If the maintain_* procedures are not suitable for your environment, please use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES for DML and DDL, ADD_SUBSET_RULES for DML only, and ADD_GLOBAL_RULES for DDL only). These procedures minimize the number of steps required to configure Streams processes. It is also possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.
The Streams capture process requires a rule set with rules. The ADD_GLOBAL_RULES procedure can be used to capture DML changes for the entire database, as long as a negative rule set is created for the capture process that includes rules for objects with unsupported datatypes. ADD_GLOBAL_RULES can also be used to capture all DDL changes for the database.
A single Streams capture can process changes for multiple tables or schemas. For best performance, rules for these multiple tables or schemas should be simple. Rules that include LIKE clauses are not simple and will impact the performance of Streams.
To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the table or object name in the negative rule set for the Streams process. Setting this clause will eliminate ALL changes, tagged or not, for the table or object.
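A hedged sketch of excluding one table via the negative rule set (the table, capture, and queue names are hypothetical):

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'hr.job_history',
    streams_type       => 'capture',
    streams_name       => 'capture_ex',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => TRUE,
    include_ddl        => TRUE,
    include_tagged_lcr => TRUE,   -- eliminate ALL changes, tagged or not
    inclusion_rule     => FALSE); -- place the rules in the negative rule set
END;
/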
B. Capture Parameters
Set the following parameters after a capture process is created:
PARALLELISM = 1
Default: 1
Comment: Number of parallel execution servers used to configure one or more preparer processes that prefilter changes for the capture process. The recommended value is 1.

_CHECKPOINT_FREQUENCY = 500
Default: 10 before 10.2.0.4; 1000 in 10.2.0.4.
Comment: Modifies the frequency of LogMiner checkpoints, especially in a database with significant LOB or DDL activity. Larger values decrease the frequency of LogMiner checkpoints; smaller values increase it. LogMiner checkpoints are not the same as database checkpoints. The availability of LogMiner checkpoints affects the time required to recover/restart the capture after a database restart. In a low-activity database (i.e., small amounts of data, or the captured data changes infrequently), use a lower value, such as 100. A LogMiner checkpoint is requested by default every 10MB of redo mined. If the value is set to 500, a LogMiner checkpoint is requested after every 500MB of redo mined. Increasing the value of this parameter is recommended for active databases with significant redo generated per hour. It should not be necessary to configure _CHECKPOINT_FREQUENCY in 10.2.0.4 or higher.

_SGA_SIZE
Default: 10
Comment: Amount of memory (in MB) available from the Streams pool for LogMiner processing. The default amount of Streams pool memory allocated to LogMiner is 10MB. Increase this value especially in environments where large LOBs are processed. This parameter should not be increased unless the LogMiner error ORA-1341 is encountered. Streams pool memory allocated to LogMiner is unavailable for other Streams use until the capture process is restarted.
Capture parameters can be set using the SET_PARAMETER procedure from the DBMS_CAPTURE_ADM package. For example, to set the checkpoint frequency of the Streams capture process named CAPTURE_EX, use the following syntax while logged in as the Streams Administrator to request a LogMiner checkpoint after every gigabyte (1000MB) of redo processed:
dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');
5.8. Propagation Configuration
A. Configuring Propagation
If the MAINTAIN_* procedures are not suitable for your environment (Oracle 9iR2 and 10gR1), use the ADD_*_PROPAGATION_RULES procedures (ADD_TABLE_PROPAGATION_RULES, ADD_SCHEMA_PROPAGATION_RULES, and ADD_GLOBAL_PROPAGATION_RULES for both DML and DDL; ADD_SUBSET_PROPAGATION_RULES for DML only). These procedures minimize the number of steps required to configure Streams processes. Because it is possible to create rules for non-existent objects, be sure to check the spelling of each object specified in a rule carefully.
The rules in the rule set for propagation can differ from the rules specified for the capture process. For example, to propagate all captured changes to a target site, a single ADD_GLOBAL_PROPAGATION_RULES call can be used for the propagation even though multiple ADD_TABLE_RULES might have been configured for the capture process.
B. Propagation mode
For new propagation processes configured in 10.2 and above, set the queue_to_queue propagation parameter to TRUE. If the database is RAC-enabled, an additional service is created, typically named in the format sys$schema.queue_name.global_name, when the Streams subscribers are initially created. A Streams subscriber is a defined propagation between two Streams queues, or an apply process with the apply_captured parameter set to TRUE. This service automatically follows the ownership of the queue on queue ownership switches (i.e., instance startup, shutdown, etc.). The service name can be found in the NETWORK_NAME column of the DBA_SERVICES view.
If the MAINTAIN_* (TABLES, SCHEMAS, GLOBAL) procedures are used to configure Streams, queue_to_queue is automatically set to TRUE, if possible. The database link for this queue_to_queue propagation must use a TNS service name (or connect name) that specifies the GLOBAL_NAME in the CONNECT_DATA clause of the descriptor. See section 6 on additional considerations for RAC below.
Propagation processes configured prior to 10.2 continue to use the dblink mode of propagation. In this situation, if the database link no longer connects to the owning instance of the queue, propagation will not succeed. You can continue to use the 10.1 best practices for this propagation, or recreate the propagation during a maintenance window. Make sure that the queue is empty, with no unapplied spilled messages, before you drop the propagation. Then recreate the propagation with the queue_to_queue parameter set to TRUE, as sketched below.
Queues created prior to 10.2 on RAC instances should be dropped and recreated in order to take advantage of the automatic service generation and queue_to_queue propagation. Be sure to perform this activity when the queue is empty and no new LCRs are being enqueued into the queue.
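A hedged sketch of creating a queue-to-queue propagation (the propagation, queue, and database link names are hypothetical and follow the SITEA/SITEB example earlier in this note):

BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'prop_sitea_to_siteb',
    source_queue       => 'strmadmin.cap_sitea',
    destination_queue  => 'strmadmin.app_from_sitea',
    destination_dblink => 'db2.net',
    queue_to_queue     => TRUE);  -- deliver directly to the owning instance's queue service
END;
/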
C. Propagation Parameters
latency = 5
Default: 60
Comment: Maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued. Caution: if latency is not specified in the call, any existing value is overwritten with the default (60). For example, if the latency is 60 seconds and there are no messages to be propagated during the propagation window, messages from that queue for the destination will not be propagated for at least 60 more seconds; it will be at least 60 seconds before the queue is checked again for messages to propagate to that destination. If the latency is 600, the queue will not be checked for 10 minutes. If the latency is 0, a job queue process waits for messages to be enqueued for the destination and propagates them as soon as they are enqueued.
Propagation parameters can be set using the ALTER_PROPAGATION_SCHEDULE procedure from the DBMS_AQADM package. For example, to set the latency parameter of the streams propagation from the STREAMS_QUEUE owned by STRMADMIN to the target database whose global_name is DEST_DB for the queue Q1, use the following syntax while logged in as the Streams Administrator:
dbms_aqadm.alter_propagation_schedule('strmadmin.streams_queue','DEST_DB',destination_queue=>'Q1',latency=>5);
D. Network Connectivity
When using Streams propagation across a Wide Area Network (WAN), increase the session data unit (SDU) to improve the propagation performance. The maximum value for SDU is 32K (32767). The SDU value for network transmission is negotiated between the sender and receiver sides of the connection: the minimum SDU value of the two endpoints is used for any individual connection. In order to take advantage of an increased SDU for Streams propagation, the receiving side sqlnet.ora file must include the default_sdu_size parameter. The receiving side listener.ora must indicate the SDU change for the SID. The sending side tnsnames.ora connect string must also include the SDU modification for the particular service.
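For illustration, a hedged sketch of the three SDU settings (host names, SIDs, and paths are hypothetical):

# Receiving side sqlnet.ora
DEFAULT_SDU_SIZE=32767

# Receiving side listener.ora: indicate the SDU change for the SID
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=(SDU=32767)(SID_NAME=DB2)(ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1)))

# Sending side tnsnames.ora: include the SDU modification for the service
DB2.NET=
  (DESCRIPTION=(SDU=32767)
    (ADDRESS=(PROTOCOL=tcp)(HOST=db2host)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=DB2.NET)))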
Tuning the TCP/IP networking parameters can significantly improve performance across the WAN. Here are some example tuning parameters for Linux. These parameters can be set in the /etc/sysctl.conf file and applied by running sysctl -p. When using RAC, be sure to configure this at each instance.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# (min, default, and max number of bytes to use)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
In addition, the SEND_BUF_SIZE and RECV_BUF_SIZE sqlnet.ora parameters can increase the performance of propagation on your system. These parameters increase the size of the buffer used to send or receive the propagated messages. Increase them only after careful analysis of their overall impact on system performance.
For further information, please review the Oracle Net Services Guide
5.9. Additional Configuration for RAC Environments for a Source Database
Archive Logs
The archive log threads from all instances must be available to any instance running a capture process. This is true for both local and downstream capture.
Queue Ownership
When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same instance. The Streams components (capture/propagation/apply) all use that same owning instance to perform their work. This means that:
- a capture process is run at the owning instance of the source queue.
- a propagation job must run at the owning instance of the source queue.
- a propagation job must connect to the owning instance of the target queue.
Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and/or SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the PRIMARY_INSTANCE is set to a specific instance (i.e., not 0), the queue ownership will return to the specified instance whenever the instance is up.
Capture will automatically follow the ownership of the queue. If the ownership changes while capture is running, capture stops on the current instance and restarts at the new owner instance.
For queues created with Oracle Database 10g Release 2, a service is created with the service name schema.queue and the network name SYS$schema.queue.global_name for that queue. If the global_name of the database does not match the db_name.db_domain of the database, be sure to include the global_name as a service name in the init.ora.
For propagations created with Oracle Database 10g Release 2 with the queue_to_queue parameter set to TRUE, the propagation job delivers only to the specific queue identified. Also, the source dblink for the target database connect descriptor must specify the correct service (the global name of the target database) to connect to the target database. For example, the tnsnames.ora entry for the target database should include the CONNECT_DATA clause in the connect descriptor, specifying (CONNECT_DATA=(SERVICE_NAME='global_name of target database')). Do NOT include a specific INSTANCE in the CONNECT_DATA clause.
For example, consider the tnsnames.ora file for a database with the global name db.mycompany.com. Assume that the alias name for the first instance is db1 and the alias for the second instance is db2. The tnsnames.ora file for this database might include the following entries:
db.mycompany.com=
(description=
(load_balance=on)
(address=(protocol=tcp)(host=node1-vip)(port=1521))
(address=(protocol=tcp)(host=node2-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)))
db1.mycompany.com=
(description=
(address=(protocol=tcp)(host=node1-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)
(instance_name=db1)))
db2.mycompany.com=
(description=
(address=(protocol=tcp)(host=node2-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)
(instance_name=db2)))
Use the db.mycompany.com alias (the entry without an instance_name) in the target database link USING clause.
DBA_SERVICES lists all services for the database. GV$ACTIVE_SERVICES identifies all active services for the database. In non-RAC configurations, the service name will typically be the global_name. However, it is possible for users to manually create alternative services and use them in the TNS connect_data specification. For RAC configurations, the service will appear in these views as SYS$schema.queue.global_name.
Propagation Restart
Use the procedures START_PROPAGATION and STOP_PROPAGATION from DBMS_PROPAGATION_ADM to enable and disable the propagation schedule. These procedures automatically handle queue_to_queue propagation.
Example:
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation');
or
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation',force=>true);
exec DBMS_PROPAGATION_ADM.START_PROPAGATION('name_of_propagation');
6.0 Target Site Configuration
The following recommendations apply to target databases, i.e., databases in which Streams apply is configured.
6.1. Privileges
Grant explicit privileges to the APPLY_USER for the user tables.
Examples:
Privileges for table-level DML: INSERT, UPDATE, DELETE.
Privileges for table-level DDL: CREATE (ANY) TABLE, CREATE (ANY) INDEX, CREATE (ANY) PROCEDURE.
6.2. Instantiation
Set instantiation SCNs manually if not using export/import. If manually configuring the instantiation SCN for each table within a schema, use the RECURSIVE=>TRUE option on the DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN procedure.
For DDL, set the instantiation SCN at the next higher level (i.e., SCHEMA or GLOBAL level).
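A hedged sketch, run at the source database, assuming a database link DB2.NET to the target and a link named DB1.NET from the target back to the source (required when recursive => TRUE); the schema and link names are hypothetical:

DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@DB2.NET(
    source_schema_name   => 'HR',
    source_database_name => 'DB1.NET',
    instantiation_scn    => iscn,
    recursive            => TRUE);  -- also sets the SCN for each table in the schema
END;
/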
6.3. Conflict Resolution
If updates will be performed in multiple databases for the same shared object, be sure to configure conflict resolution. See the Streams Replication Administrator's Guide, Chapter 3, Streams Conflict Resolution, for more detail.
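For instance, a hedged sketch of a MAXIMUM-method update conflict handler (the table and columns are hypothetical):

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';
  cols(2) := 'commission_pct';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',   -- keep the row with the greater resolution column value
    resolution_column => 'salary',
    column_list       => cols);
END;
/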
To simplify conflict resolution on tables with LOB columns, create an error handler to handle errors for the table. When registering the handler using the DBMS_APPLY_ADM.SET_DML_HANDLER procedure, be sure to specify the ASSEMBLE_LOBS parameter as TRUE.
Refer to Document 779801.1 Streams Conflict Resolution
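A hedged sketch of registering such an error handler (the table and the handler procedure strmadmin.errors_pkg.lob_error_handler are hypothetical and must already exist with the documented error-handler signature):

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.documents',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => TRUE,
    user_procedure => 'strmadmin.errors_pkg.lob_error_handler',
    assemble_lobs  => TRUE);  -- present LOBs fully assembled to the handler
END;
/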
6.4. Apply Process Configuration
A. Rules
If the MAINTAIN_* procedures are not suitable for your environment, use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES, and ADD_GLOBAL_RULES for DML and DDL; ADD_SUBSET_RULES for DML only).
Apply can be configured with or without a rule set. ADD_GLOBAL_RULES can be used to apply all changes in the queue for the database. If no rule set is specified for the apply process, all changes in the queue are processed by the apply process.
A single Streams apply process can process rules for multiple tables or schemas located in a single queue, received from a single source database. For best performance, rules should be simple. Rules that include LIKE clauses are not simple and will impact the performance of Streams.
To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the table or object name in the negative rule set for the Streams process. Setting this clause will eliminate all changes, tagged or not, for the table or object.
B. Parameters
Parameter | Values | Comment
DISABLE_ON_ERROR=N | Default: Y | If Y, then the apply process is disabled on the first unresolved error, even if the error is not fatal. If N, then the apply process continues regardless of unresolved errors.
PARALLELISM=4 | Default: 1 | Parallelism configures the number of apply servers available to the apply process for performing user transactions from the source database. Choose a value (4, 8, 12, 16) based on the concurrent replicated workload generated at the source AND the number of CPUs available at the target.
TXN_LCR_SPILL_THRESHOLD | Default: 10,000 | New in 10.2. Leave this parameter at its default initially. It enables you to specify that an apply process begins to spill messages for a transaction from memory to disk when the number of messages in memory for that transaction exceeds the specified number. Setting this parameter higher than the default, to try to stage everything in memory, must be done carefully so that queue spilling is not increased. Setting TXN_LCR_SPILL_THRESHOLD to 'infinite' is not recommended because this reverts Streams to the old pre-10.2 behaviour. The DBA_APPLY_SPILL_TXN and V$STREAMS_APPLY_READER views enable you to monitor the number of transactions and messages spilled by an apply process. Refer to Document 365648.1 Explain TXN_LCR_SPILL_THRESHOLD in Oracle10GR2 Streams.
Apply parameters can be set using the SET_PARAMETER procedure from the DBMS_APPLY_ADM package. For example, to set the DISABLE_ON_ERROR parameter of the streams apply process named APPLY_EX, use the following syntax while logged in as the Streams Administrator:
exec dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');
In some cases, performance can be improved by setting the following
hidden parameter. This parameter should be set when the major workload is
UPDATEs and the updates are performed on just a few columns of a many-column
table.
Parameter | Values | Comment
_DYNAMIC_STMTS=Y | Default: N | If Y, then for UPDATE statements, the apply process will optimize the generation of SQL statements based on required columns.
_HASH_TABLE_SIZE=1000000 | Default: 80*parallelism | Set the size of the hash table used to calculate transaction dependencies to 1 million.
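These hidden apply parameters are set with the same DBMS_APPLY_ADM.SET_PARAMETER procedure shown above; a minimal sketch, assuming an apply process named APPLY_EX:
exec dbms_apply_adm.set_parameter('apply_ex','_dynamic_stmts','y');
exec dbms_apply_adm.set_parameter('apply_ex','_hash_table_size','1000000');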
Queue Ownership
When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same instance. The Streams components (capture/propagation/apply) all use that same owning instance to perform their work. This means that
- the database link specified in the propagation must connect to the owning instance of the target queue.
- the apply process runs at the owning instance of the target queue.
Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the primary_instance is set to a specific instance (i.e., not 0), the queue ownership will return to the specified instance whenever the instance is up.
Apply will automatically follow the ownership of the queue. If the ownership changes while apply is running, apply will stop on the current instance and restart at the new owner instance.
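A minimal sketch, assuming a hypothetical queue table STRMADMIN.STREAMS_QT that should be owned by instance 1 and fail over to instance 2:
BEGIN
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'STRMADMIN.STREAMS_QT',
    primary_instance   => 1,   -- preferred owning instance
    secondary_instance => 2);  -- owning instance while instance 1 is down
END;
/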
Changing the GLOBAL_NAME of the Database
See the OPERATION section on Global_name below. The following are some additional considerations when running in a RAC environment. If the GLOBAL_NAME of the database is changed, ensure that the queue is empty before changing the name and that the apply process is dropped and recreated with the apply_captured parameter = TRUE. In addition, if the GLOBAL_NAME does not match the db_name.db_domain of the database, include the GLOBAL_NAME in the list of services for the database in the database parameter initialization file.
A Streams process will automatically restart after a database
startup, assuming that the process was in a running state before the database
shut down. No special startup or shutdown procedures are required in the normal
case.
Streams uses the GLOBAL_NAME of the database to identify changes from or to a particular database. Do not modify the GLOBAL_NAME of a Streams database
after capture has been created. Changes captured by the Streams capture process
automatically include the current global name of the source database. This means
that if the global name is modified after a capture process has been configured,
the capture process will need to be dropped and recreated following the
GLOBAL_NAME modification. In addition, the system-generated rules for capture,
propagation, and apply typically specify the global name of the source database.
These rules will need to be modified or recreated to adjust the
source_database_name. Finally, if the GLOBAL_NAME does not match the
db_name.db_domain of the database, include the GLOBAL_NAME in the list of
services for the database in the database parameter initialization
file.
If the global name must be modified on the database, do it at a time when NO user changes are possible on the database and the Streams queues are empty with no outstanding changes to be applied, so that the Streams configuration can be recreated. Keep in mind that all subscribers (propagations to target databases and the target apply processes) must also be recreated if the source database global_name is changed. Follow the directions in the Streams Replication Administrator's Guide for Changing the DBID or GLOBAL NAME of a source database.
It is also strongly recommended that the database init.ora parameter global_names be set to TRUE to guarantee that database link names match the global name of the target database.
The view DBA_APPLY_ERROR includes the message_number within the
transaction on which the reported error occurred. Use this message number in
conjunction with the procedures from the documentation manual Streams Concepts
and Administration ( Chapter 22 Monitoring Streams Apply Processes "Displaying Detailed Information About Apply
Errors") to print out the column values of each logical change record
within the failed transaction.
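For example, failed transactions can first be located with a quick query against the DBA_APPLY_ERROR view before printing the LCR contents:
SELECT apply_name, source_database, local_transaction_id,
       message_number, error_number, error_message
  FROM dba_apply_error
 ORDER BY source_commit_scn;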
1. Ensure that any manual backup procedures that include any of the following statements include a non-null Streams tag:
ALTER TABLESPACE ... BEGIN BACKUP
ALTER TABLESPACE ... END BACKUP
The tag should be chosen such that these DDL commands will be ignored by the capture rule set.
To set a Streams tag, use the DBMS_STREAMS.SET_TAG procedure (see the sketch after this list). A non-null tag should be specified to avoid capturing these commands.
Backups performed using RMAN do not need to set a Streams tag.
2. Do not allow any automated backup of the archived logs to remove necessary archive logs. It is especially important in a Streams environment that all necessary archived logs remain available online and in the expected location until the capture process has finished processing them. If a log required by the capture process is unavailable, the capture process will abort. Force a checkpoint (capture/logminer) before beginning the manual backup procedures. To force a checkpoint, explicitly reset the hidden capture parameter _CHECKPOINT_FORCE to 'Y'. The REQUIRED_CHECKPOINT_SCN column of the DBA_CAPTURE view specifies the lowest required SCN to restart capture. A procedure to determine the minimum archive log necessary for successful capture restart is available in the Streams health check script.
3. Ensure that all archive logs (from all threads) are available. Database recovery depends on the availability of these logs, and a missing log will result in incomplete recovery.
4. Ensure that the APPLY process parameter, COMMIT_SERIALIZATION, is set to the default value, FULL.
5. Implement a "heartbeat" table. To ensure that the applied_scn of the DBA_CAPTURE view is updated periodically, implement a "heartbeat" table. Implementing a heartbeat table ensures that the metadata is updated frequently. Additionally, the heartbeat table provides quick feedback as to the health of Streams replication. Refer to the Source Site Configuration section: Implement a Heartbeat Table for more details.
6. In situations that result in incomplete recovery (point-in-time recovery) at the source site, follow the instructions in Chapter 9 of the Streams Replication Administrator's Guide:
Performing Point-in-Time Recovery on the Source in a Single-Source Environment
Performing Point-in-Time Recovery in a Multiple-Source Environment
7. In situations that result in incomplete recovery at the destination site, follow the instructions in Chapter 9 of the Streams Replication Administrator's Guide
Performing Point-in-Time Recovery on a Destination Database
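As referenced in item 1, a minimal sketch of wrapping a manual hot backup in a non-null Streams tag (the tag value '11' and the USERS tablespace are arbitrary examples; pick a tag your capture rule set ignores):
exec dbms_streams.set_tag(tag => hextoraw('11'));  -- subsequent changes in this session carry tag 0x11
ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the datafiles ...
ALTER TABLESPACE users END BACKUP;
exec dbms_streams.set_tag(tag => null);            -- restore the default null tag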
For best performance, the commit point for batch processing should be kept low. It is preferable that excessively large batch processing be run independently at each site. If this technique is utilized, be sure to use DBMS_STREAMS.SET_TAG to skip the capture of the batch processing session, as sketched below. Setting this tag is valid only in the connected session issuing the set_tag command and will not impact the capture of changes from any other database sessions.
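A minimal sketch of the same SET_TAG technique around a batch run, assuming a hypothetical batch procedure RUN_NIGHTLY_BATCH and a tag value the capture rules ignore:
exec dbms_streams.set_tag(hextoraw('1D'));  -- arbitrary non-null tag
exec run_nightly_batch;                     -- changes made here are skipped by capture
exec dbms_streams.set_tag(null);            -- resume normal capture for this session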
DDL Replication
When replicating DDL, keep in mind the effect the DDL statement will have on the replicated sites. In particular, do not allow system generated naming for constraints or indexes, as modifications to these will most likely fail at the replicated site. Also, storage clauses may cause some issues if the target sites are not identical.
If you decide NOT to replicate DDL in your Streams environment, any table structure change must be performed manually.
Refer to Document 313478.1 Performing Manual DDL in a Streams Environment
Propagation
At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. The typical solution is to disable the propagation and then re-enable it.
- exec dbms_propagation_adm.stop_propagation('propagation_name');
- exec dbms_propagation_adm.start_propagation('propagation_name');
If the above does not fix the problem, perform a stop of propagation
with the force parameter and then start propagation again.
- exec dbms_propagation_adm.stop_propagation('propagation_name',force=>true);
- exec dbms_propagation_adm.start_propagation('propagation_name');
An additional side-effect of stopping the propagation with the force parameter is that the statistics for the propagation are cleared.
The above is documented in the Streams Replication Administrator's Guide: Restart Broken Propagations
Source queue may grow if one of the target sites is down for an
extended period, or propagation is unable to deliver the messages to a
particular target site (subscriber) due to network problems for an extended
period.
- Automatic flow control minimizes the impact of this queue growth. Queued messages (LCRs) for unavailable target sites will spill to disk storage while messages for available sites are processed normally.
- Propagation is implemented using the DBMS_JOB subsystem. If a job is unable to execute 16 successive times, the job will be marked as "broken" and become disabled. Be sure to periodically check that the job is running successfully to minimize source queue growth due to this problem.
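Since a propagation job can silently go broken, a quick periodic check against the DBA_JOBS view (for the DBMS_JOB-based propagation described above) is worthwhile:
SELECT job, what, failures, broken
  FROM dba_jobs
 WHERE broken = 'Y' OR failures > 0;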
The entire Streams configuration can be removed by dropping the Streams administrator schema with DROP USER ... CASCADE.
Oracle Database 10g has the Automatic Optimizer Statistics Collection feature, which runs every night and gathers optimizer statistics for tables whose statistics have become stale. The problem with volatile tables, such as the Streams queue tables, is that when the statistics collection job runs, these tables may not contain data that is representative of their full-load period. For this reason, we recommend that customers run the dbms_stats gather job manually on volatile tables when they are at their fullest, and then immediately lock the statistics using the PL/SQL APIs provided (dbms_stats.lock...), as sketched below. This ensures that when the nightly Automatic Optimizer Statistics Collection job runs, these volatile tables are skipped and hence not analyzed.
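A minimal sketch, assuming a hypothetical queue table STRMADMIN.STREAMS_QT, run while the table holds a representative full load:
exec dbms_stats.gather_table_stats(ownname => 'STRMADMIN', tabname => 'STREAMS_QT', cascade => TRUE);
exec dbms_stats.lock_table_stats(ownname => 'STRMADMIN', tabname => 'STREAMS_QT');  -- the nightly job now skips this table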
These volatile AQ/Streams tables are created through a call to dbms_aqadm.create_queue_table(qtable_name, ...) or dbms_streams_adm.set_up_queue() with a user-defined queue table (qtable_name). In addition to the queue table, the call internally creates the following tables, which also tend to be volatile:
aq$_{qtable_name}_i
aq$_{qtable_name}_h
aq$_{qtable_name}_t
aq$_{qtable_name}_p
aq$_{qtable_name}_d
aq$_{qtable_name}_c
Oracle has the ability to restore old statistics on tables, including data dictionary tables, using the dbms_stats.restore... APIs. This feature can be used for short-term resolution, but the real solution is the first one, where you lock the optimizer statistics of volatile tables.
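As an illustration, a sketch using DBMS_STATS.RESTORE_TABLE_STATS (one of the restore APIs) with the same hypothetical names, restoring the statistics that were in effect one day ago:
exec dbms_stats.restore_table_stats(ownname => 'STRMADMIN', tabname => 'STREAMS_QT', as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);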
All Streams processing is done at the "owning instance" of the queue.
To determine the owning instance, use the query below:
SELECT q.owner, q.name, t.queue_table, t.owner_instance
  FROM dba_queues q, dba_queue_tables t
 WHERE t.object_type = 'SYS.ANYDATA'
   AND q.queue_table = t.queue_table
   AND q.owner = t.owner;
To display the monitoring view information, either query the monitoring views from the owning instance or use the GV$ views for dynamic streams views.
The views listed below are the most commonly monitored runtime views in Streams. Descriptions of these views can be found in the Oracle Database 10g Release 2 Database Reference manual.
Streams View Name | Streams View Name from any RAC instance
V$STREAMS_CAPTURE | GV$STREAMS_CAPTURE
V$STREAMS_APPLY_COORDINATOR | GV$STREAMS_APPLY_COORDINATOR
V$STREAMS_APPLY_READER | GV$STREAMS_APPLY_READER
V$STREAMS_APPLY_SERVER | GV$STREAMS_APPLY_SERVER
V$STREAMS_POOL_ADVICE | GV$STREAMS_POOL_ADVICE
V$STREAMS_TRANSACTION | GV$STREAMS_TRANSACTION
V$BUFFERED_PUBLISHERS | GV$BUFFERED_PUBLISHERS
V$BUFFERED_QUEUES | GV$BUFFERED_QUEUES
V$BUFFERED_SUBSCRIBERS | GV$BUFFERED_SUBSCRIBERS
V$PROPAGATION_RECEIVER | GV$PROPAGATION_RECEIVER
V$RULE | GV$RULE
V$RULE_SET | GV$RULE_SET
V$RULE_SET_AGGREGATE_STATS | GV$RULE_SET_AGGREGATE_STATS
The views listed below are the most commonly monitored configuration views in Streams. Descriptions of these views can be found in the Oracle Database 10g Release 2 Database Reference manual.
DBA_REGISTERED_ARCHIVED_LOG
DBA_RECOVERABLE_SCRIPT
DBA_RECOVERABLE_SCRIPT_BLOCKS
DBA_RECOVERABLE_SCRIPT_ERRORS
DBA_RECOVERABLE_SCRIPT_PARAMS
DBA_STREAMS_ADD_COLUMN
DBA_STREAMS_ADMINISTRATOR
DBA_STREAMS_DELETE_COLUMN
DBA_STREAMS_GLOBAL_RULES
DBA_STREAMS_MESSAGE_CONSUMERS
DBA_STREAMS_MESSAGE_RULES
DBA_STREAMS_NEWLY_SUPPORTED
DBA_STREAMS_RENAME_COLUMN
DBA_STREAMS_RENAME_SCHEMA
DBA_STREAMS_RENAME_TABLE
DBA_STREAMS_RULES
DBA_STREAMS_SCHEMA_RULES
DBA_STREAMS_TABLE_RULES
DBA_STREAMS_TRANSFORM_FUNCTION
DBA_STREAMS_TRANSFORMATIONS
DBA_STREAMS_UNSUPPORTED
DBA_RULE_SET_RULES
DBA_RULE_SETS
DBA_RULES
DBA_HIST_BUFFERED_QUEUES
DBA_HIST_BUFFERED_SUBSCRIBERS
DBA_HIST_RULE_SET
DBA_HIST_STREAMS_APPLY_SUM
DBA_HIST_STREAMS_CAPTURE
DBA_HIST_STREAMS_POOL_ADVICE
DBA_CAPTURE
DBA_CAPTURE_EXTRA_ATTRIBUTES
DBA_CAPTURE_PARAMETERS
DBA_CAPTURE_PREPARED_DATABASE
DBA_CAPTURE_PREPARED_SCHEMAS
DBA_CAPTURE_PREPARED_TABLES
DBA_APPLY
DBA_APPLY_CONFLICT_COLUMNS
DBA_APPLY_DML_HANDLERS
DBA_APPLY_ENQUEUE
DBA_APPLY_ERROR
DBA_APPLY_EXECUTE
DBA_APPLY_INSTANTIATED_GLOBAL
DBA_APPLY_INSTANTIATED_OBJECTS
DBA_APPLY_INSTANTIATED_SCHEMAS
DBA_APPLY_KEY_COLUMNS
DBA_APPLY_OBJECT_DEPENDENCIES
DBA_APPLY_PARAMETERS
DBA_APPLY_PROGRESS
DBA_APPLY_SPILL_TXN
DBA_APPLY_TABLE_COLUMNS
DBA_APPLY_VALUE_DEPENDENCIES
STRMMON is a monitoring tool focused on Oracle Streams. Using this tool, database administrators get a quick overview of the Streams activity occurring within a database. In a single-line display, strmmon reports Streams activity information; the reporting interval and number of iterations to display are configurable. STRMMON is available in the rdbms/demo directory in $ORACLE_HOME. The most recent version of the tool is available from Document 290605.1 Oracle Streams STRMMON Monitoring Utility.
Streams capture and apply processes report long-running and long
transactions in the alert log.
Long-running transactions are open transactions with no activity (i.e., no new change records, rollback, or commit) for an extended period (20 minutes). Large transactions are open transactions with a large number of change records. The alert log will report that a long-running or large transaction has been seen every 20 minutes. Not all such transactions will be reported - only one per 10-minute period. When the commit or rollback is received, this fact will be reported in the alert log as well.
The Streams health check script is a collection of queries to
determine the configuration of the streams environment. This script should be
run at each participating database in a streams configuration. In addition to
configuration information, analysis of the rules specified for streams is
included to enable quicker diagnosis of problems. A guide to interpreting the
output is provided. The healthcheck script is an invaluable tool for diagnosing customer issues. The Streams Healthcheck script is available from Document 273674.1 Streams Configuration Report and Health Check Script.
To browse the complete list of published Streams articles, refer to Knowledge > Browse, then select Oracle Technology -> Database -> Information Integration -> Streams.
To learn about Oracle University offerings related to Oracle Streams, refer to Document 762188.1 Oracle University Offerings Related to Oracle Streams.
NOTE:265201.1 - Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or ORA-26786
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:789445.1 - Master Note for Streams Setup Scripts
NOTE:1264598.1 - Master Note for Streams Downstream Capture - 10g and 11g [Video]
NOTE:313279.1 - Master Note for Troubleshooting Streams capture 'WAITING For REDO' or INITIALIZING
NOTE:779801.1 - Streams Conflict Resolution
NOTE:290605.1 - Oracle Streams STRMMON Monitoring Utility
NOTE:730036.1 - Overview for Troubleshooting Streams Performance Issues
NOTE:437838.1 - Streams Specific Patches
NOTE:273674.1 - Streams Configuration Report and Health Check Script
NOTE:259609.1 - Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk
NOTE:365648.1 - Explain TXN_LCR_SPILL_THRESHOLD in Oracle10GR2 Streams
NOTE:782541.1 - Streams Replication Supplemental Logging Requirements
NOTE:313478.1 - Performing Manual DDL in a Streams Environment
NOTE:461278.1 - Example of a Streams Heartbeat Table