Friday, October 21, 2011

BPEL DB Adapter having issues while writing data into Microsoft SQL Server 2008


Problem:    BPEL DB Adapter having issues while writing data into Microsoft SQL Server 2008
We have a BPEL DB adapter that writes data into a Microsoft SQL Server 2008 database which was recently migrated.
Since the migration it has become common practice to restart the BPEL OC4J container very frequently. This was not the case before the SQL Server 2008 migration.


Log files:
09/10/27 01:23:56 pFaultMessage is org.collaxa.thirdparty.apache.wsif.base.WSIFDefaultMessage@53a422ee name:DPPException
09/10/27 01:23:56 QName string is DPPException
<2009-10-27 05:45:06,826> <ERROR> <declaration.collaxa.cube.engine.data> <BaseCubeInstancePersistenceAdaptor::store>
java.sql.SQLException: Closed Connection
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:138)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:175)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:240)
at oracle.jdbc.driver.PhysicalConnection.prepareStatement(PhysicalConnection.java:931)
at oracle.jdbc.driver.PhysicalConnection.prepareStatement(PhysicalConnection.java:827)
at oracle.jdbc.OracleConnectionWrapper.prepareStatement(OracleConnectionWrapper.java:73)
at oracle_jdbc_driver_LogicalConnection_Proxy.prepareStatement()
at com.collaxa.cube.engine.adaptors.common.BaseCubeInstancePersistenceAdaptor.store(BaseCubeInstancePersistenceAdaptor.java:598)
at com.collaxa.cube.engine.adaptors.oracle.CubeInstancePersistenceAdaptor.store(CubeInstancePersistenceAdaptor.java:85)
at com.collaxa.cube.engine.data.CubeInstancePersistenceMgr.store(CubeInstancePersistenceMgr.java:367)
at com.collaxa.cube.engine.CubeEngine.store(CubeEngine.java:5542)
at com.collaxa.cube.engine.CubeEngine.endRequest(CubeEngine.java:5799)
at com.collaxa.cube.engine.CubeEngine.createAndInvoke(CubeEngine.java:1088)
at com.collaxa.cube.engine.ejb.impl.CubeEngineBean.__createAndInvoke(CubeEngineBean.java:125)
at com.collaxa.cube.engine.ejb.impl.CubeEngineBean.createAndInvoke(CubeEngineBean.java:168)
at com.collaxa.cube.engine.ejb.impl.CubeEngineBean.syncCreateAndInvoke(CubeEngineBean.java:188)
at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke(EJBJoinPointImpl.java:35)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.JAASInterceptor$1.run(JAASInterceptor.java:31)
at com.evermind.server.ThreadState.runAs(ThreadState.java:693)
at com.evermind.server.ejb.interceptor.system.JAASInterceptor.invoke(JAASInterceptor.java:34)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.TxRequiresNewInterceptor.invoke(TxRequiresNewInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.InvocationContextPool.invoke(InvocationContextPool.java:55)
at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(StatelessSessionEJBObject.java:87)
at CubeEngineBean_LocalProxy_4bin6i8.syncCreateAndInvoke(Unknown Source)
at com.collaxa.cube.engine.delivery.DeliveryHandler.initialRequestAnyType(DeliveryHandler.java:549)
at com.collaxa.cube.engine.delivery.DeliveryHandler.initialRequest(DeliveryHandler.java:465)
at com.collaxa.cube.engine.delivery.DeliveryHandler.request(DeliveryHandler.java:134)
at com.collaxa.cube.ejb.impl.DeliveryBean.request(DeliveryBean.java:95)
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke(EJBJoinPointImpl.java:35)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.JAASInterceptor$1.run(JAASInterceptor.java:31)
at com.evermind.server.ThreadState.runAs(ThreadState.java:693)
at com.evermind.server.ejb.interceptor.system.JAASInterceptor.invoke(JAASInterceptor.java:34)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.TxRequiredInterceptor.invoke(TxRequiredInterceptor.java:50)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.InvocationContextPool.invoke(InvocationContextPool.java:55)
at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(StatelessSessionEJBObject.java:87)
at DeliveryBean_RemoteProxy_4bin6i8.request(Unknown Source)

Solution:


The JAR you use, jTDS 1.2.2, looks to be certified only up to SQL Server 2005.

Looking at the official jTDS site, I cannot find any reference to SQL Server 2008. The feature page says that only SQL Server 2000 and 2005 are supported:

http://jtds.sourceforge.net/features.html

Compliance
SQL Server 2005, 2000, 7.0, 6.5 compliant: Yes
Sybase 10, 11, 12, 15 compliant: Yes (not all features available for Sybase)
JDK 1.4, 1.3 and 1.2 compliant: Yes
Sun Certified 100% Pure Java: Yes

That is the reason for the errors you are getting.

The recommended drivers for MS SQL are listed on 
http://download.oracle.com/docs/cd/B31017_01/integrate.1013/b28994/adptr_db.htm#CHDCAEGH
in section
"4.6.1 Using a Microsoft SQL Server" 

1) The DataDirect driver
http://www.oracle.com/technology/tech/java/oc4j/htdocs/datadirect-jdbc-certification.html
cannot be used, as it too is certified only up to SQL Server 2005.

2) The only drivers for SQL Server 2008 that I could find are on the Microsoft download page
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=99b21b65-e98f-4a61-b811-19912601fdc9
so you can try using these.
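
If you switch to the Microsoft driver, the BPEL DB adapter's underlying OC4J data source must reference the new driver class. Below is a minimal sketch of a data-sources.xml entry, assuming the Microsoft JDBC driver JAR (sqljdbc.jar) has been copied onto the container's classpath; the pool, host, database, credentials and JNDI names are illustrative only:

<data-sources>
  <!-- connection pool using the Microsoft SQL Server JDBC driver -->
  <connection-pool name="SQLServer2008Pool">
    <connection-factory factory-class="com.microsoft.sqlserver.jdbc.SQLServerDriver"
        user="dbuser" password="dbpassword"
        url="jdbc:sqlserver://sqlhost:1433;databaseName=mydb"/>
  </connection-pool>
  <managed-data-source name="SQLServer2008DS"
      connection-pool-name="SQLServer2008Pool"
      jndi-name="jdbc/SQLServer2008DS"/>
</data-sources>

Restart the OC4J container after the change so the adapter picks up the new pool.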



Proxy Generation Issue / Duplication of XSDs in OWSM: “error: 'UNBType' is already defined”


Problem:    Proxy Generation Issue / Duplication of XSDs in OWSM: “error: 'UNBType' is already defined”
When the client tries to create the Java source required
for the interface described by an OWSM proxy URL with a
registered WSDL, the following error message is displayed.
On 10.1.3.4 in Production:
When attempting to generate web services proxy from WSDL URL secured by OWSM10.1.3.4, the following error occurs:

ERROR
-----------------------
Error while using wsimport tool:
error: 'UNBType' is already defined
line 63 of http://m2b2btest.dubaitrade.ae/gateway/services/SID0003037/N21716673/SID0003037N947906590/OwsmImport.xsd
error: (related to above error) the first definition appears here
line 4 of http://m2b2btest.dubaitrade.ae/gateway/services/SID0003037/N21716673/SID0003037/P2059444629/SID0003037/P91224747/SID0003037/N947906590/OwsmImport.xsd
error: 'UNZType' is already defined
line 93 of http://m2b2btest.dubaitrade.ae/gateway/services/SID0003037/N21716673/SID0003037/N947906590/OwsmImport.xsd
error: (related to above error) the first definition appears here
line 63 of http://m2b2btest.dubaitrade.ae/gateway/services/SID0003037/N21716673/SID0003037/P2059444629/SID0003037/P91224747/SID0003037/N947906590/OwsmImport.xsd


 
Log files:

We found the below error in the log file "gateway.log":
2009-10-05 09:04:42,318 FINEST [Thread-119] configuration.PolicySetWatchdog - Checking Policy Manager
2009-10-05 09:04:42,589 WARNING [Thread-119] configuration.PolicySetWatchdog - Failed to retrieve policy set from policy manager with url http://m2b2btest.dubaitrade.ae/policymanager/services/RegistrationService: com.cfluent.policymanager.sdk.base.exception.ServerException: (404)Not Found
2009-10-05 09:04:42,596 FINEST [Thread-119] configuration.PolicySetWatchdog - Failed to retrieve policy set from policy manager with url http://m2b2btest.dubaitrade.ae/policymanager/services/RegistrationService
com.cfluent.policymanager.sdk.base.exception.ServerException: (404)Not Found
at com.cfluent.policymanager.sdk.client.soap.SoapComponentConfigurator.getPolicies(SoapComponentConfigurator.java:185)
at com.cfluent.agent.configuration.PolicySetWatchdog.getPolicySetFromPolicyManager(PolicySetWatchdog.java:166)
at com.cfluent.agent.configuration.PolicySetWatchdog.pollFromPolicyManager(PolicySetWatchdog.java:205)
at com.cfluent.agent.configuration.PolicySetWatchdog.run(PolicySetWatchdog.java:77)


We found the below error in the log file "coreman.log":
2009-10-05 08:48:03,136 WARNING [shutdownThread] util.CoremanInit - Taking Coreman down
2009-10-05 08:48:03,151 WARNING [shutdownThread] util.CoremanInit - Stopping threads for scheduled tasks (This can take up to 5 seconds)
2009-10-05 08:50:22,625 INFO [OC4J Launcher] mstore.JDBCMStore - Creating table: MEASUREMENT_AGGSTORE
2009-10-05 08:50:24,517 INFO [OC4J Launcher] rmi.RmiServer - Starting Confluent RMI Name Server
2009-10-05 08:50:26,130 INFO [OC4J Launcher] util.InitServlet - Confluent Monitor is initialized
2009-10-05 09:00:15,940 WARNING [DeployerRunnable Thread for coreman] util.CoremanInit - Taking Coreman down
2009-10-05 09:00:15,947 WARNING [DeployerRunnable Thread for coreman] util.CoremanInit - Stopping threads for scheduled tasks (This can take up to 5 seconds)
2009-10-05 09:00:21,101 WARNING [DeployerRunnable Thread for coreman] persistor.PersistTask - Could not start persist task : com.cfluent.coreman.measurement.persistor.PersistException: Could not schedule persist task
2009-10-05 09:00:21,109 SEVERE [DeployerRunnable Thread for coreman] processing.MStoreProcessing - Could not schedule store processing task : java.lang.IllegalStateException: Timer already cancelled.
2009-10-05 09:00:21,205 WARNING [DeployerRunnable Thread for coreman] persistor.PersistTask - Could not start persist task : com.cfluent.coreman.measurement.persistor.PersistException: Could not schedule persist task
2009-10-05 09:00:21,207 SEVERE [DeployerRunnable Thread for coreman] processing.MStoreProcessing - Could not schedule store processing task : java.lang.IllegalStateException: Timer already cancelled.
2009-10-05 09:00:21,208 SEVERE [DeployerRunnable Thread for coreman] mstore.MStoreConnectionPoolManager - Could not schedule cleaning task : java.lang.IllegalStateException: Timer already cancelled.
2009-10-05 09:00:21,221 SEVERE [DeployerRunnable Thread for coreman] mstore.MStoreConnectionPoolManager - Could not schedule cleaning task : java.lang.IllegalStateException: Timer already cancelled.
2009-10-05 09:00:21,229 SEVERE [DeployerRunnable Thread for coreman] processing.MStoreProcessing - Could not schedule store processing task : java.lang.IllegalStateException: Timer already cancelled.
2009-10-05 09:00:21,230 SEVERE [DeployerRunnable Thread for coreman] mstore.MStoreConnectionPoolManager - Could not schedule cleaning task : java.lang.IllegalStateException: Timer already cancelled.
2009-10-05 09:00:21,906 SEVERE [DeployerRunnable Thread for coreman] util.CoremanInit - Could not start main processing unit : com.cfluent.coreman.filter.FilterException: Could not schedule filter task
com.cfluent.coreman.filter.FilterException: Could not schedule filter task
at com.cfluent.coreman.filter.FilterAgent.schedule_task(FilterAgent.java:88)
at com.cfluent.coreman.filter.FilterAgent.<init>(FilterAgent.java:73)
at com.cfluent.coreman.filter.FilterEngine.addFilterAgent(FilterEngine.java:128)
at com.cfluent.coreman.util.CoremanInit.create_main_filter(CoremanInit.java:614)
at com.cfluent.coreman.util.CoremanInit.create_processing_units(CoremanInit.java:584)
at com.cfluent.coreman.util.CoremanInit.coreman_start(CoremanInit.java:226)
at com.cfluent.coreman.util.InitServlet.handleInit(InitServlet.java:54)
at com.cfluent.common.servlet.BaseServlet.init(BaseServlet.java:39)
at javax.servlet.GenericServlet.init(GenericServlet.java:256)
at com.cfluent.common.servlet.BaseServlet.init(BaseServlet.java:72)
target exception:

java.lang.IllegalStateException: Timer already cancelled.
at java.util.Timer.sched(Timer.java:354)
at java.util.Timer.scheduleAtFixedRate(Timer.java:333)
at com.cfluent.ccore.util.scheduler.TaskScheduler.schedule(TaskScheduler.java:138)
at com.cfluent.coreman.filter.FilterAgent.schedule_task(FilterAgent.java:86)
at com.cfluent.coreman.filter.FilterAgent.<init>(FilterAgent.java:73)
at com.cfluent.coreman.filter.FilterEngine.addFilterAgent(FilterEngine.java:128)
at com.cfluent.coreman.util.CoremanInit.create_main_filter(CoremanInit.java:614)
at com.cfluent.coreman.util.CoremanInit.create_processing_units(CoremanInit.java:584)
at com.cfluent.coreman.util.CoremanInit.coreman_start(CoremanInit.java:226)
at com.cfluent.coreman.util.InitServlet.handleInit(InitServlet.java:54)
at com.cfluent.common.servlet.BaseServlet.init(BaseServlet.java:39)
at javax.servlet.GenericServlet.init(GenericServlet.java:256)
at com.cfluent.common.servlet.BaseServlet.init(BaseServlet.java:72)

2009-10-05 09:00:21,908 SEVERE [DeployerRunnable Thread for coreman] pu.ProcessingUnit - Could not create processing unit : com.cfluent.coreman.util.pu.ProcessingUnitException: Could not schedule alarm_pu task
com.cfluent.coreman.util.pu.ProcessingUnitException: Could not schedule alarm_pu task
at com.cfluent.coreman.util.pu.ProcessingTask.schedule_task(ProcessingTask.java:85)
at com.cfluent.coreman.util.pu.ProcessingTask.<init>(ProcessingTask.java:72)
at com.cfluent.coreman.util.pu.ProcessingUnit.<init>(ProcessingUnit.java:36)
at com.cfluent.coreman.util.CoremanInit.create_processing_units(CoremanInit.java:588)
at com.cfluent.coreman.util.CoremanInit.coreman_start(CoremanInit.java:226)
at com.cfluent.coreman.util.InitServlet.handleInit(InitServlet.java:54)
at com.cfluent.common.servlet.BaseServlet.init(BaseServlet.java:39)
at javax.servlet.GenericServlet.init(GenericServlet.java:256)
at com.cfluent.common.servlet.BaseServlet.init(BaseServlet.java:72)
at com.cfluent.common.servlet.BaseInitServlet.init(BaseInitServlet.java:30)
at com.evermind.server.http.HttpApplication.loadServlet(HttpApplication.java:2379)
2009-10-05 09:00:21,910 SEVERE [DeployerRunnable Thread for coreman] pu.ProcessingUnit - Could not create processing unit : com.cfluent.coreman.util.pu.ProcessingUnitException: Could not schedule notification task
com.cfluent.coreman.util.pu.ProcessingUnitException: Could not schedule notification task
at com.cfluent.coreman.util.pu.ProcessingTask.schedule_task(ProcessingTask.java:85)
at com.cfluent.coreman.util.pu.ProcessingTask.<init>(ProcessingTask.java:72)
at com.cfluent.coreman.util.pu.ProcessingUnit.<init>(ProcessingUnit.java:36)
at com.cfluent.coreman.util.CoremanInit.create_processing_units(CoremanInit.java:588)
at com.cfluent.coreman.util.CoremanInit.coreman_start(CoremanInit.java:226)
at com.cfluent.coreman.util.InitServlet.handleInit(InitServlet.java:54)
at com.cfluent.common.servlet.BaseServlet.init(BaseServlet.java:39)
at javax.servlet.GenericServlet.init(GenericServlet.java:256)


Error while using wsimport tool:
error: 'UNBType' is already defined
line 63 of http://m2b2btest/gateway/services/SID0003037/N21716673/SID0003037N947906590/OwsmImport.xsd
error: (related to above error) the first definition appears here
line 4 of http://m2b2btest/gateway/services/SID0003037/N21716673/SID0003037/P2059444629/SID0003037/P91224747/SID0003037/N947906590/OwsmImport.xsd
error: 'UNZType' is already defined
line 93 of http://m2b2btest/gateway/services/SID0003037/N21716673/SID0003037/N947906590/OwsmImport.xsd
error: (related to above error) the first definition appears here
line 63 of http://m2b2btest/gateway/services/SID0003037/N21716673/SID0003037/P2059444629/SID0003037/P91224747/SID0003037/N947906590/OwsmImport.xsd
error: 'UNHType' is already defined
line 124 of http://m2b2btest/gateway/services/SID0003037/N21716673/SID0003037/N947906590/OwsmImport.xsd
error: (related to above error) the first definition appears here
line 93 of http://m2b2btest/gateway/services/SID0003037/N21716673/SID0003037/P2059444629/SID0003037/P91224747/SID0003037/N947906590/OwsmImport.xsd
error: 'UTHType' is already defined
line 168 of http://m2b2btest/gateway/services/SID0003037/N21716673/SID0003037/N947906590/OwsmImport.xsd
error: (related to above error) the first definition appears here
line 124 of http://m2b2btest/gateway/services/SID0003037/N21716673/SID0003037/P2059444629/SID0003037/P91224747/SID0003037/N947906590/OwsmImport.xsd
error: 'ExceptionType' is already defined
line 174 of http://m2b2btest/gateway/services/SID0003037/N21716673/SID0003037/N94790/OwsmImport.xsd

During analysis, comparing the WSDL generated from the original XSDs
against the newly generated OWSM XSDs and WSDL, we noticed the error was
occurring because the same XSD is duplicated and available at different locations (only in the generated WSDL).


Solution:


Upgrade to OWSM 10.1.3.5





Setup.product.install, Utility, err, unable to launch: "ksh export LD_LIBRARY_PATH=, error code: "255" (SBL-STJ-00152) in Siebel


Problem:    Setup.product.install, Utility, err, unable to launch: "ksh export LD_LIBRARY_PATH=, error code: "255" (SBL-STJ-00152) in Siebel
Issue installing the Siebel Web Server with OHS on Linux
Log files:
While configuring the Web Server we get an error (please refer to sw_cfg_util.log) and setup exits with an error message:



GenericLog GenericError 1 000000034b555a56:0 2010-01-19 14:17:52 Executing step: RestartWebServer

GenericLog GenericError 1 000000034b555a56:0 2010-01-19 14:17:52 Executing step: ShutdownApacheServer

GenericLog GenericError 1 000000034b555a56:0 2010-01-19 14:17:52 (ossystem.cpp (96) err=255 sys=0) SBL-GEN-00255: Internal: Error during exec()

GenericLog GenericError 1 000000034b555a56:0 2010-01-19 14:17:52 Step ShutdownApacheServer: failed to run program %%WebServerInstance%%%%OSDirSeparator%%bin%%OSDirSeparator%%apachectl with cmdline stop

GenericLog GenericError 1 000000034b555a56:0 2010-01-19 14:17:52 Failed during Execution, err: 255



Solution:
This error message, SBL-STJ-00152, is known to occur when you launch the Siebel installer in console mode without first modifying the siebel.ini file to prevent the installer from launching the configuration wizard automatically.
In order to run the installation in console mode, the installation must be performed in two steps (installer and configuration). This is achieved by editing the siebel.ini file to include the following changes:

Siebel Enterprise Server (where Siebel Management Agent is installed with Siebel Server)
[RunAfter.Unix.AIX]
ConfigGateway.AIX = no
ConfigServer.AIX = no
ConfigAgent.AIX = no

Siebel Web Server Extension
[RunAfter.Unix.AIX]
Config.SWSE.AIX = no

With the above changes the installer will not try to launch the configuration wizards and will finish correctly.

In cases where the installer has already finished its processing, you can simply launch the configuration wizards manually in order to configure your Enterprise.

Please check the following section for the configuration steps:

Siebel Installation Guide for UNIX > Configuring Siebel Enterprise Server and Related Components > Launching the Siebel Configuration Wizard.





In a Siebel environment, when starting the application all jobs go into Queued state; new jobs stuck in Queued status. SBL-SMI-00033


Problem:
In a Siebel environment, when starting the application all jobs go into Queued state; new jobs are stuck in Queued status (SBL-SMI-00033).


Solution:


One of the SRProc tasks is exiting with the error SBL-SMI-00033. See the task in the 'list tasks for comp SRProc' output:

SRProc 5242887 15260 Exited with error Interactive 2010-01-26 10:42:31 2010-01-26 10:43:31 SBL-SMI-00033: The client exited without closing the SISNAPI co SystemAux 3 Normal


This is a known defect in Siebel version 8.0.0.x on the HP-UX platform, described in the following My Oracle Support document:

Service Request Processor is exiting with SBL-SMI-00033 error (Doc ID 780307.1)


As a workaround, you will need to restart the SRProc component.

To fix this defect, please apply the latest Fix Pack, which is 8.0.0.8 [20430] released on October 30th last year.

Work around:
1. Unassign the SRProc component, synchronize, and restart the environment.
2. Assign the SRProc component, synchronize, and restart the environment.
3. Change the password for the SRProc component from srvrmgr and restart the environment.
4. Change Max MTS = 2 and Min MTS = 2 for the SRProc component, synchronize, and restart the environment.
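
For reference, the restart workaround can also be driven from the Server Manager command line; a minimal sketch (component alias assumed to be SRProc):

srvrmgr> shutdown comp SRProc
srvrmgr> startup comp SRProc
srvrmgr> list tasks for comp SRProc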





Using Bonded Network Device Can Cause OCFS2 to Detect Network Outage


Problem:    Using Bonded Network Device Can Cause OCFS2 to Detect Network Outage
During a network interface card failure test, after removing the NIC cable from one node, the second node restarts automatically after 1 or 2 minutes.

Symptoms:
10.2.0.4 with ASM / CRS / Database / OCFS2 / ASMLib


Here is the full test carried out and the problem faced.

1. We have two NIC cards: one onboard and one add-on network card.
2. Bonding has been configured across these two cards for the public IP.
3. If I remove the cable from the add-on card, everything works fine: CRS understands the change and switches to the other network card.
4. The next stage is to remove the cable from the onboard network card. If I do so, RAC understands that the cable has been removed and starts communicating using the other network card. But after some time, say 2 to 3 minutes, voting disk access hangs with an error, and the cluster panics and restarts both servers.

The heartbeat cable is on the add-on card.


Log files:
ocssd.log (s1gz0ss016)
================
[ CSSD]CLSS-3000: reconfiguration successful, incarnation 12 with 2 nodes

[ CSSD]CLSS-3001: local node number 2, master node number 1

[ CSSD]2010-01-25 15:23:39.930 [1287670080] >TRACE: clssgmReconfigThread: completed for reconfig(12), with status(1)
[ CSSD]2010-01-25 15:23:40.015 [1245710656] >TRACE: clssgmCommonAddMember: clsomon joined (2/0x1000000/#CSS_CLSSOMON)
[ CSSD]2010-01-25 15:37:19.190 [1245710656] >TRACE: clscsendx: (0x2aaaac089bb0) Connection not active

[ CSSD]2010-01-25 15:37:19.190 [1245710656] >TRACE: clssgmSendClient: Send failed rc 6, con (0x2aaaac089bb0), client (0x2aaaac089eb0), proc ((nil))
[ CSSD]2010-01-25 15:43:52.310 >USER: Copyright 2010, Oracle version 10.2.0.4.0
[ clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=s1gz0ss016DBG_CSSD))
[ CSSD]2010-01-25 15:43:52.310 >USER: CSS daemon log for node s1gz0ss016, number 2, in cluster crs
……………………
[ CSSD]2010-01-25 15:46:38.827 [1250961728] >WARNING: clssnmPollingThread: node s1gz0ss015 (1) at 50 6.875823e-316artbeat fatal, eviction in 29.760 seconds
…………………..
[ CSSD]2010-01-25 15:47:07.885 [1250961728] >WARNING: clssnmPollingThread: node s1gz0ss015 (1) at 90 6.875823e-316artbeat fatal, eviction in 0.710 seconds
[ CSSD]2010-01-25 15:47:08.597 [1250961728] >TRACE: clssnmPollingThread: Eviction started for node s1gz0ss015 (1), flags 0x0001, state 3, wt4c 0


ocssd.log (s1gz0ss015)
================
[ CSSD]CLSS-3001: local node number 1, master node number 1

[ CSSD]2010-01-25 15:43:20.545 [1279928640] >TRACE: clssgmReconfigThread: completed for reconfig(14), with status(1)
[ CSSD]2010-01-25 15:51:54.617 >USER: Copyright 2010, Oracle version 10.2.0.4.0
[ clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=s1gz0ss015DBG_CSSD))
……………….
[ CSSD]2010-01-25 16:25:21.074 [1248102720] >WARNING: clssnmPollingThread: node s1gz0ss016 (2) at 50 2.312183e-315artbeat fatal, eviction in 29.450 seconds
[ CSSD]2010-01-25 16:25:21.074 [1248102720] >TRACE: clssnmPollingThread: node s1gz0ss016 (2) is impending reconfig, flag 1037, misstime 30550
[ CSSD]2010-01-25 16:25:21.074 [1248102720] >TRACE: clssnmPollingThread: diskTimeout set to (57000)ms impending reconfig status(1)
[ CSSD]2010-01-25 16:25:21.764 [1185163584] >TRACE: clssnmDiskPMT: stale disk (84550 ms) (0//u02/oradata/vote/vote_data1.dbf)
[ CSSD]2010-01-25 16:25:21.764 [1185163584] >TRACE: clssnmDiskPMT: stale disk (84550 ms) (1//u03/oradata/vote/vote_data2.dbf)
[ CSSD]2010-01-25 16:25:21.764 [1185163584] >TRACE: clssnmDiskPMT: stale disk (84550 ms) (2//u03/oradata/vote1/vote_data3.dbf)
[ CSSD]2010-01-25 16:25:21.764 [1185163584] >ERROR: clssnmDiskPMT: Aborting, 3 of 3 voting disks unavailable
[ CSSD]2010-01-25 16:25:21.765 [1185163584] >ERROR: ###################################
[ CSSD]2010-01-25 16:25:21.765 [1185163584] >ERROR: clssscExit: CSSD aborting
[ CSSD]2010-01-25 16:25:21.765 [1185163584] >ERROR: ###################################

OS system logs (s1gz0ss015)
====================

Jan 25 16:23:15 S1GZ0SS015 kernel: bnx2: eth0 NIC Copper Link is Down
Jan 25 16:23:16 S1GZ0SS015 kernel: bonding: bond0: link status definitely down for interface eth0, disabling it
Jan 25 16:23:16 S1GZ0SS015 kernel: bonding: bond0: now running without any active interface !
Jan 25 16:23:19 S1GZ0SS015 snmpd[7769]: Connection from UDP: [127.0.0.1]:57543 
Jan 25 16:23:19 S1GZ0SS015 snmpd[7769]: Received SNMP packet(s) from UDP: [127.0.0.1]:57543 
Jan 25 16:23:34 S1GZ0SS015 snmpd[7769]: Connection from UDP: [127.0.0.1]:36167 
Jan 25 16:23:34 S1GZ0SS015 snmpd[7769]: Received SNMP packet(s) from UDP: [127.0.0.1]:36167 
Jan 25 16:23:45 S1GZ0SS015 kernel: o2net: connection to node S1GZ0SS016 (num 1) at 10.254.55.23:7777 has been idle for 30.0 seconds, shutting it down.
Jan 25 16:23:45 S1GZ0SS015 kernel: (0,2):o2net_idle_timer:1503 here are some times that might help debug the situation: (tmr 1264422195.99914 now 1264422225.99378 dr 1264422195.99905 adv 1264422195.99925:1264422195.99926 func (d24f33b7:505) 1264422143.100124:1264422143.100126)
Jan 25 16:23:45 S1GZ0SS015 kernel: o2net: no longer connected to node S1GZ0SS016 (num 1) at 10.254.55.23:7777
…………………
…………………
Jan 25 16:25:50 S1GZ0SS015 kernel: (19284,5):dlm_wait_for_node_death:370 8C697813DF044440B4D31E99E57E44EF: waiting 5000ms for notification of death of node 1
Jan 25 16:25:52 S1GZ0SS015 snmpd[7769]: Connection from UDP: [127.0.0.1]:38343 
Jan 25 16:25:52 S1GZ0SS015 snmpd[7769]: Received SNMP packet(s) from UDP: [127.0.0.1]:38343 
Jan 25 16:25:54 S1GZ0SS015 kernel: (11141,2):ocfs2_dlm_eviction_cb:98 device (8,129): dlm has evicted node 1
Jan 25 16:25:54 S1GZ0SS015 kernel: (11141,2):ocfs2_dlm_eviction_cb:98 device (8,209): dlm has evicted node 1

Solution:


The problem is with network bonding failover; the OCFS2 cluster timeouts need to be increased to tolerate the failover window:
1.       Change the private IP to the public (bonded) IP in OCFS2.
2.       Specify heartbeat dead threshold (>=7) [61]: 61
3.       Specify network idle timeout in ms (>=5000) [10000]: 60000
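
These answers are given to the cluster stack configuration script and end up in /etc/sysconfig/o2cb; a sketch of the resulting settings, assuming OCFS2 1.2.5 or later where these variables exist:

# run /etc/init.d/o2cb configure and answer the prompts as in steps 2 and 3
O2CB_HEARTBEAT_THRESHOLD=61    # heartbeat dead threshold
O2CB_IDLE_TIMEOUT_MS=60000     # network idle timeout in ms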

References:

Note 423183.1 - Using Bonded Network Device Can Cause OCFS2 to Detect Network Outage



SEVERE: Error initializing endpoint java.net.BindException: Address already in use: JVM_Bind:8080 Siebel error message (SBL-CMS-00403)


Problem:    


SEVERE: Error initializing endpoint java.net.BindException: Address already in use: JVM_Bind:8080, Siebel error message (SBL-CMS-00403)
When clicking on startconsole.bat, it throws a Java bind exception error. Management Server logs are attached.

SEVERE: Error initializing endpoint
java.net.BindException: Address already in use: JVM_Bind:8080


Symptoms:

Installed Siebel Management Server 8.0 and 8.0.0.8 on Windows 2003 Server, and installed Siebel Management Agent 8.0 and 8.0.0.8 on HP-UX Itanium.

Solution:



You should use a JRE version 1.5 or higher and set it on the classpath; a sketch follows.
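
A sketch of how that might look on the Windows 2003 host before invoking startconsole.bat; the JDK path is illustrative and should be adjusted to your 1.5 (or later) installation:

rem point the console at a 1.5+ JRE before starting it
set JAVA_HOME=C:\jdk1.5.0
set PATH=%JAVA_HOME%\bin;%PATH%
set CLASSPATH=%JAVA_HOME%\lib\tools.jar;%CLASSPATH%
startconsole.bat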
References:
Doc ID 751694.1 Diagnostic tool is not showing a single component and not working at all
Doc ID 528859.1 Management Agent is not starting on Siebel 8.0
Doc ID 504373.1 Diagnostic Tool Error SBL-CMS-00403, which addresses the same Siebel error for the Siebel Management Server Diagnostic Tool




Package CS_PARTYMERGE_PKG Is Invalid After Applying Patch 8204291


Problem:    Package CS_PARTYMERGE_PKG Is Invalid After Applying Patch 8204291

Symptoms:

Note.842007.1 content
-------------------------------
Symptoms
Package CS_PARTYMERGE_PKG becomes invalid after applying <<Patch 8204291>> - 11.5.10 CS Feb'09.   

Manually re-compiling the package, using the command "alter package apps.CS_PARTYMERGE_PKG compile body;" ... returned the following errors.

PLS-00201: identifier 'CS_PARTYMERGE_PKG' must be declared 
PLS-00304: cannot compile body of 'CS_PARTYMERGE_PKG' without its specification


Cause
Same issue was reported in <<Note 738740.1>> re: no specification for CS_PARTYMERGE_PKG package body.  The file 'cssrpmns.pls' is missing from the patch.

Internal Bug.8449015 - 11.5.10.7.0 CS.INV.1 INVALID OBJECTS FOUND BY AUTOBUILD addresses this issue.


Solution
The missing package is available as part of (Dev controlled) Patch 4865965.

Analysis
----------------
Having reviewed both Notes 842007.1 and 738740.1, I find the following:

The file 'cssrpmns.pls', which is the CS_PARTYMERGE_PKG package specification, is missing from the monthly
service patches in
Patch 8204291 11.5.10: BUG FIXES FOR CS: FEB 09 PATCH
Patch 7309926 11.5.10: BUG FIXES FOR CS: JUL 08 PATCH
and in the one that this customer applied, which is Patch 7581246 11.5.10: BUG FIXES FOR CS: DEC 08 PATCH.

In internal Bug 8449015 the solution given is to apply the control Patch 4865965, which includes cssrpmns.pls to create the package specification and cssrpmnb.pls to create the package body.

The same solution is given in Notes 842007.1 and 738740.1.

Reviewing control Patch 4865965 for its suitability for this customer's 11.5.10 environment:

It sets out that this patch might not be suitable for 11.5.10, as it was built for 11.5.10.1, and that Development advice should be obtained first. It also shows that control Patch 4865965 was actually built to resolve a performance issue in the party merge program.

Solution:



The package specification file cssrpmns.pls was missing from Patch 7581246.

Either apply Patch 6035651 - APPSPERF: CS: USE THE PRIMARY KEY PARAMETER TO PERFORM PARTY MERGE,

or, as a simple alternative, Support can download the file from ARU and email it to the customer. Either way, the package can then be revalidated as sketched below.
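
Once cssrpmns.pls has been obtained, it can be loaded and the package revalidated from SQL*Plus; a sketch (the APPS password is a placeholder):

sqlplus apps/<apps_pwd> @cssrpmns.pls
sqlplus apps/<apps_pwd> @cssrpmnb.pls

alter package apps.CS_PARTYMERGE_PKG compile;
alter package apps.CS_PARTYMERGE_PKG compile body;

select object_name, object_type, status
from dba_objects
where object_name = 'CS_PARTYMERGE_PKG';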


References:

Note.842007.1 Package CS_PARTYMERGE_PKG Is Invalid After Applying Patch 8204291
Note.738740.1 PLS-00201 and PLS-00304 Errors when Compile Package Body CS_PARTYMERGE_PKG



ORA-00600: internal error code, arguments: [kope2upic2954], ORA-07445 core dump


Problem:    


ORA-00600: internal error code, arguments: [kope2upic2954], ORA-07445 core dump
1) ORA-00600 [1433], [60] followed by instance crash

2) ORA-00600 [kope2upic2954] (most probably related to Note 846297.1) 


Symptoms:

ORA-00600 internal error code, arguments: [1433], [60]

ORA-07445 exception encountered core dump [kslgetl()+352]

These errors occur once we enable the AQ schedule.

Log files:

Alert Log
-------------


Wed Sep 2 11:52:51 2009
Errors in file /bdump/sid_arc3_20547.trc:
ORA-00600: internal error code, arguments: [1433], [60], [], [], [], [], [], []
. . . 
Wed Sep 2 11:52:55 2009
Errors in file /bdump/sid_dbw0_20515.trc:
ORA-00600: internal error code, arguments: [1433], [60], [], [], [], [], [], []
Wed Sep 2 11:52:55 2009
DBW0: terminating instance due to error 471
Wed Sep 2 11:52:55 2009
Errors in file /bdump/sid_j001_20555.trc:
ORA-00600: internal error code, arguments: [504], [0xC000000010610440], [256], [5], [session allocation], [0], [0], [0x000000000]
. . . 
Wed Sep 2 12:51:20 2009
Errors in file /bdump/sid_j004_23047.trc:
ORA-00600: internal error code, arguments: [kope2upic2954], [104], [105], [0], [], [], [], []
. . . 
Wed Sep 2 14:03:44 2009
Errors in file /bdump/sid_j005_26049.trc:
ORA-07445: exception encountered: core dump [0000000000000000] [SIGSEGV] [Invalid permissions for mapped object] [0x000000000] [] []
ORA-00600: internal error code, arguments: [kope2upic2954], [209], [121], [68], [], [], [], []
ORA-00600: internal error code, arguments: [kope2upic2954], [104], [105], [0], [], [], [], []



Trace File
--------------


/bdump/sid_j004_21772.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u07/app/oracle/product/10.2.0.4
System name: HP-UX
. . .
*** ACTION NAME:(PROCESS_VOYAGE_SCH) 2009-07-23 07:58:00.745
*** MODULE NAME:(DBMS_SCHEDULER) 2009-07-23 07:58:00.745
*** SERVICE NAME:(SYS$USERS) 2009-07-23 07:58:00.745
*** CLIENT ID:() 2009-07-23 07:58:00.745
*** SESSION ID:(130.9495) 2009-07-23 07:58:00.745
*** 2009-07-23 07:58:00.745
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [kope2upic2954], [104], [105], [0], [], [], [], []
Current SQL statement for this session:
select tab.rowid, tab.msgid, tab.corrid, tab.priority, tab.delay, tab.expiration, tab.retry_count, tab.exception_qschema, tab.exception_queue, tab.chain_no, tab.local_order_no, tab.enq_time, tab.time_manager_info, tab.state, tab.enq_tid, tab.step_no, tab.sender_name, tab.sender_address, tab.sender_protocol, tab.dequeue_msgid, tab.user_prop, tab.user_data from "MARINE_AQ"."CTMS_VOY_I_QTAB" tab where msgid = :1
----- PL/SQL Call Stack -----
object line object
handle number name
c000000066c82580 335 package body SYS.DBMS_AQ
c000000066d9b2f0 119 package body MARINE_AQ.CTMS_QUEUE_INTF_PKG
c0000000660fbd78 1 anonymous block
. . . 
Function List 
ksedst <- ksedmp <- ksfdmp <- kgerinv <- kgesinv
<- kgesin <- $cold_kope2upic2 <- 14384 <- kodpunp <- kopu2upkl2
<- kope2uattr <- kope2ucoll <- kope2upic2 <- kolcp2upkl <- $cold_kopu2upkl2
<- 2624 <- kope2uattr <- kope2upic2 <- kodpunp <- kokoupkl
<- kpcocaup <- kprcdt <- kprccu <- opifcr <- qertbFetchByRowID
<- opifch2 <- kpoal8 <- opiodr <- kpoodrc <- rpiswu2
<- kpoodr <- upirtrc <- kpurcsc <- kpuexecv8 <- kpuexec
. . . 
----------------------------------------
SO: c00000006b75c2b8, type: 4, owner: c00000006b663228, flag: INIT/-/-/0x00
(session) sid: 130 trans: c00000006a2df778, creator: c00000006b663228, flag: (48110041) USR/- BSY/-/-/-/-/-
DID: 0001-0010-00003E54, short-term DID: 0000-0000-00000000
txn branch: 0000000000000000
oct: 3, prv: 0, sql: c0000000660b7e20, psql: c000000067fabad0, user: 0/SYS
service name: SYS$USERS
O/S info: user: oracle, term: UNKNOWN, ospid: 21772, machine: jaunxdb0
program: oracle@jaunxdb0 (J004)
application name: DBMS_SCHEDULER, hash value=2478762354
action name: PROCESS_VOYAGE_SCH, hash value=3181464456
last wait for 'jobq slave wait' blocking sess=0x0000000000000000 seq=1 wait_time=1591713 seconds since wait started=2
=0, =0, =0
Dumping Session Wait History
for 'jobq slave wait' count=1 wait_time=1591713
=0, =0, =0
temporary object counter: 0

Solution:



The errorstack matches that of bug 8421532. Bug 8421532 has been closed as a duplicate of Bug 8506458.


References:

Bug 7661852 - SELECT ON AQ OBJECTS FAILS WITH ORA-600 [KOPE2UPIC:NULL] 
Status: 11,Code Bug (Response/Resolution)

Bug 8421532 - ORA-600: INTERNAL ERROR CODE, ARGUMENTS: [KOPE2UPIC2954], [31], [32], [0], [] 
Closed as duplicate of
Bug 8506458 - ORA-600 [KOPE2UPIC2954] ENCOUNTERED BY PROPAGATION PROCESS

Bug 4331301: "ORA-600[KOPE2UPIC2954] AND ORA-4031 IN OBJECT TEST"

Bug 3784037: "ORA-600 INTERNAL ERROR CODE, ARGUMENTS [KOPE2UPIC2954], [277907], [557915"
Bug 7661852: 10.2.0.3 RDBMS 10.2.0.3 AQ PRODID-5 PORTID-197
Abstract: SELECT ON AQ OBJECTS FAILS WITH ORA-600 [KOPE2UPIC:NULL]


Article-ID: Note 846297.1
Circulation: MODERATED (EXTERNAL)
Folder: ST.Server.AdvancedQueueing
Topic: Known Problems - Propagation
Title: AQ Propagation Fails : ORA-00600[kope2upic2954] or ORA-00600[Kghsstream_copyn]

Hdr: 8506458 10.2.0.4 RDBMS 10.2.0.4 AQ PRODID-5 PORTID-212
Abstract: ORA-600 [KOPE2UPIC2954] ENCOUNTERED BY PROPAGATION PROCESS

Bug 8410497 - ORA-600 [KGHSSTREAM_COPYN]
Bug 8501556 - ORA-00600[KGHSSTREAM_COPYN] ENCOUNTERED BY PROPAGATION PROCESS

How to move Standby Database to a new server


Problem:    How to move Standby Database to a new server


1. Prepare the new host with a fresh RDBMS installation at the same operating system and database patch level.
Make sure you also handle any files outside ORACLE_HOME (e.g. /var/opt/oracle/oratab).

2. Make sure you create the necessary directory structure and the listener.ora and tnsnames.ora.
(listener.ora should have the new server name and IP address.)

3. Defer the redo transport from primary

alter system set log_archive_dest_state_n=defer scope=both sid='*';
(where n is the number of the archive destination pointing at the standby)

4. Shutdown the standby

alter database recover managed standby database cancel;
shutdown immediate

5. Copy the standby database files (datafiles, controlfiles, online logs, spfile) to the new host

6. Make sure we are able to connect to the new host using the TNS alias from the primary, after modifying the production tnsnames.ora to reflect the new configuration

7. Bring up the standby
startup mount
alter database recover managed standby database disconnect;



8. Resume redo transport from primary

alter system set log_archive_dest_state_n=enable scope=both sid='*';
(again, n is the archive destination number for the standby)
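
To confirm the move succeeded, the following checks can be run; a sketch:

-- on the standby: the managed recovery process should be applying redo
select process, status, sequence# from v$managed_standby;

-- on the primary: the standby destination should be VALID with no error
select dest_id, status, error from v$archive_dest_status where status <> 'INACTIVE';

-- on the standby: confirm logs are being applied
select max(sequence#) from v$archived_log where applied = 'YES';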





ASM diskgroup creation failed: ORA-15032, ORA-15063


Problem:   


ASM diskgroup creation failed with ORA-15032, ORA-15063

DBCA failed to mount the diskgroups on the remote node during the DBCA session:

ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP ALL MOUNT
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "LOG2"
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "LOG1"
ORA-15063: ASM discovered an insufficient number of disks for diskgroup
"IPDBFDATA"
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "FLASH"
ORA-15063: ASM discovered an insufficient number of disks for diskgroup
"CLONELOG2"
ORA-15063: ASM discovered an insufficient number of disks for diskgroup
"CLONELOG1"
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "ARCH" 


Solution:



The ASMLib layer was misconfigured on node2. As a result, the owner of the disks was not set to oracle:dbaadmin and the second ASM instance could not mount the ASM diskgroups.

Use /etc/init.d/oracleasm configure to set the correct disk owner/group and permissions. Once set, the ASM diskgroups/disks can be recognized on the remote node, as sketched below.
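
After reconfiguring ASMLib on node2, the disks can be verified at OS level and from the second ASM instance; a sketch:

# as root on node2
/etc/init.d/oracleasm configure    # set the correct owner/group (e.g. oracle:dbaadmin)
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks

-- from the second ASM instance
select path, header_status, mount_status from v$asm_disk;
alter diskgroup all mount;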

References:

http://www.oracle.com/technology/products/database/asm/index.html




Master Note for Streams Performance Recommendations


Master Note for Streams Performance Recommendations [ID 335516.1]









Applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.2 to 11.2.0.2 - Release: 9.2 to 11.2
Information in this document applies to any platform.
Purpose
Oracle Streams enables the sharing of data and events in a data stream either within a database or from one database to another. This Note discusses recommendations to improve the performance of Streams.
Scope and Application
The information contained in this note targets Oracle Support analysts and replication administrators who want to improve the performance of Streams replication in Oracle 9.2 and higher. This note contains key recommendations for successful implementation of Streams in Oracle database release 9.2 and above.

Master Note for Streams Performance Recommendations
Customers should be on the latest patchset, and it is recommended to apply relevant patches as detailed in:
  • Document 437838.1 Streams Specific Patches. The patches detailed there address most of the commonly encountered performance and stability issues connected with Streams.
A complete list of 10.2 recommendations is detailed in:
  • Document 418755.1 10gR2 Streams Recommended Configuration (it is anticipated that various related notes will be merged at some point).
If there are performance issues, the related notes provide more detail on gathering relevant information and isolating the cause.

General Operational Considerations
The following main areas of operation can be affected by the configuration details which follow.
Streams uses buffered queues with the aim that all LCRs are processed in memory.

Spill of information from memory to disk can affect performance and can occur in 3 functional areas; these spill activities are termed  :
  • Logminer spill (source capture)
  • Queue spill (capture and apply)
  • Apply Spill (apply).
All Oracle versions have Queue spill. Queue spill is typically associated with LCRs not being processed in a timely fashion from the buffered queue; rather than allowing old LCRs to remain in memory for an extended period of time, they are spilled out to disk, to the table aq$_<queue_table_name>_p.
Queue spill may also be associated with memory (Streams pool) space: if there is not enough memory to accommodate LCRs, they are again spilled to disk. Spill of this nature may be associated with a variety of causes, so it is worth considering the parameter settings below.
Oracle versions >= 10gR2 additionally introduce Apply spill. Apply spill is primarily connected with the processing of large transactions, although it also takes account of long-running transactions. It is worth noting that Apply spill can operate on queue spill.
The Capture side leaves the management of these two types of transaction to the Apply side. These types of transaction are written to a partitioned table at the Apply side database; the partitioned table is far easier to manipulate than the queue spill table. The apply spill threshold can be configured.
Logminer spill writes cached logminer information out to logminer spill tables. It may be necessary to reconfigure the amount of space for cached logminer information, although the default allocation is sufficient for most applications. Where there is Queue spill, there may be a need to periodically deallocate unused space or shrink the space usage of the related aq$_<queue_table_name>_p table.
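
A quick way to check whether spill is actually occurring (a sketch; the views shown exist from 10gR2 onwards):

-- queue spill: messages spilled from the buffered queue to disk
select queue_name, num_msgs, spill_msgs from gv$buffered_queues;

-- apply spill: large or long-running transactions spilled by the apply process
select apply_name, xidusn, xidslt, xidsqn, message_count from dba_apply_spill_txn;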
Streams (Capture) has to checkpoint as it goes along in order to ensure that it can restart from a point in the past without requiring too many redo logs to be reprocessed. Care should be taken to understand when checkpoints will take place, as well as the length of time checkpoint information is retained. Checkpoint information is stored in a single unpartitioned table, so consideration should be given to the related parameters below, especially where there are multiple Capture processes. Checkpoint frequency and purge interval can be configured. It is inefficient to checkpoint too frequently; similarly, it is inadvisable to hold on to checkpoint information too long. Both can affect performance.
The Capture process in 10gR1 and above uses an inbuilt flow control mechanism. Flow control aims to ensure that the source of LCRs (the Capture) does not flood the buffered queue of the Apply process before the Apply process has had a chance to apply the LCRs. In certain cases it may be relevant to amend the flow control related parameters.

Implications of Tables with no Primary Key Column(s)
This is connected with Apply processing. Streams is suited to environments where all tables have primary keys. The Apply process should operate as quickly as possible when performing update and delete operations, and a primary key allows the row concerned to be directly referenced for these operation types. It is worth checking the section 'Substitute Key Columns', which outlines what is required where a table does not have a primary key; more details are found in the Oracle Streams Replication Administrator's Guide.
Whilst the documentation mentions that it is possible to have nulls in the substitute key, in practice allowing a null in the indexed column permits multiple null values, and consequently direct access to key data may not be possible.
Remember this point, as the aim is to have the substitute key columns effectively behave like a primary key (unique and not null data values); a sketch follows.
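
A minimal sketch of declaring substitute key columns at the apply site (schema, table and column names are hypothetical):

begin
  dbms_apply_adm.set_key_columns(
    object_name => 'hr.orders_no_pk',     -- hypothetical table with no primary key
    column_list => 'order_ref,line_no');  -- columns that uniquely identify a row
end;
/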

General Recommendations
  • Streams Pool Allocation
    • 11g: set STREAMS_POOL_SIZE to be greater than or equal to 200 MB;
    • 10gR2: set SGA_TARGET > 0 to enable autotuning of the Streams pool and set STREAMS_POOL_SIZE to be greater than or equal to 200 MB; this sets the lower bound.
    • 10gR1: use the STREAMS_POOL_SIZE init.ora parameter to configure the Streams memory allocation. Set STREAMS_POOL_SIZE to be greater than or equal to 200 MB.
    • 9.2: increase the SHARED_POOL size to minimize spill-over from the buffered queue to disk.
As well as addressing how memory is to be allocated for Streams, in version 9.2 the Streams/logminer dictionary tables should be located outside the SYSTEM tablespace, as follows:
create tablespace &tspace_name
datafile '&db_file_directory/&db_file_name' size 25 m
reuse autoextend on maxsize unlimited;
execute dbms_logmnr_d.set_tablespace('&tspace_name');
Note: in 10gR1 and above, the Streams data dictionary tables are created in the SYSAUX tablespace by default. There is no need to move them.
STREAMS_POOL_SIZE should still be set even when using ASMM (for instance, 200 MB to start with); a sketch follows.
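
A sketch of the 10gR2/11g settings described above, with a query to check the current allocation:

alter system set streams_pool_size = 200M scope=both;

-- with SGA_TARGET > 0 this acts as a lower bound; check what is currently allocated
select component, current_size/1024/1024 as size_mb
from v$sga_dynamic_components
where component = 'streams pool';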
  • Initialization file hidden parameters
    10gR2
    _job_queue_interval=1
    _spin_count=5000
    10gR1
    _job_queue_interval=1
    _spin_count=5000
    9.2
    _first_spare_parameter=50
    _kghdsidx_count=1
    _job_queue_interval=1
    _spin_count=5000

Note: _first_spare_parameter in 9.2.0.8 will configure a % of the shared pool to be used for the Streams pool. A _kghdsidx_count setting of 1 avoids the use of sub pools within the shared_pool; in 9.2 multiple sub pools have been associated with performance issues.

  • Queue Monitor / AQ_TM_PROCESSES

    In 10gR1 onwards, remove the AQ_TM_PROCESSES parameter from your database initialization parameter file. This allows the queue management background activity to be managed automatically; a sketch follows.
Note: you should not set the AQ_TM_PROCESSES parameter explicitly to 0. Doing so effectively breaks or disables background queue management activity and cleanout of processed messages, and breaks AQ activity relating to message delay, expiration and retention.
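
If the parameter is currently set explicitly in the spfile, remove it rather than setting it to 0; a sketch:

-- the reset takes effect at the next instance restart
alter system reset aq_tm_processes scope=spfile sid='*';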
  • Queue to Queue Propagation

    When upgrading to Oracle Database Release 10.2 and above from releases 9.2 or 10.1, drop any propagations and recreate them specifying the queue_to_queue parameter as TRUE. This is especially relevant where a RAC destination is in place, as it allows propagation to fail over to an existing instance which now owns the Streams buffered queue.
  • Multiple Separate Queues for Capture and Apply

    For all Streams versions up to and including 11g, where bi-directional replication is in place, configure separate Capture and Apply queues to minimize the effect of
    Queue spill to disk. Do not have both these processes sharing the same queue unless Capture and Apply reside in the same database and there is no propagation taking place. Both processes will see LCRs for both activities; if the Capture queue has spilled (Queue spill), this can affect Apply operation and slow it down.
  • Avoid Complex Rules

    Make sure that rules are kept simple, as this allows fast rule evaluation. Fast rule evaluation can be used on simple rules with object_owner, object_name and source_database_name in simple equality statements.

    In 9.2, it was necessary to avoid the 'LIKE', 'NOT' and != operators as much as possible, as these operators disallowed rule evaluation optimizations. It is better (i.e., faster to evaluate) to have multiple simple rules than one complex rule.

    In 10g and above, the same can be achieved by using a negative rule set to eliminate tables or objects from Streams processing.

    Examples of simple rules:
(((:dml.get_object_owner() = 'FSADBO1' and
:dml.get_object_name() = 'STREAMS_TEST')) and
:dml.is_null_tag() = 'Y' and
:dml.get_source_database_name() = 'FSABCD.WORLD') 
(:dml.get_object_owner() = 'POSIT' and :dml.is_null_tag() = 'Y'
and :dml.get_object_name() IN
('TB_M21_1','TB_M21_2','TB_M21_3','TB_M21_40','TB_M21_10','PN_M211B010'))

Example of complex rules: 
(((:dml.get_object_owner() = 'FSADBO1' and
:dml.get_object_name() != 'STREAMS_TEST')) and
:dml.is_null_tag() = 'Y' and
:dml.get_source_database_name() = 'FSABCD.WORLD')
(:dml.get_object_owner() = 'POSIT' and :dml.is_null_tag() = 'Y'
and :dml.get_object_name() NOT IN
('TB_M21_1','TB_M21_2','TB_M21_3','TB_M21_40','TB_M21_10','PN_M211B010'))

Note: complex rules result in a SQL statement being executed with each evaluation of the rule.

Inspect the Streams Healthcheck (Document 273674.1) or use the following queries to understand whether complex rules are involved.
-- Capture
select capture_name, owner, name from gv$rule_set r, dba_capture c
where c.rule_set_owner = r.owner and c.rule_set_name = r.name
and r.sql_executions > 0;
-- Propagation
select propagation_name, owner, name from gv$rule_set r, dba_propagation p where p.rule_set_owner = r.owner and p.rule_set_name = r.name
and r.sql_executions > 0;
-- Apply
select apply_name, owner, name from gv$rule_set r, dba_apply a
where a.rule_set_owner = r.owner and a.rule_set_name = r.name
and r.sql_executions > 0;

  •  Implement a "heart beat" table

    It is useful to do this for the following reasons (see the sketch after this list):
    1. To ensure that Capture checkpointing is attempted regularly and the DBA_CAPTURE view is maintained, i.e. on systems with a lot of redo activity but little Capture activity, this ensures that a capture process will not have to restart from an old logfile, with the undesirable consequence of reprocessing significant redo;
    2. To provide a simple means of understanding how up to date the changes are on your Apply side database. The heartbeat table should reflect the date and time of the update from the source, and therefore reflects the latency or currency of changes applied.
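
A minimal heartbeat sketch (all names are hypothetical; the table must be included in the replicated set so that its updates are captured, propagated and applied):

-- source database: a single-row table updated once a minute by a job
create table strmadmin.heartbeat (id number primary key, src_time date);
insert into strmadmin.heartbeat values (1, sysdate);
commit;

begin
  dbms_scheduler.create_job(
    job_name        => 'streams_heartbeat_job',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'update strmadmin.heartbeat set src_time = sysdate where id = 1; commit;',
    repeat_interval => 'FREQ=MINUTELY',
    enabled         => TRUE);
end;
/

-- apply side database: latency in seconds is roughly
select round((sysdate - src_time) * 86400) as latency_seconds
from strmadmin.heartbeat;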
  •  9.2: Implement Flow Control.

    These manual flow control scripts should be used on Oracle 9.2 Streams source databases (i.e., databases running 9.2 Streams capture processes). Refer to Document 259609.1.
Note: from 10g onwards, flow control is automatically enabled.

  • Supplemental Logging

    Database-wide supplemental logging imposes a significant overhead and may affect performance. It should therefore be avoided.

Logminer Related Configuration
Generally, the capture parameter _SGA_SIZE should not be modified. The only occasions where it is valid to change the value of _SGA_SIZE for the Capture/logminer session are the following circumstances:
  • ORA-1341 is observed; or
  • There is logminer spill. Logminer spill is output as an advisory warning in the Streams healthcheck report under the section 'performance checks'.
exec dbms_capture_adm.set_parameter('','_SGA_SIZE','20');

Note: the default of 10 MB is typically sufficient. If not, double the parameter setting until the observed issue is removed.

The majority of Capture/logminer memory issues (ORA-01280) can be resolved by increasing _SGA_SIZE to 80 or even 150 MB, which could be set as follows:
exec dbms_capture_adm.set_parameter('<capture_name>','_SGA_SIZE','80');
 
Capture Related Recommendations
  • Capture Parallelism
Parallelism can be set on Capture; however, it is recommended that Capture is not parallelised. There is little or no benefit to be gained, so parallelism should be configured as follows:
    • 11g: parallelism=1 is the recommended setting and is the default ;
    • 10g: parallelism=1 is the recommended setting ;
    • 9.2: parallelism=3 is the recommended setting
Example:
exec dbms_capture_adm.set_parameter('<capture_name>','parallelism','1');

Note: ensure that the PROCESSES initialization parameter is set appropriately when you set the parallelism capture process parameter.

  • Streams Checkpoints  / _CHECKPOINT_FREQUENCY
There are considerable implications surrounding checkpointing. The significant things to consider in relation to checkpointing are:
1. The frequency of checkpointing
2. The period for which Streams should retain checkpoint information
Checkpointing is performed by Capture (Builder process) and is connected with the number of Megabytes of redo mined before a logminer checkpoint will be taken. 
Notes:
  • Checkpoints may be requested but may not complete, for a number of valid reasons.
  • Check dba_capture.required_checkpoint_scn to ensure that checkpoints are occurring and, as a consequence, that Capture does not have to restart from an old redo log.
  • Checkpoints can accumulate a large amount of space in system.logmnr_restart_ckpt$. Also consider the number of sessions active on the database as well as the number of Capture processes, since both have a direct effect on the number of rows, and therefore the size, of this table.
As the Oracle versions have developed over time, the rules relating to the validity of a checkpoint have been relaxed and, as a result , checkpoints need not be generated as frequently as in earlier versions since in later versions they are more likely to complete.
The frequency with which a checkpoint is requested can be adjusted using : 
 exec dbms_capture_adm.set_parameter ('<capture_name>', '_checkpoint_frequency','1000');

The above will adjust Streams Checkpoints to occur after every 1000MB of redo.  Checkpoints record metadata and as a consequence generate redo; a higher than expected amount of redo generated could be a consequence of the checkpointing occurring too frequently.  Similarly too few checkpoints will require that logs have to be retained on disk for longer than expected since dba_capture.required_checkpoint_scn does not move forward as readily as it should.

The recommended and default settings for _CHECKPOINT_FREQUENCY are as follows:

Version    Recommended setting    Default setting    Modify
>=10gR2    1000                   1000               n/a
<=10gR1    100                    10                 Yes

Note: this value should be changed from the default setting for database versions < 10gR2.

In older versions it may have been necessary to force a checkpoint to occur at a periodic interval as follows :
execute dbms_capture_adm.set_parameter('<capture_name>', '_CHECKPOINT_FORCE', 'Y');

  • Purging Streams Checkpoints
Eliminate unnecessary Streams/Logminer checkpointing metadata.
  o 10.1: Alter the first_scn periodically (weekly or daily) to remove unneeded metadata for Streams Capture;
  o 10.2: Alter the capture parameter CHECKPOINT_RETENTION_TIME from the default retention of 60 days to a realistic value for your database.
Note:  The default retention period is typically too long a period to retain checkpoint information.

A typical setting might be to retain 7 days worth of checkpoint metadata :
exec dbms_capture_adm.alter_capture(capture_name=>'your_capture', checkpoint_retention_time=> 7);
  o 11g: Set the parameter CHECKPOINT_RETENTION_TIME to an appropriate value for your environment. Again, 7 days would appear to be a reasonable period of time.

Propagation  Recommendations
  • Propagation Job Interval

    To reduce the latency between jobs, set the hidden parameter _job_queue_interval = 1. This should be done as an init.ora parameter or an spfile parameter. Because it is a hidden parameter, the database must be restarted for the value to take effect. The default for _job_queue_interval is 5 seconds.
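A minimal sketch, assuming an spfile is in use (hidden parameters must be double-quoted, and the restart is required for the change to take effect):

-- Reduce the job queue scan interval from the default 5 seconds to 1 second.
alter system set "_job_queue_interval" = 1 scope=spfile;
-- Restart the database so the hidden parameter takes effect.
shutdown immediate
startup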
  • Remove unnecessary Propagation Rules

    If you are configuring a propagation that takes ALL changes from the source queue to the destination queue (i.e., no selectivity requirements), you can remove the rule set from the propagation definition. This eliminates the need to do ANY rule evaluation and results in higher propagation throughput.
  • Propagation Latency

    Propagation latency is the maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued. The default propagation latency value is 60.  Reduce the latency of the propagation schedule to 1 by issuing the following :
exec dbms_aqadm.alter_propagation_schedule('<queue_name>','<destination>',latency=>1);

  •  Queue to Queue Propagation or Queue-to-Database link
Propagations configured prior to Oracle Database 10g Release 2 are queue-to-dblink propagations. Also, any propagation that includes a queue in a database prior to Oracle Database 10g Release 2 must be a queue-to-dblink propagation. When queue-to-dblink propagations are used, propagation will not succeed if the database link no longer connects to the owning instance of the destination queue.

When upgrading to Oracle Database Release 10.2 and above from releases 9.2 or 10.1, drop any propagations and recreate them, specifying the queue_to_queue parameter as TRUE.

In 11g use queue-to-queue propagations whenever possible. A queue-to-queue propagation always has its own exclusive propagation job to propagate messages from the source queue to the destination queue. Because each propagation job has its own propagation schedule, the propagation schedule of each queue-to-queue propagation can be managed separately.
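An illustrative sketch of creating a queue-to-queue propagation (the propagation, queue, and database link names are hypothetical):

begin
  dbms_propagation_adm.create_propagation(
    propagation_name   => 'strm_prop',            -- hypothetical propagation name
    source_queue       => 'strmadmin.src_queue',  -- hypothetical source queue
    destination_queue  => 'strmadmin.dst_queue',  -- hypothetical destination queue
    destination_dblink => 'dest_db',              -- hypothetical database link
    queue_to_queue     => true);                  -- request queue-to-queue propagation
end;
/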
  •  TCP related Parameter Settings

    Increase the SDU in a wide area network for better network performance. In addition, the SEND_BUF_SIZE and RECV_BUF_SIZE parameters in the listener.ora and tnsnames.ora files can increase the performance of propagation on your system. These parameters increase the size of the buffer used to send or receive the propagated messages. They should only be increased after careful analysis of their overall impact on system performance. For more details, refer to Document 780733.1.
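A minimal tnsnames.ora sketch (the alias, host, and buffer sizes are hypothetical; matching values are also needed on the listener side, and the buffer sizes should be derived from the network's bandwidth-delay product as per Document 780733.1):

DEST_DB =
  (DESCRIPTION =
    (SDU = 32767)             # larger session data unit for WAN transfers
    (SEND_BUF_SIZE = 262144)  # socket send buffer size in bytes
    (RECV_BUF_SIZE = 262144)  # socket receive buffer size in bytes
    (ADDRESS = (PROTOCOL = TCP)(HOST = desthost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = destdb.example.com))
  )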
  • Queue Spill related Space Management

    Versions >= 10gR2.

    To improve the performance of streams after significant Queue spill activity, perform the following:
alter table aq$_<queue_table_name>_p enable row movement;
alter table aq$_<queue_table_name>_p shrink space;

For more explanation of the aq$_<queue_table_name>_p tables, which store the messages that spill from memory, see Document 242664.1.
These commands can be issued against queue tables, spilled tables, and IOTs in versions 10gR2 and above.

Versions <= 10gR1

To improve the performance of streams after significant Queue spill activity, perform the following:
alter table aq$_<queue_table_name>_p deallocate unused;

    The above can be performed while Streams is running, as long as there is no active spilling or dequeuing of messages from the spillover table. It is highly recommended that this activity be performed in a maintenance window (i.e., no Streams enqueue/dequeue activity). Do not use the above statement on the Index Organized Tables (IOTs) for the queue table.

To reclaim space from IOTs, do the following in a maintenance window (i.e. when streams not actively running):
- export the queue table;
- drop the queue table;
- import the queue table

 
Apply Recommendations
It is worth mentioning Apply process configuration in relation to parallelism, since parallelism can improve Apply performance. If parallelism is specified, this will have an effect on the derived parameters _TXN_BUFFER_SIZE and _HASH_TABLE_SIZE.

The Apply coordinator fetches transactions from the Apply reader and hands these off to the Apply slaves. The coordinator is able to prefetch transactions from the reader before they are allocated to slaves, and the number of transactions which can be prefetched corresponds to the _TXN_BUFFER_SIZE parameter value (i.e. this effectively specifies a number of transactions, not a size). If transactions are very large, prefetching them from the reader process can put pressure on memory usage and therefore impact performance. As outlined below, this is something to consider and profile if large transactions are likely in the environment. Increasing Apply parallelism increases the number of transactions which can be prefetched.

The _HASH_TABLE_SIZE is used for dependency tracking. Again, as parallelism is increased, this value is correspondingly increased.

Derived values for the above are as follows:

Parameter          Version                                        Value
_TXN_BUFFER_SIZE   10.1, 10.2                                     80 * parallelism; minimum value: 80
_TXN_BUFFER_SIZE   11.1.0.6                                       80 * parallelism; minimum value: 320
_TXN_BUFFER_SIZE   > 11.1.0.6                                     Auto tuned
_HASH_TABLE_SIZE   10.1, 10.2                                     8000 * parallelism
_HASH_TABLE_SIZE   10.2.0.4 (unpublished bug 5720734), 11.1.0.6   1000000
_HASH_TABLE_SIZE   > 11.1.0.6                                     1000000

Note:  In 11.1.0.7 specifically, the auto-tuned value for _TXN_BUFFER_SIZE may result in reduced Apply throughput; in these cases, set this parameter manually, starting with a minimum value of 320. The issue is fixed in 11.2.


Further useful pointers follow :
 
  • Controlling Apply Spill
    The number of LCRs in a transaction which triggers Apply spill can be configured. The default is 100000. This can only be amended in 10gR2 and above. Refer to Document 365648.1 for more details.
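Assuming the threshold is the apply parameter TXN_LCR_SPILL_THRESHOLD covered in that document, a sketch of changing it might look as follows (the value is illustrative):

-- Spill a transaction's LCRs to disk once it exceeds 150000 LCRs.
exec dbms_apply_adm.set_parameter('<apply_name>', 'txn_lcr_spill_threshold', '150000');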

  • Handling Very Large Transactions

    Set the hidden apply parameter _TXN_BUFFER_SIZE ONLY for large transactions.

    For 10g and where parallelism > 1, to prevent spillover from occurring when using parallelism, reduce the hidden apply parameter _TXN_BUFFER_SIZE to 10. If the transactions are very large (i.e., have many row changes within a single transaction [100000's]) and parallelism > 1, consider reducing the transaction buffer size even lower (for example, _TXN_BUFFER_SIZE=2).
Note:  If your environment supports only small transactions, then ignore this parameter.


In 11g, the parameter _TXN_BUFFER_SIZE is auto-tuned, so it should not be set. If upgrading from 10g to 11g, _TXN_BUFFER_SIZE should be unset:
exec dbms_apply_adm.set_parameter('<apply_name>', '_TXN_BUFFER_SIZE',null);

  • Recommended Parameters

    For 10.2 and above set the following apply parameters:
exec dbms_apply_adm.set_parameter('<apply name>','parallelism','4');
exec dbms_apply_adm.set_parameter('<apply name>','_dynamic_stmts','Y');
exec dbms_apply_adm.set_parameter('<apply name>','_hash_table_size','1000000');
exec dbms_apply_adm.set_parameter('<apply name>','disable_on_error','N');

Note:  The default value of bufsize is as follows: bufsize = min(320, 80*parallelism).

If the size of each transaction is typically > 100000 LCRs, i.e. there are over 100,000 row changes per transaction for most transactions, then set bufsize = 10 + parallelism:

exec dbms_apply_adm.set_parameter('<apply name>','_txn_buffer_size',bufsize);

    • Apply Parallelism

      Increase the parallelism of the apply process at the destination to match the concurrency of the source database activity : 
      • If you set parallelism > 1 on the apply process, be sure to configure the appropriate constraints at the destination site (as well as supplemental logging at the source site) to do dependency computations.
 
      • For releases lower than 10.1.0.5, be sure to specify UNCONDITIONAL supplemental logging at the source database for ANY indexed columns at the destination database that participate in Streams.
 
      • Test with parallelism values based on available CPU. Typical settings are 4, 8, 16, or 24; 16 and 24 are normally used for high transaction loading on high-end machines. Values between 1 and 4 are typically used on low-end machines. Mid-range systems typically configure 4 or 8 apply servers.

Note:  Modify the INITRANS and PCTFREE values of the SYS.STREAMS$_APPLY_PROGRESS table. The INITRANS value should be >= the apply process parallelism. Set PCTFREE to at least 10:
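A minimal sketch, assuming an apply parallelism of 16 (note that a changed INITRANS applies only to newly formatted blocks unless the table is rebuilt):

-- INITRANS >= apply parallelism; PCTFREE at least 10.
alter table sys.streams$_apply_progress initrans 16 pctfree 10;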
    • _DYNAMIC_STMTS:

      For 10gR2 and above with a workload consisting of many UPDATE transactions that modify less than half of the columns for any given row, consider using the hidden apply parameter _DYNAMIC_STMTS. This parameter setting may reduce redo generation as well as improve apply performance. To set this parameter, issue : 
exec dbms_apply_adm.set_parameter ('<apply name>','_dynamic_stmts','y');


If more than half of the columns are updated in an LCR, this parameter is ignored for that particular update LCR.
    • _HASH_TABLE_SIZE:

      The following is applicable to all releases. In a mixed (insert/update/delete) or heavy update workload that results in a large number of dependency waits (WAIT_DEPs) on replicated tables with many constraints, consider increasing the size of the dependency hash table with the hidden parameter _HASH_TABLE_SIZE. Set the value of this parameter to a large number, for example, 1 million (1000000) or 10 million (10000000), i.e.:
exec dbms_apply_adm.set_parameter('<apply name>','_hash_table_size','10000000');

Additional Apply Performance Tips 
  • Are All Changes to be Applied

    If you are configuring an apply process to take ALL changes that are put into the queue (i.e., no selectivity or subsetting requirements), you can remove the rule set from the apply definition. This eliminates the necessity to do ANY rule evaluations and results in higher apply throughput.
  • Constraints + Supplemental Logging

    As noted above, if you set parallelism > 1 on the apply process, be sure to configure the appropriate constraints at the destination site (as well as supplemental logging at the source site) so that dependency computations can be performed.

    If adding additional columns at the target database, specify DEFAULT values for those additional columns, if possible, so that a DML_HANDLER is not required for INSERTs into the table. For example, if the table at the target site includes an additional column LAST_UPDATE (ie, the LAST_UPDATE column does not exist at the source database -only at the target database) which is the date/timestamp of the most recent update, then specify a DEFAULT value of SYSDATE for the LAST_UPDATE column. This will eliminate the need to execute a dml_handler for any insert into the table. Of course, UPDATEs will require that the dml_handler be invoked.
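A short sketch of the DEFAULT-value approach (the table and column names are hypothetical):

-- Target-only audit column; the DEFAULT removes the need for a DML handler on INSERT.
alter table orders add (last_update date default sysdate);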

    For complete details refer to the Oracle® Streams Replication Administrator’s Guide.
  • Conflict Detection:

    If the target database is used in a read-only mode (i.e., no DMLs are performed on the replicated tables), consider disabling conflict detection for non-key columns on replicated tables with the DBMS_APPLY_ADM.COMPARE_OLD_VALUES procedure.

    For heavy DELETE workloads, use the DBMS_STREAMS_ADM.DELETE_COLUMN procedure for the APPLY table rule to remove the non-key columns from the DELETE LCRs.
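A minimal sketch of both procedures (the table, column, and rule names are hypothetical):

begin
  -- Read-only target: stop comparing old values for non-key columns on UPDATE.
  dbms_apply_adm.compare_old_values(
    object_name => 'scott.emp',           -- hypothetical replicated table
    column_list => 'ename,sal,comm',      -- non-key columns
    operation   => 'UPDATE',
    compare     => false);
  -- Heavy-DELETE workload: remove a non-key column from the LCRs.
  dbms_streams_adm.delete_column(
    rule_name   => 'strmadmin.emp_rule',  -- hypothetical apply table rule
    table_name  => 'scott.emp',
    column_name => 'comm',
    value_type  => '*');
end;
/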
  • Apply Parallelism and INITRANS

    To minimize ITL contention issues, modify the INITRANS value for each replicated table and index so that INITRANS is equal to the apply parallelism value or higher. 

Note:  INITRANS is not connected with ASSM (Automatic Segment-Space Management).  Ensure that INITRANS >= PARALLELISM of the apply process for all objects that have LCRs applied to them in a Streams environment.

alter table <table name> initrans 16;
alter index <index name> rebuild initrans 16;

Note:  INITRANS for LOB indexes is always 2. See Bug 8762509 "INITRANS FOR A LOB INDEX IS IGNORED". The INITRANS clause for LOB indexes has been deprecated; if you specify it, Oracle ignores it. Oracle automatically generates an index for each LOB column and names and manages the LOB indexes internally.

Example:
CREATE TABLE test (a blob) initrans 4
LOB (a) STORE AS ( INDEX my_lob_index (initrans 4) );

select ini_trans from user_indexes where index_name = 'MY_LOB_INDEX';

INI_TRANS
----------
2

For replicated objects in manually managed tablespaces, PCTFREE should be at least 10.
alter table <table name> pctfree 20;
alter index <index name> rebuild pctfree 20;

  • Streams and Large Transactions
If possible, decrease transaction sizes to fewer than 1000 LCRs. Large or long-running transactions will affect Streams and may result in Queue spill or Apply spill. As outlined above, most of the areas which can cause issues relate to large and long-running transactions. Queue spill is more onerous than Apply spill.
  • Foreign Key Constraints
Foreign key constraints impact Apply performance. If possible, remove FK constraints and any unnecessary indexes.
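Where dropping the constraints is not feasible, one alternative worth evaluating is disabling them for the duration of heavy Apply activity; a sketch (the table and constraint names are hypothetical):

-- Disable the FK during heavy Apply activity, re-enable and validate afterwards.
alter table child_tab disable constraint child_tab_fk;
alter table child_tab enable validate constraint child_tab_fk;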
  • Partition Tables
Consider partitioning the primary key index for the replicated tables to improve Apply processing, e.g.:
create table t1 (
my_id integer,
integer_0001 integer,
integer_0002 integer,
integer_0003 integer,
integer_0004 integer,
integer_0005 integer,
varchar_0001 varchar(4000),
varchar_0002 varchar(4000),
varchar_0003 varchar(4000),
varchar_0004 varchar(4000),
varchar_0005 varchar(4000),
CONSTRAINT t1_my_pk PRIMARY KEY (my_id)
using index (create index t1_my_pk on t1(my_id) local))
PARTITION BY HASH (my_id) PARTITIONS 5 ;

9.2, 10.1, 10.2: For workloads that are INSERT ONLY with no dependencies

Where there are no referential integrity constraints or other logical dependencies, configure the Apply process as follows (see the sketch after this list):
·  _HASH_TABLE_SIZE: In an insert-only workload, eliminate the dependency calculation between transactions by setting the apply hidden parameter _HASH_TABLE_SIZE to 0;
·  COMMIT_SERIALIZATION: In an insert-only workload, turn commit serialization off by setting COMMIT_SERIALIZATION to NONE;
·  PARALLELISM: Increase the parallelism apply parameter to allow more apply servers;
·  To minimize ITL pressure issues, be sure to modify the INITRANS value for each replicated table so that INITRANS is equal to the apply parallelism value or higher. PCTFREE for replicated tables should be at least 10 (ALTER TABLE ... INITRANS 16 PCTFREE 10);
·  Consider partitioning the primary key index for the replicated tables to improve Apply processing.
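A minimal sketch of the corresponding parameter settings (the apply name is a placeholder; the parallelism value is illustrative):

-- Insert-only workload: no dependency tracking, no commit ordering, more servers.
exec dbms_apply_adm.set_parameter('<apply_name>','_hash_table_size','0');
exec dbms_apply_adm.set_parameter('<apply_name>','commit_serialization','NONE');
exec dbms_apply_adm.set_parameter('<apply_name>','parallelism','8');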

References
NOTE:238455.1 - Streams DML Types Supported and Supported Datatypes
NOTE:259609.1 - Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk
NOTE:273674.1 - Streams Configuration Report and Health Check Script
NOTE:290605.1 - Oracle Streams STRMMON Monitoring Utility
NOTE:298877.1 - 10gR1 Streams Recommended Configuration
NOTE:313748.1 - Using Automatic Statistics Collection In A Streams Environment
NOTE:377152.1 - Best Practices for automatic statistics collection on Oracle 10g
NOTE:780733.1 - Streams Propagation Tuning with Network Parameters
http://www.oracle.com/technology/products/dataint/pdf/twp_streams_performance_11gr1.pdf
http://www.oracle.com/technology/deploy/availability/pdf/maa_wp_10gr2_streams_performance.pdf
Streams Concepts and Administration Guide
Streams Replication Administrator's Guide

