Sunday, September 8, 2013

Building a 5-node Oracle EBS using RAC and shared appl top, part 7.

Forming the RAC database


What have we got so far?
  • Grid infrastructure
  • RAC database software
  • 1+3 node Oracle EBS system
It's time to copy the single-node EBS database data into the new RAC database.
I will use note 823587.1 for transferring the data from the single-node EBS database into the new RAC one.
The main steps will be:

  1. Prepare the RAC database software
  2. Prepare the source database for the conversion
  3. Convert database into RAC
  4. Post database conversion steps
    1. turn on autoconfig
    2. server parameter changes
    3. create new EBS specific cluster listeners
  5. Configure application tiers for using RAC database service
  6. Post configuration steps
This post covers the steps up to the application tier configuration; all remaining steps will be in the next one.



Prepare the RAC database software

Stop the whole system: all three application tiers and the database too.
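
For reference, a minimal sketch of the stop sequence, assuming the standard EBS R12 control scripts and this system's context names (adjust the listener name and context directories to your setup). On each application tier as applmgr:
$ su - applmgr
# . bin/PROD.env
# $INST_TOP/admin/scripts/adstpall.sh apps/<apps password>

Then on db01 as oracle:
$ su - oracle
# . bin/PROD.env
# cd $ORACLE_HOME/appsutil/scripts/PROD_db01
# ./addlnctl.sh stop PROD
# ./addbctl.sh stop immediate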

Copy back the original cluster oraInst.loc on the db01 server. Check that this copied-back oraInst.loc points to the original oraInventory location.
It should look like this:
# less /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

I will begin the work at point 3.3 of the note. All other steps were already done previously.

Configure standard listener

The listeners mentioned in points 3.2.1.2 and 3.3.2 were already created and configured during the previous RAC install. Now I have to reconfigure them temporarily to the new port pool (in the current situation I use port pool 20). Login as grid and run these commands:
$ su - grid
# . oraenv (+ASM1)
# cd $ORACLE_HOME/bin
# ./srvctl modify listener -l LISTENER -p 1551

Stop and start the listener after the modification.
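
For example, as the grid user:
$ su - grid
# . oraenv (+ASM1)
# srvctl stop listener -l LISTENER
# srvctl start listener -l LISTENER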

I will not use the SCAN listener. During our latest installation we had many problems with the SCAN listener when configuring the application tier for RAC. Good old working VIP :)

Shared storage

Check again that the RAC database can reach the DATA_DG and ARCH_DG ASM diskgroups on both cluster nodes. If not, rconfig will fail!
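
A quick way to check this, as the grid user on each node (use +ASM2 in oraenv on the second node):
$ su - grid
# . oraenv (+ASM1)
# asmcmd lsdg
Both the DATA_DG and ARCH_DG diskgroups should be listed in MOUNTED state.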

Create rconfig xml file

Now I am ready to create the required rconfig xml file. I will use the patch.env file that I created during the RAC installation under the oracle user's home. I also create a working directory: /home/oracle/1_work.

Now login into db01 as oracle and take these steps:
$ su - oracle
# . /home/oracle/patch.env

Copy ConvertToRAC_AdminManaged.xml from rconfig's sampleXMLs directory into the working directory. I have copied it into the /home/oracle/1_work directory.

Change the parameters in the xml file to the new values, as in this example:
<n:Convert verify="ONLY">
<n:SourceDBHome>/u04/apps/PROD/db/tech_st/11.2.0.3</n:SourceDBHome>
<n:TargetDBHome>/u01/apps/PROD/db/tech_st/rac/11.2.0.3</n:TargetDBHome>
<n:SourceDBInfo SID="PROD">
<n:Credentials>
<n:User>sys</n:User>
<n:Password>put sys password here</n:Password>
<n:Node name="db01"/>
<n:Node name="db02"/>
<n:InstancePrefix>PROD</n:InstancePrefix>
<n:SharedStorage type="ASM">
<n:TargetDatabaseArea>+DATA_DG</n:TargetDatabaseArea>
<n:TargetFlashRecoveryArea>+ARCH_DG</n:TargetFlashRecoveryArea>

Create a temporary archive directory

Because of an rconfig bug, you should create an empty archive directory on both database nodes.
As oracle, create this directory on db01:
$ mkdir -p /u04/apps/PROD/db/apps_st/data/archive

Create this directory on db02 as root:
$ mkdir -p /u04/apps/PROD/db/apps_st/data/archive
Change the ownership of the full path of this directory to oracle:dba.
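
For example, as root on db02, a minimal sketch assuming the whole tree under /u04/apps was freshly created by root and contains nothing else:
# chown -R oracle:dba /u04/apps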

Configure ASM services in the default grid listener

Login as grid into the db01 server and start the netmgr tns configuration program.
$ su - grid
# . oraenv (+ASM1)
# cd $ORACLE_HOME/bin
# ./netmgr

Go through this navigation path: Listeners -> LISTENER -> Database Services,
and configure a new service with service name ASM and SID name ASM. (netmgr will not accept the + sign here, that's why I have to create a partially dummy service.)

Now modify the dummy entry in listener.ora, changing ASM to +ASM1:
# cd /u01/app/product/11.2.0/grid/network/admin
# vi listener.ora

Change the SID_NAME line to this
SID_NAME = +ASM1

Delete the GLOBAL_DBNAME line too.
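
After these edits, the static registration entry in listener.ora should look roughly like this (a sketch; the ORACLE_HOME value follows our grid home path):
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/product/11.2.0/grid)
      (SID_NAME = +ASM1)
    )
  )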

Now copy the modified listener.ora to the second db node. It should go here: /u01/app/product/11.2.0/grid/network/admin/listener.ora

Edit this listener.ora file on the second node as grid
Change the SID_NAME line to this
SID_NAME = +ASM2

Restart the default listener on the nodes (because we use the cluster software, you can do it from any node as the grid user):
# srvctl stop listener -n db01
# srvctl start listener -n db01
# srvctl stop listener -n db02
# srvctl start listener -n db02

Make a safety backup from database files

The source database should be down for this step!

Login as root and create a tar.gz file from the data directory.
This backup tar.gz file can be very useful if any problem occurs while running the rconfig command.
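
A minimal sketch, assuming the data files live under /u04/apps/PROD/db/apps_st/data and that a /backup directory with enough free space exists (the target path is just an example):
# cd /u04/apps/PROD/db/apps_st
# tar czf /backup/PROD_data_before_rconfig.tar.gz data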

Check oracle user membership

Check whether the oracle user is a member of the asmdba group. If not, add oracle to this group.
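
For example, as root:
# id oracle
# usermod -a -G asmdba oracle   (only if asmdba is missing from the group list)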

Create init.ora file on second node

Create an init.ora file on the second node as the oracle user. It is required by the rconfig tool.
$ su - oracle
# mkdir -p /u04/apps/PROD/db/tech_st/11.2.0.3/dbs
# cd /u04/apps/PROD/db/tech_st/11.2.0.3/dbs
# vi initPROD.ora
spfile='+DATA_DG/spfilePROD_single.ora'

The above spfile will be created in the next step.

Modify init.ora and create spfile

Login into db01 as oracle, take a backup of the original init.ora, modify it, then create the spfile.
$ su - oracle
# . bin/PROD.env
# cd $ORACLE_HOME/dbs
# cp initPROD.ora initPROD.ora.orig
# sqlplus / as sysdba
SQL> create spfile='/u04/apps/PROD/db/tech_st/11.2.0.3/dbs/spfilePROD_orig.ora' from pfile;
SQL> create spfile='+DATA_DG/spfilePROD_single.ora' from pfile;
SQL> exit

# rm initPROD.ora
# vi initPROD.ora

Put this line into the new empty init.ora file.
spfile='+DATA_DG/spfilePROD_single.ora'

Modify some database parameters:
# sqlplus / as sysdba
SQL> startup nomount;
SQL> alter system set local_listener= '' scope=SPFILE;
SQL> alter system set job_queue_processes=0 scope=SPFILE;
SQL> exit

Start the source database with standard EBS start script

Shut down the running source database if you opened it while handling the temporary tablespaces.

$ su - oracle
# . bin/PROD.env 
# cd $ORACLE_HOME/appsutil/scripts/PROD_db01
# ./addbctl.sh start
# exit

Check cluster's resource and serverpool

If this is your second attempt at running rconfig, you should check the cluster's resources and server pools.
# su -
# cd /u01/app/product/11.2.0/grid/bin
# ./crsctl status resource
# ./crsctl status serverpool

If you find any entries that belong to the new RAC database (for example, the PROD database should not be there right now), delete them before you continue.
Some example commands for the deletions:
# su -
# cd /u01/app/product/11.2.0/grid/bin
# ./crsctl delete resource ora.prod.db
# ./crsctl delete serverpool ora.PROD

Convert database into RAC

Now that the preparation is over, I can do the conversion. :)

Run rconfig in verify only mode

First run rconfig in verify-only mode as the oracle user on the db01 server.
# su - oracle
# . bin/PROD.env - this still points to the old source database environment file!
# cd /home/oracle/1_work
# less ConvertToRAC_AdminManaged.xml

The verify mode should be ONLY, like this:
<n:Convert verify="ONLY">

# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3/bin -- yes this is the new RAC database software home
# ./rconfig /home/oracle/1_work/ConvertToRAC_AdminManaged.xml

If you get back "There is no return value for this step" sentence the verify only mode ranned successfully. If you got back error(s) then repair all of them before you continue. Usually and sadly the verify only mode does not discover all possible problem. It's possible that you will get error during running the rconfig in normal mode, when verify only mode said earlier everything is okay.

Run rconfig in normal mode

Backup the rconfig xml file and change the convert mode from ONLY to YES.

# cd /home/oracle/1_work
# cp ConvertToRAC_AdminManaged.xml ConvertToRAC_AdminManaged.xml_wonly_mode
# vi /home/oracle/1_work/ConvertToRAC_AdminManaged.xml
<n:Convert verify="YES">

Run rconfig in a totally new session as the oracle user on the db01 server.
$ su - oracle
# export ORACLE_HOME=/u01/apps/PROD/db/tech_st/rac/11.2.0.3
# export PATH=$ORACLE_HOME/bin:$PATH
# cd $ORACLE_HOME/bin
# ./rconfig /home/oracle/1_work/ConvertToRAC_AdminManaged.xml

If everything goes well, you should get back this output:
Converting Database "PROD" to Cluster Database. 
Target Oracle Home: /u01/apps/PROD/db/tech_st/rac/11.2.0.3. 
Database Role: PRIMARY.
Setting Data Files and Control Files
Adding Database Instances
Adding Redo Logs
Enabling threads for all Database Instances
Setting TEMP tablespace
Adding UNDO tablespaces
Adding Trace files
Setting Fast Recovery Area
Updating Oratab
Creating Password file(s)
Configuring Listeners
Configuring related CRS resources
Starting Cluster Database
<?xml version="1.0" ?>
<RConfig version="1.1" >
<ConvertToRAC>
    <Convert>
      <Response>
        <Result code="0" >
          Operation Succeeded
        </Result>
      </Response>
      <ReturnValue type="object">
<Oracle_Home>
         /u01/apps/PROD/db/tech_st/rac/11.2.0.3
       </Oracle_Home>
       <Database type="ADMIN_MANAGED"  >
         <InstanceList>
           <Instance SID="PROD1" Node="db01"  >
           </Instance>
           <Instance SID="PROD2" Node="db02"  >
           </Instance>
         </InstanceList>
       </Database>     </ReturnValue>
    </Convert>
  </ConvertToRAC></RConfig>


Check the rconfig log file under /u01/apps/PROD/db/tech_st/rac/admin/cfgtoollogs/rconfig/

Check database processes

Check the new RAC database processes with the standard cluster tool crs_stat and the unix ps command.
For example:
# su -
# cd /u01/app/product/11.2.0/grid/bin
# ./crsctl status serverpool
# ./crs_stat ora.prod.db

Restart all new database processes and listener

Stop all database processes, then start all of them as the grid user.
$ su - grid
# . oraenv
+ASM1
# cd /u01/app/product/11.2.0/grid/bin
# srvctl stop database -d PROD
# srvctl stop listener
# srvctl stop scan_listener
# srvctl start scan_listener
# srvctl start listener
# srvctl status listener
# srvctl start database -d PROD
# srvctl status database -d PROD
# lsnrctl status listener

Optionally relocate the scan listeners if they did not start on the "right" node.
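
For example, to move the first SCAN listener to db01 (the node name here is just an example), as the grid user:
# srvctl relocate scan_listener -i 1 -n db01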

Optionally test database access from conc01 server

If you wish, you can run an extra test to check that the new RAC database is accessible remotely.
Login into the conc01 server and test the RAC database access.

# su - applmgr
# . bin/PROD.env
# cd $TNS_ADMIN
# cp tnsnames.ora tnsnames.ora.orig
# vi tnsnames.ora 

Put these new lines
PROD1=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=db01.company.local)(PORT=1551))
            (CONNECT_DATA=
                  (SERVER = DEDICATED)
                  (SERVICE_NAME=PROD)
                (SID=PROD1)
            )
         )

PROD2=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=db02.company.local)(PORT=1551))
            (CONNECT_DATA=
                  (SERVER = DEDICATED)
                  (SERVICE_NAME=PROD)
                (SID=PROD2)
            )
         )

# tnsping PROD1
# sqlplus system/<system password>@PROD1
SQL> select instance_name from v$instance;

# tnsping PROD2
# sqlplus system/<system password>@PROD2
SQL> select instance_name from v$instance;

Post database conversion steps

I have to finish the conversion with the post-conversion steps. These are EBS-specific steps.

Create a temporary tnsnames entry on conc01

Now is a good time to create a tnsnames.ora entry on conc01; I will use it later.
Login into conc01 as applmgr and do the following. (If you tested database access earlier, just edit the tnsnames.ora file; do not copy it again.)

# su - applmgr
# . bin/PROD.env
# cd $TNS_ADMIN
# cp tnsnames.ora tnsnames.ora.orig
# vi tnsnames.ora 



Add these new lines
PROD=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=db01.company.local)(PORT=1551))
            (CONNECT_DATA=
                (SERVICE_NAME=PROD)
                (INSTANCE_NAME=PROD1)
            )
        )

Exit and test the entry
# tnsping PROD

Login with system db user.
# sqlplus system/<system password>@PROD
SQL> select instance_name from v$instance;
SQL> exit;

Create new appsutil.zip on conc01

Now I have a working connection on conc01, so I can create the new appsutil zip file to set up autoconfig on the database nodes. Login into conc01 as applmgr and generate the appsutil zip file.
# su - applmgr
# . ./bin/PROD.env
# perl $AD_TOP/bin/admkappsutil.pl

After successful file generation, copy the new appsutil zip file into the Oracle database software home directory on both db nodes, for example with these scp commands:
# scp /u02/apps/PROD/inst/apps/PROD_conc01/admin/out/appsutil.zip oracle@db01:/u01/apps/PROD/db/tech_st/rac/11.2.0.3
# scp /u02/apps/PROD/inst/apps/PROD_conc01/admin/out/appsutil.zip oracle@db02:/u01/apps/PROD/db/tech_st/rac/11.2.0.3

Setup and run autoconfig on the first database node

Unzip appsutil.zip on the first db node. Login as oracle.
# su - oracle
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3
# unzip appsutil.zip

Copy the old jre directory from the original source database directory into the new one.
# cp -r /u04/apps/PROD/db/tech_st/11.2.0.3/appsutil/jre /u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil

Create the network/admin directory structure:
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin
# mkdir PROD1_db01

Log out, log in again, and initialize the session.
$ su - oracle
# export ORACLE_HOME=/u01/apps/PROD/db/tech_st/rac/11.2.0.3
# export LD_LIBRARY_PATH=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/lib:/u01/apps/PROD/db/tech_st/rac/11.2.0.3/ctx/lib
# export ORACLE_SID=PROD1
# export PATH=$PATH:/u01/apps/PROD/db/tech_st/rac/11.2.0.3/bin
# export TNS_ADMIN=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin/PROD1_db01

Create and edit a new tnsnames.ora file.
# vi $TNS_ADMIN/tnsnames.ora

Add these lines
PROD1=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=db01.company.local)(PORT=1551))
            (CONNECT_DATA=
                  (SERVER = DEDICATED)
                  (SERVICE_NAME=PROD)
                (SID=PROD1)
            )
        )

PROD1_LOCAL=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=db01.company.local)(PORT=1551))
        )

I have to clean the old setup in the database
# sqlplus apps/<apps password>@PROD1
SQL> exec fnd_conc_clone.setup_clean;
SQL> commit;
SQL> exit

Modify local_listener parameter in the database
# sqlplus / as sysdba
SQL> alter system set local_listener='PROD1_LOCAL' sid='PROD1';
SQL> exit

Create the new context file.
# cd $ORACLE_HOME/appsutil/bin
# ./adbldxml.pl appsuser=apps appspass=<apps password>

Parameters:
Enter Hostname of Database server: db01
Enter Port of Database server: 1551
Enter SID of Database server: PROD1
Do you want to enable SCAN addresses:N
Enter the value for Display Variable: db01:1:0

The script has created the xml file here:
/u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil/PROD1_db01.xml

As you can see, I don't configure the SCAN address; my system will use only the VIP addresses.
Because of point 12 of note 823587.1, I have to modify the s_virtual_hostname parameter in the new xml file to this value:
<host oa_var="s_virtual_hostname">db01vip</host>

Now I can run autoconfig:
# cd $ORACLE_HOME/appsutil/bin
# ./adconfig.pl

Parameters:
Enter the full file path to the Context file:
/u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil/PROD1_db01.xml
Enter the APPS user password: <APPSpwd>

After a successful run of the perl script, check the autoconfig log file for any errors, for example database connection errors at the beginning of the file.
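
For example, assuming the standard db-tier log location (the timestamp directory name will differ on your system):
# grep -i error $ORACLE_HOME/appsutil/log/PROD1_db01/<MMDDhhmm>/adconfig.log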

If everything seems to be okay, then go to the second node.

Setup and run autoconfig on the second database node

Unzip appsutil.zip on the second db node. Login as oracle.
# su - oracle
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3
# unzip appsutil.zip

Copy the old jre directory from the original source database directory into the new one. Because the jre exists only on the first node, I have to copy it from there. I use this scp command; the -r parameter copies the full directory structure:
# scp -r oracle@db01:/u04/apps/PROD/db/tech_st/11.2.0.3/appsutil/jre /u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil

Create the network/admin directory structure:
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin
# mkdir PROD2_db02

Log out, log in again, and initialize the session. Be aware: ORACLE_SID must be PROD2 this time, and so must the TNS_ADMIN path.
$ su - oracle
# export ORACLE_HOME=/u01/apps/PROD/db/tech_st/rac/11.2.0.3
# export LD_LIBRARY_PATH=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/lib:/u01/apps/PROD/db/tech_st/rac/11.2.0.3/ctx/lib
# export ORACLE_SID=PROD2
# export PATH=$PATH:/u01/apps/PROD/db/tech_st/rac/11.2.0.3/bin
# export TNS_ADMIN=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin/PROD2_db02

Create and edit a new tnsnames.ora file.
# vi $TNS_ADMIN/tnsnames.ora

Add these lines - I am on the second node, so I add the entries for PROD2.
PROD2=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=db02.company.local)(PORT=1551))
            (CONNECT_DATA=
                  (SERVER = DEDICATED)
                  (SERVICE_NAME=PROD)
                (SID=PROD2)
            )
        )

PROD2_LOCAL=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=db02.company.local)(PORT=1551))
        )

Do NOT(!) clean the old setup now; it should be cleaned only on the first node!!!

Modify local_listener parameter in the PROD2 instance.
# sqlplus / as sysdba
SQL> alter system set local_listener='PROD2_LOCAL' sid='PROD2';
SQL> exit

Create the new context file for node2.
# cd $ORACLE_HOME/appsutil/bin
# ./adbldxml.pl appsuser=apps appspass=<apps password>

Parameters - use the second node's data:
Enter Hostname of Database server: db02
Enter Port of Database server: 1551
Enter SID of Database server: PROD2
Do you want to enable SCAN addresses:N
Enter the value for Display Variable: db02:1:0

The script has created the xml file here:
/u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil/PROD2_db02.xml

Modify the s_virtual_hostname parameter in the new xml file to this value:
<host oa_var="s_virtual_hostname">db02vip</host>

Now I can run autoconfig:
# cd $ORACLE_HOME/appsutil/bin
# ./adconfig.pl

Parameters:
Enter the full file path to the Context file:
/u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil/PROD2_db02.xml
Enter the APPS user password: <APPSpwd>

After a successful run of the perl script, check the autoconfig log file for any errors, for example database connection errors at the beginning of the file.

If you have other nodes, repeat the above steps on all of them.

Rerun autoconfig on both nodes

First on the db01 node, as the oracle user, in a completely new session. I have to use the new environment file!

$ su - oracle
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3
# . ./PROD1_db01.env
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil/scripts/PROD1_db01
# ./adautocfg.sh

Check the adconfig.log file for errors.

After a successful run, go to the second node. Login as the oracle user in a completely new session. I have to use the new environment file!

$ su - oracle
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3
# . ./PROD2_db02.env
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil/scripts/PROD2_db02
# ./adautocfg.sh

Check the adconfig.log file for errors.

Stop the cluster database and the default listener.

On the first node as grid user.
$ su - grid
# . oraenv (ORACLE_SID = +ASM1)
# srvctl stop listener
# srvctl stop database -d PROD

Optionally create new environment files

This is not a standard step, but I recommend creating environment files under the oracle user's home directory. They can be useful, and I will use them later.

Login as oracle into the db01 server and modify the PROD.env file at $HOME/bin. It should now point to the new cluster database environment file of the PROD1 instance.

Login as oracle into the db02 server and create the PROD.env file at $HOME/bin. It should now point to the new cluster database environment file of the PROD2 instance.
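
A minimal sketch of such a file: on db01, $HOME/bin/PROD.env can contain a single line that sources the autoconfig-generated environment file (on db02, source PROD2_db02.env instead):
. /u01/apps/PROD/db/tech_st/rac/11.2.0.3/PROD1_db01.env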

Modify listener parameters 

I will not use the default LISTENER as the EBS listener. Instead, I will create a custom one named LISTENER_PROD. This is quite useful if the cluster infrastructure is later extended with a non-EBS cluster database and you don't want this listener to handle that new database; only the SCAN and the default listener will serve access to it.

As the first step, I have to change the s_db_listener parameter in the context file on both nodes.
On the first node login with oracle.
$ su - oracle
# . ./bin/PROD.env

Backup the current context file and modify the parameter
# cp $CONTEXT_FILE $CONTEXT_FILE.beforeListenerChange
# vi $CONTEXT_FILE
<DB_LISTENER oa_var="s_db_listener">LISTENER_PROD</DB_LISTENER>

Do the same on the second node.
$ su - oracle
# . ./bin/PROD.env

Backup the current context file and modify the parameter
# cp $CONTEXT_FILE $CONTEXT_FILE.beforeListenerChange
# vi $CONTEXT_FILE
<DB_LISTENER oa_var="s_db_listener">LISTENER_PROD</DB_LISTENER>

I have to run autoconfig, so I start the default listener as the grid user.
$ su - grid
# . oraenv
+ASM1
# srvctl start listener

At this point I cannot use the new environment files yet, so I have to login again, initialize a new session, and start the instances within this session.
First on db01 server.
$ su - oracle
# export ORACLE_HOME=/u01/apps/PROD/db/tech_st/rac/11.2.0.3
# export LD_LIBRARY_PATH=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/lib:/u01/apps/PROD/db/tech_st/rac/11.2.0.3/ctx/lib
# export ORACLE_SID=PROD1
# export PATH=$PATH:/u01/apps/PROD/db/tech_st/rac/11.2.0.3/bin
# export TNS_ADMIN=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin/PROD1_db01
# sqlplus / as sysdba
SQL> startup
SQL> exit

Now on the second node
$ su - oracle
# export ORACLE_HOME=/u01/apps/PROD/db/tech_st/rac/11.2.0.3
# export LD_LIBRARY_PATH=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/lib:/u01/apps/PROD/db/tech_st/rac/11.2.0.3/ctx/lib
# export ORACLE_SID=PROD2
# export PATH=$PATH:/u01/apps/PROD/db/tech_st/rac/11.2.0.3/bin
# export TNS_ADMIN=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin/PROD2_db02
# sqlplus / as sysdba
SQL> startup
SQL> exit

Now login again into the first node as oracle in a new session. Don't use the previous startup sessions.

On the first node
$ su - oracle
# . bin/PROD.env
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil/scripts/PROD1_db01
# ./adautocfg.sh

Check the adconfig.log file for errors.

On the second node.
$ su - oracle
# . bin/PROD.env
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil/scripts/PROD2_db02
# ./adautocfg.sh

Check the adconfig.log file for errors.

Ignore the "TNS-01190: The user is not authorized to execute the requested listener command" error message.

Now stop the started services as grid user.
$ su - grid
# . oraenv (ORACLE_SID = +ASM1)
# srvctl stop listener
# srvctl stop database -d PROD

Create new EBS specific cluster listeners

Because I will not use the default LISTENER, I had to change its port number.
As grid user on the first node:
$ su - grid
# . oraenv 
+ASM1
# srvctl modify listener -l LISTENER -p 1531

Now I can create and register the custom listener for EBS.
As the oracle user on the first node:
$ su - oracle
# cd /u01/app/product/11.2.0/grid/bin
# ./srvctl add listener -l LISTENER_PROD -o /u01/apps/PROD/db/tech_st/rac/11.2.0.3 -p 1551
# ./srvctl setenv listener -l LISTENER_PROD -T TNS_ADMIN=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin
# ./srvctl getenv listener -l LISTENER_PROD
# ./srvctl status listener -l LISTENER_PROD

Check that the ACL of the new LISTENER_PROD listener is correct for us. It should be:
"ACL='owner:oracle:rwx,pgrp:oinstall:rwx,other::r--'"

Use the following command to check:
# ./crsctl status resource ora.LISTENER_PROD.lsnr -p

If the setting is not correct, change it with the crsctl modify command!
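
A sketch of the modify command (the attribute value is the expected ACL from above):
# ./crsctl modify resource ora.LISTENER_PROD.lsnr -attr "ACL='owner:oracle:rwx,pgrp:oinstall:rwx,other::r--'"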

Just to be sure, check the new listener on the db02 server.
As oracle on the second node:
$ su - oracle
# cd /u01/app/product/11.2.0/grid/bin
# ./srvctl getenv listener -l LISTENER_PROD
# ./srvctl status listener -l LISTENER_PROD
# ./crsctl status resource ora.LISTENER_PROD.lsnr -p

Now I have to create listener.ora and tnsnames.ora files under $ORACLE_HOME/network/admin. These files will be just "links", nothing else. But they will not be standard unix links; instead, I will use the IFILE parameter.

As oracle on the first node
$ su - oracle
# . bin/PROD.env
# vi $ORACLE_HOME/network/admin/listener.ora

Add this line:
IFILE=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin/PROD1_db01/listener.ora

Create the tnsnames.ora
# vi $ORACLE_HOME/network/admin/tnsnames.ora

Add this line:
IFILE=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin/PROD1_db01/tnsnames.ora

Do the same on the second node.
$ su - oracle
# . bin/PROD.env
# vi $ORACLE_HOME/network/admin/listener.ora

Add this line:
IFILE=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin/PROD2_db02/listener.ora

Create the tnsnames.ora
# vi $ORACLE_HOME/network/admin/tnsnames.ora

Add this line:
IFILE=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin/PROD2_db02/tnsnames.ora

Modify the TNS_ADMIN cluster attribute of the database.

For correct operation, I have to modify the TNS_ADMIN attribute of the PROD cluster database.
As oracle on any node:
$ su - oracle
# cd /u01/app/product/11.2.0/grid/bin
# ./srvctl setenv database -d PROD -T TNS_ADMIN=/u01/apps/PROD/db/tech_st/rac/11.2.0.3/network/admin

As the grid user, start the database and the new listener.
$ su - grid
# . oraenv
+ASM1
# srvctl start listener -l LISTENER_PROD
# srvctl start database -d PROD

Check the database and listener log files.
Optionally check that the PROD database is accessible from the conc01 application node.
As applmgr on conc01 do the following.

$ su - applmgr
# . bin/PROD.env
# sqlplus system/<system password>@PROD 
SQL> select instance_name from v$instance;
SQL> exit;

If you cannot reach the database, repair the problem before you continue.

Run autoconfig on both nodes.
On first node:
$ su - oracle
# . bin/PROD.env
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil/scripts/PROD1_db01
# ./adautocfg.sh

Check the adconfig.log file for errors.

On second node:
$ su - oracle
# . bin/PROD.env
# cd /u01/apps/PROD/db/tech_st/rac/11.2.0.3/appsutil/scripts/PROD2_db02
# ./adautocfg.sh

Check the adconfig.log file for errors.

Restart the database and the new listener.
$ su - grid
# . oraenv
+ASM1
# srvctl stop database -d PROD
# srvctl stop listener -l LISTENER_PROD
# srvctl start listener -l LISTENER_PROD
# srvctl status listener -l LISTENER_PROD
# srvctl start database -d PROD
# srvctl status database -d PROD

Check all database and listener log files for errors.

Now that the database is ready, I have to configure the application tiers. Check the next post for these steps.
