Wednesday, September 4, 2013

Building a 5 node Oracle EBS using RAC and shared APPL_TOP, part 5.

Begin to build the shared APPL_TOP

Now we have a well-configured Oracle 11g RAC infrastructure and the cluster database software installed.
All independent tasks are finished, so we are ready to begin building the 5 node system.

First I will create a 1+3 node system: 1 node for the database and 3 nodes for the application tier (with shared APPL_TOP). This task can be split into 2 subtasks. The first one is to create a 2 node Oracle EBS system, the second one is to add the remaining 2 application tier nodes and reconfigure the first application tier node.
The first subtask is nothing more than an ordinary EBS cloning task, and the second subtask has the same complexity as the first one :)

So what will we do this time?

  • prepare application tier nodes
  • prepare source system
  • create 2 node EBS system
  • add the additional application nodes




Prepare and check the target nodes

Until now we have worked only with the database nodes. I will use the first one as the temporary single node Oracle EBS database.

Of course, for the application tier I need the 3 other application servers. Let's prepare them.
For operating system installation and base configuration, use the same Oracle documents we used for the database nodes. Configure the storage partitions and their mount points as shown in my previous post's storage diagram.

Here are the usual tables for the application tier nodes:
Attribute          Value / description
Server name        conc01.company.local
IP address         10.10.1.41
Operating system   RedHat Enterprise Linux 5.9 (Tikanga) 64 bit
Memory             8GB
Storage capacity   boot -
                   swap - 8GB
                   / - 50GB
                   /u01 - 50GB
                   /u02 - 150GB

Attribute          Value / description
Server name        app01.company.local
IP address         10.10.1.42
Operating system   RedHat Enterprise Linux 5.9 (Tikanga) 64 bit
Memory             8GB
Storage capacity   boot -
                   swap - 8GB
                   / - 50GB
                   /u01 - 50GB
                   /u02 - 150GB

Attribute          Value / description
Server name        app02.company.local
IP address         10.10.1.43
Operating system   RedHat Enterprise Linux 5.9 (Tikanga) 64 bit
Memory             8GB
Storage capacity   boot -
                   swap - 8GB
                   / - 50GB
                   /u01 - 50GB
                   /u02 - 150GB

I assigned 5 partitions to each node; let me explain them through their mount points:
  • boot - regular linux partition for booting
  • swap - regular linux swap partition - its size depends on the memory
  • / - standard operating system files - linux administrators like to create additional partitions; that's not a problem, let them create them if they want.
  • /u01 - shared partition for storing the shared APPL_TOP. The same partition must be mounted on all 3 application tier nodes! (Use NFS or any certified cluster file system. ext3 and ext4 are not acceptable as shared file systems!)
  • /u02 - shared file system, which will be used by all nodes. At least for APPLCSF, but useful for any other shared standard or non-standard Oracle EBS directories (for example DIRECTORY or LIBRARY objects).
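To make the /u01 requirement concrete, here is a minimal NFS sketch. The server name nfs01 and the export path /export/prod_appl are hypothetical, and the mount options are only a common starting point for a shared application tier file system, not an Oracle-certified recipe - check the certification notes for your platform.

```shell
# Hypothetical /etc/fstab entry on each application tier node (conc01, app01, app02).
# nfs01:/export/prod_appl stands in for the real NFS export holding the shared APPL_TOP.
# hard protects against silent I/O errors; actimeo=0 keeps attribute caches
# coherent between the 3 nodes sharing the same files.
# nfs01:/export/prod_appl  /u01  nfs  rw,hard,nointr,rsize=32768,wsize=32768,actimeo=0  0 0

# After editing fstab, mount it and verify on every node:
mount /u01
df -h /u01
```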

What should you check on each node, at minimum?

  • kernel attributes in sysctl.conf
  • limits.conf - add the applmgr user to it the same way as the oracle user
  • hosts file (the first real, non-commented row should contain the right IP address with the right fully qualified hostname)
  • check whether patch 608836 is installed or not
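The hosts file check in the list above can be scripted. This is a minimal sketch that prints the first real (non-comment, non-localhost) entry; it is demonstrated on a sample file so it can run anywhere - on the real nodes, point the awk command at /etc/hosts instead.

```shell
# Build a sample hosts file (a stand-in for /etc/hosts on conc01).
sample=$(mktemp)
cat > "$sample" <<'EOF'
# comment line
127.0.0.1   localhost localhost.localdomain
10.10.1.41  conc01.company.local conc01
EOF

# Print the first non-comment, non-localhost row: it should read "IP FQDN".
awk '!/^#/ && !/localhost/ && NF >= 2 {print $1, $2; exit}' "$sample"
# -> 10.10.1.41 conc01.company.local
rm -f "$sample"
```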

Other necessary preparation steps

1. Modify the default random link.
Because of a hidden EBS bug, XDOLoader can work extremely slowly if this isn't set up:
# mv /dev/random /dev/random_orig
# ln -s /dev/urandom /dev/random
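A quick way to verify the link afterwards is readlink. The snippet below demonstrates the check on a scratch directory so it is safe to run anywhere; on a prepared node you would simply run `readlink /dev/random` and expect `/dev/urandom`.

```shell
# Demonstrate the verification on a scratch directory.
d=$(mktemp -d)
touch "$d/urandom"              # stands in for /dev/urandom
ln -s "$d/urandom" "$d/random"  # stands in for the /dev/random symlink
readlink "$d/random"            # prints the link target
rm -rf "$d"
```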

2. Create the applmgr and oracle users and the required groups on each node. The oracle user's ID and the common groups' IDs must be the same as on the database nodes.
First check if they are already created:
# less /etc/group
# less /etc/passwd

If not, create the groups first, then the users:
# groupadd -g 1000 oinstall
# groupadd -g 1001 dba 

# useradd -u 1001 -g oinstall -G dba oracle
# useradd -u 1002 -g oinstall -G dba applmgr

Don't forget: the oracle unix user should look like this only on the application tier nodes!!!

Use the passwd command to set the passwords you want. Each user's password should be the same on every node.
# passwd applmgr
# passwd oracle
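The /etc/group and /etc/passwd inspection in step 2 can also be scripted with getent, so the creation commands are skipped when the entry already exists. A sketch using the names and IDs from above (run as root on each node):

```shell
# Idempotent version of step 2: getent returns non-zero when an entry is
# missing, so each create command only runs if needed.
ensure_group() { getent group  "$1" >/dev/null || groupadd -g "$2" "$1"; }
ensure_user()  { getent passwd "$1" >/dev/null || useradd -u "$2" -g oinstall -G dba "$1"; }

ensure_group oinstall 1000
ensure_group dba      1001
ensure_user  oracle   1001
ensure_user  applmgr  1002
```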

3. Complete the applmgr unix user's .bash_profile with the LDEMULATION setup:
# vi /home/applmgr/.bash_profile
Put these lines at the end of the file:
LDEMULATION=elf_i386
export LDEMULATION

4. Check whether /etc/oraInst.loc already exists; if yes, modify it as below, if not, create it:
# chown applmgr:dba  /etc/oraInst.loc
# chmod 664 /etc/oraInst.loc
# ls -l /etc/oraInst.loc
-rw-rw-r-- 1 applmgr dba 62 Aug 1 15:05 /etc/oraInst.loc
# less /etc/oraInst.loc
inventory_loc=/u01/apps/PROD/oraInventory
inst_group=dba

If oraInventory is already there, then move it:
# mv /u01/apps/PROD/oraInventory /u01/apps/PROD/oraInventory_old
# mkdir /u01/apps/PROD/oraInventory
# chown applmgr:dba  /u01/apps/PROD/oraInventory
# chmod 774 /u01/apps/PROD/oraInventory

Create the /u02/apps/PROD/inst directory
# su - applmgr
$ mkdir -p /u02/apps/PROD/inst
$ ls -l /u02/apps/PROD
drwxr-xr-x 2 applmgr dba 4096 Aug  8 12:00 inst

Prepare the source system for cloning and for moving.

Before you make any changes, please check that:

  • there is enough free space in the SYSAUX and SYSTEM tablespaces,
  • the temporary tablespace and tempfile usage is sane (for example, that there is no inconsistent configuration, like the whole database using temporary tablespace group(s) while some schema users directly use one of the temporary tablespaces instead of the groups)
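The free space check above is easiest with a small query against dba_free_space. This sketch only writes the query into a helper script, so the snippet is runnable anywhere; on the database server you would then execute it with `sqlplus -s / as sysdba @check_free_space.sql`.

```shell
# Generate a helper script for the SYSTEM/SYSAUX free space check.
cat > check_free_space.sql <<'EOF'
-- Free space per tablespace (MB); SYSTEM and SYSAUX should have comfortable headroom.
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) free_mb
FROM   dba_free_space
WHERE  tablespace_name IN ('SYSTEM','SYSAUX')
GROUP  BY tablespace_name;
EOF
cat check_free_space.sql
```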
Preclone on the application tier:
# su - applmgr
$ <source the application tier environment file>
$ cd $ADMIN_SCRIPTS_HOME
$ perl $PWD/adpreclone.pl appsTier

Preclone on the database tier:
# su - oracle
$ <source the database tier environment file>
$ cd $ORACLE_HOME/appsutil/scripts/<context_name>
$ perl $PWD/adpreclone.pl dbTier

Shut down the application tier, then shut down the database tier in normal mode. Make sure that none of the application tier processes are running when you shut down the database tier!

Make two separate backups (for example tar.gz files) of the application tier and the database tier as the root user.
# cd /u01/apps    (assuming the base directories are accessible below this one)
# tar -czvf source_db.tar.gz <SOURCE SID>/db
# tar -czvf source_apps.tar.gz <SOURCE SID>/apps <SOURCE SID>/inst

Copy the files to the common stage area (for example with scp):
# scp source_apps.tar.gz root@db01:/u02/backup/adhoc
# scp source_db.tar.gz root@db01:/u02/backup/adhoc
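Before removing anything from the source system, it is worth verifying the copied archives. A minimal sketch using md5sum, demonstrated on a scratch file since the real archives live on your servers; on the real systems you would compare the checksum taken on the source host with the one taken on db01 after the scp.

```shell
# Verify that a copied archive is intact by comparing checksums.
f=$(mktemp)
echo "pretend this is source_db.tar.gz" > "$f"
sum_before=$(md5sum "$f" | awk '{print $1}')
cp "$f" "$f.copy"                          # stands in for the scp transfer
sum_after=$(md5sum "$f.copy" | awk '{print $1}')
[ "$sum_before" = "$sum_after" ] && echo "checksum OK"
rm -f "$f" "$f.copy"
```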

Just to remember: I will use PROD in this example for the target SID.

Create the 2 node EBS system

This will contain only the 2 usual post clone steps:

  • create db tier
  • create apps tier

Create the database tier

Log into the db01 server as root and untar the db tar.gz file.
# mkdir -p /u04/apps
# chown oracle:dba /u04/apps
# cd /u04/apps
# tar -xzvf /u02/backup/adhoc/source_db.tar.gz

Check the adxdbctx.tmp file content to see whether the PERL5LIB tag contains 5.8.3 or 5.10.0.
If it still contains 5.8.3, modify the line from this:
<PERL5LIB oa_var="s_perl5lib" osd="unix" default="%s_db_oh%%/%perl%/%lib%/%5.8.3:%s_db_oh%%/%perl%/%lib%/%site_perl%/%5.8.3:%s_db_oh%%/%appsutil%/%perl">%s_perl5lib%</PERL5LIB>

to this
<PERL5LIB oa_var="s_perl5lib" osd="unix" default="%s_db_oh%%/%perl%/%lib%/%5.10.0:%s_db_oh%%/%perl%/%lib%/%site_perl%/%5.10.0:%s_db_oh%%/%appsutil%/%perl">%s_perl5lib%</PERL5LIB>

adxdbctx.tmp can be found under appsutil's templates directory.

Modify the ownership of the db tier files and directories:
# chown -R oracle:dba /u04/apps

Rename the main directory; after renaming it should be /u04/apps/PROD.

Check and modify/create the /etc/oraInst.loc file like this:
# cp /etc/oraInst.loc /etc/oraInst.loc_cluster
# chown oracle:dba  /etc/oraInst.loc
# chmod 664 /etc/oraInst.loc
# vi /etc/oraInst.loc
inventory_loc=/u04/apps/oraInventory
inst_group=dba

# mkdir /u04/apps/oraInventory
# chown oracle:dba  /u04/apps/oraInventory
# chmod 774 /u04/apps/oraInventory

This oraInventory will only be used while this temporary single node database is in use.

Execute the EBS post clone perl script with port pool 30:
# su - oracle
$ cd /u04/apps/PROD/db/tech_st/11.2.0.3/appsutil/clone/bin
$ perl $PWD/adcfgclone.pl dbTier

parameters:
Target System Hostname: db01
Target System Domainname: company.local
Target Instance is RAC: n
Target System Database SID: PROD
Target System Base Directory: /u04/apps/PROD
Target System utl_file_dir Directory List: /usr/tmp
Number of DATA_TOPs on Target System: 1
Target System DATA_TOP Directory 1: /u04/apps/PROD/db/apps_st/data 
Target System RDBMS ORACLE_HOME Directory: /u04/apps/PROD/db/tech_st/11.2.0.3
Do you want to preserve the Display: n
Target System Display: db01:1.0
Do you want the target system to have the same port values as the source system: n
Target System Port Pool [0-99] : 30
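The port pool value simply shifts every default port by the pool number. A small sketch of the arithmetic; the base values are the standard EBS defaults, and this assumes the usual convention that the pool number is added to each of them:

```shell
# With port pool 30, every default port is shifted by 30.
pool=30
for entry in "DB listener:1521" "Web (HTTP):8000" "Web SSL (HTTPS):4443"; do
    name=${entry%:*}   # text before the colon
    base=${entry#*:}   # default port after the colon
    echo "$name -> $((base + pool))"
done
# DB listener -> 1551
# Web (HTTP) -> 8030
# Web SSL (HTTPS) -> 4473
```

The shifted HTTPS port 4473 is the one that shows up later in the context file and in the login URL.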

After the post clone script finishes, check the post clone log and the database alert log for errors. Don't continue until all major problems are resolved!

Recommended post clone steps
  • create a custom start/stop script in oracle's $HOME/bin directory
  • fill out the new PROD_db01_ifile.ora file in the $ORACLE_HOME/dbs directory with your own parameters (for example a higher SGA and higher AQ job parameters are recommended!)
  • restart the database and the listener in normal mode

Create first apps tier on conc01 node

Log into the conc01 server as the root user and untar the backup file of the source apps tier.

# mkdir -p /u01/apps
# cd /u01/apps
# tar -xzvf /u02/backup/adhoc/source_apps.tar.gz

Modify the ownership of the new directory and rename it.
# chown -R applmgr:dba /u01/apps
# mv /u01/apps/<SOURCE SID> /u01/apps/PROD

Execute the apps tier post clone step
# su - applmgr
$ cd /u01/apps/PROD/apps/apps_st/comn/clone/bin
$ perl $PWD/adcfgclone.pl appsTier

parameters - take care: the Instance Home Directory must be on the /u02 mount point!!!
Target System Hostname: conc01
Target System Database SID: PROD
Target System Database Server Node: db01
Target System Database Domain Name: company.local
Target System Base Directory: /u01/apps/PROD
Target System Tools ORACLE_HOME Directory: /u01/apps/PROD/apps/tech_st/10.1.2
Target System  Web ORACLE_HOME Directory: /u01/apps/PROD/apps/tech_st/10.1.3
Target System  APPL_TOP Directory: /u01/apps/PROD/apps/apps_st/appl
Target System  COMMON_TOP Directory: /u01/apps/PROD/apps/apps_st/comn
Target System  Instance Home Directory: /u02/apps/PROD/inst     
Target System Root Service: enabled
Target System  Web Entry Point Services: enabled
Target System  Application Services: enabled
Target System  Batch Processing Services: enabled
Target System  Other Services: disabled
Do you want to preserve the Display: n
Target System Display: conc01:1.0
Do you want the target system to have the same port values as the source system: n
Target System Port Pool [0-99] : 30
Choose the value which will be set as the APPLTMP value on the target node: /usr/tmp

APPLTMP is temporarily set to /usr/tmp. In a later step it will be reconfigured to use a common, shared directory under the /u02 mount point.

Don't start up the system when it asks!

Post clone steps
1. I recommend creating a start/stop script in applmgr's $HOME/bin directory. It can be very useful to create an environment file which sources the new application tier environment file. In this example I will create one named PROD.env.
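Step 1 can look like the sketch below. The wrapped file name is an assumption: APPSORA.env under APPL_TOP is the usual name, but verify it on your own instance. The snippet demonstrates the idea on scratch files so it is runnable anywhere.

```shell
# $HOME/bin/PROD.env is just a thin wrapper that sources the real application
# tier environment file, so ". bin/PROD.env" works from applmgr's home:
#   . /u01/apps/PROD/apps/apps_st/appl/APPSORA.env   (path is an assumption)
#
# Demonstration on scratch files:
d=$(mktemp -d)
echo 'export TWO_TASK=PROD' > "$d/APPSORA.env"   # stand-in for the real env file
echo ". $d/APPSORA.env"     > "$d/PROD.env"      # the wrapper
. "$d/PROD.env"
echo "$TWO_TASK"                                 # prints: PROD
rm -rf "$d"
```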

2. If your source system used https, then check the $CONTEXT_FILE content. Sometimes the port numbers are configured badly after the post clone. In our example, check whether these attribute values are 4473 or not; if not, change them:
# su - applmgr
$ . bin/PROD.env
$ vi $CONTEXT_FILE
s_active_webport: 4473
s_login_page's port value: 4473
s_external_url's port value: 4473

3. Modify any other parameters you are used to (for example workflow mailer settings).

4. Change passwords! At least the system, sys and apps passwords.
On db01 server
# su - oracle
$ . bin/PROD.env
$ sqlplus / as sysdba
SQL> alter user system identified by <new system password>;
SQL> alter user sys identified by <new sys password>;
SQL> exit;
$ orapwd file=$ORACLE_HOME/dbs/orapwPROD password=<new sys password> entries=16 force=y

On conc01 server
# su - applmgr
$ . bin/PROD.env

$ FNDCPASS apps/<old apps password> 0 Y system/<new system password> SYSTEM APPLSYS <new apps password>
$ FNDCPASS apps/<new apps password> 0 Y system/<new system password> USER SYSADMIN <new sysadmin password>
$ FNDCPASS apps/<new apps password> 0 Y system/<new system password> ALLORACLE <new alloracle schema password>


5. Run autoconfig:
$ cd $ADMIN_SCRIPTS_HOME
$ ./adautocfg.sh

6. Log into the database with the apps user and make the following changes:
  • modify the ICX_PARAMETERS table: the session_cookie_name value should be the new database instance name (PROD)
  • optionally modify the WF_SYSTEMS table: the value of the display_name column should be a new one.
7. Log out, log in again and start the system to check whether it's working well or not. Be sure to start the system in a new session window; don't use the one where you ran autoconfig!

Open a browser window and type the main URL (in our example: https://conc01.company.local:4473/ )
Log in with the System Administrator responsibility and check the dashboard, the concurrent managers and so on.

After successfully starting the system I recommend changing the Java Color, Site Name and the homepage brand string.
