
Wednesday, September 18, 2013

Implementing a demo RAC 12c system

Hi all!

I am working on a new post series in which I will build a complete, working demo Oracle 12c RAC system.
The demo system will use only freely downloadable or trial-licensed virtual components.

The planned system components:

  • Oracle VirtualBox for virtualization
  • 2 virtual nodes with the Solaris operating system,
  • a virtual ZFS storage node,
  • Oracle 12c Enterprise Edition RAC
  • Container and pluggable database instances
The other goal of this post series is to show how quickly a working Oracle 12c RAC database can be created for demonstrating RAC, Solaris, or ZFS storage features.
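To give an idea of how the virtual nodes could be created, here is a minimal sketch using the VirtualBox command line. The VM names, memory sizes, disk sizes, and internal network name are all my assumptions for illustration, not fixed values from the plan:

```shell
# Hypothetical sketch: create the two Solaris RAC node VMs with VirtualBox.
# Names, sizes, and the interconnect network name are assumptions.
for node in sol-rac1 sol-rac2; do
  VBoxManage createvm --name "$node" --ostype Solaris11_64 --register
  VBoxManage modifyvm "$node" --memory 4096 --cpus 2 \
    --nic1 nat --nic2 intnet --intnet2 rac-interconnect
  VBoxManage createhd --filename "$HOME/VirtualBox VMs/$node/$node.vdi" \
    --size 40960
  VBoxManage storagectl "$node" --name SATA --add sata
  VBoxManage storageattach "$node" --storagectl SATA --port 0 \
    --type hdd --medium "$HOME/VirtualBox VMs/$node/$node.vdi"
done
```

The second NIC on an internal network would serve as the RAC private interconnect; the ZFS storage node would be created the same way with extra data disks.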

I will begin it as soon as possible.

Zoltan


Sunday, September 15, 2013

Building a 5-node Oracle EBS using RAC and shared APPL_TOP 8.

Finishing the configuration


What do we have now?
  • Grid infrastructure
  • RAC database software
  • A 2-node RAC EBS database
  • An application tier still to be configured
The steps completed so far:

  1. Prepare the RAC database software
  2. Prepare the source database for the conversion
  3. Convert database into RAC
  4. Post database conversion steps
    1. turn on AutoConfig
    2. change server parameters
    3. create new EBS-specific cluster listeners
So now I have to configure the application tier to use the new RAC database. The remaining steps are:
  1. Configure the application tiers to use the RAC database service
  2. Post-configuration steps
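The apps tier side of step 1 can be sketched roughly as follows. The context variable name follows the RAC interoperability note, but the service name and context file paths are my assumptions; the exact values depend on your environment:

```shell
# Hypothetical sketch: point an apps tier node at the RAC database service.
# 1. Edit the JDBC connect descriptor in the context file so it is RAC-aware
#    (lists the listeners of both database nodes, connects to the service):
vi $CONTEXT_FILE      # update s_apps_jdbc_connect_descriptor
# 2. Rerun AutoConfig on the apps tier so all configuration files pick up
#    the new connect descriptor:
$ADMIN_SCRIPTS_HOME/adautocfg.sh
```

This has to be repeated on every application tier node, then the services restarted.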

Sunday, September 8, 2013

Building a 5-node Oracle EBS using RAC and shared APPL_TOP 7.

Forming the RAC database


What do we have now?
  • Grid infrastructure
  • RAC database software
  • A 1+3-node Oracle EBS system
It's time to copy the single-node EBS database data into the new RAC database.
I will follow MOS note 823587.1 for transferring the data from the single-node EBS database into the new RAC one.
The main steps will be:

  1. Prepare the RAC database software
  2. Prepare the source database for the conversion
  3. Convert database into RAC
  4. Post database conversion steps
    1. turn on AutoConfig
    2. change server parameters
    3. create new EBS-specific cluster listeners
  5. Configure the application tiers to use the RAC database service
  6. Post-configuration steps
This post covers the steps up to the application tier configuration; all remaining steps will be in the next one.
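The core of steps 3 and 4 can be sketched like this. The database name PROD, the instance names, and the node names are assumptions for illustration, not taken from the actual build:

```shell
# Hypothetical sketch of registering the converted database with the
# clusterware and setting the cluster parameters (11.2 srvctl syntax).
srvctl add database -d PROD -o $ORACLE_HOME
srvctl add instance -d PROD -i PROD1 -n db01
srvctl add instance -d PROD -i PROD2 -n db02
# Cluster parameters the instances need, set once from either node:
sqlplus / as sysdba <<'EOF'
alter system set cluster_database=true scope=spfile;
alter system set cluster_database_instances=2 scope=spfile;
EOF
```

After this, each instance gets its own undo tablespace and redo thread, and the EBS-specific listeners are created per the note.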

Thursday, September 5, 2013

Building a 5-node Oracle EBS using RAC and shared APPL_TOP 6.

Add the 2 remaining nodes to the system


What do we have now?

  • Grid infrastructure
  • RAC database software
  • A 2-node Oracle EBS system
It's time to add the 2 remaining nodes using the shared APPL_TOP method.
I will follow MOS note 384248.1 during the implementation.
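The heart of the shared APPL_TOP add-node procedure can be sketched as below. The context file names and SID are my assumptions; the tool invocations follow the note, but always check the note's current version for the exact syntax:

```shell
# Hypothetical sketch: add a node using the shared APPL_TOP method
# (per MOS note 384248.1). Run on the new node after mounting the
# shared application file system.
cd $COMMON_TOP/clone/bin
perl adclonectx.pl addnode \
  contextfile=$APPL_TOP/admin/PROD_app01.xml   # an existing node's context
# Generate this node's configuration from the newly created context file:
$AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/PROD_app03.xml
```

Because the APPL_TOP is shared, no software installation happens here at all; only a new context file and node-specific configuration are created.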

Tuesday, September 3, 2013

Building a 5-node Oracle EBS using RAC and shared APPL_TOP 4.

Install Oracle Database Software

In the previous step we successfully installed the Oracle Clusterware infrastructure. Now it's time to install the RAC-enabled database software.

The installation consists of these steps:
  • Install the base 11g R2 version 11.2.0.3 software
  • Implement the steps of the EBS - 11gR2 interoperability note
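A silent install is one way to run the first step; an interactive runInstaller session works just as well. The stage path, response file name, and Oracle home below are assumptions for illustration:

```shell
# Hypothetical sketch: silent install of the 11.2.0.3 RAC database software
# from a prepared response file (paths are assumptions).
cd /stage/database
./runInstaller -silent -waitforcompletion \
  -responseFile /stage/db_install.rsp
# When the installer finishes, run the root script as root on every node:
#   /u01/app/oracle/product/11.2.0.3/db_1/root.sh   (on db01 and db02)
```

The response file would name both cluster nodes so the software is pushed to each of them in one run.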

Sunday, September 1, 2013

Building a 5-node Oracle EBS using RAC and shared APPL_TOP 3.

Building the RAC infrastructure

This step is largely independent of the others. If you prefer, you could do it later; for example, you could first build a 1+3-node Oracle EBS system (1 database node and 3 apps tier nodes) with a configured load balancer and SSL.

But now I am starting with this.
The main steps will be:
  1. install the operating system on the 2 database nodes,
  2. create and configure all necessary storage partitions and mount points,
  3. check the installed operating system and configure it for the Oracle RAC and EBS database,
  4. create Unix users and groups,
  5. create the stage area,
  6. create ASM disk groups,
  7. install Oracle Clusterware 11g Release 2,
  8. install the latest PSU patches
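Steps 4 and 6 can be sketched roughly as follows. The group IDs, user name, disk labels, and device names are assumptions for illustration only:

```shell
# Hypothetical sketch of the OS users/groups and ASM disk preparation.
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 asmadmin
useradd -u 501 -g oinstall -G dba,asmadmin oracle
# Mark the shared LUNs as ASM disks with ASMLib (on Linux) -- on other
# platforms you would set raw device ownership and permissions instead:
oracleasm createdisk DATA1 /dev/sdb1
oracleasm createdisk FRA1  /dev/sdc1
```

The disk groups themselves (DATA, FRA) are then created during or after the Clusterware installation.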

Friday, August 30, 2013

Building a 5-node Oracle EBS using RAC and shared APPL_TOP 2.

Requirements

The first part of this series was a good example of short, bite-sized answers; this one will not be.
I have collected all the requirements needed to implement and properly configure a 5-node EBS system (with RAC and shared APPL_TOP, of course).

What are these requirements?

  1. Documentation
  2. Required software
  3. Recommended hardware configuration

Wednesday, August 28, 2013

Building a 5-node Oracle EBS using RAC and shared APPL_TOP 1.

Let's begin with the end. First of all, I will show you my goal: what I want to reach by the end of this post series.

I like diagrams. Usually a good diagram is much better than any amount of textual description.
So first, here is what I want to have at the end:



Yes, you see it right: this is a simple network diagram.
At the end of the post series we will have these servers in the above configuration:

  • a 2-node RAC database (db01, db02)
  • a 3-node apps tier
    • 2 nodes for web and forms processing (app01, app02)
    • 1 node for concurrent program processing (conc01)
  • and a load balancer
And yes, I haven't put these hardware components on the diagram:
  • storage
  • backup device
  • monitoring tool
I will discuss them later, in another post.
In the next post I will try to collect all the documentation I had to read and use during the implementation.

Architectural viewpoints of this configuration

There could be many, many questions about this configuration. I will try to present some architectural viewpoints of this system:
  • why I chose this configuration,
  • what the benefits of this system are,
  • what the risks and disadvantages of this system are,
  • what I should look for after implementing the system,
  • whether there is any aftermath for a project if we choose this configuration.
Let's begin the answers.

"why I have choosed this"

In short: because the customer asked for this one.

I am kidding, of course :) The customer usually never tells me what kind of configuration they want. Instead, they express functional and non-functional requirements, and both have an effect on the architecture plan.

Functional means which modules and functions they want to use. I have to take these into account, because different functions can differ widely from a resource viewpoint (for example, batch data handling needs different resources than a simple data entry transaction).

Non-functional usually means hardware and base software requirements (for example, supported operating system, hardware vendor, storage type, etc.), but it also covers timing, operations, and maintenance requirements (for example, operating hours, expected response time, maximum outage, number of users, and so on).

In this example I can list these non-functional requirements:
  • 24/7 operation
  • minimized outages
  • hundreds of forms users
  • flexible, scalable architecture
  • minimized maintenance cost
  • use of the benefits of virtualization
  • etc.

"what is the benefits of this system,"

The benefits? First of all, this architecture offers solutions for all the non-functional requirements above. Why?
Some answers:
  1. RAC: think about it. It can serve 24/7 operation with minimized outages at the database level. 2 nodes are not that many, but more than one :) So with any problem on one node, the database is still available and can still serve data requests. It is also a good answer to the flexibility and scalability requirements: if you need more resources, you can easily add more nodes, and if the additional resource demand is only temporary, you can just as easily drop the new nodes after the peak.
  2. The 2+1 application tier can serve many forms and web users, with separated batch processing. The 2 web tier nodes can serve users in parallel through the load balancer. The users only need to know one URL; they don't have to know anything about the architecture, so they don't have to manually switch between application tiers if one node goes down.
  3. The benefit of separated batch processing shows when users start many, many reports and batch data transactions at the same time. In this case the web tier will not stop and standard user interaction will not stop; the users "only" have to wait for their started programs to finish.
  4. Using a shared APPL_TOP has many benefits, for example:
    1. If you need a new application tier node, you don't have to install a new node, only configure it. This is true regardless of what kind of services the new node will handle.
    2. You don't have to patch all nodes one by one. You only have to patch once, and all nodes will work with the new feature or the repaired function.
    3. It needs much less disk space. (I hope the new 12.2 EBS will still support shared APPL_TOP; if not, the 2 APPL_TOP file systems will require much more disk space.)
  5. Of course, maintenance will not be as simple as on a 2-node EBS system, but RAC and shared APPL_TOP can minimize the extra maintenance resource requirements.

"what are the risks, the disadvantages of this system,"

Still being written...

"what should I look for after implementing the system,"

Still being written...

"is there any aftermath for a project if we choose this configuration."

Yes, of course. There are aftermaths in development, in the project's technical maintenance, in after-go-live support, and so on.

Usually the development environment is much simpler than the configuration above. Always talk with the project team about the planned architecture. They have to know about it, and of course they have to develop customizations that work on the planned system. They generally like to "forget" that the database and the application tier will not run on the same server.

The technical maintenance team's life will not be easier if you choose this kind of configuration. Implementing and installing the test and live systems will take more time. The team will typically need more attempts to install the whole system than with a simpler configuration. The backup, restore, and disaster recovery tests will take much more time, too.

After go-live, the support tasks take longer than with a simpler configuration. For example, for any error on the application tier you have to search for error messages in many more log files than in a less complex configuration.