Updates from Mustafa Toggle Comment Threads | Keyboard Shortcuts

  • Mustafa 1:25 am on December 6, 2016 Permalink | Reply
    Tags: noarchivelog   

    12c: Backup NOARCHIVELOG database using RMAN 

    In NOARCHIVELOG mode the database can only be backed up while it is closed or mounted, i.e. in a consistent state. We can do that simply by putting the database in MOUNT mode and initiating the backup as shown below.
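    A minimal sketch of such a consistent (cold) backup; connection details and backup destinations are environment-specific:

```shell
# Consistent (cold) backup of a NOARCHIVELOG database -- run as the oracle OS user
rman target /

RMAN> shutdown immediate;        # close the database cleanly
RMAN> startup mount;             # mount, but do not open
RMAN> backup database;           # consistent whole-database backup
RMAN> alter database open;       # resume normal operation
```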

  • Mustafa 3:41 am on September 25, 2016 Permalink | Reply

    Howto: Apply July PSU 12c Grid and DB to an Oracle 12c 2 Node RAC cluster 

    Following my previous post, here we will be applying the July 2016 PSU (Patch 23054246) on top of an existing April 2016 PSU (Patch 22646084).

    The steps are the same; the only difference is that we will be applying a PSU on top of another PSU. I will assume you have gone through all the prechecks listed in the previous post.


    After it's complete, validate that the patches went through; otherwise run datapatch manually (check how to here).

    Check datapatch updated the registry
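    One way to confirm the registry was updated is to query DBA_REGISTRY_SQLPATCH, the 12c view that datapatch maintains:

```shell
sqlplus -s / as sysdba <<'EOF'
set lines 200 pages 100
col description format a60
-- each applied (or rolled-back) patch should show STATUS = SUCCESS
select patch_id, action, status, description
from   dba_registry_sqlpatch
order  by action_time;
EOF
```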

    That's all! Follow the same procedure on the other nodes in the cluster.

    Watch the video below!

  • Mustafa 5:09 am on September 12, 2016 Permalink | Reply

    Howto: Apply 12c Grid and DB PSU to Oracle 12c 2 node RAC 

    Patching can be cumbersome if you do not follow the procedure. One of the most important documents is the README that comes with the patch or patch set.

    For this write-up, I will be patching the grid home and the database home to the April 2016 PSU.

    Before we download the required patch, we need to verify if we have java user objects in use in the database. This helps us determine if we need the OJVM patch or not. If Multimedia, Spatial, and/or OLAP features/components are installed and used in the database then the OJVM patch will be required.

    I did not require an OJVM patch; you can further check MOS 397701.1 for the object count in 12c.

    Moving on to the actual patching, this is the link to the PSU I will be using for this activity (Patch 22646084: GRID INFRASTRUCTURE PATCH SET UPDATE (APR2016)). This PSU includes the DB PSU.

    Before you proceed any further, look at the README. It gives all the information required. Notice that it requires OPatch version or later; using a lower version might give unfavorable results or cause the patching to fail.

    Download the latest OPatch and select the release from the release-selection drop-down.
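    The latest OPatch ships on MOS as patch p6880880; the zip name below is illustrative and varies by release and platform. Unzip it into each home to be patched, then confirm the version:

```shell
# replace the zip name with the one you actually downloaded for your release/platform
unzip -o p6880880_121010_Linux-x86-64.zip -d $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch version
```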

    During patch installation, OPatch saves copies of all the files that will be replaced by the patch, before the new/patched versions are loaded, in $ORACLE_HOME/.patch_storage. Known as rollback files, these are essential for rolling back a patch. Optionally, take a backup of $GRID_HOME/.patch_storage and $DB_HOME/.patch_storage.
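    A simple way to take that optional backup (the destination paths are assumptions; adjust to your homes):

```shell
# back up the rollback-file areas of both homes before patching
tar -czf /tmp/grid_patch_storage_$(date +%Y%m%d).tar.gz -C $GRID_HOME .patch_storage
tar -czf /tmp/db_patch_storage_$(date +%Y%m%d).tar.gz   -C $ORACLE_HOME .patch_storage
```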

    Run opatch lsinventory on all the homes that will be patched.
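    For example, assuming $GRID_HOME and $ORACLE_HOME are set, saving the output for a pre-patch record:

```shell
# record the pre-patch inventory of each home that will be patched
$GRID_HOME/OPatch/opatch lsinventory -oh $GRID_HOME     > /tmp/gi_lsinv_before.log
$ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME > /tmp/db_lsinv_before.log
```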

    Unzip the PSU Patch 22646084 to a shared location and ensure that the oinstall group has read permissions. If not, run the following command
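    Something along these lines; the staging path and zip name are placeholders:

```shell
unzip p22646084_112040_Linux-x86-64.zip -d /u01/stage
# give the oinstall group read access to the unzipped patch
chgrp -R oinstall /u01/stage/22646084
chmod -R g+r /u01/stage/22646084
```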

    Create the OCM (Oracle Configuration Manager) response file. It's not strictly required, as OCM is set to be deprecated in future releases.
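    If you do create one, the emocmrsp utility ships under the OPatch directory; run it as the home owner and press Enter at the email prompt for a blank response file:

```shell
# creates /tmp/ocm.rsp without registering with OCM
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp
```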

    12c OCM Deprecation


    If any one-off patches are already installed, you can check for conflicts. The check covers both the GI and DB homes.
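    The conflict check can be driven with opatchauto in analyze mode, run as root; the staged patch path is an assumption:

```shell
# analyze mode reports conflicts for both the GI and DB homes without changing anything
export PATH=$PATH:$GRID_HOME/OPatch
opatchauto apply /u01/stage/22646084 -analyze
```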

    The last lines of the log should show something along these lines, unless some patch needs to be rolled back.

    Now, run the opatchauto apply command to apply the PSU to the grid and database homes.
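    Run as root on the node being patched; the staged patch path is an assumption:

```shell
# opatchauto patches the GI and DB homes in one pass
export PATH=$PATH:$GRID_HOME/OPatch
opatchauto apply /u01/stage/22646084
```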

    All the logs from opatch or opatchauto can be found under the home from which opatch is executed, e.g. $GI_HOME/cfgtoollogs.

    Verify using the opatch lsinventory command from the GI home

    And for our database home


    Like catbundle in 11g, 12c has datapatch. Refer to MOS 1585822.1.

    • Datapatch determines the requisite apply/rollback actions by matching an internal repository with the patch inventory.
    • Datapatch should be invoked when the database is restarted after a patching session.
    • Enterprise Manager and OPatchAuto call datapatch automatically during every patching session.
    • If OPatch is used to install RDBMS patches, then datapatch has to be explicitly called to complete any patching actions after the database restart.
    • Datapatch automatically determines the set of patches that need to be installed and the set of patches that need to be rolled back.
    • Datapatch ensures that a patch has been installed/rolled back on all RAC instances before initiating any post-patch SQL actions on the database.
    • In an Oracle Multitenant environment, datapatch patches the root and any pluggable databases that are open.
    • To patch all pluggable databases, ensure that all pluggable databases are open before invoking datapatch.
    • If an unpatched pluggable database is opened in Oracle Multitenant, calling datapatch will complete the pending patch actions.

    To update DBA_REGISTRY_SQLPATCH with the PSU information run the following.
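    The Unique Patch ID can be read from the inventory; one way to pull it out (the exact output format may vary by OPatch version):

```shell
# each installed patch is listed with its Unique Patch ID
$ORACLE_HOME/OPatch/opatch lsinventory | grep "Unique Patch ID"
```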


    Then run datapatch with the following format: datapatch -apply 19303936/<UPI> -force -verbose, where UPI is the Unique Patch ID from above. Logs should be in $ORACLE_BASE/cfgtoollogs/sqlpatch.
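    Putting that together; the <UPI> placeholder must be replaced with the value read from the inventory:

```shell
cd $ORACLE_HOME/OPatch
./datapatch -apply 19303936/<UPI> -force -verbose
# logs land in $ORACLE_BASE/cfgtoollogs/sqlpatch
```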

    Then login to the database and verify
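    For example, checking the patch registry from SQL*Plus:

```shell
sqlplus -s / as sysdba <<'EOF'
col action_time format a30
col description format a60
select action_time, patch_id, status, description
from   dba_registry_sqlpatch;
EOF
```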

    That's all! Follow the same procedure on all the nodes in the cluster.

    Watch the whole video of patching 12c GI and DB home in RAC environment.

  • Mustafa 8:00 pm on May 5, 2016 Permalink | Reply
    Tags: raspberry, robot   

    Pythonic Obstacle Avoiding Robot 

    A pretty simple home-built robot for under $70.

    Here is how I start the robot at power-up: simply put the startup script in init.d. This helps in starting and stopping the robot cleanly, clearing the sensors each time.

    Start Robot at Boot
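    A minimal init.d wrapper along these lines; the script path /home/pi/robot/robot.py is a placeholder for wherever the control code lives:

```shell
#!/bin/sh
# /etc/init.d/robot -- minimal sketch; register with: sudo update-rc.d robot defaults
case "$1" in
  start)
    echo "Starting robot"
    # launching the control script also re-initialises the sensors
    python /home/pi/robot/robot.py &
    ;;
  stop)
    echo "Stopping robot"
    pkill -f robot.py
    ;;
  *)
    echo "Usage: /etc/init.d/robot {start|stop}"
    exit 1
    ;;
esac
exit 0
```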

    There are two parts to the startup of the robot.

    • Reset all sensors to clean out any garbage values.
    • The control logic that moves the robot around.


    Lastly, the main control logic of the robot. I will explain the how-to in a later post.

    Happy PyRobotics!

  • Mustafa 3:04 am on January 19, 2016 Permalink | Reply
    Tags: Cluster, grid, infrastructure, OCR, OLR   

    Oracle Cluster Registry (OCR) and Oracle Local Registry (OLR) 

    Oracle Cluster Registry (OCR) and Oracle Local Registry (OLR) are two very important components of the Grid Infrastructure stack that help manage cluster resources. In a nutshell, OCR is used by CRSd and OLR by OHASd.

    Oracle Cluster Registry (OCR)

    The Oracle Clusterware (the Oracle Grid Infrastructure (GI) stack in 11gR2) uses OCR to manage resources and node membership information. It contains the following information, shared across the nodes in the cluster:

    • ASM diskgroups, volumes, filesystems, and instances
    • RAC databases and instances information
    • SCAN listeners and local listeners
    • SCAN VIPs and Local VIPs
    • Nodes and node applications
    • User defined resources
    • Its own backups (local backups in $ORA_CRS_HOME/cdata/<cluster_name>; the OCR location itself is recorded in /etc/oracle/ocr.loc)

    OCR is to the cluster what the registry is to Windows. It contains key-value pairs for the various resources in the cluster: basic information about each resource, its location, permissions, current value, type, state, etc. It also acts as a bootstrap for the Cluster Synchronization Services daemon (CSSd), providing port, node, and disk information.

    The OCR cache is updated by the master CRSd process to make it visible to all the nodes in the cluster. Each node then reads its local copy of the OCR through its local CRSd process, and communicates through that same process when it needs further updates from the physical OCR file. OCR also maintains the structure and dependencies that exist in the startup of the various resources in the cluster. For example, the database cannot be started until the ASM resource has come up, and a service resource doesn't start until its database has been started.

    For example, on my test box ocrdump shows 676 keys in the OCR (this could differ on your system). ocrdump should be executed as root to display all the available keys. The majority of the keys in the OCRDUMPFILE deal with CRSd.
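    For example, run as root; counting the bracketed key names in the dump is one rough way to get the key total:

```shell
# dump the OCR contents to a file, then count the keys
ocrdump /tmp/OCRDUMPFILE
grep -c '^\[' /tmp/OCRDUMPFILE
```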

    You can find the location of the OCR file using one of the two methods below.
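    The two usual methods:

```shell
# method 1: the OCR location file
cat /etc/oracle/ocr.loc

# method 2: ocrcheck (run as root) reports the OCR device/file name and its integrity
ocrcheck
```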


    The OCR file is automatically backed up locally in $ORA_CRS_HOME/cdata/<cluster_name> every four hours (note the timestamps of the backup*.ocr files below). These backups are retained for a week and written in a round-robin fashion.
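    You can list those automatic backups with ocrconfig, run as root:

```shell
# shows the automatic (and any manual) OCR backups with their timestamps
ocrconfig -showbackup
```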

    11gR2 can have up to five mirrored copies of the OCR. If one fails, it doesn't cause an outage, and the failed copy can be replaced.

    Oracle Local Registry (OLR)

    Oracle Local Registry is the OCR's local counterpart. It stores information about the local node only, mainly related to OHASd (as we will see below). It was introduced in 11gR2 and is not shared with any other node in the cluster. It has the same layout and structure as the OCR. My test system's OLR file has about 651 keys. If you scroll through the key names below, you will observe that most of the information pertains to the local resources required by OHASd (refer to the flow chart in the earlier post to see which values this file would hold). The OLR and the GPnP profile are required to start the HA service stack. Generally, the following information is found in the OLR:

    • Data about Grid Plug and Play Wallet
    • Clusterware configuration
    • Version information