OCP 12C upgrade exam 1Z0-060 passed

12c OCP upgrade done and dusted!

About 80 questions in total: roughly 50 on the new version and about 30 general ones that apply to all versions.

There are just a few confusing questions; most of the remaining ones are quite clear.

Another number added to the ever-growing list of versions: OCP 12c, 11g, 10g, 9i.

Oracle 12c upgrade process general impression

I have been testing 12c upgrades for about half a year now, and I have done a lot of 10g to 11g upgrades in the past. How much better is the upgrade process compared with older versions?

My general impression is that the whole process has become much better.

  • Parallel Upgrade. The biggest achievement, from my point of view, is that it is now possible to run the upgrade in parallel. In 11g the main upgrade process could run for more than 30 minutes; now, using the parallel option, this time can be reduced (see the command-line sketch after this list).
Analyzing file catupgrd.sql
Log files in /u01/app/oracle/product/12.1.0.2/rdbms/admin
catcon: ALL catcon-related output will be written to catupgrd_catcon_4747.lst
catcon: See catupgrd*.log files for output generated by scripts
catcon: See catupgrd_*.lst files for spool files, if any
Number of Cpus        = 4
SQL Process Count     = 0
New SQL Process Count = 4

  • Pre-Upgrade script. The pre-upgrade script now not only checks for more issues, it also writes the fixup actions into scripts that can be executed before and after the upgrade.
====>> PRE-UPGRADE RESULTS for UPGR <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:
/u01/app/oracle/cfgtoollogs/UPGR/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:
/u01/app/oracle/cfgtoollogs/UPGR/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:
/u01/app/oracle/cfgtoollogs/UPGR/preupgrade/postupgrade_fixups.sql

  • Instrumentation. During the upgrade the progress status is shown, so now it's possible to estimate where you are in the process.
------------------------------------------------------
Phases [0-73]         Start Time:[2016_10_30 16:36:05]
------------------------------------------------------
Serial   Phase #: 0      Files: 1     Time: 252s
Serial   Phase #: 1      Files: 5     Time: 139s
Restart  Phase #: 2      Files: 1     Time: 0s
Parallel Phase #: 3      Files: 18    Time: 29s
Restart  Phase #: 4      Files: 1     Time: 1s
Serial   Phase #: 5      Files: 5     Time: 46s

  • Time zone upgrade. The time zone upgrade used to be a set of 10 or so steps; now it's much simpler, with just two scripts to run:
DST_prepare.sql
DST_adjust.sql
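
For completeness, here is a minimal command-line sketch of the flow behind the first two points above (pre-upgrade check, then the parallel catctl.pl upgrade). It assumes a 12.1 target home and a manual upgrade; the paths and the parallel degree are illustrative.

~~~~
# run the 12.1 pre-upgrade script against the source database
# (preupgrd.sql ships in the new home's rdbms/admin)
sqlplus / as sysdba @/u01/app/oracle/product/12.1.0.2/rdbms/admin/preupgrd.sql

# review preupgrade.log and run the generated preupgrade_fixups.sql,
# then start the database in UPGRADE mode from the new home and
# run the catalog upgrade with 4 parallel SQL processes
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -n 4 catupgrd.sql
~~~~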

 

Installing oracle 11g: ld-linux.so.2: bad ELF interpreter

Installing the Oracle database software can produce the following error:

./runInstaller: /u01/insta/database/install/.oui: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

This indicates that the 32-bit versions of the libraries are missing. There are at least two possible reasons: either some 32-bit libraries really are missing and we could start digging into the cause, or we should ask ourselves what we are trying to achieve here. Are we trying to install 32-bit software on 64-bit Linux by mistake?

The 64-bit software distribution will have 64 in the name of the zip file, for example:

p10404530_112030_Linux-x86-64_3of7

If there is NO 64 in the name, then most likely you are installing 32-bit software on 64-bit Linux.
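
A quick way to confirm what you actually have on disk before digging into missing 32-bit libraries (the installer path below is just the one from the error message):

~~~~
# what architecture is the staged installer built for?
file /u01/insta/database/install/.oui

# and what is the OS? x86_64 means a 64-bit distribution is needed
uname -m
~~~~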

12c HR examples schema download

Even if you want just the HR schema, you need to download the examples distribution of about 800MB. Then you have to unzip it on the server and install it with the GUI into the Oracle home's demo directory; this consumes space and contains a bunch of other schemas too. Only after that is it possible to deploy the HR schema.

This process is cumbersome and time consuming if only the HR schema is required. For convenience, I zipped just the HR scripts (17KB), which can be obtained from here. To install, run hr_main.sql as usual.
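
A minimal sketch of the installation, assuming the scripts were unzipped into a working directory; hr_main.sql will prompt for the HR password, default and temporary tablespaces and a log directory, and the connect string here is only an example.

~~~~
cd /path/to/hr_scripts           # wherever the 17KB zip was extracted
sqlplus system@pdb1 @hr_main.sql
~~~~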

Note: by downloading you accept the Oracle examples terms and conditions; scripts obtained this way are not supported, so proceed at your own risk.

RMAN incremental backups and change tracking file

A common misconception I observe concerns how the block change tracking file (BCTF) works. One of the Oracle-published books on backup and recovery says:

“The Block Change Tracking File . By default, when doing an incremental backup, any datafile that has changed in any way will be backed up. This can make incremental backups take longer and will make them larger. RMAN offers the ability to just back up changed database blocks. This can make your incremental database backups much smaller and shorter. ” (Freeman, Hart – Oracle RMAN 11g Backup and Recovery)

This states that the whole datafile will be backed up, which is not quite correct. The incremental backup works at block level regardless of the BCTF.

Let's do a test:

The environment is one big table called T1 with 6.5 million records (6,499,987) taking about 7GB of space.

1. Let's update a fraction of the records and do an incremental backup without change tracking.

~~~~

SELECT STATUS, FILENAME
FROM V$BLOCK_CHANGE_TRACKING;
STATUS     FILENAME
---------- ------------------------------------------
DISABLED

update t1 set t='TEST1' where c>100000 and c<101000;

12987 rows updated.
commit;

Commit complete.

~~~~

The incremental backup took 7 minutes, and its size is 30MB (the whole datafile is 7GB).
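
For reference, the incremental itself is a plain RMAN command along these lines (a level 1 incremental is assumed here, since a level 0 would copy all used blocks regardless):

~~~~
rman target / <<'EOF'
backup incremental level 1 database;
EOF
~~~~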

2. Let's update a fraction of the records again and do an incremental backup with change tracking.

~~~~
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE '/u01/oracle/oradata/fast_recovery_area/ED12/chtf01.dbf' REUSE;
Database altered.

SQL> update t1 set t='TEST1' where c>100000 and c<101000;

SQL> commit;
Commit complete.
~~~~

We have to reconnect RMAN in order to pick up the change tracking setting.
The backup took 3 minutes and the backup size was still 30MB.
CONCLUSION:
The change tracking file is a performance-enhancing feature: instead of scanning the whole datafile for changed blocks, RMAN reads only the bitmap of recorded changes from the change tracking file. As we have seen in our example, the incremental backup size hasn't changed, but the backup time improved by a factor of two. In both cases (with or without the BCTF) the backup size was 30MB, compared to the 7GB table size.
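
To verify whether a particular incremental actually consulted the tracking file, V$BACKUP_DATAFILE can be queried; a minimal sketch:

~~~~
sqlplus -s / as sysdba <<'EOF'
-- USED_CHANGE_TRACKING shows whether the BCTF was used;
-- comparing BLOCKS_READ with DATAFILE_BLOCKS shows how much scanning was avoided
SELECT file#, incremental_level, used_change_tracking,
       blocks_read, datafile_blocks
FROM   v$backup_datafile
ORDER  BY completion_time DESC;
EOF
~~~~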


Setting the date format in RMAN 12c

A common problem is that RMAN shows times in the default format, which is not helpful when we need to know how much time a backup takes.

RMAN> backup incremental level 0 database;
Starting backup at 24-SEP-16

The typical way of fixing it is to set NLS_DATE_FORMAT='YY/MM/DD HH24:MI'.

However, it doesn't work from within RMAN, even in version 12c:

RMAN> SQL 'ALTER SESSION SET NLS_DATE_FORMAT="DD.MM.YYYY HH24:MI:SS"';
sql statement: ALTER SESSION SET NLS_DATE_FORMAT="DD.MM.YYYY HH24:MI:SS"
RMAN> backup incremental level 0 database;
Starting backup at 24-SEP-16

The workaround is:

1. Set the NLS_DATE_FORMAT variable in the OS shell:

export NLS_DATE_FORMAT='YY/MM/DD HH24:MI'
rman target / 

RMAN>
Starting backup at 16/09/24 16:39

2. Or, in scripts, use the host command to produce the timestamp:

RMAN> run {
2> host 'date';
3> backup incremental level 0 database;
4> host 'date';
5> }
Sat Sep 24 16:09:57 BST 2016
host command complete
Starting backup at 24-SEP-16
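
For scripted backups the two workarounds can be combined into a small wrapper; the format string and backup command below are only examples.

~~~~
#!/bin/bash
# make RMAN print full timestamps and bracket the backup with OS timestamps
export NLS_DATE_FORMAT='YY/MM/DD HH24:MI'
rman target / <<'EOF'
run {
  host 'date';
  backup incremental level 0 database;
  host 'date';
}
EOF
~~~~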

Adding a database to Grid Infrastructure Standalone 12c

Let's imagine one has to register an existing database with RAC One Node / GI Standalone. In 12c the syntax has changed slightly; the options have become more meaningful.

[oracle@single1 ~]$ srvctl add -h
The SRVCTL add command adds the configuration and the Oracle Restart application to the OCR for the cluster database, named instances, named services, or for the named nodes.
Usage: srvctl add database -db <db_unique_name> -oraclehome <oracle_home_path> [-domain <domain_name>] [-spfile <spfile>] [-pwfile <password_file_path>] [-role {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY | FAR_SYNC}] [-startoption <start_options>] [-stopoption <stop_options>] [-dbname <db_name>] [-instance <inst_name>] [-policy {AUTOMATIC | MANUAL | NORESTART}] [-diskgroup "<diskgroup_list>"]

For the database with name ED12, the command would be:

srvctl add database -db ED12 -oraclehome /u01/oracle/product/12.1.0/dbhome_1 -spfile '/u01/oracle/product/12.1.0/dbhome_1/dbs/spfileED12.ora' -policy AUTOMATIC -startoption OPEN -stopoption IMMEDIATE
[oracle@single1 ~]$ srvctl status database -d ED12
Database is not running.
[oracle@single1 ~]$ srvctl start database -d ED12
[oracle@single1 ~]$ ps -ef | grep pmon
oracle    2737     1  0 12:10 ?        00:00:00 asm_pmon_+ASM
oracle    3140     1  0 12:27 ?        00:00:00 ora_pmon_ED12
oracle    3351  2906  0 12:27 pts/0    00:00:00 grep pmon
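
Before starting the database it is also worth checking what was actually registered; a quick way, using the same 12c-style options:

~~~~
# show the stored configuration for the ED12 resource
srvctl config database -db ED12
~~~~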

Does the OLD syntax still work? Yes, it does.

[oracle@single1 ~]$ srvctl stop database -d ED12
[oracle@single1 ~]$ srvctl remove database -d Ed12 -f
[oracle@single1 ~]$ srvctl status database -d Ed12
PRCD-1120 : The resource for database Ed12 could not be found.
PRCR-1001 : Resource ora.ed12.db does not exist
[oracle@single1 ~]$ srvctl add database -d ED12 -o /u01/oracle/product/12.1.0/dbhome_1 -p '/u01/oracle/product/12.1.0/dbhome_1/dbs/spfileED12.ora' -y AUTOMATIC -s OPEN -t IMMEDIATE

[oracle@single1 ~]$ srvctl start database -d ED12
[oracle@single1 ~]$ ps -ef | grep pmon
oracle    2711     1  0 12:31 ?        00:00:00 asm_pmon_+ASM
oracle    3360     1  0 12:36 ?        00:00:00 ora_pmon_ED12
oracle    3773  2558  0 12:44 pts/0    00:00:00 grep pmon

Parallel recompilation using the utlrp.sql script

When we do major upgrades, recompilation of all objects is required. In some cases the number of objects can be enormous; in such a scenario we use the utlrp.sql script, which allows recompilation in parallel. The script calls the UTL_RECOMP package with parameter 0 if no number of threads is specified. So how many parallel threads will the script actually use?

Well, based on the Oracle docs it is obtained from job_queue_processes.

However, by default job_queue_processes is 1000! So this cannot be true.

To keep the story short, after some investigation it looks like the following formula is used: CPU_COUNT * PARALLEL_THREADS_PER_CPU. (If these values cannot be obtained, the number of threads will be 1.)

CONCLUSION:
Recompilation can be I/O bound, so choosing the recompilation parallelism carefully is important. It's a good idea to check the spfile parameters cpu_count and parallel_threads_per_cpu before running the utlrp script, to make sure we do not create artificial contention.
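
A minimal sketch of that check followed by a recompilation with an explicitly chosen degree; the degree of 4 is only an example, and utlprp.sql is the worker script that utlrp.sql itself calls with 0.

~~~~
# check the parameters that drive the default parallelism
sqlplus -s / as sysdba <<'EOF'
show parameter cpu_count
show parameter parallel_threads_per_cpu
EOF

# recompile with a deliberately chosen number of threads
sqlplus / as sysdba @?/rdbms/admin/utlprp.sql 4
~~~~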

memlock configuration on linux for 12c part I

Going through the Oracle 12c installation docs for Linux, I came across the memlock requirement.

So what is 'memlock' and where does this requirement come from?

Let's see how it works. I have 1GB of physical memory configured (for a reason) and SGA_TARGET=900M on 12.1.0.2.

1. Let's configure HugePages, set memlock much lower than the HugePages allocation, and see what happens.

[oracle@single1 ~]$ cat /etc/sysctl.conf |  grep vm
vm.nr_hugepages = 454

[oracle@single1 ~]$ ulimit -l
217728

[root@single1 ~]# grep Huge /proc/meminfo
HugePages_Total: 447
HugePages_Free: 447
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

Upon instance startup, Oracle wrote the following in the alert log:

Starting ORACLE instance (normal) (OS id: 2308)
Thu Sep 08 00:52:56 2016
CLI notifier numLatches:3 maxDescs:519
Thu Sep 08 00:52:56 2016
**********************************************************************
Thu Sep 08 00:52:56 2016
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)

Thu Sep 08 00:52:56 2016
Per process system memlock (soft) limit = 213M
Thu Sep 08 00:52:56 2016
Expected per process system memlock (soft) limit to lock
SHARED GLOBAL AREA (SGA) into memory: 902M
Thu Sep 08 00:52:56 2016
Available system pagesizes:
4K, 2048K
Thu Sep 08 00:52:56 2016
Supported system pagesize(s):
Thu Sep 08 00:52:56 2016
PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s)
Thu Sep 08 00:52:56 2016
4K Configured 5 228357 NONE
Thu Sep 08 00:52:56 2016
2048K 447 451 5 NONE
Thu Sep 08 00:52:56 2016
RECOMMENDATION:
Thu Sep 08 00:52:56 2016
1. For optimal performance, configure system with expected number
of pages for every supported system pagesize prior to the next
instance restart operation.
Thu Sep 08 00:52:56 2016
2. Increase per process memlock (soft) limit to at least 902MB
to lock 100% of SHARED GLOBAL AREA (SGA) pages into physical memory
Thu Sep 08 00:52:56 2016
**********************************************************************

[root@single1 ~]# grep Huge /proc/meminfo
HugePages_Total: 447
HugePages_Free: 442
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

What we see now is that only 5 HugePages are used and the system is experiencing heavy thrashing. The instance never opened, and I aborted it after an hour of waiting.

[oracle@single1 ED12]$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
9 5 327268 8276 76 2548 25329 5873 111710 5943 7419 3964 1 81 9 10 0
8 9 327192 8304 76 2780 31724 4704 147192 4712 7993 4184 1 86 0 13 0
12 6 327416 8208 76 2604 35900 5240 150804 5240 9309 4434 0 98 0 2 0
13 6 327312 8092 76 2876 43788 5624 120172 5640 9220 3972 1 94 0 5 0
9 5 327156 8268 76 3188 41672 5416 144988 5416 9602 4268 0 96 0 4 0
11 7 326944 8332 76 2856 42608 6332 132128 6356 9086 4061 1 94 0 5 0
14 8 327288 8072 76 2880 40352 5416 114864 5436 9221 4477 0 75 0 25 0
10 4 327464 8128 76 2608 40860 5144 125752 5240 8501 4199 1 78 0 21 0
10 7 327468 8092 80 2652 38696 3784 137976 3828 8758 3978 0 98 0 2 0
11 9 327364 8224 80 2816 40768 3692 138048 3704 8603 3874 0 91 0 9 0
12 5 327492 8472 88 2620 45792 4200 146660 4244 9484 4267 1 94 0 5 0
8 4 327396 8264 88 2896 42648 5616 124336 5616 9898 4251 1 96 0 3 0
11 5 327352 8116 84 2752 35832 5616 124496 5640 8382 4228 0 92 0 8 0
10 7 327272 8148 88 2700 35352 6628 117264 6628 8418 4152 1 96 0 3 0

This simple experiment shows that:

1. Setting memlock incorrectly prevents the HugePages from being utilised by the database. This is just a waste of memory, since no other processes are able to use them. In case of memory shortage this can lead to thrashing of the system (as in our case, where there is only 1GB of memory and 900MB of it is HugePages).

2. Also, starting from 11.2.0.3 Oracle should use all available HugePages even if the number of HugePages configured is less than the whole SGA. It looks like the requirement is that all configured HugePages can be locked into memory.

3. Therefore, the proper memlock configuration is very important and should be correlated with the number of HugePages, not with the total amount of physical memory.
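
A minimal sketch of such a configuration for this test box, sized to the HugePages allocation rather than to physical memory (454 pages * 2048 KB = 929792 KB); the values are illustrative.

~~~~
# /etc/security/limits.conf (or a file under /etc/security/limits.d)
cat >> /etc/security/limits.conf <<'EOF'
oracle soft memlock 929792
oracle hard memlock 929792
EOF

# re-login as oracle and confirm the new limit (reported in KB)
ulimit -l
~~~~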

Shell limits after 12c preinstall rpm

Just a quick note.

Recently I've been using the oracle-rdbms-server-12cR1-preinstall package to perform all the required prerequisites for 12c. It installs the required packages and modifies kernel parameters and user limits.
Before that I had custom shell limits, like the number of open files set to 2048.
After running the rpm, I noticed that my user limits had changed:

ulimit -n
1024

I expected that and went to limits.conf to change them back;
however, when I looked at limits.conf, all the limits were set as they were before the package.

cat /etc/security/limits.conf | grep nofile
oracle soft nofile 2048
oracle hard nofile 65536

It appears that the rpm package places the following file:

/etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf

which overrides /etc/security/limits.conf.
After changing oracle-rdbms-server-12cR1-preinstall.conf, everything is back as expected.

ulimit -n
2048

CONCLUSION:
The oracle-rdbms-server-12cR1-preinstall package places shell limits into the /etc/security/limits.d directory, which overrides /etc/security/limits.conf.
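
So any custom limits have to be put back into that file as well, not only into limits.conf; a minimal sketch, using the 2048 value from the example above:

~~~~
# the preinstall rpm's drop-in takes precedence over limits.conf, so adjust it directly
grep nofile /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf
# edit the 'oracle soft nofile' entry there, e.g. back to 2048,
# then re-login and verify
ulimit -n
~~~~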