Feed aggregator

Top 10 skills of 2025. Which ones would you like to improve on?

OracleApps Epicenter - Fri, 2020-10-23 11:24
I find it quite surprising that ‘people’ related skills only feature in 1 of the top 10 skills for 2025. That’s contrary to what I had believed would be the case. Interesting stats published this week by World Economic Forum for Top 10 skills of 2025 as below. 1 Analytical thinking/innovation 2 Active learning and […]
Categories: APPS Blogs

When Upgrading DB Don’t Trust Doc Alone

Michael Dinh - Wed, 2020-10-21 19:18

What’s up Doc!

So there I was, reading the documentation and planning the upgrade, but it still wasn't perfect.

DBUA Command-Line Syntax for Active and Silent Mode

changeUserTablespacesReadOnly does not appear in the help output but exists in the documentation.


$ which dbua

$ dbua -help
Usage: dbua [<flag>] [<option>]
Following are the possible flags:
-createPartialBackup – Flag to create a new offline partial RMAN backup by setting the user tablespaces in R/O mode.
-disableParallelUpgrade – Flag to disable the parallel execution of database upgrade.
-executePreReqs – Flag to execute the pre-upgrade checks alone for the specified database.
-sid | -dbName
-help – Shows this usage help.
-ignorePreReqs – Ignore error conditions in pre-upgrade checks.
-keepEvents – Flag to keep the configured database events during upgrade.
-silent – This flag allows you to carry on configuration in silent mode.
-sid | -dbName
-skipListenersMigration – Flag to skip the listener migration process as part of the database upgrade.

Following are the possible options:
[-asmsnmpPassword – <Specify ASMSNMP user password>]
[-backupLocation – <Specify directory to backup your database before starting the upgrade>]
[-createGRP – <true | false> To create a guaranteed restore point when database is in archive log and flashback mode.]
[-createListener – <true | false> To create a listener in newer release Oracle home specify listenrName:lsnrPort.]
[-dbName – <Specify Database Name>]
[-oracleHome – <Specify the Oracle home path of the database>]
[-sysDBAUserName – <User name with SYSDBA privileges>]
[-sysDBAPassword – <Password for sysDBAUserName user name>]
[-dbsnmpPassword – <Specify DBSNMP user password>]
[-disableUpgradeScriptLogging – <true | false> This command disables the detailed log generation for running SQL scripts during the upgrade process. By default this is enabled. To enable the log generation, don’t specify this command.]
[-emConfiguration – <DBEXPRESS | CENTRAL | BOTH | NONE>]
[-dbsnmpPassword – <Specify DBSNMP user password>]
[-emPassword – <Specify EM admin user password>]
[-emUser – <Specify EM admin username to add or modify targets>]
[-emExpressPort – <Specify the port where EM Express will be configured>]
[-omsHost – <Specify EM management server host name>]
[-omsPort – <Specify EM management server port number>]
[-asmsnmpPassword – <Specify ASMSNMP user password>]
[-ignoreScriptErrors – <true | false> Specify this flag for ignoring ORA errors during custom scripts.]
[-initParam – <Specify a comma separated list of initialization parameter values of the format name=value,name=value>]
[-initParamsEscapeChar – <Specify escape character for comma when a specific initParam has multiple values. If the escape character is not specified backslash is the default escape character>]
[-excludeInitParams – <Specify a comma separated list of initialization parameters to be excluded.>]
[-keepDeprecatedParams – <true | false> To retain deprecated parameters during database upgrade.]
[-localListenerWithoutAlias – To set LOCAL_LISTENER without TNS Alias.]
[-listeners – <To register the database with existing listeners, specify listeners by comma separated listenerName:Oracle Home. Listeners from lower release home are migrated to newer release home. Specifying -listeners lsnrName1,lsnrName2 or -listeners lsnrName1:<Oracle home path>,-listeners lsnrName2:<Oracle home path>, DBUA searches specified listeners from GI home (if configured), target home and source home>]
[-localRacSid – <Specify the local System Identifier of the cluster database if the cluster database is not registered in OCR>]
[-logDir – <Specify the path to a custom log directory>]
[-newGlobalDbName – <Specify New Global Database Name. This option can only be used for Oracle Express Edition upgrade>]
[-newSid – <Specify New System Identifier. This option can only be used for Oracle Express Edition upgrades>]
[-newInitParam – <Specify a comma separated list of initialization parameter values of the format name=value,name=value. Use this option to specify parameters that are allowed only on the target Oracle home>]
[-initParamsEscapeChar – <Specify escape character for comma when a specific initParam has multiple values. If the escape character is not specified backslash is the default escape character>]
[-oracleHomeUserPassword – <Specify Oracle Home user password>]
[-pdbs – <Specify a comma separated list with the names of the pluggable databases (PDB) that will be upgraded. Specify ALL to select all or NONE to select none of the pluggable databases for upgrade>]
-sid | -dbName
-sid – <Specify System Identifier>
[-oracleHome – <Specify the Oracle home path of the database>]
[-sysDBAUserName – <User name with SYSDBA privileges>]
[-sysDBAPassword – <Password for sysDBAUserName user name>]
-dbName – <Specify Database Name>
[-oracleHome – <Specify the Oracle home path of the database>]
[-sysDBAUserName – <User name with SYSDBA privileges>]
[-sysDBAPassword – <Password for sysDBAUserName user name>]
[-pdbsWithPriority – <Specify a comma separated list of pluggable databases (PDB) to be upgraded along with its corresponding priorities (being 1 the top priority) of the format <pdb name>:<upgrade priority>,<pdb name>:<upgrade priority> >]
-sid | -dbName
-sid – <Specify System Identifier>
[-oracleHome – <Specify the Oracle home path of the database>]
[-sysDBAUserName – <User name with SYSDBA privileges>]
[-sysDBAPassword – <Password for sysDBAUserName user name>]
-dbName – <Specify Database Name>
[-oracleHome – <Specify the Oracle home path of the database>]
[-sysDBAUserName – <User name with SYSDBA privileges>]
[-sysDBAPassword – <Password for sysDBAUserName user name>]
[-performFixUp – <true | false> Enable or disable fix ups for the silent upgrade mode.]
[-postUpgradeScripts – <Specify a comma separated list of SQL scripts with their complete pathnames. These scripts will be executed at the end of the upgrade>]
[-preUpgradeScripts – <Specify a comma separated list of SQL scripts with their complete pathnames. These scripts will be executed before the upgrade>]
[-recompile_invalid_objects – <true | false> Recompile invalid objects as part of the upgrade.]
[-upgrade_parallelism – <Specify number of CPU’s to be used for parallel upgrade>]
[-upgradeTimezone – <true | false> Upgrade the timezone files of the database.]
[-upgradeXML – <Specify the path to the existing pre-upgrade XML file> This option only applies to in-place database upgrades.]
[-useExistingBackup – <true | false> To restore database using existing RMAN backup.]
[-useGRP – <Specify the name of the existing guaranteed restore point> To restore the database using a specified guaranteed restore point.]


Even though -createListener shows as valid syntax, using -createListener was not recognized.
$ cat run_dbua.sh

/app/product/ -silent \
-sid db01 \
-oracleHome /app/product/ \
-useGRP upgrade19c \
-recompile_invalid_objects TRUE \
-upgradeTimezone TRUE \
-emConfiguration NONE \
-skipListenersMigration \
-createListener false \  --- failed
-upgrade_parallelism 8

./run_dbua.sh: line 10: -createListener: command not found
This works.
/app/product/ -silent -sid db01 -skipListenersMigration -oracleHome /app/product/ -recompile_invalid_objects true -upgradeTimezone true -emConfiguration NONE -upgrade_parallelism 4 -createListener false

What am I missing?

Hopefully, you will have better luck than I did.

Update: there was a trailing space on the line above, thanks to https://twitter.com/VincePoore

$ grep -r '[[:blank:]]$' run_dbua.sh
-skipListenersMigration \

$ grep -r '[[:blank:]]$' run_dbua.sh | wc -l
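The trailing-blank failure mode above is easy to reproduce and fix mechanically. A minimal sketch (the demo file name is hypothetical, not the script from the post) that detects trailing whitespace and strips it so the backslash is once again the last character on the line:

```shell
# Reproduce the failure mode: a backslash continuation followed by a
# trailing space (demo file name is hypothetical).
printf 'dbua -silent \\ \n-sid db01\n' > /tmp/run_dbua_demo.sh

# Same check as in the post: list lines ending in blanks
grep -n '[[:blank:]]$' /tmp/run_dbua_demo.sh

# Strip trailing blanks in place; the backslash is now last on the line
sed -i 's/[[:blank:]]*$//' /tmp/run_dbua_demo.sh

# Re-check: no lines end in whitespace any more
grep -c '[[:blank:]]$' /tmp/run_dbua_demo.sh || true
```

With the blank gone, the shell treats `\` as a line continuation again and `-createListener` is passed to dbua instead of being executed as a command.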

Column Groups

Jonathan Lewis - Wed, 2020-10-21 06:14

Here’s an odd little detail about the statistics of column groups. At first glance it’s counter-intuitive but it’s actually an “obvious” (once you’ve thought about it for a bit) consequence of the approximate_ndv() algorithm for gathering stats.

I’ll present it as a question:

I have a table with two columns: flag and v1. Although the columns are not declared as non-null, neither holds any nulls. If there are 26 distinct values for flag, and 1,000,000 distinct values for v1, what's the smallest number of distinct values I should see if I create the column group (flag, v1)?

The question is, of course, a little ambiguous – there’s the number of distinct values that the column (group) holds and the number that a fresh gather of statistics reports it as holding. Here are the stats from a test run of a simple script that creates, populates and gathers stats on my table:

select  column_name, num_distinct 
from    user_tab_cols 
where   table_name = 'T1'

COLUMN_NAME                      NUM_DISTINCT
-------------------------------- ------------
FLAG                                       26
ID                                    1000000
V1                                     999040
SYS_STUQ#TO6BT1REX3P1BKO0ULVR9         989120

There are actually 1,000,000 distinct values for v1 (it’s a varchar2() representation of the id column), but the approximate_ndv() mechanism can have an error of (I believe) up to roughly 1.3%, so Oracle’s estimate here is a little bit off.

The column group (represented by the internal column definition SYS_STUQ#TO6BT1REX3P1BKO0ULVR9) must hold (at least) 1,000,000 distinct values – but the error in this case is a little larger than the error in v1, with the effect that the number of combinations appears to be less than the number of distinct values for v1!

There’s not much difference in this case between actual and estimate, but the test demonstrates the potential for a significant difference between the estimate and the arithmetic that Oracle would do if the column group didn’t exist. Nominally the optimizer would assume there were 26 million distinct values (though in this case I had created only 1M rows in the table, so the optimizer would sanity-check that 26M down to 1M).

So, although the difference between actual and estimate is small, we have to ask the question – are there any cases where the optimizer will ignore the column group stats because of a sanity check that “proves” the estimate is “wrong” – after all it must be wrong if the num_distinct is less than the num_distinct of one of the components. Then again maybe there’s a sanity check that only ignores the column group if the estimate is “wrong enough”, but allows for small variations.

I mention this only because an odd optimizer estimate has shown up recently on the Oracle-L mailing list, and the only significant difference I can see (at present) is that a bad plan appears for a partition where this column group anomaly shows up in the stats, but a good plan appears when the column group anomaly isn’t present.


If you want to recreate the results above, here’s the model I’ve used:

rem     Script:         column_group_stats_5.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem     Last tested 

execute dbms_random.seed(0)

create table t1
as
with generator as (
        select
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        chr(65 + mod(rownum,26))        flag,
        rownum                          id,
        lpad(rownum,10,'0')             v1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6   -- > comment to avoid WordPress format issue
order by
        dbms_random.value
;

select  column_name, num_distinct 
from    user_tab_cols 
where   table_name = 'T1'
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns(v1, flag) size 1'
        );
end;
/

select  column_name, num_distinct 
from    user_tab_cols 
where   table_name = 'T1'
;

Footnote 2:

As an interesting little statistical quirk, if I defined the column group as (flag, v1) rather than (v1, flag) the estimate for the column group num_distinct was 1,000,000.
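Footnote 2’s observation can be checked directly, since column groups can also be created explicitly with dbms_stats.create_extended_stats. A sketch (an illustration using the table and columns from the post, not the author’s script) that creates both orderings and compares their stats:

```sql
-- Sketch: create both orderings of the column group on t1,
-- re-gather stats, then compare num_distinct for the virtual columns.
select dbms_stats.create_extended_stats(user, 'T1', '(FLAG,V1)') from dual;
select dbms_stats.create_extended_stats(user, 'T1', '(V1,FLAG)') from dual;

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T1',
                method_opt => 'for all columns size 1'
        );
end;
/

-- Map each system-generated SYS_STU... column name back to its definition
select  extension_name, extension
from    user_stat_extensions
where   table_name = 'T1';

select  column_name, num_distinct
from    user_tab_cols
where   table_name = 'T1';
```

If the quirk reproduces, the two SYS_STU... columns will report different num_distinct values even though they describe the same set of combinations.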

Partner Webcast – Delivering your Application Solutions on Autonomous Database

Cloud has changed customer expectations and software buying preferences, causing partners to adapt their deployment, development and customer support models. Increasingly customers consider …

We share our skills to maximize your revenue!
Categories: DBA Blogs

How can I do a variable "in list"

Tom Kyte - Wed, 2020-10-21 01:06
I have a simple stored procedure that I would like to have a passed-in string (varchar2) used in select ... from ... where col1 in (var1). I've tried everything but it doesn't work. Following is my proc. Thanks <code>
CREATE OR REPLACE PROCEDURE WSREVSECT_5 (
    pSectNos varchar2,
    pRetCode OUT varchar2
) AS
    nCount number;
BEGIN
    SELECT count(fksrev)
    INTO   nCount
    FROM   SREVSECT
    WHERE  sectno IN (pSectNos)   /* as in 'abc', 'xyz', '012' */ ;
    pRetCode := to_char(nCount);
End;
</code>
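The question above hits a classic limitation: a single bind in IN (pSectNos) is compared as one literal string, not expanded into a list. A minimal sketch of one common fix – splitting the comma-separated string in SQL – using the names from the question (the regexp-split approach is my assumption, not the posted answer):

```sql
-- Sketch: pass the list as one comma-separated string ('abc,xyz,012',
-- without embedded quotes) and split it into rows inside the query.
CREATE OR REPLACE PROCEDURE wsrevsect_5 (
    pSectNos varchar2,
    pRetCode OUT varchar2
) AS
    nCount number;
BEGIN
    SELECT count(fksrev)
    INTO   nCount
    FROM   srevsect
    WHERE  sectno IN (
               SELECT trim(regexp_substr(pSectNos, '[^,]+', 1, level))
               FROM   dual
               CONNECT BY level <= regexp_count(pSectNos, ',') + 1
           );
    pRetCode := to_char(nCount);
END;
/
```

Called as wsrevsect_5('abc,xyz,012', rc); note the values are passed without the quotes shown in the question's comment.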
Categories: DBA Blogs

Select for update statement too slow

Tom Kyte - Wed, 2020-10-21 01:06
Hi Connor, Chris, I have a FOR UPDATE SQL used to lock certain rows in the ORDERS table but it seems to be a bit slow (takes around 1 min). I tried getting the plan from dbms_xplan.display_awr. Could you please give me some lead from your past experience and I can look for any SQL tuning stuff. <code>
SELECT PT.ORDER_ID
FROM STAGING_001 PN
     JOIN GTT_TAB IDS ON IDS.MSG_ID = PN.MSG_ID,
     XMLTABLE (
        'hsbcEnvelope/hsbcMessageBody/pymtTran'
        PASSING PN.XML_MSG
        COLUMNS REF_001 VARCHAR2 (50 CHAR) PATH 'REF_001',
                REF_002 VARCHAR2 (50)      PATH 'REF_001',
                REF_003 VARCHAR2 (10 CHAR) PATH 'REF_001') PMT,
     ORDERS PT
WHERE 1 = 1
AND (   (    PMT.REF_002 IS NOT NULL
         AND PMT.REF_001 IS NOT NULL
         AND PMT.REF_002 = PT.REF_002
         AND PT.REF_001 = PMT.REF_001
         AND NVL (PMT.REF_003, :B1) = PT.REF_003)
     OR (    PMT.REF_002 IS NOT NULL
         AND PMT.REF_002 = PT.REF_002
         AND NVL (PMT.REF_003, :B1) = PT.REF_003)
     OR (    PMT.REF_001 IS NOT NULL
         AND PT.REF_001 = PMT.REF_001
         AND NVL (PMT.REF_003, :B1) = PT.REF_003))
FOR UPDATE OF PT.ORDER_ID NOWAIT;
</code> <code>
-----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 27043 (100)| | | |
| 1 | FOR UPDATE | | | | | | | |
| 2 | BUFFER SORT | | | | | | | |
| 3 | CONCATENATION | | | | | | | |
| 4 | NESTED LOOPS | | 1003 | 1935K| 11972 (1)| 00:00:01 | | |
| 5 | NESTED LOOPS | | 2940 | 3930K| 210 (1)| 00:00:01 | | |
| 6 | NESTED LOOPS | | 10 | 13630 | 13 (0)| 00:00:01 | | |
| 7 | INDEX FAST FULL SCAN | SYS_C006227 | 10 | 130 | 2 (0)| 00:00:01 | | |
| 8 | TABLE ACCESS BY INDEX ROWID | STAGING_001 | 1 | 1350 | 2 (0)| 00:00:01 | | |
| 9 | INDEX UNIQUE SCAN | PK_STG_INT | 1 | | 1 (0)| 00:00:01 | | |
| 10 | XPATH EVALUATION | | | | | | | |
| 11 | TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| ORDERS | 1 | 607 | 4 (0)| 00:00:01 | ROWID | ROWID |
| 12 | INDEX RANGE SCAN | IDX_PT_REF_001 | 1 | | 3 (0)| 00:00:01 | | |
| 13 | NESTED LOOPS | | 1011 | 1950K| 14172 (1)| 00:00:01 | | |
| 14 | NESTED LOOPS ...
</code>
Categories: DBA Blogs

Complete transient session for testing

Tom Kyte - Wed, 2020-10-21 01:06
We have looked at dbms_flashback and dbms_wm but neither delivers what we actually want: is there a possibility to make an entire session transient? We would like to use this for our testing scenarios: 1. Start the session in "transient mode" 2. Execute a lot of pl/sql with commits, rollbacks and savepoints 3. Drop/stop the session ... and everything is magically back to before point 1. dbms_flashback.ENABLE_AT_SYSTEM_CHANGE_NUMBER is almost there by showing the data at a specified scn, but then you have to reverse every table that is hit by the -third party- code by hand. Not feasible in our environment. I'm not very optimistic, but perhaps I missed a new capability of the DB. Is there a way?
Categories: DBA Blogs

Query never finish without clear reason

Tom Kyte - Wed, 2020-10-21 01:06
Sometimes an arbitrary select statement stops working without a clear reason. Sometimes the statement is inside a procedure, sometimes it is executed directly from ODAC for .NET 4. The only thing in common is that it was always executed by the ODAC client, so I never got this problem from one of my jobs! When I check gv$session the session is active. When I check the plan using DBMS_XPLAN.DISPLAY_CURSOR it is the best one. When I simulate via pl/sql the query works fine with the very same plan. When I kill the session and execute again from ODAC, the same problem happens. When I kill the session, run SYS.DBMS_SHARED_POOL.PURGE(ADDRESS || ', ' || HASH_VALUE, 'C'), and execute again from ODAC then BINGO, the problem is solved. ...however we know that it will happen again... What kind of evidence am I missing? Did you ever see this kind of behavior?
Categories: DBA Blogs

SQL Profile example when best plan not clear

Bobby Durrett's DBA Blog - Tue, 2020-10-20 12:54

I resolved another production performance issue with a SQL Profile yesterday. I have several posts about SQL Profiles, so I do not want to be redundant, but this case was a little different because it was not clear that I had a better plan. I want to document the challenge that I had deciding if I had the best plan and show the resolution.

On September 21 when I think I was on vacation or otherwise not in the office there was a big performance issue on an older production database. The on-call DBA identified the sql_id of the top SQL statement as 30q69rbpn7g75. But he and an application support developer together could not connect that SQL statement back to the long running reports that were impacting the business. Eventually the issue went away later that night. Here is some of the execution history from that original issue:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average disk write megabytes Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ---------------------------- ----------------------
30q69rbpn7g75      1463081877 21-SEP-20 AM                5         420798.922         144724    209959.863                  0                      0                      0           4824516.8           710748.4                            0                    870
30q69rbpn7g75      1732425686 21-SEP-20 AM               13         66.9426923     56.1538462    7.25523077                  0                      0                      0          6410.23077         249.769231                            0             414.538462
30q69rbpn7g75       592872749 21-SEP-20 AM                1           4144.437           3240       955.246                  0                      0                      0               63878              29725                            0                     89
30q69rbpn7g75      4175108531 21-SEP-20 AM               11         172328.731     92788.1818    62448.1775                  0                      0                      0             3463219         466919.273                            0             610.090909
30q69rbpn7g75      2823019537 21-SEP-20 AM               19         332354.362     212357.895     22580.778                  0                      0                      0          11145610.8         163882.105                            0             303.526316
30q69rbpn7g75      2908690533 21-SEP-20 AM                1          23377.169          13070     11680.972                  0                      0                      0             1582917              89479                            0                    258
30q69rbpn7g75       291832905 21-SEP-20 AM                2         40314.0255          15940      24490.36                  0                      0                      0             1831813             128525                            0                    248
30q69rbpn7g75      1467059382 21-SEP-20 AM                1          20179.636           5760     16155.407                  0                      0                      0              124599              84761                            0                    780
30q69rbpn7g75      1033740578 21-SEP-20 AM                1            1728.49           1570         2.906                  0                      0                      0                1525                 35                            0                     12
30q69rbpn7g75      4175108531 21-SEP-20 AM                8         147782.833        59617.5    65356.3268                  0                      0                      0             2280007          245985.25                            0                    402
30q69rbpn7g75      3938646712 21-SEP-20 AM                2         139722.393          55905     86105.482                  0                      0                      0             6080269             616766                            0                 1143.5
30q69rbpn7g75      2823019537 21-SEP-20 PM               48         238332.678     138706.875    19077.4738                  0                      0                      0          6928661.85         99573.2708                            0             145.395833
30q69rbpn7g75      2823019537 21-SEP-20 PM               64         147520.373     80835.1563    19092.0985                  0                      0                      0          4148771.28         106131.016                            0              79.890625
30q69rbpn7g75      2823019537 21-SEP-20 PM               58         180185.939     113102.931    14365.2987                  0                      0                      0          5926129.21         123920.569                            0             22.0344828
30q69rbpn7g75      2823019537 21-SEP-20 PM               37         307432.645     201436.216    22904.6901                  0                      0                      0          10204978.9         158950.973                            0             201.243243
30q69rbpn7g75      2823019537 21-SEP-20 PM               28         465140.082     326940.357    30687.9033                  0                      0                      0          16715547.3             263153                            0             460.571429
30q69rbpn7g75      2823019537 21-SEP-20 PM               14         934982.157     690958.571    41595.1995                  0                      0                      0          34940770.8         365038.357                            0             243.285714
30q69rbpn7g75      2823019537 21-SEP-20 PM               14         818768.534     640054.286    42596.9506                  0                      0                      0          33547406.9         451864.786                            0                  471.5
30q69rbpn7g75      2823019537 21-SEP-20 PM                4         2329248.39        2013515    75722.5718                  0                      0                      0           104343531         1027683.25                            0                    859
30q69rbpn7g75      2823019537 21-SEP-20 PM                1         4006478.22        3707840     88265.422                  0                      0                      0           186157328            1082000                            0                   2744
30q69rbpn7g75      2823019537 21-SEP-20 PM                1         1818375.63        1771470     20586.628                  0                      0                      0            88206433             374924                            0                   2692
30q69rbpn7g75      2823019537 21-SEP-20 PM                2            1742051        1399440     41061.122                  0                      0                      0          68750135.5             335797                            0                 1479.5
30q69rbpn7g75      2823019537 21-SEP-20 PM                0         3552963.71        3183770    142948.208                  0                      0                      0           154159601             633488                            0                      0

This output is from my sqlstat.sql script. Notice how earlier in the day there are several plans with widely varying elapsed times. Plan hash value 1732425686 averages 66 milliseconds during the hour ending at 10 am. I do not know why the query has so many different plans. 2823019537 was the bad plan and it got locked in throughout the afternoon and into the night.

Later in that same week I reviewed the plans and found that the longest running plans used certain indexes on the top table and the shorter ones used another index. I looked at the “Segments by Logical Reads” section of an AWR report during the problem time and found that the top segment was index X6_WFCTOTAL. I used my getplans.sql to get all the plans for SQL id 30q69rbpn7g75 and found that the fastest ones used range scans against index X1_WFCTOTAL and the slow ones did range or skip scans against indexes X5_WFCTOTAL or X6_WFCTOTAL. So I picked one plan, 382022017, and used coe_xfr_sql_profile.sql to force 30q69rbpn7g75 to always run with plan 382022017, which used index X1_WFCTOTAL. Here is some execution history of the plan I picked:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average disk write megabytes Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ---------------------------- ----------------------
30q69rbpn7g75       382022017 24-SEP-20 AM                2          1713.2625            865         5.805                  0                      0                      0                2135               76.5                            0                    141
30q69rbpn7g75       382022017 24-SEP-20 AM                2            355.329            120      265.0765                  0                      0                      0                8183             1324.5                            0                    673
30q69rbpn7g75       382022017 24-SEP-20 AM                1            190.386            180        28.981                  0                      0                      0               17505               1759                            0                   1878
30q69rbpn7g75       382022017 24-SEP-20 PM                4          217.37625            200        20.723                  0                      0                      0            21009.25             392.25                            0                   1865
30q69rbpn7g75       382022017 24-SEP-20 PM                5           507.1578            114      432.2858                  0                      0                      0              7076.6              936.2                            0                  407.8
30q69rbpn7g75       382022017 24-SEP-20 PM                3         47.5793333             10    35.1866667                  0                      0                      0          504.333333         81.3333333                            0                     54
30q69rbpn7g75       382022017 24-SEP-20 PM                1            313.107            110       229.071                  0                      0                      0                8178                399                            0                    396
30q69rbpn7g75       382022017 25-SEP-20 AM                3         30.7433333     16.6666667    15.0446667                  0                      0                      0                 927                 89                            0             110.666667

This plan was averaging less than 1800 milliseconds. But after putting in the supposedly better plan, it seemed to have worse execution times than the other plans. Also, the plan hash value was different from 382022017. Here is the execution history from when I put this SQL Profile in place:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average disk write megabytes Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ---------------------------- ----------------------
30q69rbpn7g75       653445232 28-SEP-20 PM                1          92139.015          26930     61332.901                  0                 20.998                      0             2202073            2197909                            0                    171
30q69rbpn7g75       653445232 28-SEP-20 PM                9         83373.8866     25152.2222    56386.2852                  0             14.6918889                      0          2102264.11            2065530                            0             2613.88889
30q69rbpn7g75       653445232 28-SEP-20 PM                5         101830.312          29264    70157.0998                  0                 9.1714                      0           2439219.6          2432260.2                            0                    348
30q69rbpn7g75       653445232 28-SEP-20 PM                9          89705.846     26606.6667    61716.6917                  0             14.2046667                      0          2212464.67         2196829.33                            0             825.111111
30q69rbpn7g75       653445232 28-SEP-20 PM                2          91173.826          26425     63832.838                  0                22.1385                      0             2181959          2195820.5                            0                  310.5
30q69rbpn7g75       653445232 28-SEP-20 PM                4         90240.0368          26430    62587.1345                  0                 16.558                      0             2200555          2197764.5                            0                 168.75
30q69rbpn7g75       653445232 28-SEP-20 PM                3         107541.972     26816.6667     75418.071                  0                 16.164                      0             2193977         2193579.33                            0             39.3333333
30q69rbpn7g75       653445232 29-SEP-20 AM                1          92416.895          26410     66108.169                  0                 21.449                      0             2194591            2193764                            0                    158
30q69rbpn7g75       653445232 29-SEP-20 AM                1         103773.265          26510     75455.905                  0                 13.887                      0             2200242            2198725                            0                    122
30q69rbpn7g75       653445232 29-SEP-20 AM               13         74637.6784     20391.5385    51139.8206                  0             16.9292308                      0          1686133.69         1684935.15                            0             24.2307692
30q69rbpn7g75       653445232 29-SEP-20 AM               10         105894.074          28882    72971.0734                  0                20.1169                      0           2418827.8          2417314.8                            0                     56
30q69rbpn7g75       653445232 29-SEP-20 AM                4         89408.2108          26080    61537.7755                  0                10.0275                      0             2174791            2169846                            0                 421.75
30q69rbpn7g75       653445232 29-SEP-20 AM                7         71644.7906     20574.2857    48465.1234                  0                  8.331                      0             1697739         1694385.43                            0             232.857143
30q69rbpn7g75       653445232 29-SEP-20 AM                2         113993.942          32580    72589.2515                  0                 6.2465                      0             2672441            2667206                            0                    145
30q69rbpn7g75       653445232 29-SEP-20 AM                6         99793.2442          28600    69181.7687                  0             4.65783333                      0          2394135.83         2387505.17                            0                    246
30q69rbpn7g75       653445232 29-SEP-20 AM               10         94787.9044          28367    65535.8735                  0                10.6895                      0           2353904.6          2336951.3                            0                  476.8
30q69rbpn7g75       653445232 29-SEP-20 PM                5           89167.86          26462     61499.462                  0                14.4808                      0           2200557.8            2195895                            0                  201.8

Notice first that the plan hash value is 653445232 instead of 382022017. I usually see things like this when the plan has system-generated temporary table names, but I have not seen that in this plan. There must be something like that going on. Either that, or the SQL Profile just locks in a slightly different plan. Anyway, 653445232 is the plan caused by the SQL Profile. Notice how the average elapsed time hovers around 90,000 to 100,000 milliseconds, while 382022017 had elapsed times under 1800 milliseconds. Seeing these results, I dropped the SQL Profile. Like a doctor who swears to "do no harm," I did not want to slow down queries that were running very efficiently in my attempt to prevent another system slowdown or outage like the one we had on September 21. I dropped the SQL Profile on September 29.

Then yesterday, October 19, the problem recurred. This time the bad plan was 3908549093 and used a skip scan on X6_WFCTOTAL. I seem to see a lot of bad plans with skip scans. Here was the execution history before I put the SQL Profile back in:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average disk write megabytes Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ---------------------------- ----------------------
30q69rbpn7g75      3908549093 19-OCT-20 AM               16         832180.951      450528.75    68650.1103                  0                      0                      0          21204921.8           495900.5                            0                    562
30q69rbpn7g75      3908549093 19-OCT-20 PM               33         392068.144     194336.364    47412.7197                  0                      0                      0          9271475.06         221593.545                            0             309.454545
30q69rbpn7g75      3908549093 19-OCT-20 PM                4         3543778.15        1793980    261653.391                  0                      0                      0          82176276.8            1088971                            0                 1036.5

Between 12 and 1 pm the system was really bogged down, with executions of 30q69rbpn7g75 taking almost an hour (3,543,778 milliseconds). So I put the SQL Profile back in; I had the script sitting there from the first time I tried it (these scripts are all kept in the sqlt/utl directory). After putting it in, things cleared up and the backlog of reports slowly emptied. I think we killed one long-running session and had the user rerun the report with the SQL Profile in place.

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average disk write megabytes Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ---------------------------- ----------------------
30q69rbpn7g75       653445232 19-OCT-20 PM               27         106287.519     26936.2963     61603.699                  0             7.41166667                      0          2226367.89         2204705.19                            0             1358.25926
30q69rbpn7g75      3908549093 19-OCT-20 PM                0         9380042.72        6465310    308289.185                  0                      0                      0           294469425            1581492                            0                   7067
30q69rbpn7g75       653445232 19-OCT-20 PM               80         103522.356       26259.25    67322.2418                  0             21.1012125                      0           2159835.1         2142347.26                            0               914.2625
30q69rbpn7g75       653445232 19-OCT-20 PM               31         99252.0065          26990    66069.6995                  0             24.5769355                      0          2227363.45         2213021.26                            0             885.709677
30q69rbpn7g75       653445232 19-OCT-20 PM                5         94587.0244          26988    64440.3338                  0                24.1514                      0           2223326.4          2204672.6                            0                 1553.6
30q69rbpn7g75       653445232 19-OCT-20 PM                5         93301.2074          26690    65105.9732                  0                14.0782                      0           2213653.6            2203033                            0                  736.4
30q69rbpn7g75       653445232 19-OCT-20 PM                3         101117.233     27193.3333    72020.9813                  0             15.4373333                      0          2225218.33            2207644                            0                   1623
30q69rbpn7g75       653445232 20-OCT-20 AM                1           92061.35          26550     64953.945                  0                 22.245                      0             2210157            2206170                            0                    235
30q69rbpn7g75       653445232 20-OCT-20 AM                1          92872.242          26470     66092.822                  0                 11.999                      0             2208305            2206231                            0                    158
30q69rbpn7g75       653445232 20-OCT-20 AM                2         88107.2095          26075     61670.129                  0                17.2175                      0           2205332.5            2203981                            0                    116
30q69rbpn7g75       653445232 20-OCT-20 AM                1          91007.493          26210     64276.474                  0                  9.972                      0             2208516            2206310                            0                    177
30q69rbpn7g75       653445232 20-OCT-20 AM                4         101878.314          26940     65491.475                  0                 30.476                      0          2210945.25         2204828.75                            0                  461.5
30q69rbpn7g75       653445232 20-OCT-20 AM               10         97441.3635          26496    67549.4579                  0                  3.178                      0           2197412.4          2192467.5                            0                  478.1
30q69rbpn7g75       653445232 20-OCT-20 AM                5         59362.9672          17038    41573.7714                  0                  7.767                      0           1416804.4            1416061                            0                   10.2
30q69rbpn7g75       653445232 20-OCT-20 AM                6         108681.505     30798.3333    75082.4997                  0             10.5146667                      0          2558854.67         2549531.83                            0                    405
30q69rbpn7g75       653445232 20-OCT-20 AM                1         186021.778          53440    131017.985                  0                 18.976                      0             4285650            4202806                            0                    814

But with the SQL Profile in place it is still consistently around 90,000 milliseconds, even today. There are no executions under 1800 milliseconds like before. It is a puzzle, but it got us through the issue. I am planning to leave the profile in place to prevent another production outage, but I suspect that these reports may be running longer than normal in many cases. At least they are consistent and the business needs are being met.

So, this is another example of a SQL Profile to the rescue in a production database performance issue. But this one was different because it was not clear that the one plan was always the best one. I went so far as to put the profile in, take it out, and then put it in again. I ended up leaving it in because it resolved a performance issue that we had twice and do not want to have again. Hopefully any less-than-optimal performance outside of these peak times will be worth it, since we are preventing an outage with significant impact to the business.
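The decision that drove all this back-and-forth reduces to a weighted comparison of average elapsed time per plan across the AWR snapshot history. As an illustration only (a Python sketch with a few hypothetical numbers loosely based on the tables above, not the real DBA_HIST_SQLSTAT query), the comparison looks like this:

```python
from collections import defaultdict

# (plan_hash_value, avg_elapsed_ms, executions) per AWR snapshot interval.
# Values here are illustrative, picked to resemble the history above.
snapshots = [
    (653445232, 106287.5, 27),
    (653445232, 103522.4, 80),
    (3908549093, 832181.0, 16),
    (3908549093, 3543778.2, 4),
]

# Accumulate total elapsed time and execution count per plan so the
# average is weighted by executions, not by snapshot.
totals = defaultdict(lambda: [0.0, 0])
for plan, avg_ms, execs in snapshots:
    totals[plan][0] += avg_ms * execs
    totals[plan][1] += execs

avg_by_plan = {plan: t / n for plan, (t, n) in totals.items()}
best_plan = min(avg_by_plan, key=avg_by_plan.get)
print(best_plan)
```

Here the per-snapshot averages must be weighted by executions before comparing plans, which is why a single bad one-hour execution can dominate a plan's overall average.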


Categories: DBA Blogs

Generate string based on pattern

Tom Kyte - Tue, 2020-10-20 07:06
Hi Chris, Connor, Could you please help or suggest a way to generate strings based on a pattern. Pattern - <b>^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$</b> I need to generate millions of strings like <b>f9f8c8e2-0b20-4160-8f74-e836f4661fc5</b> that match the pattern. e.g. <code>SELECT * FROM dual WHERE REGEXP_LIKE('f9f8c8e2-0b20-4160-8f74-e836f4661fc5', '^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$');</code>
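The pattern is essentially a random UUID with the variant nibble restricted to [89ab], which is exactly what version-4 UUIDs produce. So rather than generate-and-test against the regex, one can generate values that match by construction. A minimal sketch in Python (rather than SQL, purely for illustration), using the standard uuid module:

```python
import re
import uuid

PATTERN = re.compile(
    r'^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$'
)

def generate(n):
    # str(uuid.uuid4()) is lowercase hex in 8-4-4-4-12 form, and the
    # variant nibble (first hex digit of the fourth group) is always
    # 8, 9, a or b, so every value already satisfies the pattern.
    return [str(uuid.uuid4()) for _ in range(n)]

values = generate(5)
assert all(PATTERN.match(v) for v in values)
```

The same construct-don't-test idea carries over to the database side: build random hex groups and force the constrained nibble, instead of filtering random strings through REGEXP_LIKE.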
Categories: DBA Blogs

Update rows when cursor returns no data

Tom Kyte - Tue, 2020-10-20 07:06
Purchase_Mas contains data of purchase master and payment_amt contains data of payment received from party. Cursor c2 does not return any value when not found in payment but still i want to some calculation happen even data not found in payment table. run following script and when you fire a query " select * from purchase_mas_tmp ; " Look at ( ID =5 and party code=12 ) and ( ID = 6 and party code= 14 ) when payment amount not found in cursor c2 but i want to os_amt display as 10000 for 5 and 20000 for 6 id of payment. so how its possible <code>create table PURCHASE_MAS ( id NUMBER, party_code NUMBER, total_pcs NUMBER, total_amt NUMBER, purchase_date DATE, reg_flg CHAR(1), discount_amt NUMBER ); create table PAYMENT ( id NUMBER, party_code NUMBER, payment_date DATE, payment_amt NUMBER ); create global temporary table PURCHASE_MAS_TMP ( id NUMBER, party_code NUMBER, total_pcs NUMBER, total_amt NUMBER, purchase_date DATE, reg_flg CHAR(1), payment_date DATE, payment_amt NUMBER, os_amt NUMBER, discount_amt NUMBER ) on commit preserve rows; insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT) values (5, 12, 456, 10000, to_date('01-08-2018', 'dd-mm-yyyy'), 'Y', 100); insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT) values (6, 14, 878, 20000, to_date('21-08-2018', 'dd-mm-yyyy'), 'N', 200); insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT) values (1, 11, 457, 30000, to_date('11-08-2018', 'dd-mm-yyyy'), 'Y', 300); insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT) values (2, 12, 658, 40000, to_date('10-08-2018', 'dd-mm-yyyy'), 'Y', 400); insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT) values (3, 11, 1454, 50000, to_date('07-08-2018', 'dd-mm-yyyy'), 'Y', 500); insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, 
TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT) values (4, 13, 1254, 60000, to_date('18-08-2018', 'dd-mm-yyyy'), 'N', 600); insert into PAYMENT (ID, PARTY_CODE, PAYMENT_DATE, PAYMENT_AMT) values (1, 11, to_date('01-09-2018', 'dd-mm-yyyy'), 2500); insert into PAYMENT (ID, PARTY_CODE, PAYMENT_DATE, PAYMENT_AMT) values (2, 12, to_date('12-09-2018', 'dd-mm-yyyy'), 3000); insert into PAYMENT (ID, PARTY_CODE, PAYMENT_DATE, PAYMENT_AMT) values (3, 11, to_date('11-09-2018', 'dd-mm-yyyy'), 30000); insert into PAYMENT (ID, PARTY_CODE, PAYMENT_DATE, PAYMENT_AMT) values (4, 13, to_date('21-09-2018', 'dd-mm-yyyy'), 400); declare cursor c1 is select id, party_code, total_pcs, total_amt, purchase_date, reg_flg , discount_amt from purchase_mas; cursor c2 is select id, party_code, sum(payment_amt) payment_amt from payment group by id, party_code ; begin for i in c1 loop insert into purchase_mas_tmp (id, party_code, total_pcs, total_amt, purchase_date, reg_flg,discount_amt) values (i.id, i.party_code, i.total_pcs, i.total_amt, i.purchase_date, i.reg_flg, i.discount_amt); end loop; for i in c2 loop update purchase_mas_tmp tbl set payment_amt = nvl(i.payment_amt,0), os_amt = tbl.total_amt - nvl(tbl.discount_amt,0) - nvl(i.payment_amt,0) where id = i.id and party_code = i.party_code ; end loop; end; -- select * from purchase_mas_tmp ; /* drop table PURCHASE_MAS purge ; drop table PAYMENT purge ; drop table purchase_mas_tmp purge ; */</code>
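The underlying issue is that the second loop only updates rows that have a matching payment, so os_amt stays NULL for parties with no payments. The fix is outer-join-style logic: treat a missing payment as zero instead of skipping the row. A minimal Python sketch of that idea (hypothetical data mirroring the question's IDs, using the same formula as the UPDATE: total minus discount minus payment):

```python
# Purchases keyed by (id, party_code): (total_amt, discount_amt).
purchases = {
    (5, 12): (10000, 100),
    (6, 14): (20000, 200),
    (1, 11): (30000, 300),
}
# Payments keyed the same way; (5, 12) and (6, 14) have none.
payments = {(1, 11): 2500}

# Outer-join style: payments.get(key, 0) defaults the missing payment
# to zero, so os_amt is populated for every purchase row.
os_amt = {
    key: total - discount - payments.get(key, 0)
    for key, (total, discount) in purchases.items()
}
print(os_amt)
```

In the PL/SQL itself the equivalent is either to initialize os_amt in the first insert, or to drive a single UPDATE from an outer join (or correlated subquery wrapped in NVL) rather than looping only over rows that exist in PAYMENT.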
Categories: DBA Blogs

How to return header and data using sys_refcursor in oracle pl sql

Tom Kyte - Tue, 2020-10-20 07:06
I want to return headers of column along with data while returning result using refcursor. <code> create table fetch_header_rows ( company_name varchar2(500), Company_id number, ammount number(20,8), data_commnets varchar2(500) ); insert into fetch_header_rows values('company1' , 1, 128.80,'test data1'); insert into fetch_header_rows values('company2' , 2, 129.80,'test data1'); insert into fetch_header_rows values('company3' , 3, 128,'test data1'); insert into fetch_header_rows values('company4' , 4, 100.80,'test data1'); create or replace procedure SP_fetch_header_rows(data_fetch out sys_refcursor ) as begin open data_fetch for select * from fetch_header_rows; end; </code> Here we are fetching cursor result in file. hence we required header as a first row in data. ( in current scenario we have more than 150 columns)
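One common approach is to read the column names from the cursor metadata on the client side and emit them as the first row before the data. A sketch in Python, with sqlite3 standing in for Oracle purely for illustration (cursor.description plays the role the caller's driver metadata would; table and column names copied from the question):

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fetch_header_rows "
             "(company_name TEXT, company_id INTEGER, ammount REAL, data_commnets TEXT)")
conn.execute("INSERT INTO fetch_header_rows VALUES ('company1', 1, 128.80, 'test data1')")

cur = conn.execute("SELECT * FROM fetch_header_rows")
out = io.StringIO()
writer = csv.writer(out)
# cursor.description carries one entry per column; its first element
# is the column name, so this writes the header as row one.
writer.writerow(col[0] for col in cur.description)
writer.writerows(cur)

print(out.getvalue().splitlines()[0])
```

If the header must come out of the ref cursor itself rather than the client, the usual trick is a UNION ALL of a literal header row with the data cast to VARCHAR2, though that sacrifices the native column datatypes.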
Categories: DBA Blogs

From DBA to DBI

Pakistan's First Oracle Blog - Mon, 2020-10-19 18:48

Recently Pradeep Parmer at AWS had a blog post about transitioning from DBA to DBI, in other words from database administrator to database innovator. I wonder what exactly the difference is here, as any DBA worth his or her salt is already an innovator.

Administering a database is not about sleepily issuing backup commands or, in the case of cloud-managed databases, clicking here and there. Database administration has evolved over time just like other IT roles and is totally different from what it was a few years back.

Regardless of the database engine you use, you have to have a breadth of knowledge about operating systems, networking, automation, and scripting, on top of database concepts. With managed database services in the cloud, like AWS RDS, GCP Cloud SQL, or BigQuery, many of the old skills have become outdated, but new ones have sprung up. That has always been the case in the DBA field.

Taking the example of Oracle: what we were doing in Oracle 8i became obsolete in Oracle 11g, and Oracle 19c is a totally different beast. Oracle Exadata, RAC, the various types of DR services, and Fusion Middleware are in themselves a new ballgame with every version.

Even with managed database services, the role of the DBA has become more involved in terms of migrations, and then in optimizing what runs within the databases to stop database costs from going through the roof.

So the point here is that DBAs have always been innovators. They have always been trying to find new ways to automate the management and healing of their databases. They are always under pressure to eke out the last possible optimization from their systems, and that is still the case even when those databases are supposedly managed by cloud providers.

With purpose-built databases, where a different database technology addresses each use case, the role of the DBA has only become more relevant, as DBAs have to evolve to address graph, in-memory, and other cool nifty types of databases.

We have always been innovators my friend. 

Categories: DBA Blogs

DBMS_ASSERT returning the ORA-44002: invalid object name exception for existing database links and directories

Tom Kyte - Mon, 2020-10-19 12:46
Hi, In my procedure I'm trying to check whether a database link and a directory exist. If they don't I wanted to display a nice message about them needing to be created etc. I thought of using dbms_assert.sql_object_name, this seems to do the trick for tables, views, functions but not for database links or directories. Here is my test case (not my actual procedure, but I have the same issue) The table, view, function return the name / don't error when running the dbms_assert part. The database link, directory returns "ORA-44002: invalid object name" -- table <code>create table test_tbl (x number); select table_name from user_tables where table_name = 'TEST_TBL'; select sys.dbms_assert.sql_object_name('TEST_TBL') from dual;</code> -- view <code>create view test_vw as select * from test_tbl; select view_name from user_views where view_name = 'TEST_VW'; select sys.dbms_assert.sql_object_name('TEST_VW') from dual;</code> -- function <code>create or replace function test_f return date is dt date; begin dt := sysdate; return dt; end; select object_name from user_objects where object_name = 'TEST_F'; select sys.dbms_assert.sql_object_name('TEST_F') from dual</code>; -- database link <code>create database link test_link connect to user123 identified by user123 using 'dwh'; select db_link from user_db_links where db_link = 'TEST_LINK'; select sysdate from dual@test_link; select sys.dbms_assert.sql_object_name('test_link') from dual; select sys.dbms_assert.sql_object_name('TEST_LINK') from dual;</code> -- directory <code>create directory test_dir as '/apps1/oradata/big_dump'; select directory_name from all_directories where directory_name = 'TEST_DIR'; select sys.dbms_assert.sql_object_name('test_dir') from dual; select sys.dbms_assert.sql_object_name('TEST_DIR') from dual;</code> Thanks Nick
Categories: DBA Blogs

Oracle TNS poison attack vulnerability

Tom Kyte - Mon, 2020-10-19 12:46
Hi Team We are running non-RAC Oracle and the TNS poison attack vulnerability (Oracle Security Alert for CVE-2012-1675 - https://www.oracle.com/security-alerts/alert-cve-2012-1675.html) appears to affect this version of Oracle. What we have done: * Have applied the latest version of Oracle critical patches update (July/2015) on What we are expecting: * We hope applying the latest CPU (July/2015) for can fix it and no further actions are required. My question is: * Do we still need to follow the steps in <Using Class of Secure Transport (COST) to Restrict Instance Registration (Doc ID 1453883.1)> mentioned in the Oracle Security Alert for CVE-2012-1675 to fix this issue? Best Regards
Categories: DBA Blogs

Oracle Processes consuming 100% CPU

Tom Kyte - Mon, 2020-10-19 12:46
The server machine on which the DB resides has 32 CPUs. (2 sockets * 8 cores per socket * 2 threads per core) I notice many oracle processes (both non-background and background) consuming high (sometimes 100%) of the CPU. Now, this CPU % is only for 1 CPU out of the total 32 in the server. And once a process hits 100% of CPU consumption, the process cannot take resources from other CPUs. (please correct me if I am wrong) Right now, THREADED_EXECUTION parameter is set to FALSE. I was thinking, if we can enable multi threading in the Database, then, may be the process that is hitting 100% and is looking for more CPU can take it from the other CPUs. Is this a good idea? If yes, then how should we enable multi threading in the DB and what is the possible impact on the DB? Please Note: This is a single instance DB (non-RAC) OS: SUSE Linux Enterprise Server 12 Thanks in Advance, Abhishek, Kolkata, India
Categories: DBA Blogs

SQL Server: Generating SQL script using PowerShell and Template file

Yann Neuhaus - Sun, 2020-10-18 11:46

In this blog post, I will share with you a small PowerShell script I wrote recently.

I have noticed that my customer performs a very repetitive and time-consuming task almost every day.
New columns are added to tables on their business-critical database, and they need to maintain SQL script files with all the ALTER TABLE statements for each new column.

For every new column, my customer copy-pastes the following SQL script and then changes parts of it.

/* New column
 *            Schema:       Order
 *            Table:        TestTable2
 *            Column:       ColumnName1
 * History
 *            Date:         18/10/2020
 *            User:         Steven Naudet
 */

       SELECT *
       FROM sys.tables AS t
       JOIN sys.[columns] AS c ON t.[object_id] = c.[object_id]
       JOIN sys.schemas AS s ON s.[schema_id] = t.[schema_id]
       WHERE 1=1
       AND s.name = 'Order'
       AND t.name = 'TestTable2'
       AND c.name = 'ColumnName1'

       PRINT 'Altering table Order.TestTable2 adding column [ColumnName1]' ;

       ALTER TABLE [Order].TestTable2
       ADD ColumnName1 NOT NULL;

/* End New column ColumnName1 */

The highlighted lines are manually edited by my customer every time there’s a new column to be added to the database, which can occur 20 times per week.
I decided to write a PowerShell function to do this task faster so my customer can work on more interesting things instead.

The idea is to use a template file for the SQL script, similar to the SSMS templates.
The PowerShell script modifies the template and sends the resulting SQL to the clipboard using Set-Clipboard.
Consecutive calls to the function append the SQL commands after one another in the clipboard. This way my customer can just paste the generated SQL script into his SQL source control tool.
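At its core this is plain token substitution against the template file, plus a clipboard append. The substitution step can be sketched as follows in Python for illustration (the template string and the "int" type here are assumptions reconstructed from the tokens used in the PowerShell script below, and the clipboard is replaced by a returned string):

```python
# Stand-in for the template file contents; tokens mirror the ones the
# PowerShell script replaces (<Date> and <UserName> omitted for brevity).
template = (
    "PRINT 'Altering table <SchemaName>.<TableName> adding column [<ColumnName>]';\n"
    "ALTER TABLE [<SchemaName>].<TableName>\n"
    "ADD <ColumnName> <ColumnDefinition>;\n"
)

def render(schema, table, column, col_type, not_null=False, default=None):
    # Build the column definition the same way the PowerShell function does:
    # type, optional NOT, always NULL, optional DEFAULT clause.
    col_def = col_type
    if not_null:
        col_def += " NOT"
    col_def += " NULL"
    if default is not None:
        col_def += f" DEFAULT {default}"
    sql = template
    for token, value in [("<SchemaName>", schema), ("<TableName>", table),
                         ("<ColumnName>", column), ("<ColumnDefinition>", col_def)]:
        sql = sql.replace(token, value)
    return sql

print(render("Order", "TestTable2", "ColumnName1", "int", not_null=True))
```

The append-to-clipboard behavior then just accumulates successive render outputs, which is what lets several generated statements be pasted in one go.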

You can see the script in action with the GIF below.

PowerShell Script in action GIF

Here is the script.

function New-AddColumnSQL {

    param (
        [Parameter(Mandatory=$true)][string] $Schema,
        [Parameter(Mandatory=$true)][string] $Table,
        [Parameter(Mandatory=$true)][string] $Column,
        [Parameter(Mandatory=$true)][string] $Type,
        [Parameter(Mandatory=$false)][string] $defaultValue,
        [Parameter(Mandatory=$false)][switch] $isNotNull = $false,
        [Parameter(Mandatory=$false)][string] $User = 'Steven NAUDET'
    )

    $TemplateFile = 'Z:\scripts\TemplateAddColumn.sql'

    $Clipboard = Get-Clipboard
    # Clear Clipboard if first call to the function
    if ($Clipboard -like '*Altering table*') {
        $returnMessage = 'SQL Script appended to Clipboard'
    } else {
        $returnMessage = 'SQL Script pasted to Clipboard'
        Set-Clipboard -Value $null
    }

    $ColumnDef = $Type

    # NOT NULL
    if($isNotNull) { 
        $ColumnDef = $ColumnDef + ' NOT'
    }
    $ColumnDef = $ColumnDef + ' NULL'

    # DEFAULT value
    if($defaultValue) { 
        $ColumnDef = $ColumnDef + ' DEFAULT ' + $defaultValue
    }

    $SQLscript = Get-Item -Path $TemplateFile | Get-Content
    $SQLscript = $SQLscript.Replace('<Date>', (Get-Date -UFormat "%d/%m/%Y"))
    $SQLscript = $SQLscript.Replace('<SchemaName>', $Schema)
    $SQLscript = $SQLscript.Replace('<TableName>', $Table)
    $SQLscript = $SQLscript.Replace('<ColumnName>', $Column)
    $SQLscript = $SQLscript.Replace('<UserName>', $User)
    $SQLscript = $SQLscript.Replace('<ColumnDefinition>', $ColumnDef)

    Set-Clipboard $SQLscript -Append

    return $returnMessage
}
There's probably a lot of room for improvement in this code, but the goal of this blog post is to show you how handy PowerShell can be. It can help you save a lot of time.
It took me about an hour to write this code, and I'm sure my customer will save more than that every month.

The article SQL Server: Generating SQL script using PowerShell and Template file appeared first on Blog dbi services.

Oracle Database Appliance vs Oracle Cloud Infrastructure

Yann Neuhaus - Sun, 2020-10-18 05:33

Oracle Database Appliances are very popular these days, and not only among new customers for this kind of engineered system. Almost all customers already using old-generation ODAs are renewing their infrastructure by choosing ODAs again, meaning that the solution is good enough and probably better than anything else. But now public clouds are a real alternative to on-premise servers, and Oracle Cloud Infrastructure is a solid competitor to the Amazon and Azure public clouds. So what's the best solution for your databases, ODA or OCI? Let's do the match.

Round 1 – Cost

Yes, cost is important. You will need to buy ODAs and you will need a budget for that. Nothing new regarding this platform: it requires an investment. ODA is cheaper since light models are available, but if you need a significant amount of storage, it comes at a cost. Fortunately, the cost is quite similar to another x86 platform, and the ODA doesn't have the hidden costs of additional work spent troubleshooting compatibility issues.
Cost works differently on OCI. Basically, you pay for servers, storage, and services on a monthly basis. No initial investment is needed, and that is one of the advantages of OCI. However, don't expect the TCO to be lower than acquiring your own hardware. I do not mean that cloud solutions are expensive, but the cost will be quite similar to an on-premise solution after some years. Going to the cloud is mainly about changing your mind on what an infrastructure is. Is it servers you manage on your own, or is it a platform for running your information system?
There is no winner in this round; you will only know after several years which solution would have been less expensive.

Winner: none

Round 2 – Deployment speed

ODA allows fast deployment of a new database infrastructure. Actually, it's the best on-premise solution on that point, and a serious advantage over DIY platforms. Being able to create your first database the same day you open the box is quite nice. But OCI is even better, because at this very moment your future servers are already available, Terabytes of storage are waiting for you, and databases are almost there, a few clicks away. If you're looking for fast deployment, OCI is an easy winner.

Winner: OCI

Round 3 – Security

Everybody is talking about security. Is my database safer in the cloud than in my own datacenter? Actually, it's quite hard to tell. For sure, OCI is a public cloud, meaning that your database can be reached from virtually everywhere. But you will probably build strong security rules to protect your cloud infrastructure. You will use an IPSec VPN between OCI and your on-premise site, or a FastConnect channel to dedicate a link between your on-premise equipment and OCI, so that data does not transit through the internet. Putting your database in the cloud is no less secure than giving remote access to your infrastructure to your employees or providers. Furthermore, databases in OCI are stored encrypted, even with Standard Edition and without the need for the Advanced Security option.
On ODA, your database is in your own network, not on something public, and therefore less visible. This is good, but again, only if you have good security rules inside your company.

Winner: none

Round 4 – Performance

ODA is a strong performer, especially the X8-2M model. With up to 75TB of NVMe SSD, it's quite tough to achieve better performance with anything else. Yes, you could grab a few MB/s more or a few ms less with some other solutions, but do you really think your users will see the difference? No. And what about OCI? OCI relies on SSD storage only; that's a very good start. And do they offer NVMe? Yes, for sure. Bare metal servers (BM Dense I/O) provide up to 51TB of raw capacity based on 8 NVMe drives. And something tells me that these servers are actually nearly the same as ODA X7-2Ms. So expect similar performance on both solutions.

Winner: none

Round 5 – License management

No doubt that on-demand capacity for Enterprise licenses is one of the key features of the ODA. You can start with only 1 Enterprise license on each ODA and increase the number of licenses when you need more resources: a kind of fine tuning for the licenses.

On OCI, you can choose to bring your own license, one you bought a long time ago, and keep your investment for later if for some reason you would like to go back to an on-premise infrastructure. Or you can choose to include the license fees in the monthly fees. With the first option, you manage your licenses as you always did and should be careful when you increase the cloud resources dedicated to your databases (mainly the oCPUs). With the second option, you don't have to manage your licenses anymore: you don't need to buy them, pay the yearly support, or review them regularly, because everything is included with your OCI database services. It's simply a great feature.

Winner: OCI

Round 6 – Simplicity

ODA and OCI share the same goal: simplify your database infrastructure. ODA simplifies by providing the best automation available for deploying a complex Oracle stack, and when you come from an existing on-premise infrastructure, migration to ODA will be quite easy. OCI looks even simpler, but while you will not have to work on the servers, you'll have to think about how to implement your infrastructure. Which subnet for my databases? Should I also move my application servers? What kind of network connectivity with my on-premise environment? Which database service fits my needs?

If you're starting from scratch with Oracle databases, it's probably simpler to go directly to OCI. If you're migrating from an existing on-premise environment, it's probably simpler to replace your existing servers with ODAs. No winner here.

Winner: none

Round 7 – Control

For some customers, being able to control their infrastructure is vital. In public clouds, you will not have control over everything, because someone else does part of the maintenance job, mostly as automated tasks. And for other customers, that is precisely something they don't want to manage. On ODA, you control everything on your server: to start with, it's not even mandatory to connect it to the internet. Updates on ODA are not automated and are applied manually through good old zip files, and in case of serious problems, the ODA is fast to redeploy. So if you need total control over your infrastructure, the ODA is the best solution for you.

OCI is only a good solution if you already planned to lose some control, for obvious workload reasons.

Winner: ODA

Round 8 – Resilience

Disaster recovery solutions were not so common 10 years ago. People relied on tape backups, were confident in this solution, and believed they would be able to restore the tape "somewhere", without asking themselves where "somewhere" actually was. At best, the old servers were kept for disaster recovery usage, in the same rack.
This has definitely changed, and disaster recovery is now part of every new infrastructure design. On the software side of the database, the solutions are mature and highly reliable (with Data Guard or Dbvisit Standby). The most complex part is designing the split across multiple datacenters (two, most of the time); implementing it cleverly, avoiding any single point of failure that could wipe out the efforts made to achieve high resiliency, is a tough challenge. An ODA is a server like any other, and you will have to do the same amount of design work to build a highly resilient infrastructure.

Cloud providers have been thinking about disaster recovery since the very beginning. Their datacenters are spread all around the world, and each one has separate availability domains (isolated building blocks), allowing multiple levels of disaster recovery scenarios. Furthermore, storage and backups naturally embed this high resilience. And as everyone uses the same mechanisms, you can trust OCI regarding resilience.

As a conclusion, it must be said: it's nearly impossible to reach OCI's level of resilience with an on-premise ODA infrastructure.

Winner: OCI

What about the other solutions?

For sure, it is still possible to build your own database infrastructure with classic servers. But do you really have time for that?
Exadata is also a nice solution if you need such a beast: multi-TB databases with a high number of transactions, or the fastest BI platform. And it can now bring you the advantages of both OCI and an appliance with the Cloud@Customer mode: Oracle puts the server in your datacenter, and you pay for it monthly as if you were using it in the cloud.
A hybrid solution mixing ODA and OCI could also fit your needs, but you'll have to manage both technologies, and that's not so smart. Unless you need this kind of solution for the transition to the cloud…


Is ODA better than OCI? Is OCI better than ODA? Both are smart choices, and neither will disappoint you if you manage to leverage the advantages and avoid the constraints of each one. On OCI, you will benefit from immediate availability of resources, fast provisioning, flexibility, and no-brainer license management. With ODA, you will keep your database infrastructure at home, with strong performance and full control over your servers, including over cost. Choosing between these two solutions is only a matter of strategy, and that does not concern only the DBA.

This article, Oracle Database Appliance vs Oracle Cloud Infrastructure, appeared first on Blog dbi services.

Followup with Database runInstaller applyRU Failed Me

Michael Dinh - Sat, 2020-10-17 22:57

This is a followup to Database runInstaller applyRU Failed Me.

I finally figured out my error, which I should have seen from the beginning; better error reporting would have helped.

Can you guess what’s wrong?

unzip -qod $ORACLE_HOME /vagrant_software/p6880880_190000_LINUX.zip; echo $?
unzip -qod $ORACLE_HOME /vagrant_software/p6880880_190000_Linux-x86-64.zip; echo $?

The zip for the wrong platform was used for OPatch.
Here is what should have been deployed:

[oracle@ol7-112-dg1 ~]$ unzip -qod $ORACLE_HOME /vagrant_software/p6880880_190000_Linux-x86-64.zip; echo $?
[oracle@ol7-112-dg1 ~]$
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
29585399;OCW RELEASE UPDATE (29585399)
29517242;Database Release Update : (29517242)
OPatch succeeded.
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version:
OPatch succeeded.
[oracle@ol7-112-dg1 ~]$
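The two zip names differ only in the platform token, which makes the mix-up easy to make and hard to see. A small guard before unzipping can catch it; this is a hypothetical sketch (the platform-to-file mapping is an assumption and should be checked against the zips you actually downloaded from My Oracle Support for patch 6880880):

```shell
#!/bin/sh
# Hypothetical guard: derive the expected p6880880 zip name from the host
# platform before unzipping, so a zip built for another platform (like the
# LINUX.zip used by mistake above) is rejected early. Extend the mapping
# with the file names from your own My Oracle Support downloads.
opatch_zip_for_platform() {
  case "$1" in
    "Linux x86_64") echo "p6880880_190000_Linux-x86-64.zip" ;;
    *) echo "no known OPatch zip for platform: $1" >&2; return 1 ;;
  esac
}

# Intended usage (commented out, needs a real Oracle home):
# zip="$(opatch_zip_for_platform "$(uname -sm)")" &&
#   unzip -qod "$ORACLE_HOME" "/vagrant_software/$zip"
```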

[oracle@ol7-112-dg1 ~]$ echo $ORACLE_HOME
[oracle@ol7-112-dg1 ~]$

### This failed:
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
/u01/app/oracle/product/ line 839: [: too many arguments
/u01/app/oracle/product/ line 839: [: too many arguments
Java (1.7) could not be located. OPatch cannot proceed!
OPatch returns with error code = 1
[oracle@ol7-112-dg1 ~]$

### This works but why?
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch version -jdk $ORACLE_HOME/jdk
OPatch Version:
OPatch succeeded.
[oracle@ol7-112-dg1 ~]$

### Here is the java version; notice it's 64-bit
[oracle@ol7-112-dg1 bin]$ $ORACLE_HOME/jdk/bin/java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
[oracle@ol7-112-dg1 bin]$
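A plausible explanation, as an assumption rather than something verified against OPatch internals: the OPatch unzipped from the wrong platform's zip cannot locate a usable Java on its own, while `-jdk $ORACLE_HOME/jdk` points it at the 64-bit Linux JDK that ships with the database home. One quick way to spot that kind of platform mismatch is to compare what `file` reports for a binary against `uname -m`; the `elf_matches` helper below is a hypothetical sketch of that comparison:

```shell
#!/bin/sh
# Hypothetical check: does a `file` description of an ELF binary mention the
# machine name reported by uname -m? In practice you would call it as:
#   elf_matches "$(file -b "$ORACLE_HOME/jdk/bin/java")" "$(uname -m)"
elf_matches() {
  desc="$1"
  machine="$2"
  # `file` prints "x86-64" where uname prints "x86_64"
  case "$machine" in x86_64) machine="x86-64" ;; esac
  case "$desc" in
    *"$machine"*) return 0 ;;
    *)            return 1 ;;
  esac
}
```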

$ORACLE_HOME/runInstaller -applyRU /home/oracle/patch/31305339 should now work.
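Since this whole failure started with the wrong OPatch, one last sanity check before rerunning -applyRU is to confirm that the freshly deployed OPatch meets the minimum version stated in the RU's README. The `version_ge` helper below is a generic sketch for comparing dot-separated versions; the version numbers in the comment are purely illustrative, not taken from any specific README:

```shell
#!/bin/sh
# Generic dot-separated version comparison: version_ge A B succeeds when
# A >= B. Use it against the minimum OPatch version from the RU README,
# for example (illustrative version number):
#   version_ge "$("$ORACLE_HOME"/OPatch/opatch version | awk '/Version/ {print $3}')" "12.2.0.1.17"
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n -k5,5n | head -n1)" = "$2" ]
}
```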

