Feed aggregator

PRAGMA SERIALLY_REUSABLE implications in a callback service

Tom Kyte - Wed, 2020-08-12 13:06
Scenario: Oracle Recipe Tool / Microservices. JNDI: jdbc/SOAXAOPS. PL/SQL: schema.pkg1.procedure. This is the entry point to an on-premise DB package. It can be called from cloud and non-cloud services. Whenever I compile schema.pkg1, the callback gives the errors below:

ORA-04065: not executed, altered or dropped package body "schema.pkg1"
ORA-06508: PL/SQL: could not find program unit being called: "schema.pkg1"

These errors can be bypassed if a DDL is issued in this schema or if existing stale connections are explicitly killed. So I added PRAGMA SERIALLY_REUSABLE; to schema.pkg1 based on the URLs below:

https://stackoverflow.com/questions/1761595/frequent-error-in-oracle-ora-04068-existing-state-of-packages-has-been-discarde
https://docs.oracle.com/en/cloud/paas/integration-cloud/database-adapter/resolve-error-ora-04068-existing-state-packages-has-been-discarded.html
https://docs.oracle.com/cd/E11882_01/appdev.112/e25519/packages.htm#LNPLS99977

In addition, the integrations team made these modifications under Connection Properties:

Test Connections on Reserve: Yes
Test Table Name: SQL begin dbms_session.modify_package_state(dbms_session.reinitialize); end;
Seconds to Trust an Idle Pool Connection: 0

This solved the initial ORA-04065 / ORA-06508 errors. However, it gave a one-off error:

ORA-06508: PL/SQL: could not find program unit being called

This occurred at the line in schema.pkg1 that calls schema.pkg2, and schema.pkg2 does not have PRAGMA SERIALLY_REUSABLE. In a set of 2 requests, the first one hit this error and subsequent requests have been fine so far. (This is a UAT environment.)

My questions are:
1. When I recompile the package (hot patch) in UAT, can this happen again?
2. Do I need to add this pragma to all nested packages?
3. What are the pros and cons of using this pragma, apart from the trigger and SQL-prompt restrictions mentioned in the Oracle documentation?
4. Is there an alternate way to deal with these errors for callback services?
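For reference, the pragma has to appear in both the package specification and the body, and any stateful package it calls may need the same treatment. A minimal sketch (the package and procedure names are made up, not the actual schema.pkg1):

<code>
CREATE OR REPLACE PACKAGE pkg_callback AS
  PRAGMA SERIALLY_REUSABLE;
  PROCEDURE run_callback;
END pkg_callback;
/
CREATE OR REPLACE PACKAGE BODY pkg_callback AS
  PRAGMA SERIALLY_REUSABLE;
  g_counter NUMBER := 0;  -- package state: re-initialized on every server call
  PROCEDURE run_callback IS
  BEGIN
    -- always becomes 1: serially reusable state does not survive the call
    g_counter := g_counter + 1;
  END run_callback;
END pkg_callback;
/
</code>

Note that serially reusable packages cannot be called from SQL statements or triggers, which is one of the restrictions the Oracle documentation mentions.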
Categories: DBA Blogs

Change local_temp_tablespace to shared TEMP

Tom Kyte - Wed, 2020-08-12 13:06
Hi, I am trying to find the downside of setting local_temp_tablespace to the TEMP tablespace, which is a shared temp. The reason is a bug: if local_temp_tablespace is NULL and dba_users.spare9 is NULL, then Oracle assigns the SYSTEM tablespace as local_temp_tablespace when I issue an alter user command. For example, if a user AGUPTA has spare9 as NULL in DBA_USERS and local_temp_tablespace is currently NULL, and I issue the command to change the password: <code>alter user AGUPTA identified by xpS2Z^4%g%0h;</code> then the local_temp_tablespace for AGUPTA changes to SYSTEM. This is not good. Mike Dietrich has a blog post about it. So, we did a small test and found that if we switch all users who have NULL for local_temp_tablespace to use the TEMP tablespace, then the issue does not appear: the local_temp_tablespace stays at TEMP when changing the password. So, my question is: is there a downside to changing every user's local_temp_tablespace to shared TEMP? Thanks
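For what it's worth, the workaround being tested can be expressed directly; a hedged sketch (the user name comes from the question, everything else is assumed):

<code>
-- Point a user at the shared TEMP tablespace explicitly (18c+ syntax),
-- so a later ALTER USER can no longer flip it to SYSTEM:
alter user AGUPTA local temporary tablespace TEMP;

-- Find users that would still be exposed:
select username, local_temp_tablespace
from   dba_users
where  local_temp_tablespace is null
or     local_temp_tablespace = 'SYSTEM';
</code>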
Categories: DBA Blogs

ORA_ROWSCN

Hemant K Chitale - Wed, 2020-08-12 05:28

 As a follow-up to my previous post on SCN_TO_TIMESTAMP, here is a demo of the ORA_ROWSCN pseudocolumn.

I have two different sessions and two different tables where I insert one row each.  I then delay the COMMIT in each session.


This is the first session :

oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 12 18:02:47 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Wed Aug 12 2020 18:02:31 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

18:02:47 SQL> create table table_a
18:02:51 2 (table_name varchar2(32), insert_scn number, insert_timestamp timestamp);

Table created.

18:03:10 SQL> insert into table_a
18:03:14 2 select 'TABLE_A', current_scn, systimestamp from v$database
18:03:32 3 /

1 row created.

18:03:33 SQL> select * from table_a
18:03:37 2 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP
-------------------------------- ---------- ---------------------------------------------------------------------------
TABLE_A 6580147 12-AUG-20 06.03.33.263180 PM

18:03:38 SQL>
18:05:16 SQL> !sleep 120

18:07:21 SQL>
18:07:26 SQL> commit;

Commit complete.

18:07:28 SQL>


And this is the second session :

oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 12 18:04:27 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Wed Aug 12 2020 18:03:32 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

18:04:27 SQL> create table table_b
18:04:36 2 (table_name varchar2(32), insert_scn number, insert_timestamp timestamp);

Table created.

18:04:46 SQL> insert into table_b
18:04:51 2 select 'TABLE_B',current_scn, systimestamp from v$database;

1 row created.

18:05:03 SQL> select * from table_b
18:05:09 2 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP
-------------------------------- ---------- ---------------------------------------------------------------------------
TABLE_B 6581390 12-AUG-20 06.05.03.813011 PM

18:05:10 SQL>
18:05:24 SQL> !sleep 30

18:06:00 SQL>
18:06:07 SQL>
18:06:13 SQL> commit;

Commit complete.

18:06:16 SQL>


So, the second session, against TABLE_B, did the INSERT after the first session but issued a COMMIT before it.  (TABLE_B has a higher INSERT_SCN and INSERT_TIMESTAMP than TABLE_A.)

Let's see what ORA_ROWSCN shows :
SQL> select table_name, insert_scn, insert_timestamp, scn_to_timestamp(ora_rowscn)
2 from table_a
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP
-------------------------------- ---------- ---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ORA_ROWSCN)
---------------------------------------------------------------------------
TABLE_A 6580147 12-AUG-20 06.03.33.263180 PM
12-AUG-20 06.07.26.000000000 PM


SQL> select table_name, insert_scn, insert_timestamp, scn_to_timestamp(ora_rowscn)
2 from table_b
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP
-------------------------------- ---------- ---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ORA_ROWSCN)
---------------------------------------------------------------------------
TABLE_B 6581390 12-AUG-20 06.05.03.813011 PM
12-AUG-20 06.06.14.000000000 PM


SQL>


The actual INSERT into TABLE_B was after that in TABLE_A  (higher INSERT_SCN and INSERT_TIMESTAMP)  but SCN_TO_TIMESTAMP of the ORA_ROWSCN implies that the row in TABLE_B is earlier than that in TABLE_A !

SQL> select table_name, insert_scn, insert_timestamp, ora_rowscn
2 from table_a
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP ORA_ROWSCN
-------------------------------- ---------- --------------------------------------------------------------------------- ----------
TABLE_A 6580147 12-AUG-20 06.03.33.263180 PM 6586905

SQL> select table_name, insert_scn, insert_timestamp, ora_rowscn
2 from table_b
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP ORA_ROWSCN
-------------------------------- ---------- --------------------------------------------------------------------------- ----------
TABLE_B 6581390 12-AUG-20 06.05.03.813011 PM 6584680

SQL>


The actual SCN recorded is that of the COMMIT time, *not* the INSERT time.

A database session gets an SCN for the Transaction it does when it COMMITs.
So, even though the INSERT into TABLE_A was earlier, it has a higher SCN simply because the COMMIT was issued later.


Does it matter if I use the ROWDEPENDENCIES extended attribute for the table? Without ROWDEPENDENCIES, ORA_ROWSCN actually uses the SCN in the block header -- irrespective of when each row in the block was inserted / updated.
In my scenario, I had a new table with only 1 row, so there would be no difference.

Nevertheless, I repeat the experiment with ROWDEPENDENCIES.


oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 12 18:17:57 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Wed Aug 12 2020 18:17:47 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

18:17:57 SQL> create table table_a
18:18:10 2
18:18:10 SQL> create table table_a
18:18:14 2 (table_name varchar2(32), insert_scn number, insert_timestamp timestamp) rowdependencies;

Table created.

18:18:31 SQL> insert into table_a
18:18:40 2 select 'TABLE_A', current_scn, systimestamp from v$database
18:18:50 3 /

1 row created.

18:18:51 SQL>
18:20:11 SQL> !sleep 60

18:21:13 SQL>
18:21:15 SQL> commit;

Commit complete.

18:21:16 SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>



and


oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 12 18:19:30 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Wed Aug 12 2020 18:19:04 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

18:19:30 SQL> create table table_b
18:19:33 2 (table_name varchar2(32), insert_scn number, insert_timestamp timestamp) rowdependencies;

Table created.

18:19:40 SQL> insert into table_b
18:19:52 2 select 'TABLE_B',current_scn, systimestamp from v$database;

1 row created.

18:20:00 SQL>
18:20:16 SQL> !sleep 30

18:20:49 SQL>
18:20:51 SQL> commit;

Commit complete.

18:20:52 SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>



resulting in :


SQL> select table_name, insert_scn, insert_timestamp, ora_rowscn
2 from table_a
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP ORA_ROWSCN
-------------------------------- ---------- --------------------------------------------------------------------------- ----------
TABLE_A 6612380 12-AUG-20 06.18.51.562927 PM 6618886

SQL> select table_name, insert_scn, insert_timestamp, ora_rowscn
2 from table_b
3 /

TABLE_NAME INSERT_SCN INSERT_TIMESTAMP ORA_ROWSCN
-------------------------------- ---------- --------------------------------------------------------------------------- ----------
TABLE_B 6614592 12-AUG-20 06.20.00.141122 PM 6617807

SQL>


Bottom line : A row that is inserted (or updated) earlier can still have a higher SCN (and, therefore, show a higher SCN_TO_TIMESTAMP)  simply because the user or the application program issued the COMMIT later.   Even an application or batch job may run multiple queries or DMLs before finally issuing a COMMIT.


Categories: DBA Blogs

Make sure you use base64 encoding on a Key Vault Secret for an sFTP Azure Data Factory Linked Service Connection

Jeff Moss - Wed, 2020-08-12 04:58

A quick post on setting up an sFTP Linked Service Connection in Azure Data Factory such that it uses a Key Vault for the SSH Key.

A friend of mine had tried setting this up but was getting the following error when testing the new Linked Service:

The Linked Service was using a Key Vault to obtain the SSH Key to be used in the connection. The SSH Key had been uploaded as a Secret to the Key Vault using code similar to the following:

az keyvault secret set --name sshkey --vault-name akv-dev --file test.ssh --description "Test SSH Key"

After reading through the documentation on the az keyvault secret set call I noticed this:

So, the default is not base64 but utf-8.

We modified the az call to something like this:

az keyvault secret set --name sshkey --vault-name akv-dev --file test.ssh --description "Test SSH Key" --encoding base64

i.e. with the addition of the --encoding base64 part, and then it worked fine.

how to import sequence

Tom Kyte - Tue, 2020-08-11 18:46
The export is done at table level using 9.2.0.1: exp user/password tables=emp,foo file=test.dmp. During import, the sequences never get imported; this is the default behavior of Oracle. I would appreciate your advice on the following: 1. How do I import sequences with a table-level export? 2. How do I get the same sequence values as at the time of export? Thanks
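As a possible workaround (not from the question itself): since a table-level exp skips sequences, the sequences can be re-created separately. A hedged sketch that generates CREATE SEQUENCE statements resuming from each sequence's value at export time, run in the source schema:

<code>
select 'CREATE SEQUENCE ' || sequence_name ||
       ' START WITH '     || last_number   ||
       ' INCREMENT BY '   || increment_by  ||
       ' CACHE '          || cache_size    || ';' as ddl
from   user_sequences;
</code>

Note that LAST_NUMBER is only as current as the sequence cache allows, so the generated START WITH may be slightly ahead of the last value actually used.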
Categories: DBA Blogs

Update in Oracle DB

Tom Kyte - Tue, 2020-08-11 18:46
Hi dear AskTOM team, have a great day, everyone. I have some confusion about the UPDATE statement in Oracle DB 12cR2. Let's assume we have 3 users: U1, U2, U3. U1 has a table called TEST_1, and U2 and U3 both have the UPDATE privilege on that table. My question is: <b>If U2 and U3 try to update the same rows in that particular table at the same time, what will happen? How does Oracle control such processes?</b> Thanks beforehand!
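The behaviour being asked about can be observed with two sessions; a hedged sketch (column names are assumed):

<code>
-- Session U2: locks the row; the lock is held until COMMIT or ROLLBACK.
update U1.TEST_1 set status = 'A' where id = 1;

-- Session U3: the same-row update now simply waits on the row lock ...
update U1.TEST_1 set status = 'B' where id = 1;

-- ... and resumes as soon as session U2 issues:
commit;
-- U3's update is then applied on top of U2's committed version of the row.
</code>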
Categories: DBA Blogs

Ranking based on time break

Tom Kyte - Tue, 2020-08-11 18:46
I want to rank truck exits based on breaks of more than 1 hour, e.g.:

<code>
TRUCK  EXIT
T1     10:00 PM
T2     10:05 PM
T3     12:00 PM
T4     12:05 PM
T5     12:10 PM
T6     12:20 PM
</code>

The result should look like the following, splitting groups on gaps of more than 1 hour:

<code>
10:00  10:05  1
12:00  12:20  2
</code>
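One hedged way to sketch this (the table and column names are assumed, and EXIT_TIME is assumed to be a TIMESTAMP): flag each row whose gap from the previous exit exceeds 1 hour, turn the running sum of those flags into a group number, then aggregate per group.

<code>
select min(exit_time) as first_exit,
       max(exit_time) as last_exit,
       grp            as ranking
from (
  select exit_time,
         sum(break_flag) over (order by exit_time) as grp
  from (
    select exit_time,
           case
             when lag(exit_time) over (order by exit_time) is null
               or exit_time - lag(exit_time) over (order by exit_time)
                  > interval '1' hour
             then 1 else 0
           end as break_flag
    from truck_exits
  )
)
group by grp
order by grp;
</code>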
Categories: DBA Blogs

Converting XML to JSON using Apex

Tom Kyte - Tue, 2020-08-11 18:46
Hello Everyone, there is a table (xxln_vs_publish_stg) with an XMLTYPE column (xml_data) which stores XML data. I have a requirement to convert the XML data to JSON data, and I am using apex_json.write for the conversion. While executing the logic below, I get the error: ORA-20987: APEX - JSON.WRITER.NOT_OPEN - Contact your application administrator. Can you please help me see what I am doing wrong? <code>
DECLARE
  l_xml          sys.xmltype;
  l_amount       BINARY_INTEGER := 32000;
  l_buffer       RAW(32000);
  l_pos          INTEGER := 1;
  l_stage        NUMBER;
  content        CLOB;
  content_blob   BLOB;
  content_length NUMBER;
BEGIN
  SELECT xml_data
  INTO   l_xml
  FROM   xxln_vs_publish_stg
  WHERE  xml_data IS NOT NULL
  AND    ROWNUM < 2;

  content := xmltype.getclobval(l_xml);
  xxln.convert_clob_to_blob(content, content_blob);
  content_length := dbms_lob.getlength(content_blob);
  dbms_output.put_line(content_length);

  apex_json.initialize_clob_output;

  IF dbms_lob.getlength(content_blob) < 32000 THEN
    apex_json.write(content);
  ELSE
    WHILE l_pos < content_length  -- DBMS_LOB.GETLENGTH(v_output_file_blob)
    LOOP
      dbms_lob.read(content_blob, l_amount, l_pos, l_buffer);
      apex_json.write(content);
      l_pos := l_pos + l_amount;
    END LOOP;
  END IF;

  dbms_output.put_line(apex_json.get_clob_output);
  apex_json.free_output;
END;
</code>
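One possible direction (hedged; based on the APEX_JSON API, with table and column names taken from the question): APEX_JSON raises WRITER.NOT_OPEN when a value is written without an enclosing object or array, and the package also has an XMLTYPE overload of WRITE that performs the XML-to-JSON conversion itself, which avoids the manual CLOB/BLOB chunking entirely. A sketch:

<code>
declare
  l_xml sys.xmltype;
begin
  select xml_data
  into   l_xml
  from   xxln_vs_publish_stg
  where  xml_data is not null
  and    rownum < 2;

  apex_json.initialize_clob_output;
  apex_json.open_object;               -- open a writer context first
  apex_json.write('payload', l_xml);   -- XMLTYPE overload converts to JSON
  apex_json.close_object;

  dbms_output.put_line(apex_json.get_clob_output);
  apex_json.free_output;
end;
/
</code>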
Categories: DBA Blogs

Options to quickly access large portions of rows

Tom Kyte - Tue, 2020-08-11 18:46
Hello, Tom. We have a fact table that is partitioned by day and stores the last 90 days of data. Sometimes users of the application change the status of a record from 'ACTIVE' to 'CANCELED'. There are a lot of heavy analytical queries against that table that include full scans but only consider the 'ACTIVE' records. The proportion of 'CANCELED' records can vary greatly over time, from 5% to 60%. Right now it has 37 million active ones and 67 million canceled, so my full scan could be 3 times faster. My question is: what is the best option to quickly access all the active records? A B-tree index won't help, because there are too many rows to retrieve. A bitmap index seems to be a bad choice, since there are a lot of DML operations. I wanted to try subpartitioning by list and moving the rows to the 'CANCELED' subpartition, but I immediately have concerns. There are 7 indexes on the table now, and moving a lot of rows between subpartitions would require a lot of time and could potentially fill up the undo if someone decides to change the status of tens of millions of rows at a time (users can and will do that). Since the table is partitioned by day, any blank space left after row movement in partitions older than today won't be reused or reclaimed, and a full scan will take just as much time; that makes the whole idea almost useless. I am afraid that shrinking the entire table could fill up the undo segment, and I don't have an environment close in specs to our PROD environment, so I can't really test my undo concerns. Unfortunately we can't upgrade to 12.2 for a few more months, so move online is not available. Is there another option that I am missing, or should I just run shrink space partition by partition on a daily basis?
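For concreteness, the layout being considered could look roughly like this; a hedged sketch with made-up names (day partitions, each list-subpartitioned by status, with row movement enabled so status updates relocate rows):

<code>
create table fact_t (
  trade_day  date,
  status     varchar2(10),
  payload    varchar2(100)
)
partition by range (trade_day)
  interval (numtodsinterval(1, 'DAY'))
  subpartition by list (status)
  subpartition template (
    subpartition sp_active   values ('ACTIVE'),
    subpartition sp_canceled values ('CANCELED')
  )
  (partition p0 values less than (date '2020-01-01'))
enable row movement;
</code>

The undo and space-reuse concerns raised above still apply; this only shows the DDL shape, not a recommendation.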
Categories: DBA Blogs

Dynamically passing sequence name to get currval

Tom Kyte - Tue, 2020-08-11 18:46
I am trying to get the currval of all the user sequences in the schema. When I run the SQL below it gives me an "invalid SQL statement" error. I am not sure if this is the right way to achieve it; please advise. Assumption: the current values of the sequences are already set in the session. <code>
set serveroutput on;
declare
  sq   number;
  sqnm varchar2(50);
  stmt varchar2(1000);
  cursor sqnc is (select sequence_name from user_sequences);
begin
  for row in sqnc loop
    sqnm := row.sequence_name;
    stmt := 'SELECT' || sqnm ||'.currval into' || sq || 'from dual';
    execute immediate stmt;
    dbms_output_put_line(sqnm || ' ' ||sq);
  end loop;
end;
</code>
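A hedged correction of the snippet above: CURRVAL has to be fetched with EXECUTE IMMEDIATE ... INTO rather than by concatenating the target variable into the string, the keywords need surrounding spaces, and the output call is dbms_output.put_line (a dot, not an underscore):

<code>
set serveroutput on;
declare
  sq number;
begin
  for r in (select sequence_name from user_sequences) loop
    execute immediate
      'select ' || r.sequence_name || '.currval from dual' into sq;
    dbms_output.put_line(r.sequence_name || ' ' || sq);
  end loop;
end;
/
</code>

This still raises ORA-08002 for any sequence whose NEXTVAL has not yet been referenced in the session, which matches the stated assumption.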
Categories: DBA Blogs

Oracle has any feature similar to "Always Encrypted" that is offered by SQL server?

Tom Kyte - Tue, 2020-08-11 18:46
Hello, It would be great if you can help me here. Can you please share if Oracle has any feature similar to the "Always Encrypted" feature offered by SQL server? Link pasted at end has information on "Always Encrypted". I understand that Oracle offers data redaction to mask data. However, my understanding is that users with high authorization can bypass it. Oracle also offers Vault to control data access. However, there still will be Oracle users that can see the data in clear. It would be really helpful if you can share some pointers. Thanks, AB ------------------------------------------------------------------------------------------------------------------------------- Link: https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15 Text from this link: Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data and can view it, and those who manage the data but should have no access. By ensuring on-premises database administrators, cloud database operators, or other high-privileged unauthorized users, can't access the encrypted data, Always Encrypted enables customers to confidently store sensitive data outside of their direct control. This allows organizations to store their data in Azure, and enable delegation of on-premises database administration to third parties, or to reduce security clearance requirements for their own DBA staff.
Categories: DBA Blogs

This is just a quick blog entry to

Kevin Closson - Tue, 2020-08-11 12:06

This is just a quick blog entry to direct readers to an Amazon Web Services blog post regarding Oracle Licensing options when deploying Oracle Database in AWS.

Licensing Options for Oracle Database Deployments in Amazon Web Services

 

SCN_TO_TIMESTAMP

Hemant K Chitale - Tue, 2020-08-11 09:23
A quick demo of SCN_TO_TIMESTAMP in 19c

 
oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Aug 11 21:59:56 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Mon Aug 10 2020 16:08:38 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select scn_to_timestamp(5389994) from dual;

SCN_TO_TIMESTAMP(5389994)
---------------------------------------------------------------------------
11-AUG-20 09.53.44.000000000 PM

SQL>
SQL> select scn_to_timestamp(5389994-100000) from dual;

SCN_TO_TIMESTAMP(5389994-100000)
---------------------------------------------------------------------------
12-JUL-20 11.19.13.000000000 PM

SQL>
SQL> select scn_to_timestamp(32720) from dual;
select scn_to_timestamp(32720) from dual
*
ERROR at line 1:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1


SQL>


If you query for an older SCN, you would get an ORA-08181 error.  What is an "older SCN" ?  

Technically, Oracle frequently inserts new rows into SYS.SMON_SCN_TIME and deletes older rows.  This is the table that is queried by the SCN_TO_TIMESTAMP function.  So, if you query for an SCN no longer present in the table, you get an ORA-08181 error.

Does Oracle insert every SCN into this table? Of course not! Otherwise there would be more than 5 million rows in the table in my database. It periodically inserts rows. When you run the SCN_TO_TIMESTAMP function, you get an approximate timestamp -- an estimate that Oracle derives from reading "nearby" rows.

Do not ever assume that SCN_TO_TIMESTAMP returns an exact timestamp for that SCN.

For a range of potential SCNs, you can query V$ARCHIVED_LOG for FIRST_TIME (which is still in DATE format, not TIMESTAMP) and FIRST_CHANGE# (which is the first SCN recorded for that ArchiveLog).
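For example, a hedged sketch of bracketing an SCN against the archived logs (the SCN value is illustrative):

select sequence#, first_change#, first_time
from   v$archived_log
where  first_change# <= 6580147
order  by first_change# desc
fetch first 1 rows only;

This returns the most recent log whose starting SCN is at or below the SCN of interest, giving a lower bound on its time.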


Categories: DBA Blogs

Quick Intro to BOTO3

Pakistan's First Oracle Blog - Mon, 2020-08-10 03:37

 I just published my very first tutorial video on YouTube, which gives a quick introduction to AWS BOTO3 with a step-by-step walkthrough of a simple program. Please feel free to subscribe to my channel. Thanks. You can find the video here.

Categories: DBA Blogs

Real Time SQL Monitor using SQL Developer 20.2

Hemant K Chitale - Mon, 2020-08-10 03:27
Here are a few screenshots of using the Real Time SQL Monitor in SQL Developer 20.2 against a 19c database.  I use the MONITOR hint explicitly in the SQL statements to force them to be visible in the SQL Monitor.

Note : The "B" after the "18" and "20" for I/O requests in the first two screenshots is *not* "Billion"




This is an INSERT statement

This shows the Execution Plan of the INSERT statement 

Here is a more complicated query with Parallel Execution (all 3 panes: Plan Statistics, Plan and Parallel Execution)


 

Categories: DBA Blogs

Ansible Configuration Management Tool

Online Apps DBA - Mon, 2020-08-10 01:46

Ansible is the most widely used tool for Configuration Management in the industry since it is very simple to use yet powerful enough to automate complex multi-tier IT application environments. Check out the blog at https://k21academy.com/devops21 to know more about Ansible and other Configuration management tools. This blog post covers: Configuration Management Configuration Management Tools […]

The post Ansible Configuration Management Tool appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Jenkins Overview and Installation Pre-requisites

Online Apps DBA - Mon, 2020-08-10 01:37

Jenkins, originally developed for continuous integration, is the most widely adopted solution for software process automation, continuous integration, and continuous delivery. Check out the blog at https://k21academy.com/devops20 to know more about Jenkins and its concepts. This blog post covers: Jenkins Overview Jenkins Features Installation Pre-requisites Jenkins Concepts and much more. Begin your journey towards becoming […]

The post Jenkins Overview and Installation Pre-requisites appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

EBS R12.2 Upgrade Frequently Asked Questions

Online Apps DBA - Sun, 2020-08-09 03:00

Can you upgrade from DB 11g to 19c directly? Is there any direct path to upgrade to R12.2.9 from 11i? If you are performing the EBS upgrade to R12.2 then you might have been facing all these questions. Well, we have got a series of FAQs that our trainees have faced during their upgrade practice. […]

The post EBS R12.2 Upgrade Frequently Asked Questions appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Tips To Prepare AZ-301 Exam Microsoft Certified Azure Architect Design

Online Apps DBA - Sun, 2020-08-09 02:47

Want to clear AZ-301 Architect Design Certified Exam? Worried about how to and where to start preparing? Have a look at this blog at https://k21academy.com/az30112, this will assist you to pass the AZ-301 Architect Design Certified Exam in an efficient way from the beginning to the end in the precise and most brief way. This […]

The post Tips To Prepare AZ-301 Exam Microsoft Certified Azure Architect Design appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

DDL Script of partition in oracle

Tom Kyte - Fri, 2020-08-07 23:06
Hi Tom, how can I get the DDL scripts of a table partition and an index partition in Oracle? Thanks, Leon.
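One hedged approach (the owner and object names are placeholders): DBMS_METADATA returns the full DDL, including the partition clauses, for both tables and indexes.

<code>
set long 1000000 pagesize 0
select dbms_metadata.get_ddl('TABLE', 'MY_PART_TABLE', 'MY_OWNER') from dual;
select dbms_metadata.get_ddl('INDEX', 'MY_PART_INDEX', 'MY_OWNER') from dual;
</code>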
Categories: DBA Blogs
