
19c DataPump Features

1. MAX_DATAPUMP_JOBS_PER_PDB database parameter

The MAX_DATAPUMP_JOBS_PER_PDB database parameter is introduced to give DBAs more control over the number of Data Pump jobs that can be started in a database environment. This parameter is valid for both Multitenant and non-Multitenant environments.

In a database where MAX_DATAPUMP_JOBS_PER_PDB is set to N and N DataPump jobs are already running, an attempt to start an (N+1)th DataPump job fails with ORA-39391, as shown below:

SQL> show parameter MAX_DATAPUMP_JOBS_PER_PDB

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_datapump_jobs_per_pdb            string      100

-- for Multitenant, in CDB$ROOT:
SQL> alter system set MAX_DATAPUMP_JOBS_PER_PDB=2 container=all;
System altered.

-- for non-Multitenant:
SQL> alter system set MAX_DATAPUMP_JOBS_PER_PDB=2;
System altered.

-- two DataPump jobs are started and running:

> expdp system/oracle@<PDB_service_name> directory=dptest dumpfile=test1.dmp full=yes

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/********@<PDB_service_name> directory=dptest dumpfile=test1.dmp full=yes
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
-----> still running

> expdp system/oracle@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Starting "SYSTEM"."SYS_EXPORT_FULL_02":  system/********@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
-----> still running

-- launching a third DataPump job in a database with MAX_DATAPUMP_JOBS_PER_PDB=2 reports:

> expdp system/oracle@<PDB_service_name> directory=dptest dumpfile=test3.dmp full=yes

Export: Release 19.0.0.0.0 - Development on Tue Oct 9 07:37:24 2018
Version 19.1.0.0.0

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_FULL_03 for user SYSTEM
ORA-06512: at "SYS.KUPV$FT", line 1142
ORA-06512: at "SYS.KUPV$FT", line 1744
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 1099
ORA-39391: maximum number of Data Pump jobs (2) exceeded
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPV$FT_INT", line 969
ORA-06512: at "SYS.KUPV$FT", line 1646
ORA-06512: at "SYS.KUPV$FT", line 1103
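Before starting another job, the number of active DataPump jobs can be compared against the limit. A minimal sketch (the state filter is illustrative, since DBA_DATAPUMP_JOBS also lists NOT RUNNING rows for orphaned master tables):

```sql
-- Count active Data Pump jobs; compare against MAX_DATAPUMP_JOBS_PER_PDB.
-- DBA_DATAPUMP_JOBS also shows NOT RUNNING rows for retained master tables,
-- so those are filtered out here (filter is illustrative).
select count(*) as active_dp_jobs
from   dba_datapump_jobs
where  state != 'NOT RUNNING';
```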

 

2. MAX_DATAPUMP_PARALLEL_PER_JOB database parameter

MAX_DATAPUMP_PARALLEL_PER_JOB is introduced to control the number of parallel workers that can be used by an individual Data Pump job, making it easier to implement resource usage policies for DataPump.

In a database with MAX_DATAPUMP_PARALLEL_PER_JOB set to N, even if parallel>N is specified for the DataPump job, only N worker processes will be started, as shown below:

EXAMPLE 1
---------
SQL> alter system set MAX_DATAPUMP_PARALLEL_PER_JOB=1 container=all;
System altered.

-- start an expdp job with parallel=7:

> expdp system/password@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes parallel=7 logtime=all metrics=yes
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
09-OCT-18 07:47:14.242: Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/********@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes parallel=7 logtime=all metrics=yes
09-OCT-18 07:47:15.402: W-1 Startup took 1 seconds
09-OCT-18 07:47:28.374: W-1 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
09-OCT-18 07:47:31.271: W-1 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
09-OCT-18 07:47:35.128: W-1 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
09-OCT-18 07:47:38.190: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
09-OCT-18 07:47:40.581: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
09-OCT-18 07:47:40.874: W-1      Completed 72 INDEX_STATISTICS objects in 0 seconds
09-OCT-18 07:47:41.082: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
09-OCT-18 07:47:41.119: W-1      Completed 3 INDEX_STATISTICS objects in 1 seconds
………
09-OCT-18 07:53:43.432: W-1 . . exported "TC_USER5"."TAB9"                           5.046 KB       1 rows in 0 seconds using direct_path
09-OCT-18 07:53:45.494: W-1      Completed 1 DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 2 seconds
09-OCT-18 07:53:45.504: W-1      Completed 62 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 1 seconds
09-OCT-18 07:53:45.510: W-1      Completed 18 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 16 seconds
09-OCT-18 07:53:45.516: W-1      Completed 120 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 9 seconds
09-OCT-18 07:53:49.806: W-1 Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
09-OCT-18 07:53:49.866: ******************************************************************************
09-OCT-18 07:53:49.867: Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
09-OCT-18 07:53:49.872:   <PATH>/test2.dmp
09-OCT-18 07:53:49.942: Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Tue Oct 9 07:53:49 2018 elapsed 0 00:06:47

-- v$session shows only one worker process started:
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid
from   v$session s, v$process p, dba_datapump_sessions d
where  p.addr = s.paddr and s.saddr = d.saddr
and    (upper(s.program) like '%DM0%' or upper(s.program) like '%DW0%');

DATE                PROGRAM                                       SID
------------------- -------------------------------------- ----------
2018-10-09 07:52:27 oracle@den01znf (DW00)                        65

EXAMPLE 2
---------
-- in a database with MAX_DATAPUMP_PARALLEL_PER_JOB set to 4:
SQL> alter system set MAX_DATAPUMP_PARALLEL_PER_JOB=4 container=all;
System altered.

-- any DataPump job can have a maximum of 4 workers, even if the job is started with parallel>4:

> expdp system/oracle@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes parallel=7 logtime=all  metrics=yes
Export: Release 19.0.0.0.0 - Development on Tue Oct 9 08:19:49 2018
Version 19.1.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
09-OCT-18 08:20:02.544: Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/********@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes parallel=7 logtime=all metrics=yes
09-OCT-18 08:20:03.709: W-1 Startup took 1 seconds
09-OCT-18 08:20:07.888: W-2 Startup took 0 seconds
09-OCT-18 08:20:07.951: W-3 Startup took 0 seconds
09-OCT-18 08:20:08.006: W-2 Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
09-OCT-18 08:20:08.013: W-2      Completed 1 MARKER objects in 0 seconds
09-OCT-18 08:20:08.062: W-3 Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
09-OCT-18 08:20:08.063: W-3      Completed 1 MARKER objects in 0 seconds
09-OCT-18 08:20:09.914: W-4 Startup took 0 seconds
…………………
09-OCT-18 08:20:33.919: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
09-OCT-18 08:20:35.033: W-2      Completed 4 INC_TYPE objects in 4 seconds
09-OCT-18 08:20:35.194: W-1      Completed 22 TYPE objects in 3 seconds
09-OCT-18 08:20:35.566: W-1 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
09-OCT-18 08:20:35.867: W-1      Completed 2 PROCACT_SYSTEM objects in 0 seconds
09-OCT-18 08:20:37.741: W-3 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
09-OCT-18 08:20:38.376: W-3      Completed 23 PROCOBJ objects in 3 seconds
09-OCT-18 08:20:39.165: W-3 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
09-OCT-18 08:20:40.948: W-3      Completed 2 PROCACT_SYSTEM objects in 2 seconds
09-OCT-18 08:20:42.752: W-1 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
………..
09-OCT-18 08:23:30.475: W-3 . . exported "TC_USER5"."TAB8"                           5.046 KB       1 rows in 0 seconds using direct_path
09-OCT-18 08:23:30.592: W-3 . . exported "TC_USER5"."TAB9"                           5.046 KB       1 rows in 0 seconds using direct_path
09-OCT-18 08:23:31.715: W-1 . . exported "WMSYS"."WM$METADATA_MAP"                       0 KB       0 rows in 0 seconds using external_table
09-OCT-18 08:23:32.737: W-4 Processing object type DATABASE_EXPORT/SCHEMA/MATERIALIZED_VIEW
09-OCT-18 08:23:32.829: W-4      Completed 2 MATERIALIZED_VIEW objects in 52 seconds
09-OCT-18 08:23:34.962: W-4      Completed 1 DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 3 seconds
09-OCT-18 08:23:34.974: W-4      Completed 62 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 3 seconds
09-OCT-18 08:23:34.980: W-4      Completed 18 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 31 seconds
09-OCT-18 08:23:34.989: W-4      Completed 120 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 24 seconds
09-OCT-18 08:23:39.255: W-4 Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
09-OCT-18 08:23:39.303: ******************************************************************************
09-OCT-18 08:23:39.304: Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
09-OCT-18 08:23:39.307:   <PATH>/test2.dmp
09-OCT-18 08:23:39.424: Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Tue Oct 9 08:23:39 2018 elapsed 0 00:03:48

-- v$session shows only four DataPump worker processes started:
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid
from   v$session s, v$process p, dba_datapump_sessions d
where  p.addr = s.paddr and s.saddr = d.saddr
and    (upper(s.program) like '%DM0%' or upper(s.program) like '%DW0%');

DATE                PROGRAM                                       SID
------------------- -------------------------------------- ----------
2018-10-09 08:20:21 oracle@den01znf (DW00)                        168
2018-10-09 08:20:21 oracle@den01znf (DW01)                         50
2018-10-09 08:20:21 oracle@den01znf (DW02)                        164
2018-10-09 08:20:21 oracle@den01znf (DW03)                         41

Note: No warning or error is reported when the DataPump job is started with a parallel value higher than MAX_DATAPUMP_PARALLEL_PER_JOB.

3. Exclude ENCRYPTION Clause on Import With DataPump Parameter: OMIT_ENCRYPTION_CLAUSE

19c introduces a new transform parameter, OMIT_ENCRYPTION_CLAUSE, which directs Data Pump to suppress any encryption clauses associated with objects using encrypted columns. This new transform parameter is only allowed for non-transportable import jobs. OMIT_ENCRYPTION_CLAUSE applies to materialized view, table, and tablespace objects, and enables objects that use encrypted columns in the source to be created in a target database where encryption attributes are not supported.

DataPump transform parameter has been enhanced with this new option to facilitate the migration from any source database with TDE to a target database without TDE.
For example, in the Oracle Public Cloud environment, data is encrypted by default using Transparent Data Encryption (TDE) and the encrypted tablespace feature. However, this environment does not support the encrypted column feature. This presents a problem when using Data Pump to migrate a non-cloud database into the Cloud when the source database has objects with encrypted columns. During import, DataPump always includes any associated encryption attribute syntax when creating objects in the target database; consequently, these objects fail to create in the target database due to the invalid/unsupported encryption syntax. To facilitate the creation of these objects (tables and materialized views), the new transform parameter allows users to suppress encryption-related syntax during import. For this kind of migration, OMIT_ENCRYPTION_CLAUSE instructs DataPump to suppress any encryption clauses associated with objects using encrypted columns.

The command line for the Data Pump Import utility (impdp) is enhanced to expose the new transform parameter, OMIT_ENCRYPTION_CLAUSE. This is accomplished by adding a new keyword to the existing command line parameter TRANSFORM:

TRANSFORM=OMIT_ENCRYPTION_CLAUSE:[Y|N]

Y - Generated CREATE object DDL will NOT include any encryption attribute clauses.
N - Generated CREATE object DDL will include related encryption attribute clauses (default).

If the value is specified as Y, any encryption attribute clause is omitted from the CREATE object DDL. The default is N.

This new transform parameter can additionally be set through the programmatic interface, DBMS_METADATA.SET_TRANSFORM_PARAM:

DBMS_METADATA.SET_TRANSFORM_PARAM(job_handle, 'OMIT_ENCRYPTION_CLAUSE', TRUE);
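The same transform parameter name can also be tried with the DBMS_METADATA session transform when generating DDL interactively; a minimal sketch, assuming the session transform handle accepts OMIT_ENCRYPTION_CLAUSE in the same way as the call above:

```sql
-- Sketch: suppress encryption clauses in DDL generated in this session.
-- Assumes OMIT_ENCRYPTION_CLAUSE is accepted by the session transform handle.
begin
  dbms_metadata.set_transform_param(
    dbms_metadata.session_transform, 'OMIT_ENCRYPTION_CLAUSE', TRUE);
end;
/
-- DDL generated afterwards should omit the ENCRYPT clause:
select dbms_metadata.get_ddl('TABLE', 'T1', 'SCOTT') from dual;
```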

Example:

-- source: 12.1.0.2 with TDE enabled; target: 19.1 database
-- in the source database, create an encrypted table and populate it:

SQL> administer key management set key identified by Oracle_123 with backup;
keystore altered.
SQL> create table scott.t1 (name varchar2(100) encrypt using 'AES256');
Table created.
SQL> insert into scott.t1 values ('AAA');
1 row created.
SQL> commit;
Commit complete.
SQL> select dbms_metadata.get_ddl('TABLE','T1','SCOTT') from dual;

DBMS_METADATA.GET_DDL('TABLE','T1','SCOTT')
--------------------------------------------------------------------------------
CREATE TABLE "SCOTT"."T1"
(    "NAME" VARCHAR2(100) ENCRYPT USING 'AES256' 'SHA-1'
) SEGMENT CREATION IMMEDIATE
......

-- generate the export dump file for the table with the encrypt clause in its definition:

> expdp system/<password> dumpfile=test.dmp directory=DATA_PUMP_DIR tables=scott.t1
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** dumpfile=test.dmp directory=DATA_PUMP_DIR tables=scott.t1
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
……..
. . exported "SCOTT"."T1"                                5.054 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

-- run impdp with transform=OMIT_ENCRYPTION_CLAUSE:N

> impdp system/oracle directory=dptest dumpfile=test.dmp remap_schema=scott:test transform=OMIT_ENCRYPTION_CLAUSE:N
Import: Release 19.0.0.0.0 - Development on Tue Oct 9 02:45:56 2018
Version 19.1.0.0.0
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
import done in WE8DEC character set and AL16UTF16 NCHAR character set
export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
Warning: possible data loss in character set conversions
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** directory=dptest dumpfile=test.dmp remap_schema=scott:test transform=OMIT_ENCRYPTION_CLAUSE:N
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"TEST"."T1" failed to create with error:
ORA-28365: wallet is not open

Failing sql is:
CREATE TABLE "TEST"."T1" ("NAME" VARCHAR2(100 BYTE) ENCRYPT USING 'AES256' 'SHA-1') SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST ...) TABLESPACE "USERS"

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Tue Oct 9 02:46:09 2018 elapsed 0 00:00:06

-- run impdp with transform=OMIT_ENCRYPTION_CLAUSE:Y

> impdp system/<password> directory=dptest dumpfile=test.dmp remap_schema=scott:test transform=OMIT_ENCRYPTION_CLAUSE:Y
Import: Release 19.0.0.0.0 - Development on Tue Oct 9 02:46:45 2018
Version 19.1.0.0.0
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Warning: possible data loss in character set conversions
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** directory=dptest dumpfile=test.dmp remap_schema=scott:test transform=OMIT_ENCRYPTION_CLAUSE:Y
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."T1"                                 5.054 KB       1 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Tue Oct 9 02:47:36 2018 elapsed 0 00:00:35

bash-4.2$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Development on Tue Oct 9 03:58:25 2018
Version 19.1.0.0.0
Enter user-name: / as sysdba
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Version 19.1.0.0.0

SQL> select dbms_metadata.get_ddl('TABLE','T1','TEST') from dual;
DBMS_METADATA.GET_DDL('TABLE','T1','TEST')
--------------------------------------------------------------------------------
CREATE TABLE "TEST"."T1"
(    "NAME" VARCHAR2(100)   <-- encryption attribute clause is gone after impdp with transform=OMIT_ENCRYPTION_CLAUSE:Y
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 .....

4. Test Mode for Transportable Tablespaces With DataPump Parameter: TTS_CLOSURE_CHECK

With 19c, DBAs can more easily determine how long a transportable export will take and discover unforeseen issues not reported by the closure check. The DataPump Export command-line parameter TTS_CLOSURE_CHECK can be used to specify that tablespaces are not required to be read-only during the transportable tablespace operation. The resulting dump file is flagged as unavailable for import.
Test Mode for Transportable Tablespaces performs a metadata-only export test using Transportable Tablespaces or Full Transportable Export/Import, and removes the requirement that the source database tablespaces be in read-only mode. Normally, the DataPump utility requires that tablespaces be set read-only prior to a transportable tablespaces export operation; this ensures that the data is in a consistent state. In addition, a transportable tablespace export operation conducts a closure check of the transportable tablespace set. However, the following information is sometimes needed before starting a transportable operation with DataPump:
– how long it will take, and
– whether there might be unforeseen issues not caught by the closure check

The ability to limit the length of time that the tablespaces are in read-only mode, and the ability to obtain time estimates for the DataPump transportable export operation, are now supported with the DataPump command-line parameter TTS_CLOSURE_CHECK.

TTS_CLOSURE_CHECK

Purpose: Specifies the level of closure check to be performed as part of the transportable export operation.

Syntax and Description 

TTS_CLOSURE_CHECK = [ ON | OFF | FULL | TEST_MODE ]

ON - Indicates that a closure check is performed to ensure that the transportable tablespace set contains no references outside the set.

OFF - Indicates that no closure check is performed. The user is responsible for verifying that the transportable tablespace set is self-contained.

FULL - Indicates that a full, multi-directional closure check is performed to ensure that there are no remote references out of or into the transportable tablespace set.

TEST_MODE - Indicates that the tablespaces are not required to be in read-only mode. This option is for testing purposes only, to provide the timing requirements of a transportable tablespace export operation. The resulting export dump file is not available for use by Data Pump Import.

Notes:
1/ ON, OFF, and FULL are mutually exclusive.  TEST_MODE is a Data Pump Export option only.
2/ Running a transportable operation with the TTS_CLOSURE_CHECK TEST_MODE option allows the tablespaces to remain read-write and provides the requested timing information. The resulting Data Pump Export dump file is not available for use by Data Pump Import.
3/ The time DataPump takes to conduct the closure check can be long, and at times unnecessary, especially when it is known that the transportable tablespace set is self-contained. Running a Data Pump transportable operation with TTS_CLOSURE_CHECK=OFF decreases the time required for the operation to complete; the main performance benefit of this feature is bypassing a time-consuming step of a transportable operation.
4/ Skipping the closure check should decrease the time for the transportable export operation to complete, which increases availability. The ability to obtain timing requirements for the transportable export operation with the tablespaces in read-write mode also increases availability.
5/ The TTS_CLOSURE_CHECK parameter value can be set using the existing procedure DBMS_DATAPUMP.SET_PARAMETER. The following disables all closure checks and enables test mode:
SYS.DBMS_DATAPUMP.SET_PARAMETER(jobhdl, 'TTS_CLOSURE_CHECK', DBMS_DATAPUMP.KU$_TTS_CLOSURE_CHECK_OFF + DBMS_DATAPUMP.KU$_TTS_CLOSURE_CHECK_TEST);
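In context, that call sits inside a Data Pump API job. A minimal sketch of a programmatic transportable export with the closure check off and test mode on (the directory object DPTEST, dump file name, and tablespace list are illustrative):

```sql
declare
  jobhdl number;
begin
  -- Transportable export job; directory, file, and tablespace are illustrative.
  jobhdl := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TRANSPORTABLE');
  dbms_datapump.add_file(jobhdl, 'test_tts.dmp', 'DPTEST');
  dbms_datapump.metadata_filter(jobhdl, 'TABLESPACE_EXPR', q'{IN ('USERS')}');
  -- Disable the closure check and enable test mode:
  dbms_datapump.set_parameter(jobhdl, 'TTS_CLOSURE_CHECK',
      dbms_datapump.ku$_tts_closure_check_off
    + dbms_datapump.ku$_tts_closure_check_test);
  dbms_datapump.start_job(jobhdl);
  dbms_datapump.detach(jobhdl);
end;
/
```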

Example:

-- running a transportable operation on a read-write tablespace is allowed with TTS_CLOSURE_CHECK=test_mode:

SQL> select status, tablespace_name from dba_tablespaces where tablespace_name='USERS';

STATUS    TABLESPACE_NAME
--------- ------------------------------
ONLINE    USERS

> expdp system/<password> directory=dptest dumpfile=test_tts.dmp transport_tablespaces=users TTS_CLOSURE_CHECK=test_mode

Export: Release 19.0.0.0.0 - Development on Thu Oct 11 02:55:00 2018
Version 19.1.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** directory=dptest dumpfile=test_tts.dmp transport_tablespaces=users TTS_CLOSURE_CHECK=test_mode
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
<path>/test_tts.dmp
Dump file set is unusable. TEST_MODE requested.
******************************************************************************
Datafiles required for transportable tablespace USERS:
<OS_path>/t_user1.f
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Thu Oct 11 02:57:06 2018 elapsed 0 00:01:57

Note: The log file will contain the informational message: "Dump file set is unusable. TEST_MODE requested."

5. New values for TRANSPORTABLE DataPump Import parameter: KEEP_READ_ONLY and NO_BITMAP_REBUILD

The method used by DataPump to handle transportable tablespace imports changed starting with 12.1. Before that, DataPump would always leave the tablespaces read-only, but there were two problems with this.

  • First, if the version of the time zone file of the export database did not match the version of the time zone file of the import database, then all tables with time zone data types in the transportable tablespace would be dropped, because they could have incorrect values with the new time zone file.
  • Second, the way Data Pump tracked free space in a transportable tablespace did not scale for large tablespaces (for some customers, an export would take hours to gather information about free space). To fix these performance issues, DataPump changed to rebuilding the bitmap of free space at import time.

However, there are some customers whose processes require the tablespace to remain read-only during and after the import.  If the data files for the tablespace were read-only, then the DataPump would not attempt the update, but that is an inconvenience for many customers.  For other customers, the time spent trying to rebuild the tablespace bitmaps was too long and they would rather not do it or do it at another time.

The TRANSPORTABLE parameter for impdp has two new options:

  • KEEP_READ_ONLY - this option never changes the tablespace to read/write. The bitmaps for free space are not updated and, if the export and import time zone files are different versions, any table with time zone columns in the transportable tablespace is dropped.  Note that the free space information is only used when the database needs to create a new storage segment in the tablespace, which cannot happen until the tablespace is set to read/write.  If the bitmaps are not updated, then space used by tables or indexes that were not part of the transportable operation will not be reclaimed.
  • NO_BITMAP_REBUILD - the data files are set to read/write so that time zone data can be updated, but the bitmap for free space is not rebuilt. Again, this means that unused space in the tablespace will not be reclaimed until the bitmaps are rebuilt.
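With NO_BITMAP_REBUILD, the bitmap rebuild can be deferred and run later with DBMS_SPACE_ADMIN; a minimal sketch (the tablespace name is illustrative, and the tablespace must be read/write for the rebuild):

```sql
-- After an import with TRANSPORTABLE=NO_BITMAP_REBUILD, rebuild the
-- free-space bitmaps at a convenient time (tablespace name is illustrative):
alter tablespace users read write;
exec dbms_space_admin.tablespace_rebuild_bitmaps('USERS')
```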

The following Data Pump Import utility (impdp) command–line parameter is modified in 19c:

TRANSPORTABLE

Purpose: Specifies whether the transportable option should be used during a table mode or full mode import. KEEP_READ_ONLY and NO_BITMAP_REBUILD are added as TRANSPORTABLE parameter values and are valid for transportable mode import operations.

Syntax and Description 

TRANSPORTABLE = [ NEVER |  ALWAYS | KEEP_READ_ONLY | NO_BITMAP_REBUILD ]

KEEP_READ_ONLY - Indicates that the transportable data files are to remain in READ ONLY mode. KEEP_READ_ONLY allows the data files to be plugged into additional compatible databases for read-only access.

NO_BITMAP_REBUILD - Indicates that the data file header bitmaps are not to be rebuilt. Not reclaiming unused data segments reduces the time of the import operation. Bitmaps can be rebuilt later using dbms_space_admin.tablespace_rebuild_bitmaps.

Notes:
1/ When TRANSPORTABLE=KEEP_READ_ONLY is specified, because the data files remain in READ ONLY mode, tables containing TSTZ column data cannot be updated and are dropped from the import. In addition, data file bitmaps cannot be rebuilt.
2/ When TRANSPORTABLE=NO_BITMAP_REBUILD is specified, the data files may be placed into READ WRITE mode, if necessary, in order to update tables containing TSTZ column data.
3/ The TRANSPORTABLE parameter value can be set using the existing procedure DBMS_DATAPUMP.SET_PARAMETER. Setting TRANSPORTABLE with string values is limited to 'NEVER' or 'ALWAYS':
SYS.DBMS_DATAPUMP.SET_PARAMETER(jobhdl, 'TRANSPORTABLE', 'ALWAYS');
SYS.DBMS_DATAPUMP.SET_PARAMETER(jobhdl, 'TRANSPORTABLE', 'NEVER');
The new TRANSPORTABLE options are set using new numeric bitmask values:
DBMS_DATAPUMP.KU$_TTS_NEVER is the value 1
DBMS_DATAPUMP.KU$_TTS_ALWAYS is the value 2
DBMS_DATAPUMP.KU$_TTS_KEEP_READ_ONLY is the value 4
DBMS_DATAPUMP.KU$_TTS_NO_BITMAP_REBUILD is the value 8
The following enables KEEP_READ_ONLY for a transportable job:
SYS.DBMS_DATAPUMP.SET_PARAMETER(jobhdl, 'TRANSPORTABLE', DBMS_DATAPUMP.KU$_TTS_ALWAYS + DBMS_DATAPUMP.KU$_TTS_KEEP_READ_ONLY);

 

Example 1:
> impdp system/<password> directory=dptest dumpfile=test_tctab transport_datafiles=<OS_path>/tc_tctab1.f TRANSPORTABLE=KEEP_READ_ONLY

Import: Release 19.0.0.0.0 - Development on Fri Oct 12 03:29:23 2018
Version 19.1.0.0.0

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** directory=dptest dumpfile=test_tctab transport_datafiles=<OS_path>/tc_tctab1.f TRANSPORTABLE=KEEP_READ_ONLY
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Fri Oct 12 03:29:59 2018 elapsed 0 00:00:34

Example 2:
> impdp system/<password> directory=dptest dumpfile=test_tctab transport_datafiles=<OS_path>/tc_tctab1.f TRANSPORTABLE=NO_BITMAP_REBUILD

Import: Release 19.0.0.0.0 - Development on Fri Oct 12 05:04:31 2018
Version 19.1.0.0.0

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** directory=dptest dumpfile=test_tctab transport_datafiles=<OS_path>/tc_tctab1.f TRANSPORTABLE=NO_BITMAP_REBUILD
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Fri Oct 12 05:05:08 2018 elapsed 0 00:00:33