Data Pump: Oracle Database 12c Release 2 (12.2) New Features
Oracle Database – Enterprise Edition – Version 12.2.0.1 and later
Oracle Database – Standard Edition – Version 12.2.0.1 and later
Information in this document applies to any platform.
This note provides an overview of all Data Pump-related new features in the Oracle Database 12.2 release.
The intended audience of this article is database administrators planning to use the new features of Data Pump.
Oracle Data Pump Parallel Import of Metadata
The PARALLEL parameter for Oracle Data Pump, which previously applied only to data, has been extended to include metadata import operations. The performance of Oracle Data Pump import jobs has improved by enabling the use of multiple processes working in parallel to import metadata.
Oracle Data Pump import jobs now take less time because metadata can be imported in parallel in this release.
Oracle Data Pump Parallel Export of Metadata
The PARALLEL parameter for Oracle Data Pump, which previously applied only to data, is extended to include metadata export operations. The performance of Oracle Data Pump export jobs is improved by enabling the use of multiple processes working in parallel to export metadata.
Oracle Data Pump jobs now require shorter downtime during migration and shorter elapsed time for export operations.
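As an illustration, here is a sketch of an export job that uses multiple worker processes for both data and metadata; the schema (HR), directory object (DPDIR), and file names are placeholders.

```shell
# Placeholder schema, directory object, and credentials.
# With PARALLEL, 12.2 can assign worker processes to metadata as well as data.
expdp system/<password> SCHEMAS=hr DIRECTORY=dpdir \
  DUMPFILE=hr_%U.dmp LOGFILE=hr_exp.log PARALLEL=4
```

The %U substitution variable in DUMPFILE lets each worker write to its own dump file.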
New Options for Substitution Variables in Oracle Data Pump File Names
Choices for substitution (wildcard) variables are now available for Oracle Data Pump dump file names. The new choices include date or time values, a larger range for numeric values, and system generated unique file names.
Substitution variables improve file management for Oracle Data Pump dump files and enable you to take advantage of higher degrees of parallel processing without manually specifying individual file names.
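A hedged sketch of the new substitution variables, where %T expands to the current date in YYYYMMDD form and %L allows a wider numeric range than the two-digit %U (verify the exact variable names against the 12.2 Utilities guide for your release):

```shell
# Hypothetical example: date-stamped dump files plus a wider numeric range,
# useful when a high PARALLEL value would exhaust two-digit %U file names.
expdp system/<password> FULL=YES DIRECTORY=dpdir \
  DUMPFILE=full_%T_%L.dmp PARALLEL=16
```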
Renaming Data Files During Import
New syntax is added to let users specify new file names, or file name transforms, for the data files in a transportable tablespace job.
Renaming data files during import lowers management overhead by eliminating the need to execute ALTER TABLESPACE statements and rename or move files manually after the import is complete.
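One plausible sketch of a transportable import that renames a data file on the way in; the REMAP_DATAFILE transform and all paths shown here are illustrative placeholders, so check the 12.2 Utilities guide for the exact syntax of the new file-name transforms:

```shell
# Hypothetical transportable-tablespace import that renames the staged
# data file during import, so no manual ALTER TABLESPACE ... RENAME or
# OS-level file move is needed afterward.
impdp system/<password> DIRECTORY=dpdir DUMPFILE=tts_meta.dmp \
  TRANSPORT_DATAFILES=/stage/users01.dbf \
  REMAP_DATAFILE='/stage/users01.dbf':'/u02/oradata/ORCL/users01.dbf'
```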
TRUST_EXISTING_TABLE_PARTITION Flag for DATA_OPTIONS Parameter of Import
A new TRUST_EXISTING_TABLE_PARTITION flag is added to the DATA_OPTIONS parameter for import. This option tells Data Pump to load partition data in parallel into existing tables. It is intended for migrations in which the metadata is static and can be moved before the databases are taken offline to migrate the data; moving the metadata separately minimizes downtime. If the DBA uses this mechanism, and if other attributes of the databases are the same (for example, the character set), then the data from the export database goes into the same partitions in the import database.
This new TRUST_EXISTING_TABLE_PARTITION flag enables data from multiple partitions to be loaded in parallel into a preexisting table which reduces the import time.
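A sketch of the import side of such a migration; the table name, directory object, and file names are placeholders:

```shell
# Hypothetical data-only import into a precreated partitioned table.
# TRUST_EXISTING_TABLE_PARTITION lets workers load partitions in parallel
# because Data Pump trusts that the partitioning matches the source.
impdp system/<password> DIRECTORY=dpdir DUMPFILE=sales_%U.dmp \
  TABLES=sh.sales CONTENT=DATA_ONLY PARALLEL=8 \
  DATA_OPTIONS=TRUST_EXISTING_TABLE_PARTITION
```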
GROUP_PARTITION_TABLE_DATA Flag for DATA_OPTIONS Parameter of Export
A new GROUP_PARTITION_TABLE_DATA flag is added to the DATA_OPTIONS parameter for export. This option tells Data Pump to unload all table data in one operation rather than unloading each table partition as a separate operation. The table's partition definition then does not matter at import time: import loads the data into the table as a whole rather than partition by partition.
The new GROUP_PARTITION_TABLE_DATA flag enables data for all partitions to be loaded at once and in parallel. This reduces the time to import the table data.
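A sketch of the corresponding export; the table name and file names are placeholders:

```shell
# Hypothetical export of a partitioned table as a single unload operation
# instead of one operation per partition.
expdp system/<password> DIRECTORY=dpdir DUMPFILE=sales_%U.dmp \
  TABLES=sh.sales PARALLEL=8 DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA
```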
Data Verification Option to Import With VALIDATE_TABLE_DATA
A new VALIDATE_TABLE_DATA flag on the Data Pump Import DATA_OPTIONS parameter verifies the format of number and date data types in table data columns. Use this option when importing a dump file from an untrusted source to prevent issues that can occur because of corrupt data in the dump file. Because of the overhead involved in validating data, the default is that data is not validated on import.
This verification protects the database from SQL injection bugs from bad data. Oracle Corporation recommends using this option when importing a dump file from an untrusted source.
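A minimal sketch of such an import; the dump file name and directory object are placeholders:

```shell
# Hypothetical import of a dump file received from an untrusted source,
# with validation of number and date columns enabled.
impdp system/<password> DIRECTORY=dpdir DUMPFILE=untrusted.dmp \
  DATA_OPTIONS=VALIDATE_TABLE_DATA
```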
ORACLE_DATAPUMP and ORACLE_LOADER Access Driver Supports New File Format
Oracle Loader for Hadoop (OLH) uses files that are written and read by the ORACLE_DATAPUMP access driver. The problem with the current file format is that the first two blocks of the file have to be updated after the rest of the file is written. The Hadoop file system used by OLH is write-once, so the header blocks cannot be updated. This feature adds a new file format that can be written and read by the ORACLE_DATAPUMP access driver in the Hadoop file system. This file format adds two trailer blocks to the file with the header information so that the beginning of the file does not need to be updated after the rest of the file is written. The new Hadoop trailer format can be enabled for a dump file using the new HADOOP_TRAILERS access parameter.
The new file format enables OLH to run faster since it no longer needs to write file data multiple times to avoid updating the first two header blocks in the file.
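A hedged sketch of an external table written with the ORACLE_DATAPUMP driver and the new trailer format; the table definition is a placeholder and the exact keyword form of the HADOOP_TRAILERS access parameter should be confirmed in the 12.2 Utilities guide:

```sql
-- Hypothetical external table using the ORACLE_DATAPUMP access driver.
-- HADOOP_TRAILERS requests the trailer-block format, so the file header
-- never needs rewriting after the data is written (exact keyword form:
-- see the 12.2 documentation).
CREATE TABLE sales_ext
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dpdir
  ACCESS PARAMETERS (HADOOP_TRAILERS ENABLED)
  LOCATION ('sales1.dmp')
)
AS SELECT * FROM sh.sales;
```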
ENABLE_NETWORK_COMPRESSION Flag for DATA_OPTIONS Parameter: Compress Data Before Sending It Over the Network
A new ENABLE_NETWORK_COMPRESSION option (for direct-path network imports only) on the Data Pump DATA_OPTIONS parameter tells Data Pump to compress data before sending it over the network.
When ENABLE_NETWORK_COMPRESSION is specified, Data Pump compresses data on the remote node before it is sent over the network to the target database, where it is decompressed. This option is useful if the network connection between the remote and local database is slow because it reduces the amount of data sent over the network.
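A sketch of such an import; the database link name and table are placeholders, and ACCESS_METHOD=DIRECT_PATH is shown because the option applies to direct-path network imports only:

```shell
# Hypothetical direct-path network import over a slow link: data is
# compressed on the remote node before crossing the database link SRC_DB
# and decompressed at the target.
impdp system/<password> NETWORK_LINK=src_db TABLES=hr.employees \
  ACCESS_METHOD=DIRECT_PATH DATA_OPTIONS=ENABLE_NETWORK_COMPRESSION
```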
Support for Tables With LONG Data Types in Data Pump Network Mode Operations
Data Pump normally moves table data in a network import by using the INSERT AS SELECT SQL statement. However, INSERT AS SELECT cannot be used to move LONG columns, so data for those tables previously was not moved. Where possible, Data Pump now uses OCIDirPathUnload to unload the data from the remote database and then uses OCIDirPathLoad to load the table data into the target database. OCIDirPath allows Data Pump to move tables with most data types, including LONG. For tables that contain data types that OCIDirPath cannot move, Data Pump continues to use INSERT AS SELECT. If a table has both a LONG column and one of the data types that OCIDirPath cannot move, then network import still cannot move that table.
This support also makes network import available to more users.
Metadata Transforms Available for Oracle Data Pump in New Views
This feature adds views to Oracle Database that provide information about what metadata transforms are available for different modes of Oracle Data Pump and for the metadata API.
Currently, transforms are documented with Oracle Data Pump and the metadata API, but DBAs prefer to query the database to get that information. This feature ensures that documentation for a feature is maintained as part of creating a transform, and that this information is available using interfaces such as SQL*Plus.
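The transform information can then be queried directly from SQL*Plus; the view names below (DBMS_METADATA_TRANSFORMS and DBMS_METADATA_TRANSFORM_PARAMS) are per the 12.2 documentation and should be verified in your release:

```sql
-- List the metadata transforms available to Data Pump and the metadata
-- API, and the parameters each transform accepts.
SELECT * FROM dbms_metadata_transforms;
SELECT * FROM dbms_metadata_transform_params;
```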
Adding Oracle Data Pump and SQL*Loader Utilities to Instant Client
This feature adds SQL*Loader, expdp, impdp, exp, and imp to the Instant Client tools.
Now you can run these utilities on machines that do not have a complete Oracle Database installation.
Database Migrations: Support LLS Files Generated by DB2 Export Utility
The DB2 export utility unloads table data into text files, with the option to unload LOB data, either character or binary, into a separate file. When LOB data is unloaded into a separate file, DB2 writes a Lob Locator Specifier (LLS) into the data file. The LLS contains the file name, offset, and length of the LOB data in the data file. This feature adds a clause to the SQL*Loader control file syntax and to the ORACLE_LOADER access parameters so that you can indicate that a field in the data file is an LLS field. SQL*Loader and ORACLE_LOADER use the information in the field to read the data for the LOB column.
This feature aids in migrating data from DB2 to Oracle Database.
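A hedged sketch of a SQL*Loader control file that loads a DB2-exported file in which a LOB column is given as a Lob Locator Specifier; the table and field names are placeholders:

```
-- Hypothetical control file: RESUME_TXT in the data file holds an LLS
-- (of the form filename.ext.offset.length/) pointing into a separate
-- LOB file written by the DB2 export utility.
LOAD DATA
INFILE 'emp.del'
INTO TABLE emp
FIELDS TERMINATED BY ','
( empno      CHAR,
  ename      CHAR,
  resume_txt LLS
)
```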