Thursday, December 1, 2016

ORA-39127: unexpected error from call to "SYS"."DBMS_JVM_EXP_PERMS"."GRANT_SYSPRIVS_EXP"

While doing an export (expdp) the following error was encountered.
Processing object type DATABASE_EXPORT/ROLE
ORA-39127: unexpected error from call to "SYS"."DBMS_JVM_EXP_PERMS"."GRANT_SYSPRIVS_EXP"
ORA-44002: invalid object name
ORA-06512: at "SYS.DBMS_ASSERT", line 383
ORA-06512: at "SYS.DBMS_METADATA", line 9408
The reason for this is that the Java components have been removed but the DBMS_JVM_EXP_PERMS package remains in the database.
SQL> select comp_name,status from dba_registry order by 2;

COMP_NAME                                STATUS
---------------------------------------- --------
Oracle Database Java Packages            REMOVED
JServer JAVA Virtual Machine             REMOVED
Oracle XDK                               REMOVED
Oracle Real Application Clusters         VALID
Oracle Database Catalog Views            VALID
Oracle Workspace Manager                 VALID
Oracle XML Database                      VALID
Oracle Database Packages and Types       VALID
To fix this, remove the DBMS_JVM_EXP_PERMS package from the database. Refer to MOS note 1095533.1 for the exact steps. Removing this package doesn't change the component status; the Java components will still be shown as removed, but the packages left behind during the removal process are cleaned up.
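At its simplest the cleanup amounts to dropping the leftover package as SYS, along the lines of the sketch below; treat this only as an illustration and follow the MOS note for the authoritative steps.
SQL> drop package sys.dbms_jvm_exp_perms;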
After removing the package, expdp continues without any issue.



Useful metalink note
DataPump Export (EXPDP) Failed On Identifier SYS.DBMS_JVM_EXP_PERMS Must Be Declared [ID 1095533.1]

Related Posts
ORA-39127: unexpected error from call to export_string :=WMSYS.LT_EXPORT_PKG.SCHEMA_INFO_EXP
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.SCHEMA_INFO_EXP while Exporting

Tuesday, November 1, 2016

Plugging a Non-CDB (pre-12c) into CDB Using Transportable DB (TDB) / Tablespaces (TTS)

In a previous post it was shown how a 12c non-CDB could be plugged in as a PDB. This post shows the steps for plugging in a pre-12c DB (anything pre-12c is by definition non-CDB) as a PDB using the transportable database (TDB) and transportable tablespaces (TTS) methods. Both methods have similar steps; the only difference is that with TDB the entire DB is transported while with TTS only a set of tablespaces is plugged in. In this case an 11.2.0.4 enterprise edition RAC DB (called ent114) will be plugged into a single instance CDB using the TDB and TTS methods.

Both methods require creating a PDB before the transport can begin. The tablespaces are attached to this "pre-existing" PDB, which in this case is called PDB114.
SQL> create pluggable database pdb114 admin user admin identified by admin;
SQL> alter pluggable database pdb114 open;
The following information and steps are common to both methods. The 11.2 database has the following tablespaces; the user-defined tablespaces are EXAMPLE, NEWTBS, SQLT, TBS2K, TBS32K, TEST and USERS.
SQL> select TABLESPACE_NAME,BLOCK_SIZE from dba_tablespaces order by 1;

TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
EXAMPLE                              8192
NEWTBS                               8192
SQLT                                 8192
SYSAUX                               8192
SYSTEM                               8192
TBS2K                                2048
TBS32K                              32768
TEMP                                 8192
TEST                                 8192
UNDOTBS1                             8192
UNDOTBS2                             8192
USERS                                8192
There are two tablespaces with non-default block sizes (TBS2K and TBS32K). Before the transport, create the corresponding non-default block size caches in the CDB as well.
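For example, something along the following lines on the target CDB would create the 2K and 32K buffer caches; the sizes used here are only illustrative:
SQL> alter system set db_2k_cache_size=16M;
SQL> alter system set db_32k_cache_size=64M;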
Secondly, check whether the endianness of the source and target systems is the same. If they differ, conversion is required before the transported data files can be attached.
SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;  

PLATFORM_NAME        ENDIAN_FORMAT
-------------------- --------------
Linux x86 64-bit     Little
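If the endian formats had differed, the data files copied out would also have to be converted before being plugged in, for example with RMAN CONVERT on the target. A sketch only; the platform name and paths here are illustrative:
RMAN> CONVERT DATAFILE '/tmp/stage/tbs2k.272.921767965'
  FROM PLATFORM 'Solaris[tm] OE (64-bit)'
  FORMAT '/tmp/converted/tbs2k.dbf';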
This is where the steps common to both TDB and TTS end. Next are the steps specific to the TDB method.

Plugging pre-12c DB using TDB
Put the user-defined tablespaces into read only mode.
alter tablespace EXAMPLE read only;
alter tablespace NEWTBS read only;
alter tablespace SQLT read only;
alter tablespace TBS2K read only;
alter tablespace TBS32K read only;
alter tablespace TEST read only;
alter tablespace USERS read only;

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
UNDOTBS2                       ONLINE
USERS                          READ ONLY
EXAMPLE                        READ ONLY
TEST                           READ ONLY
SQLT                           READ ONLY
NEWTBS                         READ ONLY
TBS2K                          READ ONLY
TBS32K                         READ ONLY
While the user-defined tablespaces are in read only mode, run a full export of the DB with the transportable=always and version=12 options. This option is supported on standard edition databases as well. The following expdp command is used in this case; the tail end of the export log output is also shown.
expdp system full=y dumpfile=ent114.dmp directory=EXEC_DIR transportable=always version=12 logfile=ent114.log

******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /usr/local/exdata/ent114.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  +DATA/ent11g2/datafile/example.268.785857285
Datafiles required for transportable tablespace NEWTBS:
  +DATA/ent11g2/datafile/newtbs.273.888232727
Datafiles required for transportable tablespace SQLT:
  +DATA/ent11g2/datafile/sqlt.269.826463789
Datafiles required for transportable tablespace TBS2K:
  +DATA/ent11g2/datafile/tbs2k.272.921767965
Datafiles required for transportable tablespace TBS32K:
  +DATA/ent11g2/datafile/tbs32k.271.921768047
Datafiles required for transportable tablespace TEST:
  +DATA/ent11g2/datafile/test.274.888250337
Datafiles required for transportable tablespace USERS:
  +DATA/ent11g2/datafile/users.264.785694801
Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 1 error(s) at Mon Sep 5 14:52:08 2016 elapsed 0 00:05:34
The data files listed for transport must be copied out of the ASM disk group. There are many ways to do this; the easiest may be to use cp within asmcmd. One example is shown here:
ASMCMD> cp TBS2K.272.921767965 /home/grid/backup/
copying +DATA/ent11g2/datafile/TBS2K.272.921767965 -> /home/grid/backup/TBS2K.272.921767965
Once all the data files required for transport are copied out of ASM, the tablespaces made read only earlier can be made read write again.
alter tablespace EXAMPLE read write;
alter tablespace NEWTBS read write;
alter tablespace SQLT read write;
alter tablespace TBS2K read write;
alter tablespace TBS32K read write;
alter tablespace TEST read write;
alter tablespace USERS read write;
The next step is moving the data files copied out of ASM earlier to the actual location where the PDB resides. In this case the PDB resides on a remote server and the data file location for the PDB is
/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile
Therefore the data files are copied over to that location.
scp * oracle@192.168.0.99:/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/
Create a database directory in the PDB and copy the dump file created by expdp to the location referred to by that directory; it will be used during the import. Since the endian formats are the same, no conversion is needed and the import can go ahead without any additional work.
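For example, a directory named ora_dump (matching the DIRECTORY entry in the import parameter file shown below) could be created in the PDB as follows; the file system path is only illustrative:
SQL> create directory ora_dump as '/opt/app/oracle/dpdump';
SQL> grant read, write on directory ora_dump to system;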
The parameter file used for the import is shown below. TRANSPORT_DATAFILES has a comma-separated list of the data files with their new paths.
cat import.par
FULL=Y
DUMPFILE=ent114.dmp
DIRECTORY=ora_dump
TRANSPORT_DATAFILES=
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/EXAMPLE.268.785857285',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/NEWTBS.273.888232727',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/SQLT.269.826463789',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TBS2K.272.921767965',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TBS32K.271.921768047',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TEST.274.888250337',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/USERS.264.785694801'
LOGFILE=ent114imp.log
The DB is imported using the parameter file. Objects that already exist could result in warnings; however, the import completes successfully.
impdp system@pdb114 parfile=import.par
At the end of the import the user defined tablespaces are plugged in and online.
SQL> SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES ONLINE
NEWTBS                         YES ONLINE
SQLT                           YES ONLINE
TBS2K                          YES ONLINE
TBS32K                         YES ONLINE
TEST                           YES ONLINE
USERS                          YES ONLINE
All users in the pre-12c DB are created as local users of the PDB. The default database properties of the PDB are not affected by the import; for example, in the pre-12c DB the default tablespace was USERS, but after the transport the PDB still retains its original default tablespace.
SQL> SELECT PROPERTY_NAME,PROPERTY_VALUE FROM database_properties where property_name like 'DEFAULT%';

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ----------------
DEFAULT_TEMP_TABLESPACE        TEMP
DEFAULT_PERMANENT_TABLESPACE   SYSTEM
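The earlier point about the transported users being created as local users of the PDB can be checked from DBA_USERS inside the PDB; local users show NO in the COMMON column. ASANGA is one of the transported schemas and is used here only as an example:
SQL> select username, common from dba_users where username = 'ASANGA';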
At this stage the PDB is ready for use with the pre-12c DB plugged in.



Plugging pre-12c DB using TTS
This method uses transportable tablespaces to copy the user-defined tablespaces to the PDB location and plug them into the PDB created earlier. The same user-defined tablespaces mentioned in the earlier method are used in this case as well. Before tablespaces can be transported, verify they are self-contained. DBMS_TTS.TRANSPORT_SET_CHECK can be used for this; however, unlike in the previous post, a strict or full containment check is also performed. The Oracle admin guide says "for strict or full containment, you must additionally set the TTS_FULL_CHECK parameter to TRUE. The strict or full containment check is for cases that require capturing not only references going outside the transportable set, but also those coming into the set. Tablespace Point-in-Time Recovery (TSPITR) is one such case where dependent objects must be fully contained or fully outside the transportable set".
SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('EXAMPLE,NEWTBS,SQLT,TBS2K,TBS32K,TEST,USERS', TRUE,TRUE);
Check for any violations
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
no rows selected
If there are no violations, proceed with the next steps. Similar to the earlier method, it's important to check that the source and the target have the same endian format.
Put the user-defined tablespaces into read only mode and run the export. The parameter file content is shown below.
cat tts.par
dumpfile=tts114.dmp
logfile=tts114.log
directory=EXEC_DIR
transport_tablespaces=EXAMPLE,NEWTBS,SQLT,TBS2K,TBS32K,TEST,USERS
transport_full_check=y

expdp system parfile=tts.par
At the end of the export the output log will list the data files needed for the transport.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** parfile=tts.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /usr/local/exdata/tts114.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  +DATA/ent11g2/datafile/example.268.785857285
Datafiles required for transportable tablespace NEWTBS:
  +DATA/ent11g2/datafile/newtbs.273.888232727
Datafiles required for transportable tablespace SQLT:
  +DATA/ent11g2/datafile/sqlt.269.826463789
Datafiles required for transportable tablespace TBS2K:
  +DATA/ent11g2/datafile/tbs2k.272.921767965
Datafiles required for transportable tablespace TBS32K:
  +DATA/ent11g2/datafile/tbs32k.271.921768047
Datafiles required for transportable tablespace TEST:
  +DATA/ent11g2/datafile/test.274.888250337
Datafiles required for transportable tablespace USERS:
  +DATA/ent11g2/datafile/users.264.785694801
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Mon Sep 5 16:36:07 2016 elapsed 0 00:01:41
Same as the previous method, copy the data files out of ASM. Once copied out, put the tablespaces back into read write mode. Transfer the data files copied out of ASM to the PDB's data file location (same as the earlier method).
On the PDB create a DB directory and transfer the exported dump file there. Unlike the TDB method, with the TTS method the users must exist in the PDB before the import; if not, the import will fail with
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29342: user ASANGA does not exist in the database
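To get past this, pre-create the missing schema owners in the PDB before running the import. A minimal sketch for the user named in the error above; the password and privileges shown are only illustrative:
SQL> create user asanga identified by asanga;
SQL> grant create session, create table to asanga;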
Once all required users are created in the PDB, begin the import. The content of the import parameter file is given below. TRANSPORT_DATAFILES reflects the actual file paths of the data files.
cat ttsimp.par
DUMPFILE=tts114.dmp
DIRECTORY=tts_dir
TRANSPORT_DATAFILES=
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/EXAMPLE.268.785857285',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/NEWTBS.273.888232727',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/SQLT.269.826463789',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TBS2K.272.921767965',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TBS32K.271.921768047',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TEST.274.888250337',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/USERS.264.785694801'
LOGFILE=tts114imp.log

impdp system@pdb114 parfile=ttsimp.par
After the import the tablespaces will be plugged in but in read only mode.
SQL>  SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES READ ONLY
NEWTBS                         YES READ ONLY
SQLT                           YES READ ONLY
TBS2K                          YES READ ONLY
TBS32K                         YES READ ONLY
TEST                           YES READ ONLY
USERS                          YES READ ONLY
Change the read only mode to read write
alter tablespace EXAMPLE read write;
alter tablespace NEWTBS read write;
alter tablespace SQLT read write;
alter tablespace TBS2K read write;
alter tablespace TBS32K read write;
alter tablespace TEST read write;
alter tablespace USERS read write;

SQL>  SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES ONLINE
NEWTBS                         YES ONLINE
SQLT                           YES ONLINE
TBS2K                          YES ONLINE
TBS32K                         YES ONLINE
TEST                           YES ONLINE
USERS                          YES ONLINE
At this stage the PDB is ready for use with the pre-12c DB plugged in.

Related Posts
Remote Cloning of a PDB
Move a PDB Between Servers
Plugging a SE2 non-CDB as an EE PDB Using File Copying and Remote Link

Saturday, October 15, 2016

Change in 12c JDBC Behavior - setDate & getDate Does Not Truncate Timestamp

The 12c driver (12.1.0.2, ojdbc7.jar), certified for JDK 7 and JDK 8, does not truncate the time component (i.e. set it to "00:00:00") when the setDate and getDate methods are called. The same can be observed for the 12c driver for JDK 6 (ojdbc6.jar). The test code given at the end of the post can be used to demonstrate this change in behavior compared to the 11gR2 (11.2.0.4, ojdbc6.jar, certified for JDK 6, 7 and 8) driver. The driver/database/JDK compatibility matrix can be found here.
The test Java code runs on JDK 8 and the database is a 12.1.0.2 CDB. The only thing that changes between tests is the JDBC driver.
First the test is run with the 11.2 (11.2.0.4) driver. The output (truncated here) will have no time component on the values inserted with setDate or returned by getDate. The getTimestamp method is used to check what is actually stored in the table. The output is of the following form: the method used (getDate or getTimestamp), the driver used to insert the value, and the value returned by the get method.
Date 11.2.0.4.0 2016-10-12 00:00:00
Timestamp 11.2.0.4.0 2016-10-12 00:00:00.0
There's no issue reading these values with the 12.1 (12.1.0.2, ojdbc7.jar) driver; the output will be the same (run the test with the calls to the delete and insert methods commented out).
Date 11.2.0.4.0 2016-10-12 00:00:00
Timestamp 11.2.0.4.0 2016-10-12 00:00:00.0
Changing the driver to 12c results in the date being inserted with a time component (run the test after un-commenting the previously commented calls).
Date 12.1.0.2.0 2016-10-12 12:26:12
Timestamp 12.1.0.2.0 2016-10-12 12:26:12.0
Depending on the application logic, upgrading to the 12c driver could cause issues due to this change in behavior. Going back to the 11.2 driver may work in some cases, as getDate would again truncate the time component. But the actual value stored still has the time component, as shown here by getTimestamp, so the behavior is not entirely reversible by reverting to the 11.2 driver.
Date 12.1.0.2.0 2016-10-12 00:00:00
Timestamp 12.1.0.2.0 2016-10-12 12:26:12.0
There are a few bug reports on MOS related to this issue (Bug 19297927, Bug 20551186, 1944845.1). But the final outcome seems to be that this is deliberate (Bug 17766200) and expected behavior in the 12c driver (2177909.1) and that the Oracle documentation doesn't reflect it (Bug 18124680).



Therefore, if use of the 12c driver is a must, Oracle has a few workarounds and patches for this. One option is to use a Calendar object and set the time components to 0 (1944845.1); a sketch of this is shown further below. The other is to apply patch 21161279 (2177909.1). Patch 21161279 supersedes patch 19297927, which only patched the setDate method, so with the older patch getDate would still return values inserted with a time component. Once patch 21161279 is applied, the following JVM option must be set for the set/get Date methods to behave like the 11.2 driver's methods; the patch has no effect unless this option is set.
-Doracle.jdbc.DateZeroTime=true
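For example, when running the test class given at the end of the post, the option could be passed on the command line as shown here (the classpath is only illustrative):
java -cp .:ojdbc7.jar -Doracle.jdbc.DateZeroTime=true OJDBC7DateTest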
Once patched and with the JVM option set, run the test with the 12.1 driver. The time components will be set to 0.
Date 12.1.0.2.0 2016-10-12 00:00:00
Timestamp 12.1.0.2.0 2016-10-12 00:00:00.0
This behavior and these output values are the same as with the 11.2 driver.
The same patch and JVM option can be used with the 12.1 driver for JDK 6 (ojdbc6.jar) as well.
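For the Calendar-based workaround mentioned above (1944845.1), a minimal sketch of zeroing out the time portion before binding with setDate could look like the helper below. This is only an illustration (the class and method names are made up here), not code taken from the MOS note.
import java.sql.Date;
import java.util.Calendar;

public class DateTruncUtil {

    // Returns today's date with hours, minutes, seconds and milliseconds zeroed,
    // so that setDate stores a midnight value as the 11.2 driver used to.
    public static Date truncatedToday() {
        Calendar cal = Calendar.getInstance();
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        return new Date(cal.getTimeInMillis());
    }
}
In the insert method of the test code, calling ps.setDate(2, DateTruncUtil.truncatedToday()) would then store a value without a time component regardless of the driver version.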

Useful metalink notes
getDate() Gets Time After Upgrading To 12c [ID 2177909.1]
JDBC 12c Adds Timestamp to java.sql.Date After Upgrading From 11.2.0.4 [ID 1944845.1]
Bug 19297927 : CHANGE OF BEHAVIOR IN JDBC 12.1 DUE TO BUG 14389749 CAUSES QUERIES TO FAIL
Bug 17766200 : GETDATE AND SETDATE DO NOT TRUNCATE TIME IN JDBC 12C
Bug 20551186 : GETDATE DIFFERS WITH THE JDBC DRIVER VERSION 11G AND 12C
Bug 18124680 : GETDATE AND SETDATE DO NOT TRUNCATE TIME IN JDBC 12C BUT DOC STATES OTHERWISE
Bug 17228297 : JDBC DRIVER OJDBC*.JAR 12.1.0.1.0 SPECIFICATION VIOLATION REGRESSION

Related Post
java.sql.SQLException: Could not commit with auto-commit set on When Using 12c JDBC Driver

Test Table DDL
create table datetest (version varchar2(20), indate date);
Java Test Code
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.Date;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.text.SimpleDateFormat;

import oracle.jdbc.pool.OracleDataSource;

// Inserts the current date with setDate and reads it back with both getDate and
// getTimestamp to show whether the time component is truncated by the driver.
public class OJDBC7DateTest {

    public static void main(String[] args) throws SQLException {

        OracleDataSource ds = new OracleDataSource();
        ds.setUser("asanga");
        ds.setPassword("asa");
        ds.setURL("jdbc:oracle:thin:@192.168.0.66:1521/onepdb");

        Connection con = ds.getConnection();
        con.setAutoCommit(false);

        System.out.println("Auto commit status : " + con.getAutoCommit());
        DatabaseMetaData meta = con.getMetaData();

        System.out.println("Driver Name " + meta.getDriverName());
        System.out.println("Driver Version " + meta.getDriverVersion());
        System.out.println("Driver Major Version " + meta.getDriverMajorVersion());
        System.out.println("Driver Minor Version " + meta.getDriverMinorVersion());
        System.out.println("Database Major Version " + meta.getDatabaseMajorVersion());
        System.out.println("Database Minor Version " + meta.getDatabaseMinorVersion());

        System.out.println("delete");
        delete(con);
        
        System.out.println("insert");
        insert(con,meta.getDriverVersion());
        
        System.out.println("select");
        select(con);
              
        con.close();
    }

    public static void select(Connection con) throws SQLException {

        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

        PreparedStatement ps = con.prepareStatement("select * from datetest order by 1");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {

            System.out.println("Date " + rs.getString(1) + " " + sdf.format(rs.getDate(2)));
            System.out.println("Timestamp " + rs.getString(1) + " " + rs.getTimestamp(2));
        }
        rs.close();
        ps.close();

    }

    public static void insert(Connection con, String version) throws SQLException {

        PreparedStatement ps = con.prepareStatement("insert into datetest values(?, ?)");
        Date cd = new Date(System.currentTimeMillis());
        ps.setString(1, version);
        ps.setDate(2, cd);
        ps.execute();
        con.commit();
        ps.close();

    }
    
    public static void delete(Connection con) throws SQLException {

        PreparedStatement ps = con.prepareStatement("delete from datetest");
        ps.execute();
        con.commit();
        ps.close();

    }
}