Friday, April 26, 2019

How to increase PROCESSES initialization parameter:

1.    Log in as SYSDBA
    sqlplus / as sysdba
   
2. Check Current Setting of Parameters
    sql> show parameter sessions
    sql> show parameter processes
    sql> show parameter transactions

3.    If you are planning to increase the "PROCESSES" parameter, you should also plan to increase the "SESSIONS" and "TRANSACTIONS" parameters.
    A basic formula for determining these parameter values is as follows:
   
        processes=x
        sessions=x*1.1+5
        transactions=sessions*1.1
       
4.    These parameters can't be modified in memory. You have to modify the spfile only (scope=spfile) and bounce the instance.
    sql> alter system set processes=500 scope=spfile;
    sql> alter system set sessions=555 scope=spfile;
    sql> alter system set transactions=610 scope=spfile;
    sql> shutdown abort
    sql> startup
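The formula in step 3 can be checked with a quick calculation. Here is a small sketch (the `derive_params` helper name is hypothetical; values are truncated to whole numbers, matching the settings used in step 4):

```shell
# Hypothetical helper: derive SESSIONS and TRANSACTIONS from a target
# PROCESSES value using the formula in step 3. awk's int() truncates.
derive_params() {
  awk -v p="$1" 'BEGIN {
    s = int(p * 1.1 + 5)   # sessions = processes * 1.1 + 5
    t = int(s * 1.1)       # transactions = sessions * 1.1
    printf "processes=%d sessions=%d transactions=%d\n", p, s, t
  }'
}

derive_params 500   # processes=500 sessions=555 transactions=610
```

For processes=500 this yields sessions=555 and transactions=610, the values set with ALTER SYSTEM in step 4.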

Tuesday, April 23, 2019

Datapump with Database Link examples

Create user "DEMO_2019_DANTEST_CHRIS"


12:51:35 SQL> spool create_user.log
12:51:35 SQL> CREATE USER "DEMO_2019_DANTEST_CHRIS"  PROFILE "DEFAULT"
12:51:35   2      IDENTIFIED BY "7dmwK!2U" DEFAULT TABLESPACE "DATA_DEMO2019DANTESTCHRIS"
12:51:35   3      TEMPORARY TABLESPACE "TEMP"
12:51:35   4      ACCOUNT UNLOCK;
User created.
Elapsed: 00:00:00.83
12:51:36 SQL> GRANT CREATE PROCEDURE TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.01
12:51:36 SQL> GRANT CREATE SEQUENCE TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.00
12:51:36 SQL> GRANT CREATE SESSION TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.00
12:51:36 SQL> GRANT CREATE TABLE TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.01
12:51:36 SQL> GRANT CREATE VIEW TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.00
12:51:36 SQL> GRANT UNLIMITED TABLESPACE TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.00
12:51:36 SQL> GRANT EXECUTE ON  "SYS"."DBMS_JOB" TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.03
12:51:36 SQL> GRANT EXECUTE ON  "SYS"."DBMS_LOCK" TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.02
12:51:36 SQL> GRANT "CONNECT" TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.01
12:51:36 SQL> GRANT "RESOURCE" TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.01
12:51:36 SQL> GRANT CREATE JOB TO "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.
Elapsed: 00:00:00.00
12:51:36 SQL>

Grant create database link to "DEMO_2019_DANTEST_CHRIS"


SQL> grant create database link to "DEMO_2019_DANTEST_CHRIS";

Create database link 


SQL> conn "DEMO_2019_DANTEST_CHRIS"
Enter password:
Connected.

SQL> create database link DEMO2019TRAINING connect to DEMO_2019_TRAINING identified by "4canK!9P" using 'PROD_12';
Database link created.

SQL> select 1 from dual@DEMO2019TRAINING;
         1
----------
         1

Grant read, write on the Data Pump directory (in my case, DEMO)


SQL> grant read, write on directory DEMO to "DEMO_2019_DANTEST_CHRIS";
Grant succeeded.

Create parameter file


cat DEMO_2019_DANTEST_CHRIS_imp.par
userid=DEMO_2019_DANTEST_CHRIS/"7dmwK!2U"
directory=DEMO
logfile=DEMO_2019_DANTEST_CHRIS_imp.log
network_link=DEMO2019TRAINING
remap_schema=DEMO_2019_TRAINING:DEMO_2019_DANTEST_CHRIS
remap_tablespace=DATA_DEMO2019TRAINING:DATA_DEMO2019DANTESTCHRIS
remap_tablespace=INDX_DEMO2019TRAINING:INDX_DEMO2019DANTESTCHRIS
logtime=all
parallel=8

Initiate IMPORT
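The command line that launched the import isn't shown below, but the job banner in the log records the parfile that was used. The invocation would have been something like this (run from the directory containing the parameter file, an assumption):

```shell
impdp parfile=DEMO_2019_DANTEST_CHRIS_imp.par
```

Since the parfile already contains the userid line, no credentials are needed on the command line. (This requires a running Oracle instance, so it is shown for illustration only.)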


Import: Release 12.2.0.1.0 - Production on Tue Apr 23 13:52:51 2019
Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
23-APR-19 13:52:53.476: Starting "DEMO_2019_DANTEST_CHRIS"."SYS_IMPORT_SCHEMA_01":  DEMO_2019_DANTEST_CHRIS/******** parfile=DEMO_2019_DANTEST_CHRIS_imp.par
23-APR-19 13:52:54.075: Estimate in progress using BLOCKS method...
23-APR-19 13:53:04.685: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
23-APR-19 13:53:06.375: Total estimation using BLOCKS method: 8.156 GB
23-APR-19 13:53:07.193: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
23-APR-19 13:53:16.042: Processing object type SCHEMA_EXPORT/TABLE/TABLE
23-APR-19 13:55:22.251: . . imported "DEMO_2019_DANTEST_CHRIS"."ENROLLMENT_ROLLUP"  485237 rows
23-APR-19 13:55:22.253: . . imported "DEMO_2019_DANTEST_CHRIS"."BEN_GAPS"       2247146 rows
23-APR-19 13:55:23.303: . . imported "DEMO_2019_DANTEST_CHRIS"."CHASE_VISIT"     903479 rows
23-APR-19 13:55:23.336: . . imported "DEMO_2019_DANTEST_CHRIS"."RX_IN_20190404031035" 1131979 rows
23-APR-19 13:55:24.267: . . imported "DEMO_2019_DANTEST_CHRIS"."ENROLLMENT"      278306 rows
23-APR-19 13:55:26.494: . . imported "DEMO_2019_DANTEST_CHRIS"."VISIT_IN_20190404031035" 3388950 rows
23-APR-19 13:55:26.610: . . imported "DEMO_2019_DANTEST_CHRIS"."MEMBER_SUMMARY_GRP"  151913 rows
23-APR-19 13:55:27.012: . . imported "DEMO_2019_DANTEST_CHRIS"."MEMBER_SUMMARY"  151913 rows

. . . (remaining import output truncated)

Download Reference for Oracle Database/GI Update, Revision, PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases (Doc ID 2118136.2)

https://support.oracle.com/epmos/faces/DocumentDisplay?id=2118136.2

Don't Forget System Level Statistics

If you do any work with performance tuning, you know the importance of Object Level statistics. Oracle uses this information to estimate how many rows will be returned by different steps in the plan. These estimates help the Optimizer form what it thinks is the best plan.

There are numerous statistics-related subtopics that can have a large impact on SQL statements. One item that is often overlooked is system level statistics.

When I talk about system level statistics I am referring to three different things:
1. Fixed object statistics
2. Data dictionary statistics
3. System statistics

If these items are not kept up to date, then you can see some performance degradation. Oracle has several blog posts detailing the importance of these statistics. These statistics should be updated following upgrades to the database software. The system statistics should be updated following any hardware changes.


Fixed Object Statistics

This is different from user level object statistics. For example, if you are querying the EMP and DEPT tables, you usually want accurate statistics on those tables that reflect the distribution of values in their columns. I say usually because there may be exceptions to this rule.

You can check the status of the fixed object statistics using the following query:
SELECT owner
     , table_name
     , last_analyzed
FROM   dba_tab_statistics
WHERE  table_name IN
             (SELECT name
              FROM   v$fixed_table
              WHERE  type = 'TABLE'
             )
ORDER BY last_analyzed NULLS LAST;

Fixed Objects are the X$ objects defined in the database. These objects are not owned by regular users, so gathering stats on user level objects will not impact these objects. There is a different command,

EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS

that will gather stats on these objects. The statistics should be gathered while the system is running a normal workload. If the workload changes significantly, the statistics should be regathered, and they should also be regathered following any upgrades.

More information about the importance of fixed object statistics can be found here:
https://blogs.oracle.com/optimizer/entry/fixed_objects_statistics_and_why


Data Dictionary Statistics

Similar to the fixed object statistics, data dictionary statistics are used by the optimizer to build plans for system queries. If these are not kept up to date, you may see performance issues with queries against those objects.

You can check the status of data dictionary statistics with the following query:
SELECT owner
     , table_name
     , last_analyzed
FROM   dba_tab_statistics
WHERE  owner='SYS'
ORDER BY last_analyzed NULLS LAST;

The command to update these statistics is DBMS_STATS.GATHER_DICTIONARY_STATS.

At a minimum this should be run after every database upgrade. To keep these statistics up to date, you may choose to run the gather command more frequently such as every quarter.
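To run the gather quarterly, one option is to wrap it in a scheduler job. This is only a sketch: the job name is hypothetical and the interval is arbitrary, so adjust both to your standards.

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GATHER_DICT_STATS_QTR',   -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_DICTIONARY_STATS; END;',
    repeat_interval => 'FREQ=MONTHLY;INTERVAL=3', -- roughly quarterly
    enabled         => TRUE);
END;
/
```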

For additional information you can also look at the My Oracle Support Document, How to Gather Statistics on Objects Owned by the 'SYS' User and 'Fixed' Objects (Doc ID 457926.1)



System Statistics

The system statistics collect information about the hardware, in particular CPU performance and I/O performance. The Oracle optimizer will use this information to determine the cost of a statement. If this data is not accurate then it can result in suboptimal plans. This data is collected with the command 

EXEC DBMS_STATS.GATHER_SYSTEM_STATS

As part of any hardware changes or database upgrades, these statistics should be recollected. Note that if you are running on Exadata, you should use the command 

EXEC DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA').

To check the system statistics, use the following query:

select * from sys.aux_stats$;

These fixed object and system statistics provide a foundation used by the optimizer across many different statements. If you are gathering user level object statistics but not keeping these system level statistics up to date, you are putting yourself at risk for poor performance.

Listing inactive database accounts without auditing turned on

I sometimes get asked for information on when users last logged into a database so that the application owner can identify inactive accounts. This is especially useful if they are licensed by the number of users.

This is fine if you are on V12:

select username,last_login from dba_users;

USERNAME             LAST_LOGIN
--------             ----------
ANDY                 10-JAN-17 10:53:22

However, V11 doesn't have this column, and unless you have auditing on you can't use the audit trail.

Due to a quirk with the accounts, an account_status will show as OPEN if the user hasn't logged in since the password expired.

So this query:

select a.name,a.ctime,a.ptime,b.account_status,b.profile,c.limit
from sys.user$ a,dba_users b,dba_profiles c
where c.resource_name='PASSWORD_LIFE_TIME'
and a.name=b.username
and b.profile=c.profile
and c.limit != 'UNLIMITED';


will show the last time the password was changed (ptime) and the number of days after which it will expire (limit).

NAME  CTIME      PTIME      ACCOUNT_STATUS  PROFILE          LIMIT
----  ---------  ---------  --------------  ---------------  -----
ANDY  02-FEB-15  02-FEB-15  OPEN            APP_ADM_DEFAULT  40

So this shows that even though the user was created and the password set on 02-FEB-15, and the password expires after 40 days, the account is still OPEN, not LOCKED or EXPIRED.

According to this Oracle Support note:

Why Account Status Is Open When Expiry Date is Old Date in DBA_USERS (Doc ID 291195.1)

unless the user attempts to connect after the password has expired, the account will show as OPEN.

So any account that is in OPEN status and is past the expiry_date for the profile is likely to be unused.
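Putting the pieces together, here is a sketch of a query (untested, built from the same views as the query above) that lists only the likely-unused accounts — those still OPEN whose last password change is further in the past than the profile's PASSWORD_LIFE_TIME allows:

```sql
select b.username, a.ptime, c.limit
from   sys.user$ a, dba_users b, dba_profiles c
where  c.resource_name = 'PASSWORD_LIFE_TIME'
and    a.name = b.username
and    b.profile = c.profile
and    c.limit != 'UNLIMITED'
and    b.account_status = 'OPEN'
and    a.ptime + to_number(c.limit) < sysdate;
```

LIMIT is a VARCHAR2 column in DBA_PROFILES, hence the TO_NUMBER; the UNLIMITED filter keeps the conversion safe.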

Thursday, March 14, 2019

How to recover loss of the SYSTEM tablespace to a different location

This is another post describing how to proceed when you lose the SYSTEM tablespace: as already mentioned, this tablespace always contains the data dictionary tables for the entire database.

In particular, this example restores the lost tablespace to another location, as you would if, for example, a failed disk controller forced you to restore and recover it somewhere else.

When you lose the SYSTEM tablespace the instance may crash or hang; in any case, the instance will alert you soon enough.

It is not rare that you cannot even shut down the instance cleanly and have to kill it or issue a SHUTDOWN ABORT.

Let's begin by simulating the loss of the SYSTEM tablespace. In my case the instance was running, so I shut it down first.

[oracle@alpha datafile]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 14 16:38:15 2019
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

[oracle@alpha datafile]$ ls -lrt
total 1555972
-rw-r----- 1 oracle oinstall  10493952 Mar 13 13:46 example.dbf.back
-rw-r----- 1 oracle oinstall  62922752 Mar 13 22:01 o1_mf_temp_g8lf4w5d_.tmp
-rw-r----- 1 oracle oinstall   5251072 Mar 14 16:38 o1_mf_users_g8mglp3n_.dbf
-rw-r----- 1 oracle oinstall  62922752 Mar 14 16:38 o1_mf_undotbs1_g8lf34lb_.dbf
-rw-r----- 1 oracle oinstall 828383232 Mar 14 16:38 o1_mf_system_g8lmhryv_.dbf
-rw-r----- 1 oracle oinstall 660611072 Mar 14 16:38 o1_mf_sysaux_g8lf08dm_.dbf
-rw-r----- 1 oracle oinstall  10493952 Mar 14 16:38 example.dbf


Remove SYSTEM datafile.

[oracle@alpha datafile]$ pwd
/u01/app/oracle/oradata/TEST/datafile
[oracle@alpha datafile]$ rm o1_mf_system_g8lmhryv_.dbf


Let's connect to RMAN and start the database in MOUNT mode for the restore/recover process.

[oracle@alpha datafile]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Mar 14 16:41:31 2019
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database (not started)

RMAN> startup mount
Oracle instance started
database mounted

Total System Global Area     419430400 bytes
Fixed Size                     2925120 bytes
Variable Size                335547840 bytes
Database Buffers              75497472 bytes
Redo Buffers                   5459968 bytes


Using the REPORT SCHEMA command, you can see that RMAN cannot determine the size of the SYSTEM datafile (it reports 0 MB).

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name TEST

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    0        SYSTEM               ***     /u01/app/oracle/oradata/TEST/datafile/o1_mf_system_g8lmhryv_.dbf
3    630      SYSAUX               ***     /u01/app/oracle/oradata/TEST/datafile/o1_mf_sysaux_g8lf08dm_.dbf
4    60       UNDOTBS1             ***     /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_g8lf34lb_.dbf
5    10       EXAMPLE              ***     /u01/app/oracle/oradata/TEST/datafile/example.dbf
6    5        USERS                ***     /u01/app/oracle/oradata/TEST/datafile/o1_mf_users_g8mglp3n_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    60       TEMP                 32767       /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_g8lf4w5d_.tmp


Now restore the datafile to a different location.

[oracle@alpha datafile]$ mkdir non_default_location

[oracle@alpha datafile]$ cd non_default_location/


[oracle@alpha non_default_location]$ pwd
/u01/app/oracle/oradata/TEST/datafile/non_default_location

To restore a datafile to another location you have to use SET NEWNAME, and it's important to include the SWITCH DATAFILE ALL command before recovery.

What does that command mean, and why do we have to execute it? From the Oracle documentation, it "specifies that all data files for which a SET NEWNAME FOR DATAFILE command has been issued in this job are switched to their new name": the control file is thereby updated with the new location of the SYSTEM datafile.

RMAN> run {
set newname for datafile 1 to '/u01/app/oracle/oradata/TEST/datafile/non_default_location/system.dbf';
restore tablespace system;
switch datafile all;
recover tablespace system;
alter database open;
}


executing command: SET NEWNAME
Starting restore at 14-MAR-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=15 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/TEST/datafile/non_default_location/system.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/TEST/backupset/2019_03_13/o1_mf_nnndf_TAG20190313T134524_g8lmy49o_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/TEST/backupset/2019_03_13/o1_mf_nnndf_TAG20190313T134524_g8lmy49o_.bkp tag=TAG20190313T134524
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 14-MAR-19

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=1002906030 file name=/u01/app/oracle/oradata/TEST/datafile/non_default_location/system.dbf

Starting recover at 14-MAR-19
using channel ORA_DISK_1

starting media recovery
archived log for thread 1 with sequence 12 is already on disk as file /u01/app/oracle/fast_recovery_area/TEST/archivelog/2019_03_13/o1_mf_1_12_g8lmz859_.arc
archived log for thread 1 with sequence 13 is already on disk as file /u01/app/oracle/fast_recovery_area/TEST/archivelog/2019_03_13/o1_mf_1_13_g8mjyhs7_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u01/app/oracle/fast_recovery_area/TEST/archivelog/2019_03_13/o1_mf_1_14_g8mjzwsm_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u01/app/oracle/fast_recovery_area/TEST/archivelog/2019_03_13/o1_mf_1_15_g8mk1hr7_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u01/app/oracle/fast_recovery_area/TEST/archivelog/2019_03_14/o1_mf_1_16_g8nqnvxr_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/TEST/archivelog/2019_03_13/o1_mf_1_12_g8lmz859_.arc thread=1 sequence=12
archived log file name=/u01/app/oracle/fast_recovery_area/TEST/archivelog/2019_03_13/o1_mf_1_13_g8mjyhs7_.arc thread=1 sequence=13
archived log file name=/u01/app/oracle/fast_recovery_area/TEST/archivelog/2019_03_13/o1_mf_1_14_g8mjzwsm_.arc thread=1 sequence=14
media recovery complete, elapsed time: 00:00:08
Finished recover at 14-MAR-19

Statement processed

After recovery completes, you can report the schema again. The size of the SYSTEM tablespace is known once more, and the REPORT SCHEMA command also shows the new location of datafile 1.

RMAN> report schema;
Report of database schema for database with db_unique_name TEST
List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    790      SYSTEM               YES     /u01/app/oracle/oradata/TEST/datafile/non_default_location/system.dbf
3    640      SYSAUX               NO      /u01/app/oracle/oradata/TEST/datafile/o1_mf_sysaux_g8lf08dm_.dbf
4    60       UNDOTBS1             YES     /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_g8lf34lb_.dbf
5    10       EXAMPLE              NO      /u01/app/oracle/oradata/TEST/datafile/example.dbf
6    5        USERS                NO      /u01/app/oracle/oradata/TEST/datafile/o1_mf_users_g8mglp3n_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    60       TEMP                 32767       /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_g8lf4w5d_.tmp


We can see the system.dbf file in its new location.

[oracle@alpha non_default_location]$ ls -rlt
total 809768
-rw-r----- 1 oracle oinstall 828383232 Mar 14 17:01 system.dbf