Wednesday, December 24, 2025

Step by step: reading MSSQL data from an Oracle 19c database using a DB link and ODBC drivers


****************************************************************************

List of steps:

**************

1. Root - Install the Microsoft SQL Server ODBC drivers.

  sudo ACCEPT_EULA=Y yum install -y msodbcsql17

2. Root - Create an ODBC entry in /etc/odbc.ini pointing to the remote SQL Server.

3. Oracle - Create the init parameter file initora_19c_DB_MSSQL.ora.

4. Oracle - Register the gateway SID in listener.ora.

5. Oracle - Create an entry in tnsnames.ora pointing to the SID.

6. Oracle - Create a DB link in the Oracle database.

7. Oracle - Optionally, create a synonym.




1. Install the Microsoft SQL Server ODBC drivers

This step installs the Microsoft ODBC Driver for SQL Server on Linux. The page linked below also covers the optional SQL Server command-line tools and the unixODBC development headers.

We use the ODBC Driver 17 here. Follow the link below (or ask your Unix team) to install the driver on the server.

https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver17&tabs=redhat18-install%2Credhat17-install%2Cdebian8-install%2Credhat7-13-install%2Crhel7-offline



On RHEL-based systems, the package is installed with the command below.


  sudo ACCEPT_EULA=Y yum install -y msodbcsql17
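After installation, the driver should be registered in /etc/odbcinst.ini. A self-contained sketch of that check is below; the file is stubbed in /tmp, and the driver path shown is only an example (the real path is version-dependent):

```shell
# Sketch: confirm a driver section exists in an odbcinst.ini-style file.
# The file content is stubbed here; on a real host you would grep /etc/odbcinst.ini.
cat > /tmp/odbcinst_demo.ini <<'EOF'
[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.4.so.1.1
EOF
if grep -q '^\[ODBC Driver 17 for SQL Server\]' /tmp/odbcinst_demo.ini; then
  found=yes
else
  found=no
fi
echo "driver_registered=${found}"
```

The section name printed by `odbcinst -q -d` is the same name you can reference from odbc.ini instead of a full library path.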


2. Create an ODBC entry in /etc/odbc.ini pointing to the remote SQL Server


To create an ODBC entry in /etc/odbc.ini that points to a remote database, you need to define a Data Source Name (DSN) within the file. This DSN will contain the necessary parameters to connect to your remote database.

Here is an example of an odbc.ini entry for a remote database.


[DS_MSSQL]
Description = MSSQL
Driver = /usr/lib64/libmsodbcsql-17.so
Server = 20.20.30.30\MSSQL_instance1
Port = 57500
Database = MSSQL_DB
UID = user_ora2mssql


Explanation of the parameters:

[DS_MSSQL]: The DSN name. You can choose any descriptive name for the connection.

Description: A brief description of the data source.

Driver: The path of the ODBC driver installed for SQL Server. Ensure the driver is also registered in /etc/odbcinst.ini.

Server: The hostname or IP address of the remote database server (optionally with a named instance).

Port: The port number on which SQL Server is listening.

Database: The name of the database to connect to on the remote server.

UID: The username for authenticating with the database.

Once saved, you can verify the DSN with the unixODBC tool: isql -v DS_MSSQL user_ora2mssql <password>.



3. Create the init parameter file initora_19c_DB_MSSQL.ora


To create a Heterogeneous Services (HS) initialization parameter file for the gateway SID ora_19c_DB_MSSQL, create a text file named initora_19c_DB_MSSQL.ora in the $ORACLE_HOME/hs/admin directory. This file contains parameters specific to the gateway, not to the main Oracle database.

Below is a sample configuration. The specific parameters required will depend on your environment and the specific Oracle Database Gateway version you are using.


vi $ORACLE_HOME/hs/admin/initora_19c_DB_MSSQL.ora

# HS init parameters for the database gateway for ODBC
HS_FDS_CONNECT_INFO = DS_MSSQL
HS_FDS_SHAREABLE_NAME = /usr/lib64/libmsodbcsql-17.so
HS_LANGUAGE = AL16UTF16
HS_FDS_REMOTE_DB_CHARSET = AMERICAN_AMERICA.AL16UTF16
HS_NLS_NCHAR = UCS2
#
# ODBC-specific environment variables
#
set ODBCINI=/etc/odbc.ini


4. Register the gateway SID in listener.ora


To register an Oracle System Identifier (SID) for a Heterogeneous Services (HS) agent in the listener.ora file, you perform static registration: manually add a SID_DESC entry to the SID_LIST section of the file.

The listener.ora file is typically located in the $ORACLE_HOME/network/admin directory.

Within the SID_LIST_<listener_name> section (by default SID_LIST_LISTENER), add a new SID_DESC block. The example below registers the database instance plus a second SID_DESC for the dg4odbc gateway.

==================================

LISTENER_ora_19c_DB =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 30.30.30.50)(PORT = 1587))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1587))
    )
  )

SID_LIST_LISTENER_ora_19c_DB =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ora_19c_DB)
      (ORACLE_HOME = /oracle/ora_19c_DB/12201)
    )
    (SID_DESC =
      (SID_NAME = ora_19c_DB_MSSQL)
      (ORACLE_HOME = /oracle/ora_19c_DB/12201)
      (PROGRAM = /oracle/ora_19c_DB/12201/bin/dg4odbc)
    )
  )

After editing, reload or restart the listener (lsnrctl reload, or lsnrctl stop followed by lsnrctl start) so the new SID is registered.


5. Create an entry in tnsnames.ora pointing to the SID


Locate the tnsnames.ora file: it is typically found in the $ORACLE_HOME/network/admin directory. If the TNS_ADMIN environment variable is set, it points to the directory containing the file.

Open tnsnames.ora in a text editor and add a connect descriptor for the HS SID. The HS=OK parameter is crucial: it directs Oracle Net to use Heterogeneous Services.


TNS_MSSQL.WORLD =
  (DESCRIPTION =
    (SDU = 32768)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 30.30.30.50)(PORT = 1587))
    )
    (CONNECT_DATA =
      (SID = ora_19c_DB_MSSQL)
    )
    (HS = OK)
  )


(HS = OK): This mandatory parameter tells Oracle Net that it is connecting to a non-Oracle system through Heterogeneous Services. You can verify name resolution with tnsping TNS_MSSQL.


6. Create the DB link in the Oracle database


Permissions: 

Ensure the user creating the database link has the necessary CREATE DATABASE LINK privilege. The remote_user in the CONNECT TO clause must exist in the remote database and have privileges to access the objects you intend to use via the link.

Public vs. Private: 

Database links can be PUBLIC (accessible by all users) or private (owned by the user who created it). To create a public link, add the PUBLIC keyword after CREATE.

GLOBAL_NAMES parameter: 

The GLOBAL_NAMES initialization parameter can impact database link behavior. If set to TRUE, the database link name must match the global name of the remote database. Consider setting it to FALSE if strict name matching is not desired.

Note :- Before creating the DB link, you must create a SQL Server authentication user on the SQL Server side; only then can the connection from Oracle to MSSQL succeed.

Create a SQL Server authentication user in SQL Server Management Studio (SSMS):

o Go to Security → Logins → New Login.

o Choose SQL Server Authentication (not Windows Authentication).

o Set a strong password and configure Enforce password policy as needed.

o Map the login to the required database and assign appropriate roles (e.g., db_datareader).


Syntax :- CREATE DATABASE LINK my_remote_link CONNECT TO remote_user IDENTIFIED BY remote_password USING 'REMOTE_DB_ALIAS';


Example :- create public database link mssql_dblink6 connect to "user_ora2mssql" identified by "" using 'TNS_MSSQL';


select count(*) from "view_test"@mssql_dblink6;

(The quoted lowercase name preserves the case of the remote SQL Server object.)

 

7. Optionally, create a synonym


Once the database link is established, you can create a synonym for a specific object (table, view, sequence, etc.) in the remote database:

CREATE SYNONYM synonym_name FOR remote_object_name@db_link_name;


Example :- CREATE PUBLIC SYNONYM mssqlview FOR "view_test"@mssql_dblink6;


After creating the synonym, you can query the remote table using the synonym name as if it were a local object:


select count(*) from mssqlview;


This reads the MSSQL data from within the Oracle database.


 

 


Tuesday, December 23, 2025

Step by step: reading Oracle DB data from MS SQL Server using a linked server

 Linked Server in MS SQL Server

-----------------------------------


Source : MS SQL Server 2019 EE on Windows Server 2022

Target : Oracle 19c R2 EE on Linux 8 - IP - 192.168.56.74 Port- 1583 ServiceName- ORA_DB_19c


Pre-Req:

-----------


a. Oracle client installed on the MSSQL server.

b. Working network connection between both servers.

   (If required, create an exception in the firewall.)

c. Remote (Oracle) DB credentials.

d. Proper TNS entry for the remote (Oracle) DB.


Steps:

---------


1. Create User in Remote DB Server


Create user oracle identified by oracle123;

grant create session to oracle;


2. Provide Required Permission to User as per your need


grant select any table to oracle;

grant select any dictionary to oracle;


3. Create TNS Entry in Oracle Client TNSNAMES.ora in Source Server (MS SQL Server)


ORA_DB_19c =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 12.16.53.12)(PORT = 1583))
    (CONNECT_DATA =
      (SERVICE_NAME = ORA_DB_19c)
    )
  )


4. Create Linked Server Using SSMS


You need the following values:

a. Linked server name -- any name you want, just like a DB link. --ORACLE_DB

b. Product name -- Oracle

c. Data source -- TNS service name of the Oracle DB

d. Credentials of the remote DB in the Security section -- Oracle DB user name & password


5. Enable AllowInProcess in SQL Server


EXEC master.dbo.sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'AllowInProcess', 1


6. Test Linked Server


7. Fetch records from Oracle DB using Linked Server in MS SQL Server

(ORACLE_DB is the linked server name.)


SELECT * FROM OPENQUERY([ORACLE_DB] , 'select name,open_mode,host_name from v$database,v$instance');

Once the linked server works, four-part naming is also possible, e.g. SELECT * FROM ORACLE_DB..SCHEMA_NAME.TABLE_NAME (Oracle schema and object names in uppercase).

Thursday, December 18, 2025

SQL script to push grants to a group in AWS Redshift for read and write access

1. Grant usage on all user schemas:

select 'grant usage on schema '||nspname||' to <new_group_name>;'
from pg_catalog.pg_namespace
where nspname not in ('public','pg_catalog','catalog_history','information_schema','dbainfo') and nspname not like 'pg_%';

2. Grant DML on all tables in those schemas:

select 'grant select,insert,update,delete on all tables in schema '||nspname||' to <new_group_name>;'
from pg_catalog.pg_namespace
where nspname not in ('public','pg_catalog','catalog_history','information_schema','dbainfo') and nspname not like 'pg_%';

3. Set default privileges for future tables:

select 'ALTER DEFAULT PRIVILEGES IN SCHEMA '||nspname||' for user "<table_owner>" GRANT select,insert,update,delete ON tables to <new_group_name>;'
from pg_catalog.pg_namespace
where nspname not in ('public','pg_catalog','catalog_history','information_schema','dbainfo') and nspname not like 'pg_%';
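The catalog queries above just build GRANT statements as text. The same generation can be sketched in plain shell for a known list of schemas (the schema and group names below are hypothetical placeholders, not from the cluster):

```shell
# Illustrative sketch: emit the same per-schema grant statements the
# catalog queries generate. Schema and group names are made-up examples.
group="reporting_ro_rw"
stmts=""
for s in sales_stage finance_mart; do
  stmts+="grant usage on schema ${s} to ${group};"$'\n'
  stmts+="grant select,insert,update,delete on all tables in schema ${s} to ${group};"$'\n'
done
printf '%s' "$stmts"
```

Run the generated statements as a superuser or the object owner; the ALTER DEFAULT PRIVILEGES variant additionally needs the owning user named in the statement.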


SQL script to change the ownership of schema, table ,procedure in AWS Redshift Database

 


1. Change the ownership of schemas:

select 'alter schema '||nspname||' owner to "<new_owner>";'
from pg_catalog.pg_namespace
where nspname not in ('public','pg_catalog','catalog_history','information_schema','dbainfo') and nspname not like 'pg_%';

2. Change the ownership of tables:

select 'alter table '||schemaname||'."'||tablename||'" owner to "<new_owner>";'
from pg_catalog.pg_tables
where schemaname not in ('public','pg_catalog','catalog_history','information_schema','dbainfo') and schemaname not like 'pg_%';


3. Change the ownership of procedures:

SELECT 'alter procedure '||n.nspname||'."'||p.proname||'" owner to "<new_owner>";'
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p ON p.pronamespace = n.oid
JOIN pg_catalog.pg_user b ON b.usesysid = p.proowner
WHERE nspname not in ('public','pg_catalog','catalog_history','information_schema','dbainfo') and nspname not like 'pg_%';

-- Add each procedure's argument list to the generated statement before running it;
-- ALTER PROCEDURE identifies a procedure by name plus signature.


AWS Redshift SQL scripts to find out session and Audit information

What is the best way to assess tables that need to be vacuumed or analyzed?

This query returns tables where more than 20% of rows are unsorted or statistics are more than 20% stale.

SELECT "database", "schema", "table", unsorted, stats_off
FROM svv_table_info
WHERE unsorted > 20
OR stats_off > 20;

How can I troubleshoot loading errors?

Selecting from stl_load_errors provides information about errors during loading and is helpful for troubleshooting problematic loads.

SELECT *
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 100;




How to look specifically for failed logins?

SELECT *
FROM stl_connection_log
WHERE event = 'authentication failure'
ORDER BY recordtime;


Showing successfully authenticated users with the number of successful authentications:

SELECT username, event, COUNT(*)
FROM stl_connection_log
WHERE event = 'authenticated'
GROUP BY 1, 2
ORDER BY 3 DESC;


Showing successfully authenticated users by hourly buckets:

SELECT DATE_PART(YEAR, recordtime) || '-' ||
       LPAD(DATE_PART(MONTH, recordtime),2,'0') || '-' ||
       LPAD(DATE_PART(DAY, recordtime),2,'0') || ' ' ||
       LPAD(DATE_PART(HOUR, recordtime),2,'0') AS hour_bucket, username, COUNT(*)
FROM stl_connection_log
WHERE event = 'authenticated'
GROUP BY 1, 2
ORDER BY 1, 2 DESC;



Showing a list of the connection drivers used by the Redshift users:

SELECT username, application_name, COUNT(*)
FROM stl_connection_log
WHERE application_name != ''
GROUP BY 1,2
ORDER BY 1,2;



Privilege violation logging & monitoring in AWS Redshift

Before such access controls are in place, you can see which queries pull data from these resources by querying STL_QUERY. Retrieving queries that access a specific object ('customers' here):

SELECT * FROM STL_QUERY
WHERE userid != 1
AND querytxt LIKE '%customers%'
ORDER BY query DESC
LIMIT 100;


SELECT username, dbname, recordtime
FROM stl_connection_log
WHERE event = 'authentication failure'
AND recordtime > '2022-07-08'
ORDER BY recordtime;



Get the disk-based queries information for the last 2 days:

SELECT q.query,
       q.endtime - q.starttime AS duration,
       SUM((bytes) / 1024 / 1024 / 1024) AS gigabytes,
       aborted,
       q.querytxt
FROM stl_query q
       JOIN svl_query_summary qs
         ON qs.query = q.query
WHERE qs.is_diskbased = 't'
       AND q.starttime BETWEEN SYSDATE - 2 AND SYSDATE
GROUP BY q.query, q.querytxt, duration, aborted
ORDER BY gigabytes DESC;




/* Query showing information about sessions with currently running queries */
SELECT s.process AS pid
       ,date_trunc('second',s.starttime) AS s_start
       ,datediff(minutes,s.starttime,getdate()) AS conn_mins
       ,trim(s.user_name) AS user
       ,trim(s.db_name) AS db
       ,date_trunc('second',i.starttime) AS q_start
       ,i.query
       ,trim(i.query) AS sql
FROM stv_sessions s
  LEFT JOIN stv_recents i
         ON s.process = i.pid
        AND i.status = 'Running'
WHERE s.user_name <> 'rdsdb'
ORDER BY 1;


/* Query shows EXPLAIN plans which flagged "missing statistics" on the underlying tables */
SELECT substring(trim(plannode),1,100) AS plannode
       ,COUNT(*)
FROM stl_explain
WHERE plannode LIKE '%missing statistics%'
AND plannode NOT LIKE '%redshift_auto_health_check_%'
GROUP BY plannode
ORDER BY 2 DESC;


/* query showing queries which are waiting on a WLM Query Slot */
SELECT w.query
       ,substring(q.querytxt,1,100) AS querytxt
       ,w.queue_start_time
       ,w.service_class AS class
       ,w.slot_count AS slots
       ,w.total_queue_time / 1000000 AS queue_seconds
       ,w.total_exec_time / 1000000 AS exec_seconds
       ,(w.total_queue_time + w.total_exec_time) / 1000000 AS total_seconds
FROM stl_wlm_query w
  LEFT JOIN stl_query q
         ON q.query = w.query
        AND q.userid = w.userid
WHERE w.queue_start_time >= dateadd(day,-7,CURRENT_DATE)
AND   w.total_queue_time > 0
-- and q.starttime >= dateadd(day, -7, current_date)
-- and ( querytxt like 'select%' or querytxt like 'SELECT%' )
ORDER BY w.total_queue_time DESC
         ,w.queue_start_time DESC
LIMIT 35;




Tuesday, December 16, 2025

Shell script to check and record the database size in Redshift Database

Here is a custom script used to check the Redshift DB size and record it every month. This is useful for tracking DB growth.


1. Create a table in the DB to store the DB size:

CREATE TABLE admin.rs_db_size_dev (
    db_name    VARCHAR(256) ENCODE zstd,
    db_size_gb NUMERIC(37,2) ENCODE az64,
    date       DATE ENCODE az64
)
DISTSTYLE AUTO;

2. Schedule the job in cron (here 00:02 on the 12th of every month):

2 0 12 * *  /redshiftadmin/aws/scripts/dev_db_size_chk.sh DB1_DW1_India >> /redshiftadmin/aws/audit/log/db_size_dev_india.log 2>&1

3. Deploy the script on the server:

#!/usr/bin/bash
##################################################################################################
## purpose     : script used to collect Redshift dev DB size details                             #
## Author      : Bala P                                                                          #
## Developed   : 10-july-2023 V1                                                                 #
##################################################################################################
#
export PGHOST=XXXXXXXXXXX.redshift.amazonaws.com
export PGPORT='5439'
export PGDATABASE=$1
export PGUSER=rsdbadmin
export PGPASSWORD='XXXXXXXXXXXX'
export wdir=/redshiftadmin/aws/audit/scripts/
export logdir=/redshiftadmin/aws/audit/log

query_result=$(psql -tA -c "INSERT INTO admin.rs_db_size_dev (db_name, db_size_gb, date)
SELECT dbase_name, total_gb AS db_size, CURRENT_DATE
FROM ( SELECT dbase_name, SUM(megabytes/1024) AS total_gb
       FROM admin.v_space_used_per_tbl GROUP BY dbase_name ) AS aggregated_data;")
#
if [[ -n "$query_result" ]]; then
       recipient="BALAS@abc.com"
       subject="DB size @ $PGDATABASE in Dev"
       body="DB size in dev cluster\n$query_result"
       echo -e "$body" | mailx -s "$subject" "$recipient"
fi
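The script's core pattern — run a query, then mail only when it returned rows — can be sketched standalone. The query result is stubbed below; in the real script it comes from `psql -tA -c "..."`:

```shell
# Minimal sketch of the notify-if-nonempty pattern used in the script above.
# query_result is stubbed with a sample row; the real script fills it via psql.
query_result="DB1_DW1_India|1801.00|2025-12-12"
if [[ -n "$query_result" ]]; then
  action="mail"   # the real script pipes the body to mailx here
else
  action="skip"
fi
echo "action=${action}"
```

`psql -tA` (tuples-only, unaligned) is what makes the emptiness test reliable: with no rows, the variable is truly empty and no mail is sent.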


4. Output of the script:

=# select * from admin.rs_db_size_dev;
    db_name    | db_size_gb |    date
---------------+------------+------------
 DB1_DW1_India |    1654.00 | 2024-10-12
 DB1_DW1_India |    1671.00 | 2024-11-12
 DB1_DW1_India |    1515.00 | 2024-12-12
 DB1_DW1_India |    1528.00 | 2025-01-12
 DB1_DW1_India |    1573.00 | 2025-02-12
 DB1_DW1_India |    1579.00 | 2025-03-12
 DB1_DW1_India |    1666.00 | 2025-04-12
 DB1_DW1_India |    1787.00 | 2025-05-12
 DB1_DW1_India |    1788.00 | 2025-06-12
 DB1_DW1_India |    1791.00 | 2025-07-12
 DB1_DW1_India |    1788.00 | 2025-08-12
 DB1_DW1_India |    1796.00 | 2025-09-12
 DB1_DW1_India |    1801.00 | 2025-10-12
 DB1_DW1_India |    1801.00 | 2025-11-12
 DB1_DW1_India |    1801.00 | 2025-12-12
 DB1_DW1_India |     249.00 | 2023-11-10
 DB1_DW1_India |     249.00 | 2023-11-10
 DB1_DW1_India |     249.00 | 2023-11-12
 DB1_DW1_India |     246.00 | 2023-12-12
 DB1_DW1_India |     166.00 | 2024-01-12
 DB1_DW1_India |     318.00 | 2024-02-12
 DB1_DW1_India |     236.00 | 2024-03-12
 DB1_DW1_India |     334.00 | 2024-04-12
 DB1_DW1_India |     772.00 | 2024-05-12
 DB1_DW1_India |     965.00 | 2024-06-12
 DB1_DW1_India |    1082.00 | 2024-07-12
 DB1_DW1_India |    1528.00 | 2024-08-12
 DB1_DW1_India |    1606.00 | 2024-09-12
(28 rows)




Snowflake Database Admin scripts



Snowflake DB admin commands/scripts

***************************************


To know the current_user


SELECT CURRENT_USER();


To know the current_Role


SELECT CURRENT_ROLE();


To know the current_DB


SELECT CURRENT_DATABASE();


To use another DB


use DWDB1


To list down the DB's


SHOW DATABASES;


select * from information_schema.databases;


To list the tables, views, users, and other objects


show tables;


SHOW WAREHOUSES;

SHOW USERS;

show views;


show functions;

show procedures;


SHOW SCHEMAs;


show roles;


Oracle database error ORA-38774: cannot disable media recovery - flashback database is enabled

Issue:

We got the Oracle database error ORA-38774: cannot disable media recovery - flashback database is enabled when we tried to turn off archive log mode.

Root cause:

Oracle will not allow archive log mode to be turned off until Flashback Database is disabled.


Solution:

Total System Global Area 8.7242E+10 bytes
Fixed Size                  5295200 bytes
Variable Size            9663677344 bytes
Database Buffers         7.7309E+10 bytes
Redo Buffers              263139328 bytes
Database mounted.
SQL> alter database noarchivelog;
alter database noarchivelog
*
ERROR at line 1:
ORA-38774: cannot disable media recovery - flashback database is enabled

SQL> select name,flashback_on from v$database;

NAME      FLASHBACK_ON
--------- ------------------
DB1       YES

SQL> alter database flashback off;

Database altered.

SQL> select name,flashback_on from v$database;

NAME      FLASHBACK_ON
--------- ------------------
DB1       NO

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
DB1       MOUNTED

SQL> alter database noarchivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /oracle/DB1/oraarch/DB1arch
Oldest online log sequence     361611
Current log sequence           361616
SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
DB1       READ WRITE


Tuesday, December 2, 2025

Shell script to find out super user list in AWS Redshift

#!/usr/bin/bash
################################################################################################
# purpose     : script used to collect PROD Redshift super user list for audit
# Author      : Bala P
# Developed   : 19-sep-2022 V1
################################################################################################

export PGHOST=XXXX.redshift.amazonaws.com
export PGPORT='5454'
export PGDATABASE=RS_DB1
export PGUSER=rsdbadmin

DBA=balamani@abc.com
export DBA

export PGPASSWORD='XXXXXXXXXX'
export wdir=/redshiftadmin/aws/audit/scripts/
export logdir=/redshiftadmin/aws/audit/log

psql -f $wdir/super_user_list.sql -o $logdir/super_user_list.sql_output.log

mailx -s "Super user list from Redshift PROD cluster" $DBA < $logdir/super_user_list.sql_output.log


************************************


super_user_list.sql

\qecho
select usename,usesuper FROM pg_catalog.pg_user_info where usesuper='true';

check_tbl_ownr_compl.sql

select schemaname,tablename,tableowner from pg_catalog.pg_tables where schemaname not in ('public','pg_catalog','catalog_history','information_schema','dbainfo') and tableowner not in ('rsdbadmin') and schemaname not like 'pg_%';

chk_tbl_owner_public.sql

select schemaname,tablename,tableowner from pg_catalog.pg_tables where schemaname='public';



************************************


shell script to find out long running session details in AWS Redshift database

Here is a shell script to monitor long-running session details in AWS Redshift; it can be scheduled via cron.


#!/usr/bin/bash
##################################################################################################
### purpose     : script used to collect Redshift long running session details                   #
### Author      : Bala P                                                                         #
### Developed   : 10-july-2023 V1                                                                #
##################################################################################################

export PGDATABASE=$1
export PGUSER=rsdbadmin
export PGHOST=XXXXXXXXXX.redshift.amazonaws.com
export PGPORT='5439'
export PGPASSWORD='XXXXXXXXXXXXXXX'
export wdir=/redshiftadmin/aws/audit/scripts/
export logdir=/redshiftadmin/aws/audit/log

query_result=$(psql -tA -c "select SPLIT_PART(SUBSTRING(user_name, 5), '@', 1) as user_name, DATE_TRUNC('second',starttime) as start_time, status, pid, duration/(1000000.0 * 60 * 60) AS exec_time_hours, query from stv_recents where user_name not in ('rdsdb') and duration/(1000000.0 * 60 * 60) > 3;")

if [[ -n "$query_result" ]]; then
       recipient="balamani@abc.com"
       subject="Queries with Execution Time > 3 hours @ $PGDATABASE in PROD"
       body="user_name|starttime|state|pid|exec_time_hours|query - queries with execution time greater than 3 hours in PROD:\n\n$query_result"
       echo -e "$body" | mailx -s "$subject" "$recipient"
fi




shell script for defragmentation in oracle Database Tables

Here is a shell script for the first step of de-fragmentation of a table in an Oracle database: it checks the table's eligibility for online redefinition with DBMS_REDEFINITION.CAN_REDEF_TABLE and mails the result.

Feed in the table name and schema name and run the script in the background.



#!/bin/ksh

export ORACLE_HOME=/oracle/TESTDB5/1924
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=TESTDB5

TMP_LOG=/oracle/scripts/logs/trans_table_redef.log
echo > $TMP_LOG

sqlplus -s /nolog <<EOF >> $TMP_LOG
whenever sqlerror exit 1
conn / as sysdba
SET SERVEROUTPUT ON;
ALTER SESSION FORCE PARALLEL DML PARALLEL 2;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 2;
exec DBMS_REDEFINITION.CAN_REDEF_TABLE('<schema_name>','<Table_name>', DBMS_REDEFINITION.CONS_USE_ROWID);
exit
EOF

# Check if SQL execution was successful
if [ $? -eq 0 ]; then
    SUBJECT="Table Redefinition Completed Successfully"
else
    SUBJECT="Table Redefinition Failed"
fi

# Send the log file via email
mailx -s "$SUBJECT" Balamani@abc.com < $TMP_LOG

shell script to build oracle database with easy interactive mode

Here is the full script, which gathers a few inputs, builds the Oracle database, configures the listener, tnsnames.ora and pfile/spfile, and converts the database to archive log mode.

#!/bin/sh
#########################################################################################################
# purpose       : Script used to build oracle DB, pass requested info                                   #
# Input needed  : DB_NAME, ORACLE_HOME, PORT#, DB CHARSET, sys/system/PDB pws, Redolog size in MB       #
# script will do: create DB, convert archive mode, setup listener & tns and start, add /etc/oratab, BCT #
# Developed by  : Bala P  3March2025  v2                                                                #
#########################################################################################################



# Prompt for database name and convert to uppercase
echo "Enter the database name:"
read DB_NAME
DB_NAME=$(echo $DB_NAME | tr '[:lower:]' '[:upper:]')

# Prompt the user for port number
read -p "Enter the port number: " PORT_NUMBER

# Prompt the user for Redolog size in MB
read -p "Enter the Redolog size in MB: " RLOG

# Prompt for Oracle DB Home
echo "Enter the Oracle DB Home full Path:"
read ORACLE_HOME

# Check that sqlplus exists under ORACLE_HOME
if [ ! -x "$ORACLE_HOME/bin/sqlplus" ]; then
  echo "sqlplus not found at $ORACLE_HOME/bin. Please check ORACLE_HOME."
  exit 1
else
  echo "Oracle_Home FOUND, proceeding with next steps"
fi
echo -e "\n \n"

# Prompt for DB CHARACTER SET
echo "Enter the DB CHARACTER SET"
read charset

# Prompt for passwords
echo "Enter new password for SYS:"
read -s sysPassword
echo "Enter new password for SYSTEM:"
read -s systemPassword
echo "Enter new password for PDBADMIN:"
read -s pdbAdminPassword

export ORACLE_HOME=$ORACLE_HOME
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_SID=$DB_NAME



# Create necessary directories
OLD_UMASK=`umask`
umask 0027
mkdir -p /oracle/${DB_NAME}
mkdir -p $ORACLE_HOME/dbs
mkdir -p /oracle/${DB_NAME}/admin/${DB_NAME}/adump
mkdir -p /oracle/${DB_NAME}/admin/${DB_NAME}/dpdump
mkdir -p /oracle/${DB_NAME}/admin/${DB_NAME}/pfile
mkdir -p /oracle/${DB_NAME}/audit
mkdir -p /oracle/${DB_NAME}/cfgtoollogs/dbca/${DB_NAME}
mkdir -p /oracle/${DB_NAME}/control1
mkdir -p /oracle/${DB_NAME}/control2
mkdir -p /oracle/${DB_NAME}/control3
mkdir -p /oracle/${DB_NAME}/mirrlogA
mkdir -p /oracle/${DB_NAME}/mirrlogB
mkdir -p /oracle/${DB_NAME}/origlogA
mkdir -p /oracle/${DB_NAME}/origlogB
mkdir -p /oracle/${DB_NAME}/data1
mkdir -p /oracle/${DB_NAME}/SystemTBS
mkdir -p /oracle/${DB_NAME}/UndoTBS
mkdir -p /oracle/${DB_NAME}/tempTBS
mkdir -p /oracle/${DB_NAME}/admin/scripts
umask ${OLD_UMASK}


# Create init<DB_NAME>.ora file
rm -f /oracle/${DB_NAME}/admin/scripts/init${DB_NAME}.ora

cat <<EOL > /oracle/${DB_NAME}/admin/scripts/init${DB_NAME}.ora
${DB_NAME}.__oracle_base='/oracle/${DB_NAME}'#ORACLE_BASE set from environment
${DB_NAME}.__db_cache_size=1451700
${DB_NAME}.__java_pool_size=16686
${DB_NAME}.__large_pool_size=16686
${DB_NAME}.__sga_target=1067918334
${DB_NAME}.__shared_io_pool_size=216920
${DB_NAME}.__shared_pool_size=2552992
${DB_NAME}.__data_transfer_cache_size=0
${DB_NAME}.__streams_pool_size=0
*.audit_file_dest='/oracle/${DB_NAME}/admin/${DB_NAME}/adump'
*.audit_trail='db'
*.compatible='19.0.0'
*.control_file_record_keep_time=30
*.control_files='/oracle/${DB_NAME}/control1/control01.ctl','/oracle/${DB_NAME}/control2/control02.ctl','/oracle/${DB_NAME}/control3/control03.ctl'
*.db_block_size=8192
*.db_domain='abc.domain.com'
*.db_files=200
*.db_name='${DB_NAME}'
*.diagnostic_dest='/oracle/${DB_NAME}'
*.filesystemio_options='SETALL'
*.log_archive_dest='/oracle/${DB_NAME}/oraarch/${DB_NAME}arch'
*.log_archive_format='%t_%s_%r.arc'
*.log_checkpoints_to_alert=TRUE
*.max_dump_file_size='20000'
*.open_cursors=3000
*.parallel_execution_message_size=16384
*.parallel_max_servers=20
*.parallel_threads_per_cpu=1
*.pga_aggregate_target=675m
*.processes=500
*.replication_dependency_tracking=FALSE
*.sessions=1000
*.sga_target=1g
*.undo_management='AUTO'
*.undo_retention=21600
*.undo_tablespace='UNDOTBS'
_disable_directory_link_check=TRUE
CONTROL_MANAGEMENT_PACK_ACCESS = "NONE"
EOL


# Create Oracle password file
rm -f $ORACLE_HOME/dbs/orapw${DB_NAME}
$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/orapw${DB_NAME} force=y format=12


# Run SQL scripts for DB creation and configuration

export ORACLE_HOME=$ORACLE_HOME

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export ORACLE_SID=$DB_NAME

echo -e "\n \n creating the DB...."

sqlplus /nolog <<EOF

CONNECT SYS/${sysPassword} AS SYSDBA

conn / as sysdba

SET VERIFY OFF

SPOOL /oracle/${DB_NAME}/admin/scripts/CreateDB.log

STARTUP NOMOUNT PFILE='/oracle/${DB_NAME}/admin/scripts/init${DB_NAME}.ora';

CREATE DATABASE "${DB_NAME}"

MAXINSTANCES 8

MAXLOGHISTORY 1

MAXLOGFILES 16

MAXLOGMEMBERS 3

MAXDATAFILES 1000

DATAFILE '/oracle/${DB_NAME}/SystemTBS/system01.dbf' SIZE 2072M REUSE AUTOEXTEND ON NEXT  10240K MAXSIZE UNLIMITED

EXTENT MANAGEMENT LOCAL

SYSAUX DATAFILE '/oracle/${DB_NAME}/SystemTBS/sysaux01.dbf' SIZE 2192M REUSE AUTOEXTEND ON NEXT  10240K MAXSIZE UNLIMITED

SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE '/oracle/${DB_NAME}/tempTBS/temp01.dbf' SIZE 1024M REUSE AUTOEXTEND ON NEXT  640K MAXSIZE UNLIMITED

SMALLFILE UNDO TABLESPACE "UNDOTBS" DATAFILE  '/oracle/${DB_NAME}/UndoTBS/undotbs01.dbf' SIZE 1024M REUSE AUTOEXTEND ON NEXT  5120K MAXSIZE UNLIMITED

CHARACTER SET "${charset}"

NATIONAL CHARACTER SET AL16UTF16

LOGFILE

GROUP 1(

    '/oracle/${DB_NAME}/origlogA/redo_g1m1.log',

    '/oracle/${DB_NAME}/mirrlogA/redo_g1m2.log'

  ) SIZE ${RLOG}M BLOCKSIZE 512,

  GROUP 2 (

    '/oracle/${DB_NAME}/origlogB/redo_g2m1.log',

    '/oracle/${DB_NAME}/mirrlogB/redo_g2m2.log'

  ) SIZE ${RLOG}M BLOCKSIZE 512,

  GROUP 3 (

    '/oracle/${DB_NAME}/origlogB/redo_g3m1.log',

    '/oracle/${DB_NAME}/mirrlogB/redo_g3m2.log'

  ) SIZE ${RLOG}M BLOCKSIZE 512,

  GROUP 4 (

    '/oracle/${DB_NAME}/origlogA/redo_g4m1.log',

    '/oracle/${DB_NAME}/mirrlogA/redo_g4m2.log'

  ) SIZE ${RLOG}M BLOCKSIZE 512

USER SYS IDENTIFIED BY "${sysPassword}" USER SYSTEM IDENTIFIED BY "${systemPassword}";


SPOOL OFF

-- Creating the needed tablespaces

conn / as sysdba

SPOOL /oracle/${DB_NAME}/admin/scripts/CreateDBFiles.log



CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE  '/oracle/${DB_NAME}/data1/users01.dbf' SIZE 500M REUSE AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL  SEGMENT SPACE MANAGEMENT  AUTO;

ALTER DATABASE DEFAULT TABLESPACE "USERS";

SPOOL OFF


-- Creating the data dictionary and standard PL/SQL packages

conn / as sysdba

SPOOL /oracle/${DB_NAME}/admin/scripts/CreateDBCatalog.log

@$ORACLE_HOME/rdbms/admin/catalog.sql;

@$ORACLE_HOME/rdbms/admin/catproc.sql;

@$ORACLE_HOME/rdbms/admin/catoctk.sql;


@$ORACLE_HOME/rdbms/admin/owminst.plb;

CONNECT SYSTEM/${systemPassword}

@$ORACLE_HOME/sqlplus/admin/pupbld.sql;

CONNECT SYS/${sysPassword} AS SYSDBA

@$ORACLE_HOME/sqlplus/admin/pupdel.sql;

CONNECT SYSTEM/${systemPassword};

SPOOL /oracle/${DB_NAME}/admin/scripts/sqlPlusHelp.log

@$ORACLE_HOME/sqlplus/admin/help/hlpbld.sql helpus.sql;

SPOOL OFF

conn / as sysdba


SPOOL /oracle/${DB_NAME}/admin/scripts/JServer.log

@$ORACLE_HOME/javavm/install/initjvm.sql;

@$ORACLE_HOME/xdk/admin/initxml.sql;

@$ORACLE_HOME/xdk/admin/xmlja.sql;

@$ORACLE_HOME/rdbms/admin/catjava.sql;

@$ORACLE_HOME/rdbms/admin/catxdbj.sql;

SPOOL OFF


conn / as sysdba

SPOOL /oracle/${DB_NAME}/admin/scripts/context.log

@$ORACLE_HOME/ctx/admin/catctx.sql Xbkfsdcdf1ggh_123 SYSAUX TEMP LOCK; --no path

ALTER USER CTXSYS ACCOUNT UNLOCK IDENTIFIED BY "CTXSYS";

CONNECT CTXSYS/CTXSYS

@$ORACLE_HOME/ctx/admin/defaults/dr0defin.sql "AMERICAN";

CONNECT SYS/${sysPassword} AS SYSDBA

conn / as sysdba

ALTER USER CTXSYS PASSWORD EXPIRE ACCOUNT LOCK;

@$ORACLE_HOME/rdbms/admin/dbmsxdbt.sql;

SPOOL OFF

conn / as sysdba

SPOOL /oracle/${DB_NAME}/admin/scripts/ordinst.log

@$ORACLE_HOME/ord/admin/ordinst.sql SYSAUX SYSAUX;

SPOOL OFF

conn / as sysdba

SPOOL /oracle/${DB_NAME}/admin/scripts/interMedia.log

@$ORACLE_HOME/ord/im/admin/iminst.sql;

SPOOL OFF

conn / as sysdba

SPOOL /oracle/${DB_NAME}/admin/scripts/cwmlite.log

@$ORACLE_HOME/olap/admin/olap.sql SYSAUX TEMP;

SPOOL OFF

conn / as sysdba

SPOOL /oracle/${DB_NAME}/admin/scripts/spatial.log

@$ORACLE_HOME/md/admin/mdinst.sql;

SPOOL OFF

conn / as sysdba

SPOOL /oracle/${DB_NAME}/admin/scripts/apex.log APPEND

@$ORACLE_HOME/apex/catapx.sql Xbkfsdcdf1ggh_123 SYSAUX SYSAUX TEMP /i/ NONE;

SPOOL OFF

conn / as sysdba

SPOOL /oracle/${DB_NAME}/admin/scripts/postDBCreation.log

host $ORACLE_HOME/OPatch/datapatch -skip_upgrade_check -db ${DB_NAME};

CREATE SPFILE='$ORACLE_HOME/dbs/spfile${DB_NAME}.ora' FROM PFILE='/oracle/${DB_NAME}/admin/scripts/init${DB_NAME}.ora';

SELECT 'utlrp_begin: ' || TO_CHAR(SYSDATE, 'HH:MI:SS') FROM DUAL;

@$ORACLE_HOME/rdbms/admin/utlrp.sql;

SELECT 'utlrp_end: ' || TO_CHAR(SYSDATE, 'HH:MI:SS') FROM DUAL;

SELECT comp_id, status FROM dba_registry;

SHUTDOWN IMMEDIATE;

STARTUP;

SPOOL OFF

conn / as sysdba




SPOOL /oracle/${DB_NAME}/admin/scripts/lockAccount.log

BEGIN

FOR item IN ( SELECT USERNAME, AUTHENTICATION_TYPE FROM DBA_USERS WHERE ACCOUNT_STATUS IN ('OPEN', 'LOCKED', 'EXPIRED') AND USERNAME NOT IN (

'SYS','SYSTEM') )

LOOP

IF item.AUTHENTICATION_TYPE='PASSWORD' THEN

  DBMS_OUTPUT.PUT_LINE('Locking and Expiring: ' || item.USERNAME);

  EXECUTE IMMEDIATE 'ALTER USER ' ||

         SYS.DBMS_ASSERT.ENQUOTE_NAME(

         SYS.DBMS_ASSERT.SCHEMA_NAME(

         item.USERNAME),FALSE) || ' PASSWORD EXPIRE ACCOUNT LOCK' ;

ELSE

  DBMS_OUTPUT.PUT_LINE('Locking: ' || item.USERNAME);

  EXECUTE IMMEDIATE 'ALTER USER ' ||

         SYS.DBMS_ASSERT.ENQUOTE_NAME(

         SYS.DBMS_ASSERT.SCHEMA_NAME(

         item.USERNAME),FALSE) || ' ACCOUNT LOCK' ;

END IF;

END LOOP;

END;

/

SPOOL OFF

EOF

# Set ORACLE_HOME

export ORACLE_HOME=$ORACLE_HOME


# Get the current hostname

SERVER_NAME=$(hostname)


# Define paths

TNS_PATH="$ORACLE_HOME/network/admin/tnsnames.ora"

LISTENER_PATH="$ORACLE_HOME/network/admin/listener.ora"

TNS_BACKUP_PATH="${TNS_PATH}_backup_$(date +%Y%m%d_%H%M%S)"

LISTENER_BACKUP_PATH="${LISTENER_PATH}_backup_$(date +%Y%m%d_%H%M%S)"


# Backup tnsnames.ora

if [ -f "$TNS_PATH" ]; then

  cp "$TNS_PATH" "$TNS_BACKUP_PATH"

  echo "Backup of tnsnames.ora created at $TNS_BACKUP_PATH"

fi
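The backup-if-exists block above is repeated verbatim for listener.ora and sqlnet.ora further down. A small helper removes the duplication; this is a sketch, demoed against a throwaway file rather than the real network files:

```shell
# Sketch: one helper replaces the three copy-pasted backup blocks.
backup_file() {
  local f=$1
  if [ -f "$f" ]; then
    local b="${f}_backup_$(date +%Y%m%d_%H%M%S)"
    cp "$f" "$b"
    echo "Backup of $(basename "$f") created at $b"
  fi
}

# Demo against a temp file (a real run would pass "$TNS_PATH" etc.)
echo demo > /tmp/tnsnames_demo.ora
backup_file /tmp/tnsnames_demo.ora
```

The same call then covers tnsnames.ora, listener.ora, and sqlnet.ora.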


# Append to tnsnames.ora

cat <<EOF >> "$TNS_PATH"


${DB_NAME}.WORLD=

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS =

        (PROTOCOL = TCP)

        (HOST = ${SERVER_NAME})

        (PORT = ${PORT_NUMBER})

      )

    )

    (CONNECT_DATA =

      (SID = ${DB_NAME})

      (GLOBAL_NAME = ${DB_NAME}.WORLD)

    )

  )

#for PROD

RMANCATPRD.WORLD=

  (DESCRIPTION =

    (SDU = 32768)

    (ADDRESS_LIST =

        (ADDRESS =

          (PROTOCOL = TCP)

          (HOST = <catlog_server>)

          (PORT = 1591)

        )

    )

    (CONNECT_DATA =

       (SID = KRMANCATPRD)

       (GLOBAL_NAME = RMANCATPRD.WORLD)

    )

  )

#prod end

EOF


echo "New TNS entry for ${DB_NAME} added to $TNS_PATH"
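A cheap sanity check on the appended entry: unbalanced parentheses in tnsnames.ora are a common cause of name-resolution failures. A sketch, run here against a fabricated entry rather than the live file:

```shell
# Count '(' and ')' in a tnsnames.ora fragment; they must match.
balanced_parens() {
  local open close
  open=$(grep -o '(' "$1" | wc -l)
  close=$(grep -o ')' "$1" | wc -l)
  [ "$open" -eq "$close" ]
}

# Demo entry (a real run would check "$TNS_PATH")
cat > /tmp/tns_entry_demo.ora <<'EOF'
DEMO.WORLD=
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = demo-host)(PORT = 1521))
    (CONNECT_DATA = (SID = DEMO))
  )
EOF
balanced_parens /tmp/tns_entry_demo.ora && echo "parentheses balanced"
```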


# Backup listener.ora

if [ -f "$LISTENER_PATH" ]; then

  cp "$LISTENER_PATH" "$LISTENER_BACKUP_PATH"

  echo "Backup of listener.ora created at $LISTENER_BACKUP_PATH"

fi


# Create listener.ora

cat <<EOF > "$LISTENER_PATH"

LISTENER_${DB_NAME} =

  (ADDRESS_LIST =

    (ADDRESS =

      (PROTOCOL = IPC)

      (KEY = ${DB_NAME}.WORLD)

    )

    (ADDRESS =

      (PROTOCOL = IPC)

      (KEY = ${DB_NAME})

    )

    (ADDRESS =

      (PROTOCOL = TCP)

      (HOST = ${SERVER_NAME})

      (PORT = ${PORT_NUMBER})

    )

  )

STARTUP_WAIT_TIME_LISTENER_${DB_NAME} = 0

CONNECT_TIMEOUT_LISTENER_${DB_NAME} = 10

TRACE_LEVEL_LISTENER_${DB_NAME} = OFF

SID_LIST_LISTENER_${DB_NAME} =

  (SID_LIST =

    (SID_DESC =

      (SID_NAME = ${DB_NAME})

      (ORACLE_HOME = $ORACLE_HOME)

    )

  )

EOF

echo "New listener configuration for ${DB_NAME} written to $LISTENER_PATH"


# Define paths for sqlnet.ora

SQLNET_PATH="$ORACLE_HOME/network/admin/sqlnet.ora"

SQLNET_BACKUP_PATH="${SQLNET_PATH}_backup_$(date +%Y%m%d_%H%M%S)"


# Backup sqlnet.ora

if [ -f "$SQLNET_PATH" ]; then

  cp "$SQLNET_PATH" "$SQLNET_BACKUP_PATH"

  echo "Backup of sqlnet.ora created at $SQLNET_BACKUP_PATH"

fi


# Append to sqlnet.ora

cat <<EOF >> "$SQLNET_PATH"


AUTOMATIC_IPC = ON

TRACE_LEVEL_CLIENT = OFF

NAMES.DEFAULT_DOMAIN = WORLD

NAME.DEFAULT_ZONE = WORLD

SQLNET.EXPIRE_TIME = 0

TCP.NODELAY=YES


EOF



# Start the listener

export ORACLE_HOME=$ORACLE_HOME

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export ORACLE_SID=$DB_NAME


lsnrctl start LISTENER_${DB_NAME}


TNSPING_LOG="tnsping_${DB_NAME}.log"


# Run tnsping and log the output

tnsping ${DB_NAME}.WORLD > "$TNSPING_LOG" 2>&1


# Check tnsping result

if grep -q "OK" "$TNSPING_LOG"; then

  echo "TNSPING successful: This is good"

else

  echo "TNSPING failed: Please check the configuration"

fi
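tnsping prints "OK (N msec)" as its last line on success, so grepping the whole log for "OK" can in principle match unrelated text. A slightly stricter check, sketched here with a fabricated log in place of real tnsping output:

```shell
# Succeeds only if the last line of the log starts with "OK".
check_tnsping_log() {
  tail -n 1 "$1" | grep -q '^OK'
}

# Demo with fabricated output (a real run would pass "$TNSPING_LOG")
printf 'Attempting to contact (DESCRIPTION=...)\nOK (10 msec)\n' > /tmp/tns_demo.log
if check_tnsping_log /tmp/tns_demo.log; then
  echo "TNSPING successful: This is good"
fi
```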



# Check logs for errors or warnings

LOG_FILES="/oracle/${DB_NAME}/admin/scripts/*.log"

OUTPUT_FILE="/oracle/${DB_NAME}/admin/scripts/error_check.log"

ERRORS_FOUND=false


> "$OUTPUT_FILE"  # Clear previous output


for file in $LOG_FILES

do

  if grep -iq "ORA-" "$file" ; then

    ERRORS_FOUND=true

    ERR_FILE="${file}_err"


    # Extract relevant lines and save to .log_err file

    grep -iE "ORA-" "$file" > "$ERR_FILE"


    echo "Issues found in: $(basename "$file") $(basename "$ERR_FILE")" >> "$OUTPUT_FILE"

  fi

done


if [ "$ERRORS_FOUND" = false ]; then

  echo "Everything is good with DB creation." > "$OUTPUT_FILE"


fi
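Catalog-script spool files often contain ORA- lines that are expected noise on a fresh build; filtering them through a site-specific allow-list keeps the error report focused. A sketch with a hypothetical allow-list and fabricated demo logs, to be tuned per environment:

```shell
# Hypothetical allow-list of benign codes; adjust to your environment.
IGNORE_ORA='ORA-00942|ORA-01432'

# Fabricated demo logs (a real run would loop over $LOG_FILES)
mkdir -p /tmp/logdemo
printf 'Grant succeeded.\nORA-00942: table or view does not exist\n' > /tmp/logdemo/a.log
printf 'ORA-01031: insufficient privileges\n' > /tmp/logdemo/b.log

for f in /tmp/logdemo/*.log; do
  if grep 'ORA-' "$f" | grep -vE "$IGNORE_ORA" | grep -q .; then
    echo "Issues found in: $(basename "$f")"
  fi
done
```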


# Backup /etc/oratab with timestamp

timestamp=$(date +%Y%m%d_%H%M%S)

cp /etc/oratab /oracle/$DB_NAME/admin/scripts/oratab_bak_$timestamp


# Append entry to /etc/oratab

echo "${DB_NAME}:${ORACLE_HOME}:Y" >> /etc/oratab


echo "Entry added to /etc/oratab and backup created as /oracle/$DB_NAME/admin/scripts/oratab_bak_$timestamp"
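Appending blindly means a re-run adds a duplicate oratab line. A grep-guarded append keeps the file idempotent; this sketch works on a temp copy, never /etc/oratab itself:

```shell
ORATAB=/tmp/oratab_demo          # stand-in for /etc/oratab
DB_NAME=DEMO
ORACLE_HOME=/oracle/DEMO/19c

# Append only if no entry for this SID exists yet.
add_oratab_entry() {
  grep -q "^${DB_NAME}:" "$ORATAB" 2>/dev/null \
    || echo "${DB_NAME}:${ORACLE_HOME}:Y" >> "$ORATAB"
}

: > "$ORATAB"
add_oratab_entry
add_oratab_entry                 # second call is a no-op
echo "entries: $(grep -c "^${DB_NAME}:" "$ORATAB")"   # prints entries: 1
```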


# Set environment variables arch mode conversion

export ORACLE_HOME=$ORACLE_HOME

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export ORACLE_SID=$DB_NAME


# Check if DB is already in ARCHIVELOG mode

archive_mode=$(sqlplus -s / as sysdba <<'EOF'

SET HEADING OFF

SET FEEDBACK OFF

SET PAGESIZE 0

SELECT log_mode FROM v$database;

EXIT

EOF

)


archive_mode=$(echo "$archive_mode" | xargs)  # Trim whitespace
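xargs works here as a trim, but it errors out on stray quote characters in the captured output (sqlplus error text can contain them). Pure parameter expansion is a safer trim; a sketch:

```shell
# Trim leading/trailing whitespace without spawning xargs.
trim() {
  local s=$1
  s=${s#"${s%%[![:space:]]*}"}   # drop leading whitespace
  s=${s%"${s##*[![:space:]]}"}   # drop trailing whitespace
  printf '%s' "$s"
}

raw='  ARCHIVELOG
'
mode=$(trim "$raw")
echo "mode=<$mode>"              # prints mode=<ARCHIVELOG>
```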


if [ "$archive_mode" == "ARCHIVELOG" ]; then

  echo "Database is already in ARCHIVELOG mode. Skipping conversion."

else

  echo "Converting database to ARCHIVELOG mode..."


  sqlplus / as sysdba <<EOF

  SPOOL convert_db_archive_mode_bct.log


  SHOW PARAMETER spfile;

  CREATE PFILE='$ORACLE_HOME/pfile_before_arch.ora' FROM SPFILE;

  SHUTDOWN IMMEDIATE;

  STARTUP MOUNT;

  ALTER DATABASE ARCHIVELOG;

  ALTER DATABASE OPEN;

  ARCHIVE LOG LIST;

  ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/oracle/${DB_NAME}/SystemTBS/${DB_NAME}_block_change.dbf';



  SPOOL OFF

  EXIT

EOF


fi



echo "Exporting DB variables for the final check"


export ORACLE_HOME=$ORACLE_HOME

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export ORACLE_SID=$DB_NAME


EMAIL="BALAmani@abc.com"


# Run SQL to get the DB open mode

DB_STATUS=$(sqlplus -s / as sysdba <<'EOF'

SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF

SELECT open_mode FROM v$database;

EXIT;

EOF

)


# Check if SQL*Plus ran successfully

if [[ $? -ne 0 ]]; then

    echo "SQL*Plus execution failed."

    exit 1

fi


# Trim whitespace

DB_STATUS=$(echo "$DB_STATUS" | xargs)


# Log the status

echo "Database status returned: $DB_STATUS"


# Check if status is "READ WRITE"

if [[ "$DB_STATUS" == "READ WRITE" ]]; then

    echo "DB creation completed successfully." | mailx -s "DB $DB_NAME build completed, proceed with next steps" "$EMAIL"

else

    echo "Database is not in READ WRITE mode. Current status: $DB_STATUS"

    exit 1

fi


exit 0