Wednesday, April 17, 2024

PL/SQL script to generate ADD DATAFILE statements for an Oracle database tablespace

We had a scenario where we needed to add 100-plus datafiles to a tablespace, and creating each statement manually was not practical, so I wrote a PL/SQL script that generates the ADD DATAFILE statements for any number of datafiles.

This really helped us.

SET SERVEROUTPUT ON  -- required so DBMS_OUTPUT is displayed in SQL*Plus / SQL Developer

DECLARE
    v_tablespace_name  VARCHAR2(30) := 'USERS';             -- Specify your tablespace name (USER is a reserved word)
    v_datafile_prefix  VARCHAR2(50) := '/path/to/datafile'; -- Specify the base path for datafiles
    v_datafile_size_mb NUMBER       := 100;                 -- Specify the size of each datafile in MB
    v_num_files        NUMBER       := 100;                 -- Specify the number of datafiles to add

    v_sql VARCHAR2(1000);
BEGIN
    FOR i IN 1..v_num_files LOOP
        -- Generate the ALTER TABLESPACE ... ADD DATAFILE statement.
        -- Note: NEXT/MAXSIZE are only valid with AUTOEXTEND ON,
        -- so fixed-size files are created with AUTOEXTEND OFF alone.
        v_sql := 'ALTER TABLESPACE ' || v_tablespace_name ||
                 ' ADD DATAFILE ''' || v_datafile_prefix || i || '.dbf''' ||
                 ' SIZE ' || v_datafile_size_mb || 'M AUTOEXTEND OFF;';

        -- Output the SQL statement
        DBMS_OUTPUT.PUT_LINE(v_sql);
    END LOOP;
END;
/
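
With the placeholder values above, the first generated statement looks like this (spool the output to a .sql file and run it to add the files):

ALTER TABLESPACE USERS ADD DATAFILE '/path/to/datafile1.dbf' SIZE 100M AUTOEXTEND OFF;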


Friday, December 1, 2023

ORA-00800: soft external error, arguments: [Set Priority Failed], [VKTM], [Check traces and OS configuration],

Issue:

ORA-00800: soft external error, arguments: [Set Priority Failed], [VKTM], [Check traces and OS configuration], [Check Oracle document and MOS notes], [] reported on a 64-bit Oracle 19.16.0.0 database.


Root cause:

This looks like a permission issue on the oradism binary (located under $ORACLE_HOME/bin).


Solution:

1) Check whether the hidden parameter _highest_priority_processes is set to VKTM at the DB level; if not, set it (underscore parameters must be double-quoted):

alter system set "_highest_priority_processes"='VKTM' scope=spfile;

2) Check oradism under $ORACLE_HOME/bin and make sure it has the setuid bit set (the "s" in the owner permissions); if not, set it and restart the DB.

cd $ORACLE_HOME/bin; ls -lrt oradism   (check for the setuid bit)

-rwxr-x--- 1 oracle oinstall 147848 Apr 17 2019 oradism >>>>>>>>>>>>>>>>>> wrong one (no setuid, owned by oracle)
-rwsr-x--- 1 root oinstall 147848 Apr 17 2019 oradism >>>>>>>>>>>>>>>>>>>> Correct one
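
If the permissions are wrong, the fix can be applied as the root user (the chmod u+s step and the root:oinstall ownership come from the November 2022 post below):

# cd $ORACLE_HOME/bin
# chown root:oinstall oradism
# chmod u+s oradism

Then restart the database so VKTM can be started with the elevated priority.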

Tuesday, July 18, 2023

Script to find which users have CREATE privileges on an AWS Redshift database

Script to find which users have CREATE privileges on schemas in an AWS Redshift database:


SELECT u.usename AS username,
       nsp.nspname AS schema_name,
       has_schema_privilege(u.usename, nsp.nspname, 'CREATE') AS has_create_privilege
FROM pg_user u
CROSS JOIN pg_namespace nsp
WHERE nsp.nspname NOT LIKE 'pg_%'
  AND nsp.nspname NOT IN ('information_schema', 'public')
  AND u.usename NOT IN ('admin')
  AND has_schema_privilege(u.usename, nsp.nspname, 'CREATE') = true
ORDER BY u.usename, nsp.nspname;
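
If a user flagged by this script should not be able to create objects, the privilege can be revoked with a statement along these lines (a sketch; <schema_name> and <username> are placeholders):

revoke create on schema <schema_name> from <username>;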

Tuesday, January 3, 2023

How to migrate data between AWS Redshift clusters? (steps to configure a datashare in an AWS Redshift cluster)

Prerequisite:

******************

Get the cluster namespace of the source and target clusters.



[source_cluster_name_space]==>source_DB


[target_cluster_name_space]==>Target_DB
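
On each cluster, the namespace can be retrieved with the current_namespace context function (run this on both clusters and substitute the values above):

select current_namespace;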


Source cluster (producer)

**********************

create datashare Data_share_copy_source_2_target publicaccessible=false; -- create the datashare

alter datashare Data_share_copy_source_2_target add schema schema_name; -- add the schema whose tables will be migrated

alter datashare Data_share_copy_source_2_target set includenew=true for schema schema_name; -- include future objects

alter datashare Data_share_copy_source_2_target add all tables in schema schema_name; -- include current objects

grant usage on datashare Data_share_copy_source_2_target to namespace '[target_cluster_name_space]';




Target cluster (consumer)

********************

-- create a local database from the datashare to import the data into the current/target cluster

create database import_DS_target from datashare Data_share_copy_source_2_target of namespace '[source_cluster_name_space]';

-- copy the data from the imported datashare database to the target DB tables

Move data from source to target:

***************************

Create an empty table1 with the same structure in the target DB, then insert the data. Note that the consumer queries the imported database (import_DS_target), not the datashare name:

insert into schema_name.table1 select * from import_DS_target.schema_name.table1;
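
If the target table does not exist yet, a CTAS from the imported database is a possible shortcut (a sketch; assumes cross-database queries are supported on the consumer cluster):

create table schema_name.table1 as select * from import_DS_target.schema_name.table1;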

Monday, November 14, 2022

ORA-00800 and ORA-700 in Oracle database 19.16.0.0

 Issue:

We recently built a new Oracle database on version 19.16.0.0, and the alert log reported the two errors below.


Errors in file /oracle/DB_name/diag/rdbms/DB_name/DB_name/trace/DB_name_vktm_60713.trc (incident=54058):
ORA-00800: soft external error, arguments: [Set Priority Failed], [VKTM], [Check traces and OS configuration], [Check Oracle document and MOS notes], [] (Doc ID 2718971.1)
Incident details in: /oracle/DB_name/diag/rdbms/DB_name/DB_name/incident/incdir_54058/DB_name_vktm_60713_i54058.trc
2022-11-09T12:21:15.626799+00:00
Error attempting to elevate VKTM's priority: no further priority changes will be attempted for this process
VKTM started with pid=5, OS id=60713

Root cause and solution:

ORA-00800: soft external error, arguments: [Set Priority Failed], [VKTM] (Doc ID 2718971.1)

In the DB_name DB system, the $ORACLE_HOME/bin/oradism file permissions and ownership were not as per the Oracle recommendation. It should be owned by root:oinstall with the setuid bit set. We will ask the system admin to fix the ownership and permissions.

$ ls -latr $ORACLE_HOME/bin/oradism
-rwsr-x--- 1 root oinstall 147848 Apr 17 2019 oradism >>>>>>>>>>>>>>>>>>>>>>>> Correct one

Run the following as the root user:

# chown root:oinstall oradism
# chmod u+s oradism

************************************************************


Issue:

PGA_AGGREGATE_TARGET specified is high

Errors in file /oracle/DB_name/diag/rdbms/DB_name/DB_name/trace/DB_name_ora_60307.trc (incident=48015):
ORA-00700: soft internal error, arguments: [pga physmem limit], [3221225472], [3033848422], [], [], [], [], [], [], [], [], []
Incident details in: /oracle/DB_name/diag/rdbms/DB_name/DB_name/incident/incdir_48015/DB_name_ora_60307_i48015.trc
LICENSE_MAX_SESSION = 0


Oracle recommendation:

ORA-700 [sms physmem limit] / [pga physmem limit] Errors while Starting the Database (Doc ID 2610312.1)


The issue is caused by the overall memory configuration, i.e. SGA_TARGET plus the PGA limit is greater than the system's memory limit. Here SGA_TARGET plus allocated PGA memory is around 92% of total physical memory. We changed the pga_aggregate_target value as per the recommendation; see the details below for reference. We will check the alert log after a DB bounce to confirm the alerts have stopped.


/oracle/DB_name/diag/rdbms/DB_name/DB_name/incident/incdir_48015/DB_name_ora_60307_i48015.trc
----- Beginning of Customized Incident Dump(s) -----

WARNING: PGA_AGGREGATE_TARGET (3072 MB) is too high for
         the amount of physical memory (15531 MB) and
         SGA size (8192 MB) - it should be less than 2893 MB.
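
Per the warning, the new value has to stay below 2893 MB. A hedged example of the change (2800M is an illustrative value, not from the original incident); pga_aggregate_target is dynamic, so scope=both works:

alter system set pga_aggregate_target=2800M scope=both;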

Rule of thumb to set PGA_AGGREGATE_TARGET (Doc ID 2610312.1):

Total memory = 100%
Reserved for OS = 20%

Out of the remaining 80%:
PGA_AGGREGATE_TARGET = 50% of 80% of total memory
SGA_TARGET = 50% of 80% of total memory
***********************************************************

An alternative split, reserving 15% for the OS:
PGA_AGGREGATE_TARGET = 25% of 85% of total memory
PGA_AGGREGATE_LIMIT = 50% of 85% of total memory
SGA_TARGET = 50% of 85% of total memory

***********************************************************
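
A quick worked example for the 15531 MB host in the warning above, using the 85% split (values rounded):

usable memory        = 15531 MB * 0.85 ≈ 13200 MB
PGA_AGGREGATE_TARGET = 13200 MB * 0.25 ≈ 3300 MB (about 3 GB)
PGA_AGGREGATE_LIMIT  = 13200 MB * 0.50 ≈ 6600 MB (about 6 GB)
SGA_TARGET           = 13200 MB * 0.50 ≈ 6600 MB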

Monday, November 7, 2022

Script to copy data from an S3 bucket to Redshift

Script to copy data from an S3 bucket to Redshift using access keys:

copy db_name.schema_name.tabl2 from 's3://s3_bucket/tabl2.csv'
credentials 'aws_access_key_id=123456789;aws_secret_access_key=987654321;token=12345678898'
CSV IGNOREHEADER 1 REGION 'us-east-1';


Script to copy data from an S3 bucket to Redshift using an IAM role (ARN):

copy db_name.schema_name.table8 from 's3://s3_bucket/table8.csv'
iam_role 'arn:aws:iam::123456789:role/S3_to_Redshift_copy_role'
CSV IGNOREHEADER 1;
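
If a COPY fails, the rejected rows and reasons are recorded in the stl_load_errors system table; a quick check (a sketch):

select starttime, filename, line_number, err_reason
from stl_load_errors
order by starttime desc
limit 10;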

How to change session and user priority in a Redshift database?

User priority change:


The new priority is assigned to all queries issued by user_name. This argument must be a string with the value CRITICAL, HIGHEST, HIGH, NORMAL, LOW, LOWEST, or RESET. Only superusers can change the priority to CRITICAL.

Changing the priority to RESET removes the priority setting for user_name.


select change_user_priority('<username>','<Priority>');


select change_user_priority('user@abc.com','HIGHEST');



Session priority change:


CHANGE_SESSION_PRIORITY enables superusers to immediately change the priority of any session in the system. Only one session, user, or query can run with the priority CRITICAL.

pg_backend_pid() returns the process identifier of the server process handling the current session:

select pg_backend_pid();

select change_session_priority(30311, 'Lowest');

               

 change_session_priority
---------------------------------------------------------
 Succeeded to change session priority. Changed session (pid:30311) priority to lowest.
(1 row)
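
To change the priority of a different session, its pid is needed; one way to look it up is the stv_sessions system table (a sketch):

select process, user_name, db_name, starttime
from stv_sessions;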

Setup a DDL group in a Redshift database

Setup a DDL group in a Redshift database:

1. Create the group <group_name>
2. Grant the DDL privileges to the group
3. Add the users to the DDL group (see the ALTER GROUP sketch after the grants below)


create schema <schema_name>;
create group group_<schema>_DDL;
grant create on database <DB name> to group group_<schema>_DDL;
grant all on schema <schema_name> to group group_<schema>_DDL;
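
For step 3, users are added with the standard ALTER GROUP syntax (a sketch; <user_name> is a placeholder):

alter group group_<schema>_DDL add user <user_name>;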

How to set up read-only and read-write groups in a Redshift database?

Setup read-only and read-write groups in a Redshift database:

1. Create the group <group_name>
2. Grant the SELECT/DML privileges to the group
3. Add the users to the read-only or read-write group (see the ALTER GROUP sketch at the end)


Please find the syntax for read-only:

grant usage on schema <schema_name> to group Redshift_readonly_group;
grant select on all tables in schema <schema_name> to group Redshift_readonly_group;
alter default privileges in schema <schema_name> grant select on tables to group Redshift_readonly_group;

Please find the syntax for read-write (note: a hyphen is not valid in an unquoted identifier, so the group name uses underscores):

grant usage on schema <schema_name> to group Redshift_read_write_group;
grant select, insert, update, delete on all tables in schema <schema_name> to group Redshift_read_write_group;
alter default privileges in schema <schema_name> grant select, insert, update, delete on tables to group Redshift_read_write_group;
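
For step 3, users are added with ALTER GROUP (a sketch; <user_name> is a placeholder):

alter group Redshift_readonly_group add user <user_name>;
alter group Redshift_read_write_group add user <user_name>;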

What privilege is needed to create a UDF using Python in a Redshift database?

Privilege needed to create a UDF using Python in Redshift:

grant usage on language plpythonu to group <group>;
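
Once the grant is in place, members of the group can create Python UDFs; a minimal sketch (the function name f_add is hypothetical):

create function f_add(a integer, b integer)
returns integer
stable
as $$
    return a + b
$$ language plpythonu;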