Monday, November 14, 2022

ORA-00800 and ORA-700 in Oracle Database 19.16.0.0

 Issue:

We recently built a new Oracle database on version 19.16.0.0, and the alert log reported the two errors below.


Errors in file /oracle/DB_name/diag/rdbms/DB_name/DB_name/trace/DB_name_vktm_60713.trc  (incident=54058):

ORA-00800: soft external error, arguments: [Set Priority Failed], [VKTM], [Check traces and OS configuration], [Check Oracle document and MOS notes], [] (Doc ID 2718971.1)

Incident details in: /oracle/DB_name/diag/rdbms/DB_name/DB_name/incident/incdir_54058/DB_name_vktm_60713_i54058.trc

2022-11-09T12:21:15.626799+00:00

Error attempting to elevate VKTM's priority: no further priority changes will be attempted for this process

VKTM started with pid=5, OS id=60713

Root cause and solution:

ORA-00800: soft external error, arguments: [Set Priority Failed], [VKTM] (Doc ID 2718971.1)


On the DB_name system, the permissions and ownership of "$ORACLE_HOME/bin/oradism" are not as per the Oracle recommendation: the file should be owned by root:oinstall with the setuid bit set. We will ask the system administrator to set the correct ownership and permissions.

$ ls -latr $ORACLE_HOME/bin/oradism

-rwsr-x--- 1 root oinstall 147848 Apr 17 2019 oradism    <-- correct ownership and permissions

Run the following as the root user:

# chown root:oinstall oradism

# chmod u+s oradism
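As a quick sanity check, the expected setuid bit can be verified programmatically. Below is a minimal Python sketch; the helper function and the example path are illustrative, not part of any Oracle tooling:

```python
import os
import stat

def check_oradism(path):
    """Return (owned_by_root, has_setuid) for the given file.

    Per the recommendation above, $ORACLE_HOME/bin/oradism should be
    owned by root:oinstall with the setuid bit set (-rwsr-x---).
    """
    st = os.stat(path)
    owned_by_root = (st.st_uid == 0)              # uid 0 == root
    has_setuid = bool(st.st_mode & stat.S_ISUID)  # the 's' in -rwsr-x---
    return owned_by_root, has_setuid

# Example (path is hypothetical):
# print(check_oradism("/oracle/19c/bin/oradism"))
```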

************************************************************


Issue:

The alert log warned that the specified PGA_AGGREGATE_TARGET is too high:

Errors in file /oracle/DB_name/diag/rdbms/DB_name/DB_name/trace/DB_name_ora_60307.trc  (incident=48015):

ORA-00700: soft internal error, arguments: [pga physmem limit], [3221225472], [3033848422], [], [], [], [], [], [], [], [], []

Incident details in: /oracle/DB_name/diag/rdbms/DB_name/DB_name/incident/incdir_48015/DB_name_ora_60307_i48015.trc

LICENSE_MAX_SESSION = 0


Oracle recommendation:

ORA-700 [sms physmem limit] / [pga physmem limit] Errors while Starting the Database (Doc ID 2610312.1)


The issue is caused by the overall memory configuration: the SGA target plus the PGA limit is greater than the system's memory limit. Here, the SGA target plus allocated PGA memory is around 92% of total physical memory. We changed the pga_aggregate_target value as per the recommendation; see the details below for reference. We will check the alert log after a database bounce to confirm the alerts no longer appear.


/oracle/DB_name/diag/rdbms/DB_name/DB_name/incident/incdir_48015/DB_name_ora_60307_i48015.trc

----- Beginning of Customized Incident Dump(s) -----


WARNING: PGA_AGGREGATE_TARGET (3072 MB) is too high for

         the amount of physical memory (15531 MB) and

         SGA size (8192 MB) - it should be less than 2893 MB.

Rule of thumb for setting PGA_AGGREGATE_TARGET (Doc ID 2610312.1):

Total memory = 100%
Reserved for OS = 20%

Out of the remaining 80%:
PGA_AGGREGATE_TARGET = 50% of 80% of total memory
SGA_TARGET = 50% of 80% of total memory
***********************************************************

PGA_AGGREGATE_TARGET = 25% of 85% of total memory (~3 GB here)
PGA_AGGREGATE_LIMIT = 50% of 85% of total memory (~6 GB here)
SGA_TARGET = 50% of 85% of total memory

***********************************************************
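The arithmetic behind the first rule of thumb above can be sketched in a few lines of Python. This is a rough sizing aid, not an Oracle-supplied formula; 15531 MB is the physical memory reported in the warning above:

```python
def memory_targets(total_mb, os_reserve=0.20, sga_share=0.50):
    """Split physical memory per the rule of thumb:
    reserve os_reserve (default 20%) for the OS, then divide the
    remainder between SGA_TARGET and PGA_AGGREGATE_TARGET."""
    usable = total_mb * (1 - os_reserve)
    sga_target = usable * sga_share
    pga_target = usable * (1 - sga_share)
    return round(sga_target), round(pga_target)

sga, pga = memory_targets(15531)
print(f"SGA_TARGET ~ {sga} MB, PGA_AGGREGATE_TARGET ~ {pga} MB")
# -> SGA_TARGET ~ 6212 MB, PGA_AGGREGATE_TARGET ~ 6212 MB
```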

Monday, November 7, 2022

script to copy data from S3 bucket to Redshift?

 


script to copy data from S3 bucket to Redshift using access keys:

copy db_name.schema_name.tabl2 from 's3://s3_bucket/tabl2.csv'

credentials 'aws_access_key_id=123456789;aws_secret_access_key=987654321;token=12345678898' CSV IGNOREHEADER 1 REGION 'us-east-1';


script to copy data from S3 bucket to Redshift using ARN:

copy db_name.schema_name.table8 from 's3://s3_bucket/table8.csv'

iam_role 'arn:aws:iam::123456789:role/S3_to_Redshift_copy_role'

CSV IGNOREHEADER 1;

How to change user and session priority in the Redshift database?

User priority change:


The new priority to be assigned to all queries issued by user_name. 

This argument must be a string with the value CRITICAL, HIGHEST, HIGH, NORMAL, LOW, LOWEST, or RESET. 

Only superusers can change the priority to CRITICAL.


 Changing the priority to RESET removes the priority setting for user_name.


select change_user_priority('<username>','<Priority>');


select change_user_priority('user@abc.com','HIGHEST');



Session priority change:


CHANGE_SESSION_PRIORITY enables superusers to immediately change the priority of any session in the system. 

Only one session, user, or query can run with the priority CRITICAL.



PG_BACKEND_PID returns the process identifier of the server process handling the current session:

select pg_backend_pid();

select change_session_priority(30311, 'Lowest');

               

 change_session_priority

---------------------------------------------------------

Succeeded to change session priority. Changed session (pid:30311) priority to lowest.

(1 row)

setup DDL Group in Redshift database



1. Create the group <group_name>

2. Grant the DDL privileges to the group

3. Add the users to the DDL group


create schema <schema_name>;

create group group_<schema>_DDL;

grant create on database <DB_name> to group group_<schema>_DDL;

grant all on schema <schema_name> to group group_<schema>_DDL;

How to setup Read-only and Read-write Group in Redshift database?

 


1. Create the group <group_name>

2. Grant the SELECT/DML privileges to the group

3. Add the users to the read-only or read-write group


Please find the syntax for Read-Only:

grant usage on schema <schema_name> to group Redshift_readonly_group;

grant select on all tables in schema <schema_name> to group Redshift_readonly_group;

ALTER DEFAULT PRIVILEGES IN SCHEMA <schema_name> GRANT select ON tables TO group Redshift_readonly_group;


Please find the syntax for read-write (note: hyphens are not valid in unquoted Redshift identifiers, so the group name uses underscores):

grant usage on schema <schema_name> to group Redshift_readwrite_group;

grant select,insert,update,delete on all tables in schema <schema_name> to group Redshift_readwrite_group;

ALTER DEFAULT PRIVILEGES IN SCHEMA <schema_name> GRANT select,insert,update,delete ON tables TO group Redshift_readwrite_group;

What privilege is needed to create a UDF using Python in the Redshift database?


grant usage on language plpythonu to group <group>;

script to find currently running queries in Redshift Database


select userid , query , pid , starttime , text from stv_inflight order by starttime desc;


script to find current locks in Redshift:

select table_id, last_update, lock_owner, lock_owner_pid from stv_locks;

SELECT *  FROM STV_SESSIONS;

Redshift interesting interview questions?


1) Can we make an SSO user a superuser in Redshift?

        No, you cannot make an SSO-authenticated user a superuser in the Redshift database.


 alter user "user@ABC.COM" CREATEUSER; ==> not working

SQL Error [42601]: ERROR: Superusers cannot have disabled passwords


2) How to lock Redshift database users?

We can disable a user by setting an expiry date, using a superuser:

 alter user "user@ABC.COM" valid until '10-aug-2022';

This disables the user, who will no longer be able to log in to the Redshift database.

script to Unload Redshift table data to S3 bucket?


We can unload data from Redshift to an S3 bucket using either of two credential types:

1) access key & secret key

2) IAM role ARN


UNLOAD ('SELECT * FROM DB_name.schema1.table2')

TO 's3://s3_bucket_name/table2.csv'

iam_role 'arn:aws:iam::123456789:role/s3_to_Redshift_load_bucket'

KMS_KEY_ID '123456789'

encrypted PARALLEL on

CSV;

s3_bucket_name --> S3 bucket name

iam_role --> 'arn:aws:iam::123456789:role/s3_to_Redshift_load_bucket'

KMS_KEY_ID --> '123456789'

CSV --> output format


IAM error retrieving temp credentials: User: arn:aws:sts::123456789:assumed-role/AzureSSOuser/user@abc.com is not authorized to perform: redshift:JoinGroup on resource: arn:aws:redshift:us-east-1:123456789:dbgroup:Redshift-prod/User.assignedroles because no identity-based policy allows the redshift:JoinGroup action (Service: AmazonRedshift; Status Code: 403; Error Code: AccessDenied; Request ID:123456789)

 Issue:

We got the below error when an SSO user logged in to the Redshift database:

IAM error retrieving temp credentials: User: arn:aws:sts::123456789:assumed-role/AzureSSOuser/user@abc.com is not authorized to perform: redshift:JoinGroup on resource: arn:aws:redshift:us-east-1:123456789:dbgroup:Redshift-prod/User.assignedroles because no identity-based policy allows the redshift:JoinGroup action (Service: AmazonRedshift; Status Code: 403; Error Code: AccessDenied; Request ID:123456789)




Root cause & Solution:

When a Redshift SSO user tried to log in to the database using SSO authentication, the above error occurred because the dbgroups parameter was not specified in the connection string. The user added the dbgroups parameter to the connection string and the login worked.
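For reference, with the Redshift JDBC driver the fix amounts to adding the DbGroups property to the IAM connection URL. The cluster endpoint, database, and group name below are placeholders, not values from this environment:

```
jdbc:redshift:iam://examplecluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev?DbUser=user@abc.com&DbGroups=group_read_only
```

DbGroups lists the database groups the SSO user joins at login; the IAM policy must also allow the redshift:JoinGroup action on the corresponding dbgroup resources.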

script to find out unload and copy counts on Redshift database?


The scripts below return the number of rows processed by the most recent COPY or UNLOAD command run in the current session.


 select pg_last_copy_count();


select pg_last_unload_count();

script to find out if the user is granted the assumerole privilege on the database for unload/copy?



select has_assumerole_privilege('user@abc.com', 'arn:aws:iam::123456789:role/redshift-s3-unload-service-role', 'copy');


has_assumerole_privilege

------------------------

false      


select has_assumerole_privilege('user@abc.com', 'arn:aws:iam::123456789:role/redshift-s3-unload-service-role', 'unload');


has_assumerole_privilege

------------------------

false      

Thursday, November 3, 2022

SQL Error [XX000]: ERROR: User user@abc.com does not have ASSUMEROLE permission on IAM role "arn:aws:iam::123456789:role/Redshift-s3-unload-role" for UNLOAD

 Issue:

When I tried to unload some data from Redshift to an S3 bucket using a Redshift SSO user, I got the below error.


SQL Error [XX000]: ERROR: User user@abc.com does not have ASSUMEROLE permission on IAM role "arn:aws:iam::123456789:role/Redshift-s3-unload-role" for UNLOAD


Solution:

After some investigation, we found that the ASSUMEROLE privilege must be granted to the user. After granting the privilege, the issue was resolved.

Syntax (UNLOAD):

grant assumerole on 'arn:aws:iam::123456789:role/Redshift-s3-unload-role'  to "user@abc.com" for unload;

Syntax (COPY):

grant assumerole on 'arn:aws:iam::123456789:role/Redshift-s3-unload-role'  to "user@abc.com" for copy;