Calling a Linux bash script from Oracle PLSQL

Here is a cool little bit of PL/SQL that lets you call pretty much anything from PL/SQL.

This script is for FTPing files. There are ways of doing FTP directly from PL/SQL, but the issue I had was that I needed to FTP a file and give it a different name at the far end.

I have left the variable names in the SET_JOB_ARGUMENT_VALUE calls just to illustrate the connection between the package and the bash script.

job_name_ varchar2(100);

begin

  -- Build job_name_ however you want, either hard-coded or from a date selection.
  -- Mine is passed in from another PL/SQL package, as are the v_ argument values.

  -- Drop the job if it already exists
  begin
    DBMS_SCHEDULER.DROP_JOB(job_name_);
  exception
    when others then
      dbms_output.put_line('There is no job to drop');
  end;

  dbms_scheduler.create_job(
    job_name            => job_name_,
    job_type            => 'EXECUTABLE',
    job_action          => 'PATH TO SCRIPT',
    number_of_arguments => 7,
    enabled             => FALSE);

  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(job_name => job_name_, argument_position => 1, argument_value => v_server_address);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(job_name => job_name_, argument_position => 2, argument_value => v_username);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(job_name => job_name_, argument_position => 3, argument_value => v_password);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(job_name => job_name_, argument_position => 4, argument_value => v_local_file_);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(job_name => job_name_, argument_position => 5, argument_value => v_put_file_);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(job_name => job_name_, argument_position => 6, argument_value => v_local_directory_);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(job_name => job_name_, argument_position => 7, argument_value => v_remote_directory_);

  DBMS_SCHEDULER.ENABLE(job_name_);

end;
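
If you want to double-check that the job and its arguments were registered, you can query the standard scheduler views as the job owner (just a sanity check, not part of the package):

select job_name, job_action, number_of_arguments, enabled
from user_scheduler_jobs
where job_name = upper('&job_name');

select argument_position, value
from user_scheduler_job_args
where job_name = upper('&job_name')
order by argument_position;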

The bash script is as follows. As you can see, you change the number of arguments to match your script; in this instance there are 7 parameters to pass.

#!/bin/sh -vx
# Script to FTP data to a server
# Parameters:
#   host       - FTP server
#   user       - FTP username
#   passwd     - FTP password
#   file       - file to send/put
#   toname     - use this if you want to give the file a different name as you put it up
#   local_dir  - where the local file can be found
#   remote_dir - where to put the local file
# type of FTP: PUT / GET
############################################################################
# Variables
HOST=$1
USER=$2
PASSWD=$3
FILE=$4
TONAME=$5
LOCAL_DIR=$6
REMOTE_DIR=$7
DATE=`date +%Y-%m-%d`

# Log the FTP session to a dated file; -i no prompting, -n no auto-login, -v verbose
ftp -inv $HOST << END_SCRIPT >> /u12/transfer/logs/ftp_log_$DATE
quote USER $USER
quote PASS $PASSWD
lcd $LOCAL_DIR
cd $REMOTE_DIR
put $FILE $TONAME
quit
END_SCRIPT
exit 0

To get this working there are a number of elements that you will need to set up.

First of all you need to allow job and external job creation in Oracle

grant create job to USERNAME;
grant create external job to USERNAME;
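
You can check that the grants have taken with a quick query as a DBA (just a sanity check):

select grantee, privilege
from dba_sys_privs
where grantee = 'USERNAME'
and privilege in ('CREATE JOB', 'CREATE EXTERNAL JOB');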

If you look at the bash script, the FTP job needs to write out to a log file under /u12/transfer/logs.

This means that you need to know which operating system user the script will run as. This can be found by reading the following file:

$ORACLE_HOME/rdbms/admin/externaljob.ora

Mine looks like this

# This configuration file is used by dbms_scheduler when executing external
# (operating system) jobs. It contains the user and group to run external
# jobs as. It must only be writable by the owner and must be owned by root.
# If extjob is not setuid then the only allowable run_user
# is the user Oracle runs as and the only allowable run_group is the group
# Oracle runs as.

run_user = nobody
run_group = nobody

The lines that you need to look at are run_user and run_group.

So you would create the directory

mkdir -p /u12/transfer/logs
chown nobody:nobody /u12/transfer/logs

This would then allow the PLSQL package to create the log file.
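
With the directory in place you can kick the job off by hand to test it, rather than waiting for it to fire (a quick sketch; substitute the job name you created above):

begin
  -- Run the external job now, in a scheduler slave rather than the current session
  dbms_scheduler.run_job(job_name => 'YOUR_JOB_NAME', use_current_session => FALSE);
end;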

If you want to look at the schedule details then you can use the following SQL

select 
log_date,
job_name, 
status, 
req_start_date, 
actual_start_date, 
run_duration,
additional_info
from dba_scheduler_job_run_details
where job_name = '&job_name'
order by log_date;
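
If the job has not produced a run record yet, its current state can be seen in the scheduler job view (another sketch against the standard views):

select job_name, state, enabled, last_start_date, next_run_date
from dba_scheduler_jobs
where job_name = '&job_name';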

SYS.UTL_FILE write error

ARRRRGGGGGHHHHHHHHHH!!!!!!!!! I feel better for that now.

I have just spent the last hour trying to figure this oooohhhhh so simple problem out.

I am using Oracle's UTL_FILE PL/SQL package to write files out to a directory.

 

So you create the directory on the Oracle server

mkdir /u12/transfer

 

You go through the normal steps of creating said directory in Oracle:

create or replace directory TRANSFER as '/u12/transfer';
grant read, write on directory transfer to XXX;
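
For reference, a minimal write that triggers this looks something like the following (a sketch; the file name test.txt is only for illustration):

declare
  f utl_file.file_type;
begin
  -- Open a file in the TRANSFER directory object and write a single line
  f := utl_file.fopen('TRANSFER', 'test.txt', 'w');
  utl_file.put_line(f, 'hello from UTL_FILE');
  utl_file.fclose(f);
end;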

When you try to write to the directory you get

ERROR at line x:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line xxx
ORA-29283: invalid file operation

Well, the fix is easy when you know how. You have to make the Linux directory readable and writable by everyone:

chmod 777 /u12/transfer

 

Easy when you know how 🙂

Read a Directory using PLSQL

In Oracle 10g there is no built-in UTL function/procedure to read the listing of a directory.

There is an undocumented procedure that can be used.

 

CREATE OR REPLACE PROCEDURE list_directory (directory VARCHAR2)
IS
  ns          VARCHAR2(1024);
  v_directory VARCHAR2(1024);
BEGIN
  v_directory := directory;
  -- Undocumented call: populates the fixed table X$KRBMSFT with the files it finds
  SYS.DBMS_BACKUP_RESTORE.SEARCHFILES(v_directory, ns);
  FOR each_file IN (SELECT fname_krbmsft AS name FROM x$krbmsft) LOOP
    DBMS_OUTPUT.PUT_LINE(each_file.name);
  END LOOP;
END;

 

This must be compiled as SYS and then EXECUTE permission granted to the relevant user. Care must be taken over who has access to this; I would suggest not giving it out to all users, including power users, but limiting it to perhaps a batch processing user. This is where I believe most of the benefit will come from.

grant execute on list_directory to XXXXXXXX;

 

This can then be wrapped in PL/SQL; a quick example is below.

begin
  sys.list_directory('/IFS_Intergration/DataDump');
  for each_file in (SELECT FNAME_KRBMSFT AS name FROM X$KRBMSFT) loop
    DBMS_OUTPUT.PUT_LINE(each_file.name);
  END LOOP;
end;

TO BE CONTINUED, time for home 🙂