Monday, November 28, 2005

Oracle: Silent installs using response files

I have been setting up Oracle 10g R2 on a Red Hat box which can only be reached by logging through 3 separate boxes (3 hops). Unfortunately, running an X Windows client and using the Oracle Universal Installer GUI was not possible.
So I spent 30 minutes studying how to make a response file to pass to the installer.

The Oracle installer (on unix) is called normally from the command line using the command:
bash% runInstaller

Running the installer in silent mode and passing a response file is achieved by running the command:
bash% runInstaller -silent -responseFile response.rsp

Luckily Oracle doesn't leave you to create your own; they provide a few sample response files. For 10g these were located (after unzipping the Oracle database zipfile) in the database/response directory.
After that it was dead easy. Each sample response file is extremely well commented and explains each parameter well enough to understand its importance. Bravo Oracle!

So after about 15-20 minutes editing a copy of a sample response file, I kicked off the installer and bingo!! I now have a new Oracle 10g install, and it completed in about 5 minutes.

The best part of the response file was being able to take that file and reuse it on the failover box. After the first edit, the time investment keeps paying off: each new machine needing a similar Oracle install becomes simple, almost to the point of adding the command to the standard Unix (Red Hat) build guide.
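Since the same response file carries over to each new machine, one way to automate the per-machine tweaks is to keep the edited file as a template and substitute the few values that differ. A minimal sketch in shell — the parameter names match those in the 10g sample response files, but the paths and home name here are made up for illustration:

```shell
#!/bin/sh
# Sketch: stamp per-machine values into a tested response file template
# before each silent install. Paths and home names are hypothetical.

# A cut-down template (the real sample file has many more parameters).
cat > base.rsp <<'EOF'
RESPONSEFILE_VERSION=2.2.1.0.0
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
ORACLE_HOME_NAME=OraDb10gHome1
EOF

# Override only what differs on this box; everything else stays as tested.
sed -e 's|^ORACLE_HOME=.*|ORACLE_HOME=/u02/app/oracle/product/10.2.0/db_1|' \
    -e 's|^ORACLE_HOME_NAME=.*|ORACLE_HOME_NAME=OraDb10gFailover|' \
    base.rsp > thisbox.rsp

cat thisbox.rsp
# The install itself would then be:
#   ./runInstaller -silent -responseFile "$(pwd)/thisbox.rsp"
```

Everything not substituted stays exactly as it was on the first successful install, which is the whole point of reusing the file.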

I will write about using the DB creation assistant in the next couple of days.

Have Fun

Monday, November 21, 2005

More site changes

I have been playing around with the various bits and pieces which you can plug into the blog. You will notice I have shed some anonymity.
I have noticed a couple of people who have come to the site have looked at my profile and I guess it is time to step into the 21st Century internet.

Will I add AdSense? I am not sure. I have read elsewhere that blogs which focus on a specific topic are more likely to have more traffic. However, I don't see this as my new 9-5 job anytime in the near future. I am enjoying SQL Server and Oracle tuning too much to leave it.

If I do spin off parts of the site to other specific blogs I will probably continue posting the articles here anyway.

Have Fun

Oracle: Automatic Statistics Collection

I have been reading about the new features in Oracle10g. One of the nice add-ons is the out-of-the-box automatic statistics collection.
This means one less script to run externally from the OS, whether it is cron or AT or whatever.

DBMS_SCHEDULER is a PL/SQL package which provides a complete scheduling system. You can have groups of windows of schedules, which gives plenty of flexibility to choose when and what you want run on the database. Plus, given some of the other packages (such as DBMS_FILE_TRANSFER), this could include log shipping, ETL or other tasks as well. I haven't tested this functionality yet, but it is on the to-do list.

So I checked out which scheduler jobs were scheduled to run using this SQL:

-- List scheduler jobs

column "Name" format a30
column "Start Date" format a21
column "Next Run" format a21

select owner||'.'||job_name "Name",
       to_char(start_date, 'Mon DD HH24:MI:SS') "Start Date",
       to_char(next_run_date, 'Mon DD HH24:MI:SS') "Next Run",
       enabled, state, run_count, last_run_duration
  from dba_scheduler_jobs
 order by owner, job_name;

I found the in-built PURGE_LOGS and GATHER_STATS_JOB scheduled.
GATHER_STATS_JOB runs the package DBMS_STATS.GATHER_DATABASE_STATS.

However the GATHER_STATS_JOB didn't have a next_run_date, which was weird. So off to metalink.oracle.com to check what the story was with the missing next_run_date. Not much there: one forum question with a response to raise an iTAR.

Instead of rushing to log an iTAR, I thought it might be more useful to check out how the GATHER_STATS_JOB was meant to be scheduled.
The job is created during install by the script $ORACLE_HOME/rdbms/admin/catmwin.sql.
This script creates the job, two schedule windows, a schedule group and a schedule resource group.
I checked the various DBA_SCHEDULER_ views to confirm everything was there, and it was. I also dropped and recreated the job using the catmwin.sql script. Still the job had no next_run_date.

So I decided to create a new schedule called NIGHTLY which will run at 10pm each night, and assigned that schedule to the GATHER_STATS_JOB. Here is the PL/SQL:

BEGIN
  dbms_scheduler.create_schedule(
    schedule_name   => 'NIGHTLY',
    start_date      => sysdate + 7/24,
    repeat_interval => 'freq=daily;byhour=22;byminute=0;bysecond=0',
    end_date        => NULL,
    comments        => 'Nightly window for maintenance task');
  dbms_scheduler.set_attribute('GATHER_STATS_JOB', 'SCHEDULE_NAME', 'NIGHTLY');
EXCEPTION
  when others then
    if sqlcode = -27477 then
      NULL;   -- schedule already exists, safe to ignore
    else
      raise;
    end if;
END;
/

Note: You will have to set the start_date to some future date. I set it to 9.30pm (as it was 2.30pm when I ran this).

I checked the scheduler_jobs again and bingo. The GATHER_STATS_JOB now has a next_run_date which is set to start at 10pm.
It would seem that the start_date information was not being resolved from the various windows within the window group.

Hopefully this will help you if you want to get the statistics collection happening faster.

This is just scratching the surface of the power of this package. It is used by several different packages within Oracle and suggests plenty of flexibility and power to perform database tasks.

Have Fun


Saturday, November 05, 2005

SQL Server Logshipping and Daylight saving

Last weekend, most of eastern Australia switched to summer time, or daylight saving time.

Fun and games. The log shipping normally runs every 2 minutes, but unfortunately it broke when the clocks jumped forward (2am became 3am).
The SQL Server job agent reran all the jobs scheduled between 2am and 3am, so BACKUP LOG ran up to 30 times at 3am. Unfortunately the filename was not sufficiently unique; even a timestamp wasn't enough, e.g. dbname_200510300300.log.
So bang, the RESTORE LOG on the failover box broke, as it was now missing those overwritten logs.

Once the alarms were handled, this just required a restore from backup, leaving the database in STANDBY mode, and rolling the failover database forward until it was up-to-date.

Moral of the story: we will investigate more unique filenames and/or a 61-minute blackout period in the log shipping schedule. The blackout doesn't mean the log shipping won't run at all for that hour; it just means it won't be rerun many times over to make up for the missing hour.
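On the filename side, one sketch of a fix: timestamp the backup in UTC, which never repeats or skips across a daylight-saving change, and add a tie-breaker such as the process id so even reruns within the same minute get distinct names. The database name and naming scheme here are made up for illustration:

```shell
#!/bin/sh
# Sketch: a log-backup filename that stays unique even when local clock
# time jumps or repeats at a daylight-saving change.

DBNAME=dbname

# UTC is monotonic across DST changes; seconds plus the process id keep
# repeated runs from colliding on the same name.
TS=$(date -u +%Y%m%d%H%M%S)
FILE="${DBNAME}_${TS}_$$.log"

echo "$FILE"
```

With names like these, a rerun storm at 3am would still produce distinct files instead of overwriting one another, so the RESTORE LOG chain on the failover box stays intact.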

Have Fun