Sunday, January 22, 2006

SQL Performance reports from profiler traces

Over the December and January period I have been working with some developers to get their database performance improved. Not that there has been a problem with performance, rather we have been looking proactively at the slower stored procedures and reviewing the SQL and the logic behind each to see if we can improve.

This has led to the need to compare the performance of stored procedures over different periods. After finally fixing the buggy SQL code which returns the stored procedure name or DML that was run, I reworked some older code to return a comparison of some performance metrics over two periods.
This code uses two inline views which join on the stored procedure name or DML SQL to compare the number of calls, the total duration and, the main one, the duration per call. This enables us to produce a report which quickly identifies code that is running significantly slower.
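To give a flavour of the approach (this is only an illustrative sketch, not the code mentioned above: the table name dbo.trace_data, the date ranges and the straight cast of TextData to get the procedure name are all placeholders), the shape of the query is roughly:

-- Assumes the profiler trace data has already been loaded into a table
-- with TextData, Duration and StartTime columns
select p1.sproc_name,
       p1.calls as calls_p1, p2.calls as calls_p2,
       p1.total_ms as total_ms_p1, p2.total_ms as total_ms_p2,
       p1.total_ms * 1.0 / p1.calls as per_call_p1,
       p2.total_ms * 1.0 / p2.calls as per_call_p2
from (select cast(TextData as varchar(100)) as sproc_name,
             count(*) as calls, sum(Duration) as total_ms
      from dbo.trace_data
      where StartTime >= '20060101' and StartTime < '20060108'
      group by cast(TextData as varchar(100))) p1
join (select cast(TextData as varchar(100)) as sproc_name,
             count(*) as calls, sum(Duration) as total_ms
      from dbo.trace_data
      where StartTime >= '20060108' and StartTime < '20060115'
      group by cast(TextData as varchar(100))) p2
  on p1.sproc_name = p2.sproc_name
order by (p2.total_ms * 1.0 / p2.calls) - (p1.total_ms * 1.0 / p1.calls) desc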

Unfortunately I am not going to publish that code just yet. I want to rework it into a stored procedure which can be added to an expanding library of jobs and stored procedures for performance tuning SQLserver.
The plan is to have a complete solution providing the tools to automate performance metric collection and reporting using the tools provided by SQLserver. I am hoping people will be able to download a zip file which provides an easy install.
Tuning based on the reports is always going to be harder, but capturing the information is the first step.

After some other workload is over (a database migration), I will have time to recode it into a stored procedure and also to install and play with SQLserver 2005.
Given the new performance tables, and my existing library of Oracle scripts, I should be in a position to remap those Oracle scripts to use the new SQLserver 2005 tables where possible.

Have Fun

Battlefield 2 fun

I haven't played FPS (First Person Shooters) in a while. Having a family with 2 little kids doesn't lend itself to having much time to muck around playing computer games.
However, after I cleaned up the study (spare bedroom) in the house, Santa was good enough to give me Battlefield 2 for Xmas.

I had been playing the demo on and off for 2 months and decided to buy the game. Demos are a great way to try before you buy, and the full version of the game is always better than a crippled version. The BF2 demo was only one map, but it was enough to showcase the game.

I was happily smashing the computer AI bots in single player, even on veteran level, so it was time to face reality and play against humans. You cannot code the computer to act like humans.
For example, in another multiplayer game, Starcraft (a real-time strategy, RTS), we were playing each other in a local LAN game of 6 players. One of the players nuked their own base to destroy an attack. The computer would never do that.

So I went to the GameArena Bigpond Games servers and, after some false starts due to network problems, I was in a 32 player map. I got my ar*e kicked. I think I managed about 10 kills over about 5 games, and was killed 10+ times in each of those.
The hardest part was that I knew no one in the game, plus I hadn't been using the chat and talk functions at all, so I couldn't communicate quickly enough, e.g. "Stop driving, there are mines ahead."
On the fun side of things I learned a couple of things:
1) Head-ons at speed are a good way to kill more people, even if you die.
2) Some players are awesome at flying jets, deadly accurate too.

The funniest situations were:
1) Missing throwing a grenade into a helicopter three times... I need practice. I managed to bounce the grenade off the helicopter instead of landing it inside, which would have killed 4.
2) Being rocketed to death and returning the favour later.

More practice is required, which should be fun.

Have Fun

Tuesday, December 13, 2005

Oracle vs SQLserver: Consistent Reads

Working extensively with both Oracle and SQLserver databases puts me in a good position to see the subtle but major differences in how certain parts of each database are implemented.

What is a consistent read?
It is the C in the ACID (Atomic, Consistent, Isolated and Durable) transaction model. It means that when you read the data, your query sees a version of the data which has not changed since your query started running.

To provide consistent reads (at the READ COMMITTED isolation level), SQLserver uses shared locks on the data to stop any changes occurring whilst a query is running. This effectively blocks any writes to the data.

***IMPORTANT*** In SQLserver 2000 and earlier, this is the main reason for explosive lock contention and table lock escalations, especially on tables with many concurrent inserts, deletes or updates. This type of lock and table escalation will blow away your CPU and any scalability you were hoping for.

Note: In SQLserver 2005, Microsoft have introduced READ_COMMITTED_SNAPSHOT as a database option, which allows the database to show a version of the data (as at the time of execution) without requiring shared locks, similar to the read-consistent view Oracle provides with rollback/undo segments. See this part of the MSDN documentation for more information.
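For reference, turning the new option on is a single database-level setting (the database name below is a placeholder, and as I understand it the statement needs to be the only active connection in the database before it will complete):

ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON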

Oracle provide consistent reads (a read-consistent view) of the data via the use of rollback or undo segments. These segments store the prior versions of the blocks (similar to pages in SQLserver) which contain the rows being written to. When your query hits a block which has changed since the query started, Oracle rebuilds the data from the rollback or undo segment to keep the result consistent. See the Oracle Concepts manual for a detailed description of this method.

The dirty little secret (of SQLserver) is that you can do dirty or read-uncommitted reads, which will return data even if it has changed since the query started. In some cases this is the only way to enable SQLserver to scale without falling over with lock contention. You use the table hint (NOLOCK) or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED at the beginning of your SQL code. This should be implemented only after understanding the nature of the work your application is performing against the database.
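Both forms look like this (the table name is just a placeholder):

-- Per-table hint
select count(*) from dbo.log_table (NOLOCK)

-- Per-connection setting, applies to the rest of the batch/connection
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
select count(*) from dbo.log_table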

Finally SQLserver 2005 is getting closer in functionality to Oracle. It will be interesting to see whether the snapshots SQLserver uses to provide the versions of data are prone to the problems Oracle has had in the past, i.e. the infamous "snapshot too old" error.

Have Fun

Paul

Sunday, December 11, 2005

SQL Server: Remote queries on views

I spent most of Friday testing out different methods for retrieving data using remote queries with SQLserver.

Remote queries are of the form

select ... from databaseServer2.databaseName.TableOwner.TableName JOIN ...

One of the limitations of remote queries is that you can't use table hints like (NOLOCK) on them. Given that this query was for a report which doesn't require transactionally consistent data, that is a bit of a bummer. The report aggregates some log data on a table which is getting constant inserts, so the inability to read the data without affecting the constant writes was causing lock contention.

The first thought was to wrap the table in a view on the remote server, something like this:

create view uv_logtable as select * from logtable (NOLOCK)

Unfortunately it would seem that remote queries and views are not a good combination if you want the SQLserver execution plan to be any good. The execution plan when a view was involved was very ordinary. Queries which had a 4 sec response time blew out to more than 4 mins, in some cases much longer. I am going to test some more and determine what the rules are.

The final thought was to take the stored procedure which runs the report and port it to the remote server, i.e. adding remote queries for the other data. This worked well, especially considering that most of the work for that report was being done on the remote server anyway.
The added benefit was that I could add NOLOCK hints to the now-local tables, and the number of lock requests (tracked from perfmon) was stable compared to the current setup.

I will update after we go through our Monday peak period as to whether this was a success.

Have Fun

Paul

Update: Monday was fine, Tuesday was back to 100% CPU!!! Unfortunately I had missed adding a NOLOCK hint to the local table in the report stored procedure. At one point we had 500,000+ lock requests per sec. Once I fixed the stored procedure, it took 2-3 minutes for the CPU to go back to 4-5%!
That massive CPU load caused by lock contention is a demonstration of the problem SQLserver has with providing the ACID transactional model by using shared locks for consistent reads. More in this post.

Personal DJ

Here is an interesting website, Pandora (ht: Steve Pavlina), which allows you to search for an artist or song title and builds a music feed based on that search. You can also guide your computer DJ by giving it feedback on whether you like the song or not. Very cool.

This service is part of a similar progression towards more consumer (individual) driven pulling of media content, like time-shifting TV, radio, movies and podcasts, with RSS and Atom providing an easy way to see new or updated content.

Have Fun

Paul

Thursday, December 08, 2005

Demographics explaining housing boom

I was looking at this article on Stephen Kirchner's economics blog. It is an interesting article from the Federal Reserve in the US about a model which uses demographics to predict housing demand and hence property booms.

Here is a link to the PDF article from the Federal Reserve.

I noticed that the author of the article didn't run the model over Australia or New Zealand. It would be interesting to see what the model predicts for Australia. The other interesting country to model would be China, given the one child policy will affect China's demographics in the future.

I recommend you go and read some of Julian L. Simon's articles. This article is a good example of the idea that people are largely an untapped resource of potential, and there is also the online book "Ultimate Resource".

Have Fun

trackbacks: Fundamentals of House Price Inflation

Monday, November 28, 2005

Oracle: Silent installs using response files

I have been setting up Oracle 10g R2 on a Redhat box which has to be reached by logging into 3 separate boxes (3 hops). Unfortunately, running an X Windows client and using the Oracle Universal Installer GUI is not possible.
So I spent 30 minutes studying how to make a response file to pass to the installer.

The Oracle installer (on unix) is called normally from the command line using the command:
bash% runInstaller

Running the installer in silent mode and passing a response file is achieved by running the command:
bash% runInstaller -silent -responseFile response.rsp

Luckily Oracle don't leave you to create your own; they provide a few sample response files. For 10g these were located (after unzipping the Oracle database zipfile) in the database/response directory.
After that it was dead easy. Each sample response file is extremely well commented and explains each parameter well enough to understand its importance. Bravo Oracle!

So after about 15-20 minutes editing a copy of a sample response file, I kicked off the installer and bingo!! I now have a new Oracle 10g install, which completed in about 5 minutes.

The best part of the response file was the ability to take that file and reuse it on the failover box. The return on the time invested in the first edit is that each new machine using a similar Oracle install becomes simple to set up, almost to the point of adding the command to the standard unix (Redhat) build guide.

I will write about using the DB creation assistant in the next couple of days.

Have Fun

Monday, November 21, 2005

More site changes

I have been playing around with the various bits and pieces which you can plug into the blog. You will notice I have shed some anonymity.
I have noticed that a couple of people who have come to the site have looked at my profile, and I guess it is time to step into the 21st century internet.

Will I add AdSense? I am not sure. I have read elsewhere that blogs which focus on a specific topic are more likely to attract more traffic. However I don't see this as my new 9-5 job anytime in the near future. I am enjoying SQLserver and Oracle tuning too much to leave it.

If I do spin off parts of the site to other specific blogs I will probably continue posting the articles here anyway.

Have Fun

Oracle: Automatic Statistics Collection

I have been reading about the new features in Oracle 10g. One of the nice add-ons is the out-of-the-box automatic statistics collection.
This means one less script to run externally from the OS, whether via cron or AT or whatever.

DBMS_SCHEDULER is a PL/SQL package which provides a complete scheduling system. You can define schedules, windows and window groups, which gives plenty of flexibility to choose when and what you want run on the database. Plus, given some of the other packages (e.g. DBMS_FILE_TRANSFER), this could include log shipping, ETL or other tasks as well. I haven't tested this functionality yet, but it is on the to-do list; a rough sketch is below.
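As a sketch of what is on that to-do list (untested here, and the job name and PL/SQL action are made-up placeholders), creating your own nightly job looks something like:

BEGIN
  dbms_scheduler.create_job(
    job_name        => 'NIGHTLY_MAINT_JOB',             -- placeholder name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN my_maint_pkg.run; END;',  -- placeholder action
    start_date      => systimestamp,
    repeat_interval => 'freq=daily;byhour=22;byminute=0;bysecond=0',
    enabled         => TRUE,
    comments        => 'Example nightly maintenance job');
END;
/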

So I checked which scheduler jobs were scheduled to run using this SQL:

-- List scheduler jobs

column "Name" format a30
column "Start Date" format a21
column "Next Run" format a21

select owner||'.'||job_name "Name",
to_char(start_date, 'Mon DD HH24:MI:SS') "Start Date",
to_char(next_run_date, 'Mon DD HH24:MI:SS') "Next Run",
enabled,state,run_count,last_run_duration
from dba_scheduler_jobs
order by owner,job_name;

I found the in-built PURGE_LOG and GATHER_STATS_JOB jobs scheduled.
GATHER_STATS_JOB runs the package DBMS_STATS.GATHER_DATABASE_STATS.

However, the GATHER_STATS_JOB didn't have a next_run_date, which was weird. So off to metalink.oracle.com to check what the story was with the missing next_run_date. Not much there: one forum question with a response to raise an iTAR.

Instead of rushing to log an iTAR, I thought it might be more useful to check out how the GATHER_STATS_JOB was meant to be scheduled.
The job is created during install by the script $ORACLE_HOME/rdbms/admin/catmwin.sql.
This script creates the job, two schedule windows, a window group and a resource group.
I checked the various DBA_SCHEDULER_ views to confirm that everything was there, and it was. I also dropped and recreated everything using the catmwin.sql script. Still the job had no next_run_date.

So I decided to create a new schedule called NIGHTLY which will run at 10pm each night, and assigned that schedule to the GATHER_STATS_JOB. Here is the PL/SQL:

BEGIN
dbms_scheduler.create_schedule(
schedule_name=>'NIGHTLY',
start_date=>sysdate+7/24,
repeat_interval=>'freq=daily;byhour=22;byminute=0;bysecond=0',
end_date=>NULL,
comments=>'Nightly window for maintenance task');
dbms_scheduler.set_attribute('GATHER_STATS_JOB','SCHEDULE_NAME','NIGHTLY');
EXCEPTION
when others then
if sqlcode = -27477 then NULL;
else raise;
end if;
END;
/

Note: You will have to set the start_date to some future time. I set it to 9.30pm (as it was 2.30pm when I ran this).

I checked the scheduler jobs again and bingo: the GATHER_STATS_JOB now has a next_run_date, set to start at 10pm.
It would seem that the start_date information was not being resolved from the various windows within the window group.

Hopefully this will help you if you want to get the statistics collection happening faster.

This is just scratching the surface of the power of this package. It is used by several other packages within Oracle and offers plenty of flexibility and power for performing database tasks.

Have Fun


Saturday, November 05, 2005

SQL Server Logshipping and Daylight saving

Last weekend, most of eastern Australia switched to summer time, or daylight saving time.

Fun and games. Log shipping is normally done every 2 minutes; unfortunately, the jobs scheduled in the hour that was skipped (2am became 3am) broke.
The SQLserver job agent re-ran all of the jobs scheduled between 2am and 3am, so BACKUP LOG was run up to 30 times at 3am. Unfortunately the filename was not sufficiently unique; even a timestamp wasn't enough, e.g. dbname_200510300300.log.
So bang, the RESTORE LOG on the failover box breaks, as it is now missing those overwritten logs.

Once the alarms were handled, this just required a restore from backup, leaving the failover database in STANDBY mode and rolling it forward until it was up to date.

Moral of the story: we will investigate more unique filenames and/or putting a 61 minute blackout period in the log shipping schedule. A blackout doesn't mean log shipping won't run for that hour of the night; it just means the agent won't re-run the job 60/x times (30 times at a 2 minute frequency) to make up for the missing hour.
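A minimal sketch of the more-unique-filename idea (database name and path are placeholders) is to stamp the backup file down to the second rather than the minute:

declare @file nvarchar(256)
-- yyyymmdd plus hhmmss keeps the name unique even when the agent replays an hour of jobs
select @file = N'd:\logship\dbname_'
             + convert(varchar(8), getdate(), 112)
             + replace(convert(varchar(8), getdate(), 108), ':', '')
             + N'.log'
BACKUP LOG dbname TO DISK = @file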

Have Fun




Sunday, October 16, 2005

Rent vs Mortgage: Comparisons of Dead Money

The standard advice is that rent is dead money. But is it?
In this article we'll delve into this standard financial advice. We will compare paying a mortgage against paying rent and investing the difference.
If one is better than the other, is there an optimal point for either?

Any bank website will provide information about loan repayments and also the schedule of the loan (year by year). This information provides a good start for determining the optimal strategy for renting vs a mortgage.
If you are not disciplined enough to save the difference between rent and the mortgage repayment, then having a mortgage is a good form of forced investment.
Also, as the loan amount may not change for the life of the loan (if no equity is withdrawn), the amount owed remains static. This means that whilst you might have a $300,000 loan, the property should appreciate over the period of the loan to a value much higher than the original purchase price.
This capital gain/appreciation was used as a sample return on property, and used to determine the value (and hence the rent) over the life of the loan.

The calculations were made with the following numbers:
1) Mortgage interest rate: 7.32%
2) Rental Yield: 4%
3) Property Return: 5% (Total 9%)
4) Fixed Interest: 5%
5) Stocks Return: 12%
6) loan amount: $270,000.00 ($300,000 with 10% deposit)

Assumptions:
1) Landlord is able to increase rent each 12 months to maintain rental yields in line with property value. ** This has not been the case in Sydney over the last 5 years
2) Straight-line increases in investments (i.e. no cycles)
3) No changes in mortgage interest rates for the life of the loan.

Basically, rent and interest are both dead money in the sense that neither provides a return. The amount of money available to invest (the difference between the mortgage repayment and rent) enables the renter to try to out-perform the property investor (mortgage repayments).
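For reference, the repayment figure these comparisons hinge on comes from the standard loan repayment (annuity) formula, assuming the 30-year term used in the results below:

\[ M = \frac{P \, r \, (1+r)^n}{(1+r)^n - 1} \]

With P = 270,000 (the loan amount), r = 0.0732/12 (the monthly interest rate) and n = 360 (months), M comes out at roughly $1,855 a month, or about $428 a week. Rent at a 4% yield on a $300,000 property is about $230 a week, which leaves the renter roughly $200 a week to invest.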

Here are the results:
1) If the money difference between rent and mortgage is invested in stocks, the long term gain means the total investment after 30 years is higher for the renter, with a $750,000 difference.
2) The optimal point where rent begins to exceed the interest repayment is around 10 years. This result would indicate that if you must buy property, moving into the house after 10 years is an effective way to minimize the cost of housing.
3) Around year 13-14 the rent begins to exceed the total repayment per week (remember assumption 1 though). This point is found by comparing the percentage of the mortgage repayment that the principal makes up with the difference between rent and mortgage.

What these results indicate is that if you are a good tenant paying a reasonable rent:
1) Sign a longer lease, as this demonstrates to the landlord that you are a good tenant.
2) Set up an automatic savings account which deducts at pay time.
3) Invest the proceeds in the stock market, or buy the property, rent elsewhere and put the difference towards repaying the loan faster.

I didn't check whether using leverage, via an investment property or a stock margin loan, increases the overall return to the renter. Without checking, it probably will, and it will also be tax effective.
Result one demonstrates that renting can be a lifestyle choice which, with the discipline to invest, can result in having more wealth at the end of the loan term.

Have Fun

Disclaimer: You will need to check these calculations for yourself and seek financial advice.

Wednesday, September 21, 2005

Oracle Text LOBs - space allocation

Careful allocation of space for Oracle Text or other LOBs is extremely important, given that the storage unit is chunks of Oracle blocks.
The CHUNK size is a multiple of the blocksize!

So you have chosen a reasonable blocksize of 8K as the default blocksize in the database;
what happens if your LOBs are much smaller than 8K? ... All that extra space is wasted, per LOB.

I noticed Oracle have released a good whitepaper on this issue which spells it out much better than the documentation and various scattered support notes. Pity it was too late (May 2004) for the people creating the database I now support :)

This database has been severely burnt by the problem: roughly 50% of the space in a large database is wasted.
We have both text and pictures, with the pictures stored out of the row. The LOB information is XML data. The average text size is 2K and the default blocksize is 8K. The average picture LOB XML is even less than 2K.
I ran a couple of scripts to determine the average LOB space used, and the space wastage averages 50-60%!!!

We have a couple of steps to complete before we can migrate to tablespaces with smaller blocksizes (4K) and move the partitioned table LOB data to those new tablespaces. In our case only the LOB segments need to reside on the new (smaller blocksize) tablespaces.
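For the record, the sort of statements involved (the tablespace name, datafile, sizes, table, partition and column names are all placeholders here, and a partitioned table needs the move done per partition) look roughly like this:

-- A 4K buffer cache must exist before a 4K blocksize tablespace can be created
ALTER SYSTEM SET db_4k_cache_size = 64M;

CREATE TABLESPACE lob_4k DATAFILE '/u01/oradata/mydb/lob_4k_01.dbf' SIZE 1G
BLOCKSIZE 4K;

-- Move the LOB segment of one partition to the new tablespace with a smaller chunk
ALTER TABLE my_table MOVE PARTITION p_2005_09
LOB (lob_col) STORE AS (TABLESPACE lob_4k CHUNK 4096);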

I have included the script below. It returns the size profile (average, standard deviation, minimum and maximum) of the LOBs for a given table and LOB name.
Note: The LOB name is the column name which is of a LOB datatype.

It will also take a while to run this script on very large datasets or smaller spec machines.

Have Fun

--- script begins ---

-- Show sizes for LOBS
select avg(dbms_lob.getlength(&&lobname))    "Average LOB size",
       stddev(dbms_lob.getlength(&&lobname)) "Stddev LOB size",
       min(dbms_lob.getlength(&&lobname))    "Min LOB size",
       max(dbms_lob.getlength(&&lobname))    "Max LOB size"
from &&table_name
/
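As a possible extension (same substitution variables as above; the 4096-byte band is just an example to compare against an 8K chunk), you can bucket the LOB lengths to see how many would fit comfortably in a smaller chunk size:

-- Count LOBs per 4K band of length
select ceil(dbms_lob.getlength(&&lobname)/4096) "4K bands",
       count(*) "Number of LOBs"
from &&table_name
group by ceil(dbms_lob.getlength(&&lobname)/4096)
order by 1
/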

Friday, September 16, 2005

The difference between debt and equity

This is an interesting ruling, to say the least. The SMH reports a ruling in favour of a shareholder being classed as a creditor.
This is plain wrong.

Investing in a company makes you a shareholder (you own a share of the company), a part OWNER of the company.
When a company appoints an administrator, it is saying the current owners are no longer capable of running the company. Normally the creditors appoint administrators.

Being a creditor of a company is different. You have either lent money to the company or you have sold services or goods to the company. Both actions are essentially the same thing: in lending money to the company, you have transferred money to the company so it can go and purchase goods and services.
As a creditor you have performed one part of a transaction, and the company recognises that it owes you payment in the future, in the form of money.

This is the fundamental difference between equity and debt.
Equity = a share in company profits, either through dividends or share price increases
Debt/Creditor = owed money (future goods and services)

Have Fun

Wednesday, September 14, 2005

Automating SQL Profiler

Stored procedures to automate SQL Profiler. Here are the two scripts I use to automate running SQL Profiler on a weekly basis.

This script sets the necessary settings for the profiler trace.
Four things you must do before running this stored procedure:

  1. You need to add a path to the file. I normally use d:\perfmon\database_... This script will add the date and time to the end of the filepath. This means the filename looks something like _20050914_150.trc
  2. Check or change the maxfilesize. It is currently set to 50M.
  3. Check or change the duration filter. It is currently set to only capture SQL/Stored Procedures which run for more than 500ms.
  4. Change or remove the filter for database name. I normally trace the specific database related to the application.
Have Fun

--- script begins here ---

use msdb
go

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[dba_trc_database]')
and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[dba_trc_database]
GO

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO

CREATE PROCEDURE dba_trc_database

@Sec int = 7200 -- Duration in seconds for the trace (2 hours)
, @TraceID int OUTPUT -- Return the TraceID to the caller.
WITH RECOMPILE
/*
* Creates a trace on the selected database.
***********************************************************************/
AS
declare @rc int
declare @dbaTraceID int
declare @maxfilesize bigint
declare @DateTime datetime
DECLARE @FileName nvarchar(256)
, @StartDT datetime -- When the trace started

SELECT @StartDT = getdate()
SELECT @FileName = N'c:\trace_'+convert(varchar,getdate(),112)+'_'+cast(datepart(hh,getdate()) as varchar)+cast(datepart(mi,getdate()) as varchar) ,
@DateTime = DATEADD (s, @Sec, @StartDT)

set @maxfilesize = 50

exec @rc = sp_trace_create @TraceID output, 2, @FileName , @maxfilesize, @Datetime

if (@rc != 0) goto error

-- Set the events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 10, 1, @on
exec sp_trace_setevent @TraceID, 10, 2, @on
exec sp_trace_setevent @TraceID, 10, 8, @on
exec sp_trace_setevent @TraceID, 10, 12, @on
exec sp_trace_setevent @TraceID, 10, 13, @on
exec sp_trace_setevent @TraceID, 10, 14, @on
exec sp_trace_setevent @TraceID, 10, 15, @on
exec sp_trace_setevent @TraceID, 10, 16, @on
exec sp_trace_setevent @TraceID, 10, 17, @on
exec sp_trace_setevent @TraceID, 10, 18, @on
exec sp_trace_setevent @TraceID, 12, 1, @on
exec sp_trace_setevent @TraceID, 12, 2, @on
exec sp_trace_setevent @TraceID, 12, 8, @on
exec sp_trace_setevent @TraceID, 12, 12, @on
exec sp_trace_setevent @TraceID, 12, 13, @on
exec sp_trace_setevent @TraceID, 12, 14, @on
exec sp_trace_setevent @TraceID, 12, 15, @on
exec sp_trace_setevent @TraceID, 12, 16, @on
exec sp_trace_setevent @TraceID, 12, 17, @on
exec sp_trace_setevent @TraceID, 12, 18, @on


-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint

exec sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Profiler'

-- Duration greater than 500ms
set @bigintfilter = 500
exec sp_trace_setfilter @TraceID, 13, 0, 4, @bigintfilter

set @intfilter = 100
exec sp_trace_setfilter @TraceID, 22, 0, 4, @intfilter

-- If you want a specific database traced change to database name
exec sp_trace_setfilter @TraceID, 35, 1, 6, N'insert database name here'
-- Enable trace

exec sp_trace_setstatus @TraceID, 1

-- display trace id for future references
print @TraceID
goto finish

error:
select ErrorCode=@rc

finish:

GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

--- script ends here ---

The second script sets up a weekly trace at 3pm. It will run for 2 hours (7200 secs).
Steps before running this script:

  1. Check or modify the duration of the trace. The current duration is 7200 secs (2 hours)
  2. Check or modify the time the job will run. This is easier to perform within Enterprise Manager after the job has been created.
--- script begins here ---

-- Script generated on 4/22/2005 11:06 AM
-- By: sa
-- Server:

BEGIN TRANSACTION
DECLARE @JobID BINARY(16)
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
IF (SELECT COUNT(*) FROM msdb.dbo.syscategories WHERE name = N'[Uncategorized (Local)]') < 1
EXECUTE msdb.dbo.sp_add_category @name = N'[Uncategorized (Local)]'

-- Delete the job with the same name (if it exists)
SELECT @JobID = job_id
FROM msdb.dbo.sysjobs
WHERE (name = N'Weekly shoulder peak profiler trace')
IF (@JobID IS NOT NULL)
BEGIN
-- Check if the job is a multi-server job
IF (EXISTS (SELECT *
FROM msdb.dbo.sysjobservers
WHERE (job_id = @JobID) AND (server_id <> 0)))
BEGIN
-- There is, so abort the script
RAISERROR (N'Unable to import job ''Weekly shoulder peak profiler trace'' since there is already a multi-server job with this name.', 16, 1)
GOTO QuitWithRollback
END
ELSE
-- Delete the [local] job
EXECUTE msdb.dbo.sp_delete_job @job_name = N'Weekly shoulder peak profiler trace'
SELECT @JobID = NULL
END

BEGIN

-- Add the job
EXECUTE @ReturnCode = msdb.dbo.sp_add_job @job_id = @JobID OUTPUT , @job_name = N'Weekly shoulder peak profiler trace', @owner_login_name = N'sa', @description = N'Runs a profiler trace to 50MB or 2 hours on a shoulder peak (3pm)', @category_name = N'[Uncategorized (Local)]', @enabled = 1, @notify_level_email = 0, @notify_level_page = 0, @notify_level_netsend = 0, @notify_level_eventlog = 2, @delete_level= 0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

-- Add the job steps
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @JobID, @step_id = 1, @step_name = N'Trace database', @command = N'declare @traceID int
declare @rc int

exec @rc = dba_trc_database 7200, @traceID OUTPUT

PRINT ''RC from starting Trace = '' + convert(varchar(9), @rc)
PRINT ''DBA TraceID = '' + convert(varchar(9), @traceID)', @database_name = N'msdb', @server = N'', @database_user_name = N'', @subsystem = N'TSQL', @cmdexec_success_code = 0, @flags = 0, @retry_attempts = 0, @retry_interval = 1, @output_file_name = N'', @on_success_step_id = 0, @on_success_action = 1, @on_fail_step_id = 0, @on_fail_action = 2
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXECUTE @ReturnCode = msdb.dbo.sp_update_job @job_id = @JobID, @start_step_id = 1

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

-- Add the job schedules
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id = @JobID, @name = N'Weekly', @enabled = 1, @freq_type = 8, @active_start_date = 20050323, @active_start_time = 150000, @freq_interval = 2, @freq_subday_type = 1, @freq_subday_interval = 0, @freq_relative_interval = 0, @freq_recurrence_factor = 1, @active_end_date = 99991231, @active_end_time = 235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

-- Add the Target Servers
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @JobID, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:


--- script ends here ---



Required Disclaimer: Use at your own risk. I accept NO responsibility for any losses which occur in the use of this script.



Performance Tuning using SQL Profiler

The standard method of capturing SQL within SQLserver is to use Profiler. The output produced by the profiler can be saved to a file or to a table.
Once this information has been captured you can start checking which SQL has the longest duration, which is one way to start SQLserver performance tuning.

Reading around the various SQLserver websites, the consensus is that it is better to save the output from the profiler to a file.

On all the instances I look after I have set up an automated job which runs the profiler on a weekly basis. I have to test whether I can run the profiler more often; however, as the databases are reasonably stable production environments, there is little need to run it every day.

I have written a couple of scripts to create stored procedures to run the profiler and create the job. I will add the scripts as a separate post.

After all that, what do you look for?

SQL and Stored procedures which have a long duration per call and large overall duration.

This is only part of the whole tuning picture, which should always include looking at waits; however, I prefer to look for the low-hanging fruit during the tuning process. This low-hanging fruit is the SQL or procedure missing an index, or whose logic is flawed from a performance perspective.

I will provide more scripts in the near future to perform the analysis based on the trace files produced by SQL Profiler. In SQLserver 2000 SP3, Microsoft provided a function, ::fn_trace_gettable, which can be called directly from the database.
The best method is to use that function to grab the trace data from the trace file and insert it into a table.
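For example (the file name is the sample one from the trace script, and dbo.trace_data is just a placeholder table name):

-- Load the trace file (and any rollover files) into a table
select * into dbo.trace_data
from ::fn_trace_gettable(N'c:\trace_20050914_150.trc', default)

-- Worst offenders by total and average duration
select cast(TextData as varchar(256)) as sql_text,
       count(*)      as calls,
       sum(Duration) as total_duration,
       avg(Duration) as avg_duration
from dbo.trace_data
group by cast(TextData as varchar(256))
order by total_duration desc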

The scripts to automate the tracing will follow this post.

Have Fun


Tuesday, September 06, 2005

Trade and Foreign Debt - 5

The trade deficit was close... so close to becoming a trade surplus in July; it will be interesting to see what happens in August.
As per normal the ABS has the document as a PDF which you can download and read at your leisure.
The real numbers had the deficit at AU$389 million, wiping $1,768 million off the June deficit. As always, the moving average (seasonally adjusted) and the trend estimate (a moving average of the seasonally adjusted figure) still show the deficit at more than a billion in the red.

How was the gold plated BBQ economy going in July 2005?
Finally the big increase in prices (mentioned by BHP, RIO and numerous coal companies) for coal, coking coal and iron showed up. So much for the forecast of a surplus in June or July; maybe August is the swap-over month.
The only reason it might not be is the potential for the higher oil price to show up, although given Australia also exports fuel, any increase will be offset by an increase in export oil revenue.

Goods exports are up almost $2 billion on July 2004, from $10.129 billion to $12.275 billion. That is a whopping 20% increase, and it comes with cereal grains and wool missing in action due to drought.
Wool now makes up only 2.1% of total goods exports, the same amount as beverages!

On the import side, the capital investment still stands out. Industrial machinery, steel and processed industrial supplies all show increases over July last year. This capital investment will see real wages rise over time as each labourer becomes more productive. The equipment imported (capital goods) is mining and related equipment; computers and scientific equipment also feature.
Investment in more and better tools leads to productivity gains. This means more can be produced, which leads to more profits. This is showing up in the annual reports of many public companies and in the corporate tax revenues of the Government.

When you look at the state by state breakdown, it becomes clear that Queensland and Western Australia have increased their exports and imports, pursuing their comparative advantages in mining.

The most interesting nugget of information in that state by state breakdown is that Victoria is going backwards in both exports and imports! Something to delve into in the future.

The more Australia produces the more we can buy. Produce and consume.

Have Fun

Previous articles:
Trade Deficit and Foreign Debt -1
Trade Deficit and Foreign Debt -2
Trade Deficit and Foreign Debt -3
Trade Deficit and Foreign Debt -4
Related Articles:
Australian Trade Partners

Thursday, August 25, 2005

Is rational economics heartless?

To be rational is to use reason. To be heartless is to lack compassion or morals/ethics, something like a sociopath.
EconLog have a good article on this very subject.

This is more of a philosophical question. Does being rational, using the faculty of reason, your mind, mean you lack compassion?

That is the real kicker; Adam Smith nailed this centuries ago. You can be completely self-directed in your production and consumption and people still benefit more than if you allowed someone else to determine or direct your actions. The invisible hand!!

This is the difference between a free, bottom-up market (capitalism) and a directed, top-down market (statism: socialism, communism, nazism, rabid environmentalism).

I suggest reading some Julian L. Simon (Ultimate Resource) or Ayn Rand, or even "The 7 Habits of Highly Effective People". The human mind is the powerhouse, the fount of capitalism.

As my older brother used to say when we were whining and complaining, "Get a grip (with reality)" i.e. no amount of whining and complaining is going to change your situation. You have to think and act.

Have Fun

Wednesday, August 17, 2005

Economic illiteracy - 1

Is this bordering on a political opinion piece... maybe.

Here is a quote from the maiden speech of a certain new Aussie senator.

"The purpose of the economy is not to produce the lowest price product for the end consumer. That may be a consequence of a good economy but it is not the purpose. The purpose of the economy is to create the greater nexus between the wealth of the nation and its people, and it generally does this through small business."

Drop the NOT and that is THE purpose of a free market/capitalist economy.

"The purpose of the (free market/capitalist) economy is to produce the lowest price product for the end consumer."

The senator's belief is that the economy exists to increase the connections (the "greater nexus") between the wealth of the nation and its people. It portrays wealth as being static and does not mention how this wealth is created. If you believe wealth is static, redistribution of that wealth is just around the corner.

Why build a mine, or run a farm or a business, if it is not to offer (produce) the most product at the cheapest price to every other producer/consumer?
This, more product at a cheaper price, is the driver, the purpose of the economy which creates the wealth the senator speaks of connecting to everyone.
Every producer wants to connect to as many consumers as possible, and every consumer wants to connect to as many producers (of the same product) as possible. Anything which gets in the way means the market is less free than it could be.
If the senator wants more connectivity between producers and consumers, get rid of excessive government regulation and subsidies.

The senator needs to read some real economics. Bastiat is a good start.

The other underlying issue for the senator is the lack of investment and migration in regional areas. He is trying to answer, and provide solutions to, the question: why do people move to and live in the city?

The answer is labour specialisation. In a city you can become more specialised at what you do, making yourself more productive, and earn higher wages to boot.
Regional areas, due to lower populations, don't allow specialisation to occur to the same degree. This is why there are few, if any, brain surgeons and DBAs in regional areas.

Have Fun

Tuesday, August 16, 2005

Articles on CAPM model

This blog has an entry pointing to some essays written by Arnold Kling about Fischer Black and the Capital Asset Pricing Model (CAPM).

It is worth reading all the essays. They are easy to find by using the "articles by the same author" link on the left side of the Tech Central Station website.

It gave me a couple of ideas on how to build the most diversified portfolio. Basically you want to hold stocks which have a very low correlation (or covariance) with each other. I had a quick google for ASX share betas and didn't find much. Most stock betas relate the stock's correlation to the market. In a way that is bad, considering that some indexes are weighted and therefore the correlation is actually a correlation to those (more heavily weighted) stocks.
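That intuition about low correlations falls straight out of the portfolio variance formula (a standard result, included here only as a reminder):

\[ \sigma_p^2 = \sum_i \sum_j w_i \, w_j \, \mathrm{Cov}(r_i, r_j) \]

The off-diagonal covariance terms dominate once you hold more than a handful of stocks, so the lower the pairwise covariances, the lower the portfolio variance for the same weights.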

On the ASX there are plenty of indexes available and even some exchange-traded funds which are index funds.
So is holding an index fund the most diversified portfolio available to an Australian based investor?
If there exists a most diversified portfolio, how was its performance against the index over the last 2, 5 and 10 years?
To be competitive with an index fund, the portfolio would need to minimize the number of rebalancing actions required, to reduce the overall transaction cost.

The other interesting idea from the CAPM is that individual investors control their own risk exposure. One example was taking a low risk stock and using margin (debt) to leverage it into a higher risk position.

Have Fun

Tuesday, August 02, 2005

Australian trading partners

I still cannot believe the lack of research in some of the opinion pieces in the newspaper. China is not our biggest trading partner.

Here are the facts, from the ABS May 2005 trade figures and this release from the ABS from 2004.

Trading partners (AUS$) overall
#1 US -$10.9 Billion
#2 Japan +$6.7 Billion
#3 Germany -$6.7 Billion
#4 China -$6.6 Billion
#5 South Korea +$4.3 Billion

Go and read these publications; they are free and they will set the record straight. The most interesting facts are what each of these countries trades with Australia. Most of the trade from the US and Germany (and France) is in capital goods.

How many years of cultural cringe have we had to put up with when Australia is called a commodity economy, somehow implying with a certain smugness the backward nature of our gold plated BBQ economy (meat, bread (wheat), gold, iron and coal/coke)?
It is our comparative advantage!

The other interesting fact is the growth of India and China, and the massive trade we do with our New Zealand friends. Given the potential size of both India and China compared to NZ, the trade potential is enormous.

Have Fun