run_duration to seconds #sqlserver sysjobhistory

Sample code showing how to convert the run_duration column from msdb.dbo.sysjobhistory to seconds.


select
    j.job_id
    , j.name as job_name
    , s.step_name
    , s.step_id
    -- , s.command
    , msdb.dbo.agent_datetime(h.run_date, 0) as step_start_date
    , msdb.dbo.agent_datetime(h.run_date, h.run_time) as step_start_date_time
    , h.instance_id
    , h.run_duration
    , h.run_duration / 10000 as run_duration_hours_only        -- hours
    , h.run_duration / 100 % 100 as run_duration_minutes_only  -- minutes
    , h.run_duration % 100 as run_duration_seconds_only        -- seconds
    , (h.run_duration / 10000 * 60 * 60)   -- hours as seconds
      + (h.run_duration / 100 % 100 * 60)  -- minutes as seconds
      + (h.run_duration % 100) as run_duration_total_seconds   -- seconds
    -- , h.*
from msdb.dbo.sysjobs j
inner join msdb.dbo.sysjobsteps s on j.job_id = s.job_id
inner join msdb.dbo.sysjobhistory h on s.job_id = h.job_id and s.step_id = h.step_id
where
    1 = 1
    and j.name like 'syspolicy_purge_history'
order by h.instance_id desc
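Since run_duration packs hours, minutes, and seconds into a single HHMMSS integer, the arithmetic above can be sanity-checked outside SQL. A minimal Python sketch of the same conversion (the function name is mine):

```python
def run_duration_to_seconds(run_duration: int) -> int:
    """Convert sysjobhistory.run_duration (packed HHMMSS integer) to total seconds."""
    hours = run_duration // 10000        # e.g. 13542 -> 1
    minutes = run_duration // 100 % 100  # e.g. 13542 -> 35
    seconds = run_duration % 100         # e.g. 13542 -> 42
    return hours * 3600 + minutes * 60 + seconds

# A run_duration of 13542 means 1h 35m 42s:
print(run_duration_to_seconds(13542))  # 5742
```

Note that short runs are stored without leading zeros (a 45-second run is just 45), which the integer division handles naturally.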

SQL Server Error: 9001, Severity: 21, State: 5 – The log for database 'database_name' is not available. Check the event log for related error messages. Resolve any errors and restart the database.

After being greeted with the log above, I started investigating the issue.

I found the cause to be the database being closed, not any major underlying disk issue.

Check whether your database has AUTO_CLOSE enabled:


select @@SERVERNAME as server_name
    , getutcdate() as report_date_utc
    , name as database_name
    , is_auto_close_on
    , state_desc
    , user_access_desc
from sys.databases
where is_auto_close_on = 1
order by name asc

If AUTO_CLOSE is on, reconfigure the database to stop using it.

Before making any changes, check the integrity of the database. If this command generates no errors, move on to changing the AUTO_CLOSE option.

dbcc checkdb('database_name')
go
alter database [database_name] set offline with rollback immediate;
go
alter database [database_name] set online;
go
alter database [database_name] set AUTO_CLOSE OFF;

Understanding I/O Performance for SQL Server on Amazon EC2 / AWS.

Having run SQL Server on EC2, I can say EC2 is a very stable platform; however, you have to pay for the performance you need. I have run several repeated tests, and my experience is that Amazon gives you exactly what you pay for.

If you are having a performance problem, you need to look at your entire infrastructure and ensure you have not over-provisioned one aspect of the system while under-provisioning another.

The AWS blog lists the dedicated network throughput for each instance size. I have chosen to expand on this to show the best choices for EBS volume configuration on these instance types. If your only objective is maxing out I/O performance, the table below should be sufficient. If, however, you need large amounts of space, you may choose to increase your drive count and reduce your Provisioned IOPS per volume.


There are several key things I have learned and want to state for you:

1) Your instance size determines Guaranteed Network Throughput

2) Everything that goes off your server traverses that single NIC ( Disk I/O, Network I/O, everything)

3) 1,000 IOPS is approximately 16 MB per second on AWS, since the block size is 16 KB.

4) If you are running SQL Server with terabytes of data, you most likely need an instance size rated as High for network performance, and potentially one of the newer instances which promise 10 Gigabit throughput.

5) Please do configure your server to be EBS optimized.

6) EBS volumes are currently limited to 1TB in size, so to create larger disks use Software RAID in your Operating System. (p.s. Amazon will deliver the IOPS of all volumes in the RAID Array — just remember you cannot exceed the Dedicated Throughput of your NIC).

7) If your backups are running terribly long and you can’t seem to figure out why… you most likely have an I/O bottleneck related to your server configuration. Amazon is NOT the problem.
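The arithmetic behind point 3 and the table below is straightforward to check. A quick Python sketch (using decimal megabytes, which appears to match the table's numbers):

```python
BLOCK_SIZE_KB = 16  # EBS I/O size assumed in this post

def max_ebs_iops(nic_mbps: float) -> float:
    """Max IOPS a NIC can sustain: Mbps -> MB/s -> number of 16 KB operations/s."""
    mb_per_second = nic_mbps / 8  # 500 Mbps -> 62.5 MB/s
    return mb_per_second * 1000 / BLOCK_SIZE_KB

print(max_ebs_iops(500))   # 3906.25 -> round up to a 4,000 PIOPS purchase
print(max_ebs_iops(1000))  # 7812.5  -> round up to an 8,000 PIOPS purchase
```

This is why buying more Provisioned IOPS than your instance's dedicated throughput can deliver is wasted money: the NIC is the ceiling.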

Below is the table; I hope it works for you.


Any questions / comments that may assist in improving this post are appreciated.


Instance Type      Dedicated Throughput   Dedicated Throughput (MB/s)   Max IOPS Throughput for EBS   Most Optimized Purchase Size (IOPS)   Min Drive Size (GB) @ Max IOPS   Min Drive Count to Get Maximum IOPS
m1.large           500 Mbps               62.5                          3,906.25                      4,000                                 400                              1
m1.xlarge          1000 Mbps              125                           7,812.50                      8,000                                 800                              2
m2.2xlarge (new)   500 Mbps               62.5                          3,906.25                      4,000                                 400                              1
m2.4xlarge         1000 Mbps              125                           7,812.50                      8,000                                 800                              2
m3.xlarge (new)    500 Mbps               62.5                          3,906.25                      4,000                                 400                              1
m3.2xlarge (new)   1000 Mbps              125                           7,812.50                      8,000                                 800                              2
c1.xlarge (new)    1000 Mbps              125                           7,812.50                      8,000                                 800                              2

SSRS Identifying Reports to Tune – Top 10 Offenders

To discover reports which are candidates for tuning I use the following query to identify the reports which take the longest time to perform their data retrieval tasks.

The query I use is:

DECLARE @ReportPath VARCHAR(MAX)
SET @ReportPath = '%'


SELECT TOP 10
c.[Path],c.[Name]
,[ReportID]
,COUNT(1) as UseCount
,SUM([TimeDataRetrieval]) as [TimeDataRetrieval]
,SUM([TimeProcessing]) as [TimeProcessing]
,SUM([TimeRendering]) as [TimeRendering]
,(SUM([TimeDataRetrieval])/(SUM([TimeDataRetrieval]+[TimeProcessing]+[TimeRendering])*1.00)) *100 as TimeDataRetrieval_TotalTime
,(SUM([TimeProcessing])   /(SUM([TimeDataRetrieval]+[TimeProcessing]+[TimeRendering])*1.00)) *100 as TimeProcessing_TotalTime
,(SUM([TimeRendering])    /(SUM([TimeDataRetrieval]+[TimeProcessing]+[TimeRendering])*1.00)) *100 as TimeRendering_TotalTime
,SUM([TimeDataRetrieval]+[TimeProcessing]+[TimeRendering]) as TotalTime
,SUM([ByteCount]) as [ByteCount]
,SUM([RowCount]) as [RowCount]
,MIN([TimeStart]) as FirstUsed
,MAX([TimeStart]) as LastUsed
,AVG([TimeDataRetrieval]*1.00) as [Avg_TimeDataRetrieval]
,AVG([TimeProcessing]*1.00)    as [Avg_TimeProcessing]
,AVG([TimeRendering]*1.00)     as [Avg_TimeRendering]
,AVG([RowCount])               as [Avg_RowCount]
,AVG([TimeDataRetrieval]/(CASE WHEN [RowCount] = 0 THEN 0.01 ELSE [RowCount] END * 1.00)) as [Avg_TimeDataRetrieval_Per_Row] -- rudimentary hack to prevent divide by zero and to get a number even when no rows are returned
FROM
[dbo].[catalog] c WITH(NOLOCK)
LEFT OUTER JOIN [dbo].[ExecutionLog] el  WITH(NOLOCK) ON (c.ItemID = el.ReportID)
WHERE
1=1
-- AND c.[Path] LIKE @ReportPath
GROUP BY c.[Path],c.[Name],[ReportID]
HAVING COUNT(1) > 5
ORDER BY
SUM([TimeDataRetrieval]+[TimeProcessing]+[TimeRendering]) DESC,AVG([TimeDataRetrieval]) DESC

This gives me a list of the reports on the server in the order of longest time taken for data retrieval. Once I have this list, I then look at the reports and see what I can do to make the retrieval of each data set faster / more efficient by reviewing execution plans, and re-writing the queries / stored procedures.

restore database and automatically drop database snapshots transact sql

To facilitate quick testing of changes in my development and test environments, I lean heavily on database snapshots. The benefit of snapshots is that I can revert data to its previous state without having testers wait 30-40 minutes for another database restore just to repeat their tests. I can also join against the data in the snapshot database to compare data.

Snapshots are a great help. However, I still need to refresh the source database every now and then from production to get up-to-date data. The following methodology lets me drop all existing snapshots and restore the database in one step, without manually checking for snapshots, dropping each one individually, and then performing the restore.

Once I have the script defined for a database, I put it in svn so that my entire team has access and can restore the database unattended.

e.g. http://svn/database/servername/database_restore_scripts/mydatabase.sql

Please note, however, that:

  1. If you run this script you drop and lose all your snapshots on a database permanently.
  2. If you do not have backups of your test databases, make sure that all your users are ok with you refreshing the environment before doing your data refresh from production.

My process is to:

  1. drop any existing database snapshots
  2. offline the database
  3. online the database
  4. restore the database
  5. fix users


-- DROP DATABASE SNAPSHOTS IF THEY EXIST
DECLARE @database VARCHAR(100)
DECLARE @sql NVARCHAR(4000)
DECLARE cur_db CURSOR
FOR
select name
from sys.databases
where source_database_id IN (select database_id from sys.databases where name LIKE 'MyDatabase')
ORDER BY create_date DESC
FOR READ ONLY
OPEN cur_db
FETCH NEXT FROM cur_db INTO @database
WHILE (@@FETCH_STATUS = 0 )
BEGIN

SET @sql = 'ALTER DATABASE ['+@database+'] SET OFFLINE WITH ROLLBACK IMMEDIATE;'
EXEC sp_executesql @sql
SET @sql = 'ALTER DATABASE ['+@database+'] SET ONLINE;'
EXEC sp_executesql @sql
SET @sql = 'DROP DATABASE ['+@database+'];'
-- PRINT @sql
EXEC sp_executesql @sql
FETCH NEXT FROM cur_db INTO @database
END
CLOSE cur_db
DEALLOCATE cur_db
GO
-- OFFLINE THE MAIN DATABASE TO FORCE USERS OFF SYSTEM
ALTER DATABASE  [MyDatabase] SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO
-- ONLINE THE DATABASE
ALTER DATABASE  [MyDatabase] SET ONLINE;
GO

-- BEGIN RESTORING DATABASE.
DECLARE @emailMessage NVARCHAR(4000), @exitcode int, @sqlerrorcode int;
EXEC MASTER..sqlbackup N'-SQL "RESTORE DATABASE [MyDatabase]
FROM DISK=''\\BACKUP\MyDatabase.sqb''
WITH REPLACE
, DISCONNECT_EXISTING
, RECOVERY"', @exitcode OUTPUT, @sqlerrorcode OUTPUT;
IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
BEGIN
SET @emailMessage = 'FAILED DATABASE RESTORE | ' + @@SERVERNAME + ' | ' + SUSER_SNAME() + ' | '+ 'MyDatabase' + ' | ' + CONVERT(VARCHAR(20),getdate(),120)
EXEC msdb.dbo.sp_send_dbmail
@recipients='dbas@mydomain.com;',
@subject= @emailMessage,
@body= @emailMessage
END
ELSE
BEGIN
SET @emailMessage = 'SUCCESSFUL DATABASE RESTORE | ' + @@SERVERNAME + ' | ' + SUSER_SNAME() + ' | '+ 'MyDatabase' + ' | ' + CONVERT(VARCHAR(20),getdate(),120)
EXEC msdb.dbo.sp_send_dbmail
@recipients='dbas@mydomain.com;',
@subject= @emailMessage,
@body= @emailMessage
END
GO
-- RUN CODE TO RESOLVE MISMATCHED SIDS and USER ACCOUNTS WITH LOGINS
EXEC [MyDatabase].dbo.sp_fixusers
GO

how to find stored procedures in sql server 2008 or sql server 2005 across all databases

As much as I hate searching for objects, or not having complete information to troubleshoot an issue, sometimes I have to go hunting for a stored procedure or function on a database server.

To find which database contains the stored procedure I am looking for, I run the following script:

-- SEARCH FOR OBJECT
-- FIND A ROUTINE / PROCEDURE / FUNCTION IN ALL DATABASES ON SERVER.
EXEC sp_msforeachdb N'USE [?];
SELECT
@@SERVERNAME + '' | '' + SUSER_SNAME() + '' | ''+ DB_NAME() + '' | '' + CONVERT(VARCHAR(20),getdate(),120) as [SOURCE],
ROUTINE_CATALOG, ROUTINE_SCHEMA, ROUTINE_NAME ,
''[''+ CAST(@@SERVERNAME AS VARCHAR(100))+ ''].[''+ ROUTINE_CATALOG + ''].[''+ ROUTINE_SCHEMA + ''].[''+ ROUTINE_NAME + '']'' as [FQN]
FROM [?].INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_NAME LIKE ''%myROUTINEName%'';
'
GO

I combine this with registering a group of servers in management studio, and in one quick step I can search for a procedure or function in any set of servers and databases.


sql server create or replace procedure

Oracle has great syntax to CREATE OR REPLACE a stored procedure.

When I have to sync up multiple servers or deploy a specific procedure across my entire SQL Server Environment, I like to have a single script which can create the procedure where it needs to be created, and alter the procedure if it needs to be altered.

To make my code runnable on a server that may or may not already have my stored procedure, I create a code stub, then replace it with the correct code.

Here is an example of code I use for database backups with Red Gate:

IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_NAME LIKE 'usp_BackupLogs' AND ROUTINE_TYPE LIKE 'PROCEDURE' )
BEGIN
EXEC sp_executesql N'CREATE PROCEDURE [dbo].[usp_BackupLogs] AS SELECT CONVERT(VARCHAR(20),getdate() ,120)  + '' - Code Stub to Be Replaced by Alter Script '' as [Stub]'
END
GO
ALTER PROCEDURE [dbo].[usp_BackupLogs]
AS
DECLARE @emailMessage NVARCHAR(4000)
DECLARE @exitcode int
DECLARE @sqlerrorcode int
EXEC MASTER..sqlbackup N'-SQL "BACKUP LOGS [*] TO DISK = ''\\BACKUPS\LOGS\'' WITH NAME = '''', DESCRIPTION = '''', INIT, PASSWORD = ''mypassword'', MAXDATABLOCK = 65536,KEYSIZE = 256, COMPRESSION = 1, THREADS = 1"', @exitcode OUTPUT, @sqlerrorcode OUTPUT
IF (@exitcode <>  0) OR (@sqlerrorcode <> 0)
BEGIN
SET @emailMessage = 'FAILED LOG BACKUP | ' + @@SERVERNAME + ' | ' + SUSER_SNAME() + ' | '+ DB_NAME() + ' | ' + CONVERT(VARCHAR(20),getdate(),120)
EXEC msdb.dbo.sp_send_dbmail
@recipients=N'DBAS@mydomain.com',
@importance = 'High',
@subject = @emailMessage,
@body= @emailMessage
END
-- COMMENTED OUT INTENTIONALLY - DO NOT WANT EMAILS ON SUCCESSFUL LOG BACKUPS (esp. if scheduled every 15 minutes/ frequently)
--ELSE
-- BEGIN
-- SET @emailMessage = 'SUCCESSFUL LOG BACKUP | ' + @@SERVERNAME + ' | ' + SUSER_SNAME() + ' | '+ DB_NAME() + ' | ' + CONVERT(VARCHAR(20),getdate(),120)
-- EXEC msdb.dbo.sp_send_dbmail
-- @recipients=N'DBAS@mydomain.com',
-- @subject = @emailMessage,
-- @body= @emailMessage ;
-- End

Following this example, if I want to deploy all the code from one server to another, I generate the CREATE statements using the following query. This avoids the need for two separate scripts (CREATE and ALTER).

----------------------------------------------------------------
-- CREATE OR REPLACE ALL STORED PROCEDURES
----------------------------------------------------------------
SELECT
ROUTINE_NAME
,'EXEC sp_executesql N'''+REPLACE(N'CREATE PROCEDURE ['+ROUTINE_SCHEMA+'].['+ROUTINE_NAME+'] AS SELECT CONVERT(VARCHAR(20),getdate() ,120) + '' - Code Stub to Be Replaced by Alter Script '' as [Stub]','''','''''') + '''' as cmd
FROM INFORMATION_SCHEMA.ROUTINES
WHERE
ROUTINE_TYPE LIKE 'PROCEDURE'
ORDER BY ROUTINE_NAME ASC
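The REPLACE('''','''''') in the query above doubles any embedded single quotes so the generated stub survives being wrapped in an sp_executesql string literal. The same escaping, sketched in Python (names are mine, for illustration only):

```python
def escape_tsql_string(s: str) -> str:
    """Double embedded single quotes so s can sit inside a T-SQL string literal."""
    return s.replace("'", "''")

# A stub containing single quotes, as generated for each procedure:
stub = "CREATE PROCEDURE [dbo].[usp_Demo] AS SELECT 'stub' as [Stub]"

# Wrap the escaped stub in an sp_executesql call, as the query does:
cmd = "EXEC sp_executesql N'" + escape_tsql_string(stub) + "'"
print(cmd)
```

Without the doubling, the first embedded quote would terminate the outer literal and the generated command would not parse.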

Subsequently I script all objects as ALTERs and append them to the CREATE script.

I create a single script file with the following convention:
1. CREATE scripts (code stubs) ( with if not exists )
2. ALTER scripts (real code)