Archive for T-SQL Tuesday

T-SQL Tuesday #74 – Knowing When Data Changes Occur in Standard Edition

Robert L. Davis (b|t) is hosting this month’s T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009.

This month’s T-SQL Tuesday topic is Data Change. This is an important topic for me, since I’m about to create our first Data Warehouse at work. I’ve chosen to champion the companies that are using the Standard Edition of SQL Server.

Crestfallen

I took a class on Change Data Capture this past year at the SQL Saturday in Portland, Oregon. I absolutely loved it. I couldn’t wait to implement it. Then I found out it was an Enterprise Edition-only feature. I was crestfallen.

I went back to work and asked if we could consider using Enterprise Edition. I was told that we have too many cores on our Nutanix hardware to make the features we would use worth the cost. You see, SQL Server pricing is based on the number of cores the underlying hardware has, not the number of cores you utilize in your VM. So what is a girl to do? (And guys, I haven’t forgotten you either.)

Since the Change Data Capture feature is not available to us Standard Edition users, let me take you through another pattern that uses auditing columns on each table.

The Basis of The Pattern I Use

I use a pattern that includes four fields on all transactional tables. This (absolutely) includes lookup tables too. The two table types that are an exception to this pattern are audit tables and error tables. I’ll cover why later in this article.

The four fields are CreatedOn, CreatedBy, UpdatedOn, and UpdatedBy. The dates should be DATETIME2. CreatedOn is the easiest to populate: you can create a default on the field so that it is populated with GETDATE().

The other fields can be a little more tricky depending on your environment. There are two patterns for populating CreatedBy as well as two patterns for populating UpdatedBy and UpdatedOn. Below you will find the pros and cons of each.

Pattern One For CreatedBy

The first pattern populates the CreatedBy field using the SYSTEM_USER function. This retrieves the Windows login that the user used to log in to the application.

Pros
  • This pattern allows the use of a default on the CreatedBy field, so the field is not dependent on application code to populate it.
  • All users have a validated account.
  • Accounts that are used by SSIS or services don’t need to have a row in the User table.
Cons
  • The field will be a varchar or an nvarchar.
  • The CreatedBy field will need to be parsed for reporting to find out which user inserted the record. (One way around this is to remove the domain before inserting the (n)varchar value; see the sketch below.)
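A minimal sketch of that workaround, using a hypothetical dbo.Customer table (the table, column names, and sizes are just for illustration):

-- Minimal sketch (hypothetical table): the default strips the DOMAIN\ portion of
-- SYSTEM_USER before it is stored, so reports don't have to parse it later.
CREATE TABLE dbo.Customer
(
CustomerId INT IDENTITY PRIMARY KEY
,CustomerName VARCHAR(100) NOT NULL
,CreatedBy VARCHAR(50) NOT NULL
    DEFAULT (RIGHT(SYSTEM_USER, LEN(SYSTEM_USER) - CHARINDEX('\', SYSTEM_USER)))
,CreatedOn DATETIME2 NOT NULL DEFAULT GETDATE()
)
GO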

This is my preferred technique since you don’t need to rely on the User table being immediately populated for a user to start using an application. It also makes it easier when the UpdatedBy field is updated through a pattern below.

 

Pattern Two for CreatedBy

This technique is based on an existing User table and uses their ID for the value of CreatedBy.

Pros
  • This pattern allows the CreatedBy field to join to the User table through an integer. Information about the user can be determined easily by joining to the User table, and the integer takes less space than the varchar/nvarchar mentioned above.
  • CreatedBy does not rely on Windows Authentication. This is helpful when not all users of the application have a Windows login account.
Cons
  • This pattern requires a row for various tasks in the User table when SSIS or services insert data automatically.
  • This pattern requires CreatedBy to be populated by the application instead of by a default value. This requires working with the AppDev team and getting the AppDev team to write some additional code for you. (If you need to resort to bribing them, I suggest pizza, donuts, chocolate, or beer.)
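A minimal sketch of this pattern, assuming a hypothetical dbo.[User] table with a UserId key (your table and column names will differ):

-- Minimal sketch (hypothetical names): CreatedBy is an integer supplied by the
-- application, and the foreign key guarantees it points at a real row in dbo.[User].
CREATE TABLE dbo.[User]
(
UserId INT IDENTITY PRIMARY KEY
,UserName VARCHAR(100) NOT NULL
)
GO

CREATE TABLE dbo.CustomerNote
(
CustomerNoteId INT IDENTITY PRIMARY KEY
,NoteText VARCHAR(1000) NOT NULL
,CreatedBy INT NOT NULL REFERENCES dbo.[User] (UserId) -- no default; the application must supply it
,CreatedOn DATETIME2 NOT NULL DEFAULT GETDATE()
)
GO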

 

Now let’s talk about UpdatedOn and UpdatedBy. They are a little trickier, since they have to be populated on an update.

Pattern One For UpdatedBy and UpdatedOn

This pattern requires code to populate the fields since there are no defaults on updated rows. As stated above, you’ll have to work with the AppDev team.

Note: This is not a bad thing. I wholeheartedly believe in having a good relationship with the AppDev team. It makes it easier to work with the team when they have the ability to write SQL, and it has the added benefit of you being included earlier in design sessions. (I personally have been blessed with a great AppDev lead.)

Pros
  • This pattern can be used to populate the UpdatedBy field and even the UpdatedOn fields using similar code.
  • Either user IDs or Windows accounts can be used to populate UpdatedBy.
Cons
  • The pattern is dependent on the AppDev team to remember to include UpdatedBy and UpdatedOn for every updated row, especially when there are tight deadlines.
  • You need to be vigilant that the fields are always updated, especially with lookup tables.

This is my preferred technique since it doesn’t involve triggers, which we’ll see in pattern two below.
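As a rough sketch, every UPDATE the application issues carries the two audit fields along with it, something like this (using the OrderStatusType lookup table from the trigger example below; @OrderStatusTypeId and @UserName are assumed to be supplied by the application):

-- Minimal sketch: the application includes the audit fields in every UPDATE it issues.
UPDATE dbo.OrderStatusType
SET
IsActive = 0
,UpdatedBy = @UserName -- or SUSER_NAME() if you use the Windows login pattern
,UpdatedOn = GETDATE()
WHERE
OrderStatusTypeId = @OrderStatusTypeId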

Pattern Two For UpdatedBy and UpdatedOn

This pattern uses an AFTER UPDATE trigger. The reason I chose the AFTER UPDATE trigger is that it waits until all the changes have been made to the table(s) being updated by other triggers.

Note: I do not encourage multiple tables to be updated in a single trigger. (Hmm. This sounds like a good future post.)

Pros
  • This can be used as a stop gap while the AppDev team refactors their code to populate UpdatedBy and UpdatedOn fields as well as CreatedBy.
  • There is no requirement to rely on the application code to populate these fields.
Cons
  • The UpdatedBy field can’t be populated with the user’s ID from the User table; the trigger only has access to the login.
  • You might not be allowed to use triggers. (Note: Triggers are not bad when used correctly.)

CREATE TABLE dbo.OrderStatusType
(
OrderStatusTypeId INT IDENTITY PRIMARY KEY
,OrderStatusName VARCHAR(50) NOT NULL
,IsActive BIT NOT NULL DEFAULT 1
,CreatedBy VARCHAR(50) NOT NULL DEFAULT SYSTEM_USER
,CreatedOn DATETIME2 NOT NULL DEFAULT GETDATE()
,UpdatedBy VARCHAR(50)
,UpdatedOn DATETIME2
)
GO

CREATE TRIGGER dbo.Trg_OrderStatusType_u ON dbo.OrderStatusType
AFTER UPDATE
AS
SET NOCOUNT ON

UPDATE dbo.OrderStatusType
SET
UpdatedOn = ISNULL(i.UpdatedOn, GETDATE()) -- This makes sure if a NULL was used, it is overwritten with the current Date.
,UpdatedBy = ISNULL(i.UpdatedBy, SUSER_NAME()) -- This makes sure if a NULL was used, it is overwritten with the current user.
FROM
dbo.OrderStatusType AS ost
INNER JOIN inserted AS i ON ost.OrderStatusTypeId = i.OrderStatusTypeId
GO
INSERT INTO OrderStatusType
(OrderStatusName
,IsActive
)
VALUES
('Created',1)
,('Back Ordered', 1)
,('Shipped',1)
,('Completed',1)

GO

SELECT * FROM OrderStatusType AS ost

UPDATE
OrderStatusType
SET
IsActive = 0
WHERE
OrderStatusTypeId = 2

SELECT * FROM OrderStatusType AS ost

UPDATE
OrderStatusType
SET
IsActive = 0
,UpdatedBy = 'George'
,UpdatedOn = '2016-01-11 17:38'
WHERE
OrderStatusTypeId = 4

SELECT * FROM OrderStatusType AS ost

If you are adding this pattern to an existing trigger, such as an “AFTER INSERT, UPDATE, DELETE” trigger, then make sure to check that the trigger was fired by an update. This can be done by checking that there are rows in both the inserted and deleted virtual tables that are available in triggers.

CREATE TRIGGER dbo.Trg_OrderStatusType_iud ON dbo.OrderStatusType
AFTER INSERT, UPDATE, DELETE
AS
SET NOCOUNT ON

IF EXISTS (SELECT 1 FROM Inserted) AND EXISTS (SELECT 1 FROM deleted)
BEGIN
UPDATE dbo.OrderStatusType
SET
UpdatedOn = ISNULL(i.UpdatedOn, GETDATE()) -- This makes sure if a NULL was used, it is overwritten with the current Date.
FROM
dbo.OrderStatusType AS ost
INNER JOIN inserted AS i ON ost.OrderStatusTypeId = i.OrderStatusTypeId
END
GO

Exceptions For Audit and Error Tables

These two types of tables are usually populated by inserts only. Most of the time the CreatedOn field is the only field that needs to be added. I would change the name to AuditedOn and ErroredOn respectively since the CreatedOn will most likely represent the creation of the original record.
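For example, a bare-bones error table might look something like this sketch (the names are mine, just for illustration):

-- Minimal sketch (hypothetical names): error tables are insert-only, so a single
-- ErroredOn column with a default is usually all the auditing they need.
CREATE TABLE dbo.OrderImportError
(
OrderImportErrorId INT IDENTITY PRIMARY KEY
,ErrorMessage NVARCHAR(3000) NOT NULL
,ErroredOn DATETIME2 NOT NULL DEFAULT GETDATE()
)
GO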

Why are Lookup tables not in this list? They can be modified over time. Someone may need a value changed or the value needs to be discontinued. The row can’t be deleted when it is discontinued since that would break referential integrity.

In Summary

By using CreatedOn and UpdatedOn you can compare dates when data needs to be processed without having to compare the entire table.

One last thing on the UpdatedBy and UpdatedOn fields. For a long time I insisted that these two fields be left NULL during an insert, but I have since changed my mind. By setting UpdatedBy and UpdatedOn to the same values as CreatedBy and CreatedOn on an insert, you won’t have to deal with the ISNULL function or fall back to CreatedOn for (some) comparisons.
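Here is a rough sketch of the kind of incremental query this pattern enables, assuming @LastLoadDate is something your ETL process tracks (for example, in a load-control table):

-- Minimal sketch: pull only the rows created or changed since the last load,
-- instead of comparing the entire table. @LastLoadDate is an assumed value
-- tracked by the ETL process.
DECLARE @LastLoadDate DATETIME2 = '2016-01-11'

SELECT
ost.OrderStatusTypeId
,ost.OrderStatusName
,ost.IsActive
FROM
dbo.OrderStatusType AS ost
WHERE
ost.UpdatedOn >= @LastLoadDate -- catches inserts too when UpdatedOn is set on insert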

Thanks For All The Fish

I want to thank Robert L. Davis for hosting this month and I look forward to participating in future T-SQL Tuesday blog parties.

T-SQL Tuesday #72 Summary – Data Modeling Gone Wrong

This month marks the 72nd T-SQL Tuesday. Adam Machanic (b|t) started the T-SQL Tuesday blog party in December of 2009. Each month an invitation is sent out on the first Tuesday of the month, inviting bloggers to participate in a common topic. On the second Tuesday of the month all the bloggers post their contributions to the event for everyone to read. The host sums up all the participants’ entries at the end of the week. This month I’m the host and the topic is …

Data Modeling Gone Wrong

The purpose of SQL Server is to make sure that the databases are kept safe and run as optimally as possible. The problem is, if the data model is flawed or not maintained, then no matter how optimally SQL Server is configured, the database won’t be able to function efficiently.

Below you will find the summary of the 19 posts from this blog party.

Summary

Note: This is a heap. The summary is based on the order received… well, except Ed’s; I inserted him at the top. You’ll see why.

Ed Leighton-Dick
SQL New Blogger Challenge: Week 2 ideas
I found several posts in my comments that simply reiterated my invitation. I didn’t list them below. I am, however, listing this one. Ed is challenging new bloggers. I love this idea. He not only broke down my invitation for new bloggers, but gave some ideas on how to find a topic that will fit into the prompt. Thanks go out to Ed for encouraging the “next generation” of SQL bloggers.

Thomas Rushton – The Lone DBA
T-SQL Tuesday #72 – Data Modeling Gone Wrong
He wrote about generically named fields and overloaded fields. This is a very important topic and I’m glad it’s at the top.

Robert Pearl
T-SQL Tuesday No.72 – Is Your Data Model, Normal?
Robert started off his post with, “Well, I’m not a data modeler, nor do I play one in the corporate world.” He then goes on to tell us how important data modeling is. Finally, he goes over the basics of the 3 forms of normalization, which he feels everyone should know. (And I wholeheartedly agree with him.)

Chris Yates
T-SQL Tuesday #72 Invitation – Data Modeling Gone Wrong
Chris, one of my dearest friends, wrote a fabulous post on what you should think about and do prior to creating that first table. He then went on to talk about some good general practices to keep in mind while creating the data model.

Tamera Clark
#TSQL2SDAY – Data Modeling Gone Wrong

I really liked Tamera’s approach to the topic. She goes into the realities of teams not admitting there are problems due to

  • “Reasons”
  • Applications
  • Egos

I’ve been there, seen that and have the t-shirt.

Andy Galbraith
T-SQL Tuesday #72 – Implicit Conversion Problems

This is one of those topics that people who don’t write SQL often forget about: implicit conversions and how they impact queries. This is a great read.

Tim Peters
That time a beer broke my database – T-SQL Tuesday #72
Tim has a great example from one of his data modeling experiences: finding out (after the data model had been deployed) that another table was needed to hold multiple breweries.

Side note: He has a great website, Beer Intelligence Project, where he has documented and visualized new beers. I think he probably enjoyed the research tremendously too.

Rob Volk
T-SQL Tuesday #72: Data Modeling
As usual, Rob tells a great story. This one is a fictitious story about how a small ice cream store grew over time. It’s told from the point of view of the database. The database started as a small database and grew until paralysis hit, I mean performance issues hit.

Steve Jones
Poor Data Modeling – T-SQL Tuesday #72
Steve also has a great post about data modeling morphing into a difficult model. I’ve worked with the same data models that he describes here: a hierarchical data model that can get out of hand if it is not planned properly. The moral of the story is, when a data model is being designed, be forward thinking.

Mike Fal
#TSQL2SDAY: You’re Gonna Cluster that GUID
Mike attacked a very controversial data type: The GUID <sinister music inserted here>. This is a great topic to add to this collection of posts. Mike makes a great argument on why GUIDs should be indexed. He also specifies a specific use case for it.

Aaron Bertrand

T-SQL Tuesday #72 : Models Gone Wild!

I think Aaron and I were separated at birth. I preach the same list of “database sins”. In Aaron’s post he goes over two categories of “sins”, bad names and incorrectly used data types. I think the only thing I would add to his list would be to not use the default SQL Server constraint names. Give them meaningful names!

Kenneth Fisher
Why are you still using datetime?
Kenneth brings a great question to the table. Why are you still using datetime? He goes into the different date data types that are available and why we should be taking advantage of them.

Anders Pedersen
T-SQL Tuesday #72: Data modeling gone extremely wrong
Anders gives a good example of how an over-normalized database can slow down your application.

Terry McCann
Tally/Number tables in SQL Server #TSQL2sday
Terry took a different approach. He looked at the prompt from the point of implementing some best practices instead of identifying bad ones. He wrote about how Tally/Number tables can help with queries to find missing data.

Rob Farley
What’s driving your data model?
Like Chris Yates, Rob wrote about considering the data model’s purpose BEFORE creating it. The difference between the two posts is that Rob took it from the data warehouse data model point of view.

Malathi Mahadevan
TSQL Tuesday #72 – Data Modeling gone wrong
My good friend Mala had a great topic for her post. She talked about the “one table to rule them all” pattern that she encountered at one of her jobs. I really liked one of the sentences in her summary: “The big lesson that came out of it was that the size of the schema or fewer tables do not really mean a simpler/easier to manage design, in fact it can be the exact opposite.”

Sander Stad
Data Modeling Gone Wrong
Sander painted a story of XML woe. I want you to be prepared when you read his excellent post. He has an image of an execution plan that will make your hair stand on end. Make sure you are sitting down.

Jens Vestergaard
#TSQL2SDAY – Data Modeling Gone Wrong
Jens has a horrifying tale of bad field names combined with the lack of foreign keys. I may have bad dreams about this one tonight. He does have a great reaction GIF that summarizes how any of us would feel about being faced with the database he describes.

(Update… I forgot to add my own post. Oops.)

Mickey Stuewe
T-SQL Tuesday #72 – Bad Decisions Made With Surrogate Keys
In my post, I go into when you should and should not use surrogate keys as primary keys. I then give a case showing how surrogate keys can cause duplicate data in a many-to-many table.

Thanks for all the fish!

First, I would like to point out how cool this blog party is. Even though Adam lives in the US, this is not just a US blog party; it’s international. In my list I have posts from The Netherlands, the UK, Denmark, and Australia. I think that is really cool. The one thing that I would like to see is more women joining the blog party. Including myself, there were only three women who participated. While I’m very happy that Tamera and Mala joined the party, I would like to see more in the future.

T-SQL Tuesday #72 – Bad Decisions Made With Surrogate Keys

This is my second time hosting the T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009.

This month’s invitation topic is on Data Modeling Gone Wrong. Being a Database Developer, I deal with bad database design decisions daily. One of my app-dev teammates loves to tell me that the bad decisions were made because I didn’t work there yet. (That makes me laugh.)

Surrogate Keys vs Natural Keys

The point of surrogate keys is to represent complicated natural keys as the primary key of the table. Both the surrogate key and natural key will yield a unique key for the row. Sometimes that unique natural key is the entire row. When possible, it is better to use the natural key since it is the true representation of the row. Unfortunately, this is not always practical. Let’s look at some examples.

[Image: TSQLTuesday72 Image1]

In the Employee table it would take four fields to make a primary key from the natural key (first name, last name, social security number, and birthdate). Note: This assumes the table is only used in the US and the employees have social security numbers. The reason the birthdate is also needed is that social security numbers can be reused after someone has passed away. For the Employee table it makes sense to have a surrogate key, since it would be cumbersome to use all four fields as foreign keys in other tables.
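A sketch of what that could look like (column sizes are illustrative): the surrogate key stays convenient for foreign keys, while a unique constraint still enforces the natural key.

-- Minimal sketch: EmployeeId is the surrogate primary key used by foreign keys,
-- while the unique constraint still enforces the four-field natural key.
CREATE TABLE dbo.Employee
(
EmployeeId INT IDENTITY PRIMARY KEY
,FirstName VARCHAR(50) NOT NULL
,LastName VARCHAR(50) NOT NULL
,SocialSecurityNumber CHAR(11) NOT NULL
,BirthDate DATE NOT NULL
,CONSTRAINT UQ_Employee_NaturalKey UNIQUE (FirstName, LastName, SocialSecurityNumber, BirthDate)
)
GO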

The StateList table (representing each state in the United States) is a good example of using the natural key as the primary key. Each state is represented by a two-character abbreviation, so a CHAR(2) can be used for both the natural key and the primary key. This provides the added benefit of not needing to join back to StateList to get the two-character state abbreviation… unless additional information about the state is needed. So what is the point of this table? Well, by having it, you are guaranteed referential integrity on the StateCode field by having a foreign key back to the StateList table. You don’t have to worry that someone puts ZZ as a StateCode.
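A sketch of that idea (the Address table and constraint are mine, just to show the foreign key):

-- Minimal sketch: the two-character state code is both the natural key and the primary key.
CREATE TABLE dbo.StateList
(
StateCode CHAR(2) PRIMARY KEY
,StateName VARCHAR(50) NOT NULL
)
GO

CREATE TABLE dbo.[Address]
(
AddressId INT IDENTITY PRIMARY KEY
,StreetAddress VARCHAR(100) NOT NULL
,City VARCHAR(50) NOT NULL
,StateCode CHAR(2) NOT NULL REFERENCES dbo.StateList (StateCode) -- no ZZ allowed
)
GO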

Danger, Will Robinson!

One of the problems I’ve seen with careless use of surrogate keys is the duplication of natural keys. Quite often it’s overlooked that the natural key still needs to have a unique constraint. Without it, the reporting team ends up having to use MAX or DISTINCT to get the latest instance of the natural key, or SSIS packages are needed to clean up the duplicates. This can be compounded with many-to-many tables.

Many-to-many tables allow two tables to be joined multiple times. An example can be seen in the car insurance industry.  If you have multiple people on the same insurance and they are registered to drive multiple cars, then a many-to-many table would be created to capture the data.

If a surrogate key is used on the many-to-many table to provide uniqueness, and the natural key does not have a unique constraint, then duplicate natural key combinations can occur. This can be obscured if there is additional information in the table. Maybe the amount the car is insured for is also maintained in this table. Let’s take Victoria’s insurance as an example. If Victoria is in the table with her 1971 Corvette listed twice, with two different insurance amounts, which one is the current one? The better pattern in this case would be to use the natural key.

[Image: TSQLTuesday72 Image2]
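Here is a sketch of the junction table described above with the natural key as the primary key (the table and column names are assumed from the car insurance example):

-- Minimal sketch: the natural key (DriverId, VehicleId) is the primary key of the
-- many-to-many table, so a duplicate driver/car combination simply can't be inserted.
-- (Alternatively, keep a surrogate key and add a UNIQUE constraint on DriverId, VehicleId.)
CREATE TABLE dbo.DriverVehicle
(
DriverId INT NOT NULL
,VehicleId INT NOT NULL
,InsuredAmount MONEY NOT NULL
,CONSTRAINT PK_DriverVehicle PRIMARY KEY (DriverId, VehicleId)
)
GO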

Conclusion

Surrogate keys are very useful, but it should not be assumed that they belong on every table. The natural key should always be considered first. If the natural key is too complicated to be used as a foreign key in other tables, then the surrogate key is a good choice. Just remember to ALSO put a unique constraint on the natural key.

Thanks for all the fish

I had several people tell me on Twitter that they were going to write their first blog post for this T-SQL Tuesday blog party. I want to thank them ahead of time for taking the leap into the blogging world to share their experiences and expertise in their fields.

T-SQL Tuesday #72 Invitation – Data Modeling Gone Wrong

This month marks the 72nd T-SQL Tuesday. Adam Machanic (b|t) started the T-SQL Tuesday blog party in December of 2009. Each month an invitation is sent out on the first Tuesday of the month, inviting bloggers to participate in a common topic. On the second Tuesday of the month all the bloggers post their contributions to the event for everyone to read. The host sums up all the participants’ entries at the end of the week. This month I’m the host and the topic is …

Data Modeling Gone Wrong

The purpose of SQL Server is to make sure that the databases are kept safe and run as optimally as possible. The problem is, if the data model is flawed or not maintained, then no matter how optimally SQL Server is configured, the database won’t be able to function efficiently.

I would like to invite you to share some data modeling practices that should be avoided, and how to fix them when they do occur.

Rules for T-SQL Tuesday Blog Party

Rule 1: Make sure that you include the T-SQL Tuesday image at the top of the post, which will help identify your post as a T-SQL Tuesday blog post. Please include a link back to this invitation too.

Rule 2: Publish your post sometime next Tuesday using GMT. Here’s a link to a GMT time converter. For example, in California, that would cover 5 pm Monday to 5 pm (PDT) Tuesday.

Rule 3: Come back here and post a link in the comments so that I can find all the posts for the round up.

Rule 4: Don’t get yourself fired. Make sure that you either generalize your post or get permission to blog about anything from work.

Rule 5: If you roam the Twitterverse, then don’t forget to Tweet about your blog post with the hashtag #tsql2sday.

Rule 6: Go read someone else’s blog on the subject!

Final Rule: Have fun!

T-SQL Tuesday #66: Monitor ALL Reports

Catherine Wilhelmsen (b|t) is hosting this month’s T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009.

This month’s invitation is about monitoring.

This topic is a tough one. I didn’t want to pick just one thing, especially since I just installed SQL Sentry’s Performance Advisor in production and Redgate’s DLM Dashboard in Dev. Both products are helping me see my environments more clearly.

But what about the need to monitor other items that don’t have a cool tool you can buy off the shelf, or products like SSRS that have built-in metrics collection?

That is when you need to roll your own code to capture metrics to help monitor your own, unique environment.  Take Excel reports as an example, or really any application that is used for reporting. It would be wonderful if you could capture metrics and monitor usage of ALL reports across your environment in a consistent manner. Let’s take a look at a way that can be accomplished.

In the beginning

The first thing you need to do is some planning. You need to know what kind of data you want to capture, and how you are going to update the existing reports to capture that data.

I happen to really like the way that SSRS collects usage metrics. So, for my Excel reports, I wanted to collect similar data. This will allow me to have one report to show how all my reports are performing. Maybe my top 10 slowest reports are from 3 different reporting platforms.

I created a couple of tables to hold the non-SSRS report metrics and metadata. I then created a stored procedure to insert the data into the tables. Finally, I added a call to that stored procedure at the bottom of the stored procedures used by the Excel reports I wanted to keep track of.

Step 1

I created a table that held the information I wanted to collect.


CREATE TABLE dbo.ReportType
(ReportTypeID INT IDENTITY PRIMARY KEY
,ReportTypeName VARCHAR(20) NOT NULL
)

INSERT INTO dbo.ReportType(ReportTypeName) VALUES ('Excel');
INSERT INTO dbo.ReportType(ReportTypeName) VALUES ('PowerPoint');

CREATE TABLE dbo.ReportLog
(ReportLogID INT IDENTITY PRIMARY KEY
,ReportTypeID INT NOT NULL
,ReportPath VARCHAR(255) NOT NULL
,ReportName VARCHAR(255) NOT NULL
,UserName NVARCHAR(255) NOT NULL
,SprocName VARCHAR(255) NOT NULL
,ReportParameters VARCHAR(500) NULL
,TimeStart DATETIME NOT NULL
,TimeEnd DATETIME NOT NULL
,Row_Count INT NOT NULL
)

Step 2

I created a stored procedure that I could have at the bottom of the stored procedures the reports are using.


CREATE PROCEDURE dbo.InsertReportUsage
(
@ReportTypeID INT
,@ReportPath VARCHAR(255)
,@ReportName VARCHAR(255)
,@UserName NVARCHAR(255)
,@SprocName VARCHAR(255)
,@ReportParameters VARCHAR(500)
,@TimeStart DATETIME
,@TimeEnd DATETIME
,@Row_Count int
)
AS
BEGIN TRY
SET NOCOUNT ON

INSERT INTO ReportLog
(
ReportTypeID
,ReportPath
,ReportName
,UserName
,SprocName
,ReportParameters
,TimeStart
,TimeEnd
,Row_Count
)
VALUES
(
@ReportTypeID
,@ReportPath
,@ReportName
,@UserName
,@SprocName
,@ReportParameters
,@TimeStart
,@TimeEnd
,@Row_Count
)
END TRY
BEGIN CATCH

DECLARE
@ErrorMessage AS nvarchar(3000)
,@ErrorSeverity AS int

SET @ErrorMessage = ISNULL(DB_NAME(DB_ID()) + N'.' + SCHEMA_NAME(SCHEMA_ID()) + N'.' + OBJECT_NAME(@@PROCID, DB_ID()),
N'SQL Object Name Not Available') + N': Error: ' + CONVERT(nvarchar(10), ERROR_NUMBER()) + N' Line: '
+ CONVERT(nvarchar(5), ERROR_LINE()) + N' - ' + ERROR_MESSAGE()

SET @ErrorSeverity = ERROR_SEVERITY()
RAISERROR(@ErrorMessage, @ErrorSeverity, 1)
END CATCH

SET NOCOUNT OFF
GO

Step 3

I added some common parameters at the top of the stored procedure for my report. Then, at the bottom, I called the stored procedure that records the report usage metrics.

Since I didn’t want to hard code the report path or the report name in the stored procedure, I passed those values from Excel.

I was also able to dynamically capture the stored procedure name, and the system user’s credentials.

Here is an example of a report that lists the names of employees.


CREATE PROCEDURE dbo.PersonReport
(
@LastName varchar(100)
,@ReportPath VARCHAR(255)
,@ReportName VARCHAR(255)
)
AS
BEGIN TRY
--SET NOCOUNT ON  (Don't set)

--------------Excel Log Block--------------
DECLARE    @SprocName VARCHAR(255) = DB_NAME(DB_ID()) + '.' + SCHEMA_NAME(SCHEMA_ID()) + '.' + OBJECT_NAME(@@PROCID, DB_ID())
DECLARE    @ReportParameters VARCHAR(8000) = 'LastName=' + @LastName
DECLARE    @TimeStart DATETIME = GETDATE()
--------------Excel Log Block--------------

SELECT
p.BusinessEntityID
,p.Title
,p.FirstName
,p.MiddleName
,p.LastName
FROM
Person.Person AS p
WHERE
p.LastName LIKE @LastName + '%';

--------------Excel Log Block--------------
DECLARE @Row_Count INT = @@ROWCOUNT -- capture the SELECT's row count before anything else resets it
DECLARE @TimeEnd DATETIME = GETDATE()
DECLARE @UserName NVARCHAR(255) = SUSER_NAME()

EXEC ReportManagement.dbo.InsertReportUsage 1, @ReportPath, @ReportName, @UserName, @SprocName
,@ReportParameters, @TimeStart, @TimeEnd, @Row_Count
--------------Excel Log Block--------------
END TRY
BEGIN CATCH

DECLARE
@ErrorMessage AS nvarchar(3000)
,@ErrorSeverity AS int

SET @ErrorMessage = ISNULL(DB_NAME(DB_ID()) + N'.' + SCHEMA_NAME(SCHEMA_ID()) + N'.' + OBJECT_NAME(@@PROCID, DB_ID()),
N'SQL Object Name Not Available') + N': Error: ' + CONVERT(nvarchar(10), ERROR_NUMBER()) + N' Line: '
+ CONVERT(nvarchar(5), ERROR_LINE()) + N' - ' + ERROR_MESSAGE()

SET @ErrorSeverity = ERROR_SEVERITY()
RAISERROR(@ErrorMessage, @ErrorSeverity, 1)
END CATCH

SET NOCOUNT OFF
GO

Last Step

Finally, I created a stored procedure that combined the SSRS usage metrics from the ExecutionLog3 view (provided in the ReportServer database) with the metrics I captured in the ReportLog table.
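The shape of that query is roughly this sketch; the ExecutionLog3 column names below are from memory, so verify them against your own ReportServer database:

-- Rough sketch: combine SSRS usage from ExecutionLog3 with the usage captured in
-- dbo.ReportLog so one report can cover every reporting platform.
SELECT
'SSRS' AS ReportSource
,el.ItemPath AS ReportPath
,el.UserName
,el.TimeStart
,el.TimeEnd
,DATEDIFF(SECOND, el.TimeStart, el.TimeEnd) AS DurationInSeconds
,el.[RowCount] AS Row_Count
FROM
ReportServer.dbo.ExecutionLog3 AS el
UNION ALL
SELECT
rt.ReportTypeName AS ReportSource
,rl.ReportPath + '/' + rl.ReportName AS ReportPath
,rl.UserName
,rl.TimeStart
,rl.TimeEnd
,DATEDIFF(SECOND, rl.TimeStart, rl.TimeEnd) AS DurationInSeconds
,rl.Row_Count
FROM
ReportManagement.dbo.ReportLog AS rl
INNER JOIN ReportManagement.dbo.ReportType AS rt ON rt.ReportTypeID = rl.ReportTypeID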

Now I can generate an overarching report for all reports in my environment.

Thanks for all the fish

Thanks go out to Catherine Wilhelmsen for hosting this month’s T-SQL Tuesday blog party. She is one of my favorite people to catch up with during PASS Summit. I hope you spend some time catching up with her on her blog.

T-SQL Tuesday #62 – Primary Key Constraints Are Good For Your Database

Robert Pearl (b|t) is hosting this month’s T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009.

This month’s invitation is about making sure your database is healthy. For me, that is not just making sure that patches are updated and backups restore properly. It also means the database schema is healthy.

In the beginning

One of the simplest things that we can do when creating a new table is to make sure there is a primary key constraint. A primary key constraint is simply a specialized index. It can be backed by either a clustered or a non-clustered index. The important part is that it is created as a primary key constraint.
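For example, both of these sketches create a primary key constraint; the first is backed by a clustered index and the second by a non-clustered one (the table names are just placeholders):

-- Minimal sketch (hypothetical tables): a primary key constraint can be backed by
-- either a clustered or a non-clustered index.
CREATE TABLE dbo.Invoice
(
InvoiceId INT IDENTITY NOT NULL
,InvoiceDate DATETIME2 NOT NULL
,CONSTRAINT PK_Invoice PRIMARY KEY CLUSTERED (InvoiceId)
)
GO

CREATE TABLE dbo.InvoiceLine
(
InvoiceLineId INT IDENTITY NOT NULL
,InvoiceId INT NOT NULL
,LineAmount MONEY NOT NULL
,CONSTRAINT PK_InvoiceLine PRIMARY KEY NONCLUSTERED (InvoiceLineId)
)
GO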

Why?

Two strong reasons come to mind for why you want to have primary key constraints on your tables. The first is the accuracy of your data. The purpose of a primary key is to uniquely identify a record (just like your fingerprints). By using primary key constraints, SQL Server will guarantee that the primary keys are unique.

I’ve had application developers argue with me that the application will guarantee that the primary key is unique, so a primary key constraint is not needed. I’ve also had other developers tell me that using GUIDs or identities will guarantee uniqueness. Even if you use these data types or have checks in your application to verify the value is unique, it is still possible to go around the application to insert data, and it is possible for multiple applications to simultaneously provide what they think is a unique value. A primary key constraint will prevent duplication.

This brings me to the second reason. Say you do have a spectacular way to keep all of your primary keys unique. Without primary key constraints, you can’t implement Transactional Replication. That’s right. Transactional Replication requires primary key constraints.

How can I find my straying tables?

Now that I’ve (hopefully) piqued your curiosity, let’s look at a way to find a list of tables that are missing primary key constraints.

SELECT
     t.name AS TableName
FROM
     sys.tables AS t
     LEFT JOIN sys.indexes AS i ON i.object_id = t.object_id
          AND is_primary_key = 1
WHERE
     is_primary_key IS NULL
ORDER BY
     t.name

Remember, it is not a good idea to “just add” new constraints without determining the impact on the current system. Once you have the list of tables that are missing primary key constraints, you can go through the process of determining if you can add them to the environment without side effects. Some side effects can include having to deal with duplicate data, an increase in database size from the new indexes, and better performance.

Thanks for all the fish

Thanks go out to Robert Pearl for hosting this month’s T-SQL Tuesday blog party.

T-SQL Tuesday #60 – Stretching Past My Comfort Zone

My good friend Chris Yates (b|t) is hosting this month’s T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009.

This month’s invitation is about something we’ve learned recently. Being that I just got back from Summit, I wasn’t quite sure what to pick. I ended up picking something from just prior to Summit: with the help of our SQL Community, I learned how to recover from my Transaction Log filling up the disk.

Silly rabbit, #SQLHelp is not just for kids

Hashtags have a WIDE variety of uses. They are used to express emotions, to categorize tweets, to follow global discussions, and even to state the obvious. But my favorite use is to get help from our community with #SQLHelp, #SSRSHelp, and others. And this is where my story begins… freaking out and going immediately to #SQLHelp.

I am not a DBA. I’m a BI girl. I’m a Data Modeler. I’m a Database Developer. But please, oh please, don’t ask me to troubleshoot a serious problem on a failover cluster. That is not my cup of tea… and that is exactly what I found myself doing one Wednesday afternoon.

Before the internet, I would have been completely helpless. Today I’m not. Why? Because I have an entire community behind me. One that does not think twice before lending a hand. I posted a request for assistance on Twitter and included the #SQLHelp hashtag. Less than a minute passed before I had TWO community members offering to help. Wow! (I took the help of only one of them, since it would have been confusing to have them both help.) He was kind enough to answer my questions and walk me through the solution over the course of a few hours. That is what I call Community. That is what I call #SQLFamily.

Earning my DBA card

I now know how to fix this problem. I even had the “opportunity” to try out my new knowledge a few weeks later due to some deadlocks and the lack of email notification setup on our monitoring software. (Don’t ask.)

Lesson 1

I thought that a full backup also took care of the transactions stored in the Transaction Log. This is false. In order to have your Transaction Log truncated, you have to create Transaction Log backups. (Lesson Learned)
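A minimal sketch of what that looks like (the database name and backup path are placeholders):

-- Minimal sketch (placeholder names and paths): only a transaction log backup allows
-- the log to be truncated so the space inside it can be reused.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_Log.trn'
GO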

Lesson 2

Once the disk is full, you can’t make a backup of the transaction log. To get around this, you add a new, tiny, empty log file to the database. I don’t know the technical reasoning, but this will allow a backup to be created, thus truncating the very full log file. After which you can shrink the really big original log file to a reasonable size. Finally, you can remove the tiny, empty log file that you temporarily created.
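In rough T-SQL terms, the sequence looked something like this sketch (all names, paths, and sizes are placeholders; your logical file names will differ):

-- Rough sketch of the sequence (placeholder names, paths, and sizes).

-- 1. Add a small, temporary log file on a drive (shared resource) that still has free space.
ALTER DATABASE MyDatabase
ADD LOG FILE (NAME = N'MyDatabase_TempLog', FILENAME = N'E:\SQLLogs\MyDatabase_TempLog.ldf', SIZE = 100MB)
GO

-- 2. Back up the transaction log so the full log file can be truncated.
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_Log.trn'
GO

-- 3. Shrink the original, oversized log file back to a reasonable size (target in MB).
DBCC SHRINKFILE (N'MyDatabase_Log', 10240)
GO

-- 4. Empty and remove the temporary log file once it is no longer needed.
DBCC SHRINKFILE (N'MyDatabase_TempLog', EMPTYFILE)
ALTER DATABASE MyDatabase REMOVE FILE [MyDatabase_TempLog]
GO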

When troubleshooting a log file that has filled up a disk on a Failover Cluster, you can only create a new log file on a shared resource. I had none. Luckily, I had a brand new database on that server. The database had been released into production, but was not used yet, so I was able to remove it from the server and create the new transaction log file in its place. (And there was much rejoicing.)

Lesson 3

Fast forward to SQL Saturday Portland, right before Summit. I took a class from Randy Knight (b|t). His presentation was called Locks, Blocks, and Deadlocks Oh My! In his session he spoke about how the transaction log can get filled during a deadlock (been there, done that, got the T-shirt). He suggested creating some fake files on the drive to reserve space for when that awful day comes and your disk is full. I had been wondering if that would be a good solution. Not only did he confirm that it was, but he gave a great way to quickly create five 1 GB files that can be deleted as needed to get the server back on its feet.

Conclusion

The moral of this story is not only to have Transaction Log backups as well as Full backups, but also to nurture your network. Your network is like gold. The people in it are there to lift you up when you need it and to help you through technical challenges that are outside of your comfort zone.

Thanks for all the fish

Thanks go out to Chris Yates for hosting this month’s T-SQL Tuesday blog party. Thanks also go out to all the community members who quickly respond to people in need.

T-SQL Tuesday #59 – My Hero is Scary

Tracy McKibben (b|t) is hosting this month’s T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009. This month’s invitation is about who our hero is.

I didn’t like him at first

That’s right. I was not happy with my hero when I first met him, but maybe I should start from the beginning. It was a dark and stormy fall day… what am I saying? I live in Southern California. It was a beautiful day in Los Angeles, and Red Gate was hosting their first American SQL in the City. The day was Oct 28th, 2011. I had been a Red Gate addict for a while and I was very excited about attending the event. (It’s free after all.)

I listened to presentations all day from speakers I didn’t know (yet). You see, I was not involved in the SQL Community yet. (That would take another year to happen.) The day was winding down and there was a presentation by this guy named Grant Fritchey (b|t). He was speaking about performance tuning, execution plans, and other things I didn’t understand at that time. Then he brought up User Defined Functions, which I was in love with. And he told me how bad they were, and my jaw dropped. I was not happy that he just told me my favorite toy was not good for the performance of my queries.

Even though he just delivered me some bad news, he caught my attention. You see, Grant is an amazing speaker. He gets excited about what most of the human race considers boring. He engages the audience. And he loves questions.

Fast forward a few months

While I wasn’t happy with what Grant told me, he piqued my interest. Grant had a book called SQL Server Execution Plans, which I found and read cover to cover, and then I started using what I learned. Within a few months, I was the go-to person at work for query tuning. I was able to tune a 12-minute query down to acceptable performance because of what I had learned from Grant’s book, and I was rewarded for my great work. You see, I had learned about PASS Summit by then, and Grant had a pre-con, and I wanted to be in that class. Since I had done an outstanding job that summer improving the performance of quite a few lagging queries, my manager let me go to my first PASS Summit, and it changed my career.

Fan girl

One of the goals I had for Summit was to meet the people on a list I had created. Some were speakers and some were people I had gotten to know on Twitter. Grant, of course, was at the top of the list… and I had met him within an hour of my plane landing in Seattle. You see, Red Gate had another SQL in the City event, this time on the Monday of the week of Summit. (They are doing it this year too.) I wasn’t able to attend the entire event, but I did make it to Grant’s session.

I won’t bore you with the details of that whole week, but needless to say, any time I had a question and Grant was free, I was able to talk to him. Why? Because that is the kind of guy he is. He loves helping people.

Not so scary after all

While it sounds like I just have a big “geek crush” on the guy, that isn’t the case any more. Here is a short list of reasons why I call Grant Fritchey my hero:

  • Fabulous Presenter: Grant is a fantastic presenter. He is engaging, he loves questions, and he has no problem admitting that he doesn’t know everything.
  • Great author: Let’s admit it. Technical books make better monitor stands than interesting reading. Technical books can be so monotonous, but not Grant’s books. You can hear Grant in his books. His sense of humor comes out between the lines.
  • Giver at heart: As mentioned earlier, he loves to help people. You can send him a question through his blog, and he’ll reply. He is constantly speaking at events and conferences, sharing his knowledge in the SQL Family style.
  • He gives it his all: Grant doesn’t do anything half way. He is prepared when he speaks. He is prepared when he runs for office. And he is prepared when he agrees to be a mentor.
  • My mentor: At the beginning of 2013, I wanted to have a mentor. I even blogged about it here. I got the courage up and sent Grant an email asking if he would mentor me… and he said yes. I was shocked and honored to find out that I was the first person he had “officially” mentored. He mentored me all throughout 2013.
  • Frank: Grant is not scary, but he doesn’t sugar coat his answers. I love this about him. It means that I can trust his opinions, whether they are about SQL, about my abilities, or especially about the things I still need to work on.
Shout out to Grant

Thanks Grant, for lifting me up and telling me that I can accomplish things I thought were outside my reach. Thanks for being a fabulous role model as a speaker and community leader. Thanks for being my hero and my friend.

Thanks for all the fish

Thanks also go out to Tracy McKibben for hosting this month’s T-SQL Tuesday blog party.

T-SQL Tuesday #57 – SQL Family to the Rescue of a Local Community

My good friend Jeffrey Verheul (b|t) is hosting this month’s T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009. This month’s invitation is on the topic of SQL Family and Community.

I was very happy to see this topic this month since I just had an amazing SQL Family kind of day a week ago and this is the perfect venue to share it.

The Community

At the beginning of this year I took a leap of faith and started a local PASS chapter focusing on BI in Irvine, CA. It’s called the BIG PASS Community. I’ve been growing the group steadily each month. I have a handful of people who come every month, and we usually have one or two new people as well. A couple of months ago, I was approached by a SQL Family member, Rob Hatton, who was a recent transplant from Florida. He wanted to help with my new community, so now we are co-leaders.

Before each meeting, we eat dinner together in the kitchen at our venue so that we can get to know each other better and network. Not only is our venue really nice, but Rosalyn, who is a Sales Rep there and stays late for us, is a wonderful hostess. She helps promote our meetings and helps take care of little details. We’re really blessed to have her.

The phone rang on a Friday afternoon

Rosalyn often calls me before our meetings to find out who our latest speaker is and if they will be presenting locally or not, so I wasn’t surprised to receive a phone call from her. This time she had some bad news for us. She had decided to leave the company to pursue a new opportunity, but she couldn’t find another person to host us in the evenings at their facility. We were now without a venue.

I was really worried about losing our venue since I’ve seen other user groups unable to meet for months until a new location was found. Our group doesn’t have any financial resources to pay for a venue either. What was a girl to do?

Twenty-four hours

I spoke with my co-leader about the situation and we developed a plan. We would change the next meeting to a networking event at a restaurant. That would allow us to still have a meeting and give us a month to find a new venue. I sent out emails late Sunday night to our community members letting them know what had happened and the new schedule for our next meeting.

Monday morning I received an email from David, who is one of our community members: “We have a classroom at work. Do you want me to see if we can use it?” I replied, “YES!” He kept updating me throughout the day with his progress on getting approval. Then I received an email from another community member named Ted: “We can use our classroom at work. We also have a nice break room for our dinners together.” We now had a venue… and a potential backup venue.

This is a perfect example of the heart of SQL Family. They step in when someone needs help and lend a hand.

Building relationships

I want to share with you how I met each of the people I mentioned above.

In the spring of 2013 I was at the after party at the Orange County SQL Saturday. One of my friends wanted to introduce me to someone who recently moved near me. His name was Rob Hatton. I had the wonderful opportunity to get to know him and his lovely wife Barb better that night. We then crossed paths at two other SQL Saturdays over the last year.

Last year, at PASS Summit, I helped host the Southern California user group tables at lunch. I wanted to connect with more people in my area. We had several new people join us for lunch who weren’t aware of the local user groups. Two of those people were David and his co-worker James. We had a great time getting to know each other at lunch, and I was delighted to see them at many of the evening events, where I had the opportunity to speak with them further. When I started the BIG PASS Community user group, they started attending it as well.

This past April was the local SQL Saturday event in Orange County. That was where I met Ted. He attended both of my morning presentations. We crossed paths again at lunch, where we had time to network further. I was able to tell him about the local user groups and encourage him to attend them. I was really happy to see him at the June meeting.

Why am I sharing this with you? Well… to show you how integrated our community is. To show you how far the roots travel. SQL Family connections don’t all start in a classroom or on Twitter. They start at lunch tables, karaoke bars, and walking between presentations.

Thanks for all the fish

Thanks go out to Jeffrey Verheul for hosting this month’s T-SQL Tuesday blog party. He is one of my favorite SQL Family members, whom I met through my T-SQL Tuesday participation. While seven time zones and a large ocean separate us, technology has allowed us to be friends, co-bloggers, and SQL Family members.

T-SQL Tuesday #56 – Starting Off With Assumptions

Dev Nambi (b|t) is hosting this month’s T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009. This month’s invitation is on the topic of Assumptions. Dev gave some great examples for possible T-SQL Tuesday posts with Assumptions as the focus. Here is the invitation:

For this T-SQL Tuesday, the topic is assumptions. For example:

  • The sun will come up tomorrow.
  • Firewalls and anti-virus are enough to protect my computer.
  • My backups work even if I don’t restore them.
  • I don’t need to check for that error, it’ll never happen.

Your assignment for this month is to write about a big assumption you encounter at work, one that people are uncomfortable talking about. Every team has an elephant in the room.

This is a great topic, but I want to take this in another direction. I want to talk about the assumptions that should be stated at the beginning of a project.

In the Beginning

When we start a new project, we are given requirements. You might be laughing at me and saying, “Mickey, I’m never given proper requirements. I just get an email with a couple of sentences or I’m told verbally what to go do.” Well, while I agree that an email or a verbal assignment is a crappy way to be given requirements, if that is what you have, then that is what you have.

The good news is, you don’t have to leave it at that. No matter how bad of a writer you think you are, if you can expand on what little information you are given, then you will be better off in the end. And this is where assumptions come in.

Assumption
Dictionary.com
1. something taken for granted; a supposition: a correct assumption. Synonyms: presupposition; hypothesis, conjecture, guess, postulate, theory.
4. the act of taking possession of something: the assumption of power. Synonyms: seizure, appropriation, usurpation, arrogation.
Covering your behind with assumptions

Whenever I’m given an assignment, whether it is in a full-blown requirements document (angels get their wings when this happens) or just some vague guidelines, I always create additional documentation. Sometimes this documentation is for me. Sometimes it is for my teammates. Sometimes it is used only as a discussion point in a meeting. No matter who the documentation is for, I always include a section called Assumptions. This section is at the beginning of the document and it lists all the elephants in the room that no one wants to look at. Sometimes these elephants are actually really drunk. (Those also get listed under a different section called Risks.)

You should list all the various pieces of information that you think are important, but are missing from what you were given. This information could be facts that you need, such as all dates will be based on UTC time. Or it could be tasks such as a new web service needs to be built by another team.

Some of your assumptions might be wrong. That is not a bad thing. If you hadn’t listed XYZ as an assumption and you had coded to XYZ, then more time would need to be taken to fix your work, or worse, XYZ would end up in production. By having your incorrect assumption listed and corrected, you now know more about your project.

Some of your assumptions will take the project owner by surprise. This is also not a bad thing to have happen. It means that you have information they were not aware of. It could also mean that they didn’t think the project all the way through. Either way they now have information they didn’t have before.

By listing out all the missing pieces of information and tasks that need to be accomplished to make your project successful, you are shedding light on the entire scope of the project. You are taking control by assuming ownership of your situation and helping the end result of the project to be a successful one.

NOTE: There is one type of document that I don’t include an Assumptions section in and that is for my SSRS Reports. Those requirements are gathered using a template so that all the answers I need are not missed. For me, this is my only exception.
Thanks for all the fish

Thanks go out to Dev Nambi for hosting this month’s T-SQL Tuesday blog party. Please visit his website at http://devnambi.com.
