Tag Archive for Database Modeling

T-SQL Tuesday #72 Summary – Data Modeling Gone Wrong

This month marks the 72nd T-SQL Tuesday. Adam Machanic (b|t) started the T-SQL Tuesday blog party in December of 2009. Each month an invitation is sent out on the first Tuesday of the month, inviting bloggers to participate in a common topic. On the second Tuesday of the month, all the bloggers post their contributions to the event for everyone to read. The host sums up all the participants’ entries at the end of the week. This month I’m the host, and the topic is …

Data Modeling Gone Wrong

The purpose of SQL Server is to keep databases safe and running as optimally as possible. The problem is, if the data model is flawed or not maintained, then no matter how optimally SQL Server is configured, the database won’t be able to function efficiently.

Below you will find the summary of the 19 posts from this blog party.

Summary

Note: This is a heap. The summary is based on the order received… Well, except Ed’s; I inserted him at the top. You’ll see why.

Ed Leighton-Dick
SQL New Blogger Challenge: Week 2 ideas
I found several posts in my comments that simply reiterated my invitation, so I didn’t list them below. I am, however, listing this one. Ed is challenging new bloggers, and I love this idea. He not only broke down my invitation for new bloggers, but also gave some ideas on how to find a topic that fits the prompt. Thanks go out to Ed for encouraging the “next generation” of SQL bloggers.

Thomas Rushton – The Lone DBA
T-SQL Tuesday #72 – Data Modeling Gone Wrong
He wrote about generically named fields and overloaded fields. This is a very important topic, and I’m glad it’s at the top.

Robert Pearl
T-SQL Tuesday No.72 – Is Your Data Model, Normal?
Robert started off his post with, “Well, I’m not a data modeler, nor do I play one in the corporate world.” He then goes on to tell us how important data modeling is. Finally, he goes over the basics of the first three normal forms, which he feels everyone should know. (And I wholeheartedly agree with him.)

Chris Yates

T-SQL Tuesday #72 Invitation – Data Modeling Gone Wrong

Chris, one of my dearest friends, wrote a fabulous topic on what you should think about and do prior to creating that first table. He then went on to talk about some good general practices to keep in mind while creating the data model.

Tamera Clark
#TSQL2SDAY – Data Modeling Gone Wrong

I really liked Tamera’s approach to the topic. She goes into the realities of teams not admitting there are problems due to:

  • “Reasons”
  • Applications
  • Egos

I’ve been there, seen that, and have the t-shirt.

Andy Galbraith
T-SQL Tuesday #72 – Implicit Conversion Problems

This is one of those topics that people who don’t write SQL often forget about: implicit conversions and how they impact queries. This is a great read.

Tim Peters

That time a beer broke my database – T-SQL Tuesday #72

Tim has a great example from one of his own data modeling experiences: finding out, after the data model had been deployed, that another table was needed to hold multiple breweries.

Side note: He has a great website, Beer Intelligence Project, where he has documented and visualized new beers. I think he probably enjoyed the research tremendously too.

Rob Volk
T-SQL Tuesday #72: Data Modeling
As usual, Rob tells a great story. This one is a fictitious tale about how a small ice cream store grew over time, told from the point of view of the database. The database started small and grew until paralysis hit… I mean, performance issues hit.

Steve Jones
Poor Data Modeling – T-SQL Tuesday #72
Steve also has a great topic: a data model that morphed into a difficult model. I’ve worked with the same kind of data model that he describes here, a hierarchical data model that can get out of hand if it is not planned properly. The moral of the story is, when a data model is being designed, be forward thinking.

Mike Fal
#TSQL2SDAY: You’re Gonna Cluster that GUID
Mike attacked a very controversial data type: The GUID <sinister music inserted here>. This is a great topic to add to this collection of posts. Mike makes a great argument about clustering on GUIDs, and he calls out a specific use case for it.

Aaron Bertrand

T-SQL Tuesday #72 : Models Gone Wild!

I think Aaron and I were separated at birth. I preach the same list of “database sins”. In Aaron’s post he goes over two categories of “sins”: bad names and incorrectly used data types. The only thing I would add to his list is to not use the default SQL Server constraint names. Give them meaningful names!

Kenneth Fisher
Why are you still using datetime?
Kenneth brings a great question to the table. Why are you still using datetime? He goes into the different date data types that are available and why we should be taking advantage of them.

Anders Pedersen
T-SQL Tuesday #72: Data modeling gone extremely wrong
Anders gives a good example of how an over-normalized database can slow down your application.

Terry McCann
Tally/Number tables in SQL Server #TSQL2sday
Terry took a different approach. He looked at the prompt from the point of implementing some best practices instead of identifying bad ones. He wrote about how Tally/Number tables can help with queries to find missing data.

Rob Farley
What’s driving your data model?
Like Chris Yates, Rob wrote about considering the data model’s purpose BEFORE creating it. The difference between the two posts is that Rob took it from the data warehouse point of view.

Malathi Mahadevan
TSQL Tuesday #72 – Data Modeling gone wrong
My good friend Mala had a great topic for her post. She talked about the “one table to rule them all” pattern that she encountered at one of her jobs. I really liked one of the sentences in her summary: “The big lesson that came out of it was that the size of the schema or fewer tables do not really mean a simpler/easier to manage design, in fact it can be the exact opposite.”

Sander Stad
Data Modeling Gone Wrong
Sander painted a story of XML woe. I want you to be prepared when you read his excellent post. He has an image of an execution plan that will make your hair stand on end. Make sure you are sitting down.

Jens Vestergaard
#TSQL2SDAY – Data Modeling Gone Wrong
Jens has a horrifying tale of bad field names combined with a lack of foreign keys. I may have bad dreams about this one tonight. He does have a great reaction GIF that summarizes how any of us would feel about being faced with the database he describes.

(Update… I forgot to add my own post. Oops.)

Mickey Stuewe
T-SQL Tuesday #72 – Bad Decisions Made With Surrogate Keys
In my post, I go into when you should and should not use surrogate keys as primary keys. I then give a case showing how surrogates can cause duplicate data in a many-to-many table.

Thanks for all the fish!

First, I would like to point out how cool this blog party is. Even though Adam lives in the US, this is not just a US blog party; it’s international. In my list I have posts from the Netherlands, the UK, Denmark, and Australia. I think that is really cool. The one thing that I would like to see is more women joining the blog party. Including myself, only three women participated. While I’m very happy that Tamera and Mala joined the party, I would like to see more in the future.

T-SQL Tuesday #72 – Bad Decisions Made With Surrogate Keys

This is my second time hosting the T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009.

This month’s invitation topic is on Data Modeling Gone Wrong. Being a Database Developer, I deal with bad database design decisions daily. One of my app-dev teammates loves to tell me that the bad decisions were made because I didn’t work there yet. (That makes me laugh.)

Surrogate Keys vs Natural Keys

The point of a surrogate key is to stand in for a complicated natural key as the primary key of the table. Both the surrogate key and the natural key will yield a unique key for the row; sometimes that unique natural key is the entire row. When possible, it is better to use the natural key, since it is the true representation of the row. Unfortunately, this is not always practical. Let’s look at some examples.

TSQLTuesday72 Image1

In the employee table it would take four fields to make a primary key from the natural key (first name, last name, social security number, and birthdate). Note: This assumes the table is only used in the US and the employees have social security numbers. The birthdate is needed because social security numbers can be reused after someone has passed away. For the employee table it makes sense to have a surrogate key, since it would be cumbersome to use all four fields as a foreign key in other tables.

The StateList table (representing each state in the United States) is a good example of using the natural key as the primary key. Each state is represented by two characters, so CHAR(2) can be used as both the natural key and the primary key. This provides the added benefit of not needing to join back to the StateList table to get the two-character state abbreviation…unless additional information about the state is needed. So what is the point of this table? Well, by having it, you are guaranteed referential integrity on the StateCode field through a foreign key back to the StateList table. You don’t have to worry that someone puts ZZ in as a StateCode.
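Here is a minimal sketch of that pattern in T-SQL. The StateList table comes from the discussion above; the Address table and its columns are hypothetical, included just to show the foreign key in action.

CREATE TABLE dbo.StateList(
StateCode char(2) NOT NULL,
StateName varchar(50) NOT NULL,
CONSTRAINT PK_StateList PRIMARY KEY (StateCode)
)
;

CREATE TABLE dbo.Address(
AddressID int IDENTITY(1,1) NOT NULL,
Street varchar(100) NOT NULL,
City varchar(50) NOT NULL,
-- The natural key is the primary key of StateList, so referential
-- integrity is enforced without a join back for the abbreviation.
StateCode char(2) NOT NULL,
CONSTRAINT PK_Address PRIMARY KEY (AddressID),
CONSTRAINT FK_Address_StateList FOREIGN KEY (StateCode)
REFERENCES StateList(StateCode)
)
;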

Danger, Will Robinson!

One of the problems I’ve seen with careless use of surrogate keys is the duplication of natural keys. Quite often it’s overlooked that the natural key still needs to have a unique constraint. Without it, the reporting team ends up having to use MAX or DISTINCT to get the latest instance of the natural key, or SSIS packages are needed to clean up the duplicates. This can be compounded in many-to-many tables.

Many-to-many tables allow rows in two tables to be associated with each other multiple times over. An example can be seen in the car insurance industry: if you have multiple people on the same insurance policy and they are registered to drive multiple cars, then a many-to-many table is needed to capture that data.

If a surrogate key is used on the many-to-many table to provide uniqueness, and the natural key does not have a unique constraint, then duplicate natural key combinations can occur. This can be obscured when there is additional information in the table. Maybe the amount the car is insured for is also maintained in this table. Let’s take Victoria’s insurance as an example: if Victoria’s 1971 Corvette is listed twice with two different insurance amounts, which one is the current one? The better pattern in this case would be to use the natural key.

TSQLTuesday72 Image2
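Expressed as DDL, the guard looks like this minimal sketch (the table and column names are hypothetical). Whether or not you keep the surrogate key, the unique constraint on the natural key is what prevents the duplicates:

CREATE TABLE dbo.InsuredDriverCar(
InsuredDriverCarID int IDENTITY(1,1) NOT NULL,
DriverID int NOT NULL,
CarID int NOT NULL,
InsuredAmount money NOT NULL,
CONSTRAINT PK_InsuredDriverCar PRIMARY KEY (InsuredDriverCarID),
-- Without this constraint, Victoria's 1971 Corvette could be listed
-- twice with two different insurance amounts.
CONSTRAINT UQ_InsuredDriverCar_DriverID_CarID UNIQUE (DriverID, CarID)
)
;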

Conclusion

Surrogate keys are very useful, but it should not be assumed that they belong on every table. The natural key should always be considered first. If the natural key is too complicated to be used as a foreign key in other tables, then a surrogate key is a good choice. Just remember to ALSO put a unique constraint on the natural key.

Thanks for all the fish

I had several people tell me on Twitter that they were going to write their first blog post for this T-SQL Tuesday blog party. I want to thank them ahead of time for taking the leap into the blogging world to share their experiences and expertise in their fields.

T-SQL Tuesday #72 Invitation – Data Modeling Gone Wrong

This month marks the 72nd T-SQL Tuesday. Adam Machanic (b|t) started the T-SQL Tuesday blog party in December of 2009. Each month an invitation is sent out on the first Tuesday of the month, inviting bloggers to participate in a common topic. On the second Tuesday of the month, all the bloggers post their contributions to the event for everyone to read. The host sums up all the participants’ entries at the end of the week. This month I’m the host, and the topic is …

Data Modeling Gone Wrong

The purpose of SQL Server is to keep databases safe and running as optimally as possible. The problem is, if the data model is flawed or not maintained, then no matter how optimally SQL Server is configured, the database won’t be able to function efficiently.

I would like to invite you to share some data modeling practices that should be avoided, and how to fix them when they do occur.

Rules for T-SQL Tuesday Blog Party

Rule 1: Make sure that you include the T-SQL Tuesday image at the top of your post, which will help identify it as a T-SQL Tuesday blog post. Please include a link back to this invitation too.

Rule 2: Publish your post sometime next Tuesday using GMT. Here’s a link to a GMT time converter. For example, in California that covers 5 pm Monday to 5 pm Tuesday (PDT).

Rule 3: Come back here and post a link in the comments so that I can find all the posts for the round up.

Rule 4: Don’t get yourself fired. Make sure that you either generalize your post or get permission to blog about anything from work.

Rule 5: If you roam the Twitterverse, then don’t forget to Tweet about your blog post with the hashtag #tsql2sday.

Rule 6: Go read someone else’s blog on the subject!

Final Rule: Have fun!

Data Models, SQL Server, SQLite, and PowerShell

The Fourth Mission

In this installment of SQLCoOp, we are sharing our views on something we’ve learned this month. While I learned quite a bit this month in order to create a new SQLite Database with test data that mapped back to my SQL Server database, I will limit this post to how my first PowerShell script aided me in creating a new SQLite Database from Embarcadero’s ER/Studio Data Architect.

MCJJ_ALL

The Project

I’ve been working on a project recently where we have a SQL Server backend on our server and SQLite databases on our client machines. The two databases have quite a few tables in common, but the two database platforms don’t share the same toolsets. Here are the hurdles I had to overcome.

  1. The SQL dialect that SQLite uses is similar to T-SQL, but not the same. This means scripting tables, constraints, and other SQL objects is slightly different.
  2. I use Embarcadero’s ER/Studio Data Architect (DA) to do all of my data modeling, but there isn’t a driver that will work with DA to compare my database model to a SQLite database so that I can keep them in sync.
  3. I needed to create test data in both databases, but I don’t have an environment where the databases can coexist so that I can write the proper insert scripts.

In this post, I’ll focus on the first two hurdles. The third hurdle was solved using Red Gate’s SQL Data Generator, Python, and SSIS. I will save that story for another time, since it is worthy of a blog post all on its own.

Hurdle 1: Understanding SQLite

The first thing I had to learn was the ins and outs of SQLite. I acquired a great book, Using SQLite, and started reading. I also searched the depths of the internet. What I found was that SQLite likes to store almost everything as integer, numeric, or text, BUT the data types we are familiar with, like varchar and datetime, are still accepted. They just get “interpreted” as integer, numeric, or text depending on what the declared data type is. This is a good reference for understanding how the more familiar data types are actually stored in the SQLite database.
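Here is a quick sketch (with a hypothetical table) that you can run in a SQLite query tool to see this in action. The typeof() function reports the storage class SQLite actually chose for each value:

CREATE TABLE AffinityDemo(
EmployeeName varchar(50),
HiredOn datetime,
VacationDays int
)
;
INSERT INTO AffinityDemo VALUES ('Mickey', '2015-01-15 08:00:00', 10);
-- The varchar and datetime values land in the text storage class,
-- while the int value lands in integer.
SELECT typeof(EmployeeName), typeof(HiredOn), typeof(VacationDays) FROM AffinityDemo;
-- Returns: text, text, integer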

When I was first learning about SQLite, I mistook the storage classes for actual data types. I thought SQLite was going to use a kind of implicit conversion, but SQLite is not the same as SQL Server. There is also a big benefit to using the more familiar data types instead of sticking to the storage classes that SQLite uses: it plays better with ADO.NET and makes life easier for your application developer who has to take data from SQL Server and get it into SQLite.

NOTE: You REALLY need to understand how dates work in SQLite. How the date and time are stored in the underlying database (string, integer, or numeric) determines what “zero” is equivalent to. Read this to find out more.
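To illustrate what “zero” can mean, run these two calls in SQLite. The same number is interpreted as two very different dates, depending on whether it is treated as a julian day number or as unix epoch seconds:

SELECT datetime(0);
-- Returns: -4713-11-24 12:00:00 (zero as a julian day number)
SELECT datetime(0, 'unixepoch');
-- Returns: 1970-01-01 00:00:00 (zero as seconds since the unix epoch)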

Hurdle 2: Creating SQL Scripts for Schema Changes From ONE Environment

There are several software packages out in the world that will give you visual data models of SQLite tables, but I already use Embarcadero’s ER/Studio Data Architect (DA) and I didn’t want to learn another one. I also didn’t want my SQL Server data models in one software package and my SQLite data models in a different one. So I had to figure out how to get around this hurdle…and I used PowerShell to do it.

The first thing I did was create my data models. My first SQLite data model was based on my initial research, which implied I should store all my data in integer and text data types, since that is what the engine uses to store the data. This was a poor assumption on my part. Since I had both my SQL Server and my SQLite database models in DA, I was able to easily update the data types in my SQLite model with their SQL Server counterparts. I did have to inspect each field manually, since not all attributes were the same between the two databases. For instance, I had to use CURRENT_TIMESTAMP instead of getdate(), and I had to model auto-incrementing columns differently.

Note: This is a sample database

SQLCoOp_Image3

SQL Server database

SQLCoOp4_Image2

First iteration of my SQLite database

NOTE: I was able to use the Compare and Merge Utility by selecting the option to compare against another model.

SQLCoOp4_Image1

 

SQLCoOp_Image4

Second iteration of my SQLite database

Making a SQLite Script from ER/Studio Data Architect

Before we go down this rabbit hole, there is another piece of information you need to know about SQLite: you can’t add foreign key constraints after the table is created. This is very frustrating for me, since I know of some existing tables that need foreign key constraints, and it will be a bigger task than it would be in SQL Server to get them applied. For this project, it means I need to script the foreign keys as well as the primary keys in the table creation script.

There are two operations needed to generate a SQLite script from DA. The first is to use the Generate Database Wizard to generate a whole database, instead of using the Compare and Merge Utility to compare the database model with the actual database. This operation is outlined below in steps 1 through 7. The second operation is to use a simple PowerShell script to make the necessary changes to the script so that it is no longer a T-SQL script made to run in SQL Server, but a SQL script ready to run in SQLite. This is outlined in step 8 below.

Here are the steps you’ll need to do:

Step 1

Make sure that all auto-incrementing fields are not set up as primary keys in the database model. The reason for this is that in SQLite all auto-incrementing fields are primary keys, so the syntax for creating an auto-incrementing field is very specific.

Take a look at the EmployeeAward table in DM SQLite above. It is set up as an identity column, but not as a primary key. In order to have a visual cue that the field is a primary key, I used an object called a Domain. Domains are templates that can be used throughout the database model so that data types, defaults, and even definitions remain consistent and can easily be changed throughout an entire database model, or across all database models if the Repository feature of DA is being used.
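For reference, this is the shape the final script has to end up with. Here is a minimal sketch based on the EmployeeAward table; in SQLite, AUTOINCREMENT is only legal on a column declared integer PRIMARY KEY:

CREATE TABLE EmployeeAward(
EmployeeAwardID integer PRIMARY KEY AUTOINCREMENT,
AwardName varchar(100) NOT NULL
)
;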

Step 2

Right click the physical model that you will be working with, and select Generate Database. You will be walked through a wizard. At the end you are going to want to save your settings so that you don’t have to select them again.

SQLCoOp_Image6

Step 3

On page 1 of the wizard, you can create individual scripts or a single long script. I’ve been doing both, so that I can regenerate a single table later on if I need to. You will also need to select the folder where you would like the scripts to be created. I created three folders for my project:

  • MultiScripts (This will hold scripts for each object separately.)
  • SingleScript (This will hold a single script that contains all the objects.)
  • Standardize (This holds the PowerShell script called StandardizeSQL.ps1, which will be explained in step 8.)

Step 4

I don’t do anything on page 2 of the wizard since this script will not run in SQL Server.

Step 5

Page 3 of the wizard is where the fun starts. Set the following options for the tables you want to include in the script. These options are found on the General Options tab. None of the other tabs need their options modified…unless you used features, like sparse columns, that can’t be used in SQLite. Then you might need to fiddle with some of the other settings.

 

SQLCoOp_Image5

Step 6

Save your settings on page 4 of the wizard so that you can quickly run through the wizard next time.

Step 7

If you are asked to open your newly created script, then say no. We’re not done yet.

Step 8

Run the following PowerShell script. You can do this by right-clicking the file containing the PowerShell script and selecting Run with PowerShell. Note: The ability to run PowerShell scripts needs to be enabled on your machine first.

The script will go through all the *.sql files in the indicated folder and change the T-SQL specific syntax to the SQLite syntax. Here are the syntactical changes.

  • Int will be changed to integer.
  • Identity columns will be changed to primary key autoincrement columns.
  • The words nonclustered and clustered will be removed from primary and foreign key constraints, since all indexes in SQLite are nonclustered and those keywords are not supported.

Before you run the script below, you’ll need to change the path in the Get-ChildItem call to point to your own script folder.

Note: there is a space before the asterisk in the Get-ChildItem call, and there are spaces on either side of "int" in the first replace. Both are intentional.

# To run this from a PowerShell prompt: ./StandardizeSQL.ps1
# Note: nonclustered needs to be replaced before clustered, since "clustered" is a substring of "nonclustered".

$wshell = New-Object -ComObject Wscript.Shell -ErrorAction Stop

Try
{
    # Gather all of the generated *.sql scripts in the output folder.
    $configFiles = Get-ChildItem -Recurse C:\SQLiteScripts *.sql

    foreach ($file in $configFiles)
    {
        # Swap the T-SQL specific syntax for its SQLite equivalent, line by line.
        (Get-Content $file.PSPath) |
            Foreach-Object {$_ -replace " int ", " integer "} |
            Foreach-Object {$_ -replace "IDENTITY\(1,1\)", "PRIMARY KEY AUTOINCREMENT"} |
            Foreach-Object {$_ -replace "nonclustered", ""} |
            Foreach-Object {$_ -replace "clustered", ""} |
            Set-Content $file.PSPath
    }

    $wshell.Popup("Success! And there was much rejoicing!", 0, "Done.", 64)
}
Catch
{
    $wshell.Popup($_.Exception.Message, 0, "Oops! An error occurred.", 48)
    Break
}

After you are done running the PowerShell script, you’ll have a SQLite script that you can run in your favorite SQLite querying tool.

Here are the before and after scripts for two of the tables from my example.

CREATE TABLE Employee(
EmployeeID int NOT NULL,
FirstName varchar(50) NOT NULL,
LastName varchar(50) NOT NULL,
isActive bit DEFAULT 0 NOT NULL,
InsertedDTS datetime CONSTRAINT [DF_Employee_InsertedDTS] DEFAULT CURRENT_TIMESTAMP NOT NULL,
ModifiedDTS datetime NULL,
CONSTRAINT PK_Employee PRIMARY KEY CLUSTERED (EmployeeID)
)
;

CREATE TABLE EmployeeAward(
EmployeeAwardID int IDENTITY(1,1),
EmployeeID int NOT NULL,
AwardName varchar(100) NOT NULL,
AwardDate datetime NULL,
CONSTRAINT FK_EmployeeAward_Employee FOREIGN KEY (EmployeeID)
REFERENCES Employee(EmployeeID)
)
;

Before

CREATE TABLE Employee(
EmployeeID integer NOT NULL,
FirstName varchar(50) NOT NULL,
LastName varchar(50) NOT NULL,
isActive bit DEFAULT 0 NOT NULL,
InsertedDTS datetime CONSTRAINT [DF_Employee_InsertedDTS] DEFAULT CURRENT_TIMESTAMP NOT NULL,
ModifiedDTS datetime NULL,
CONSTRAINT PK_Employee PRIMARY KEY (EmployeeID)
)
;

CREATE TABLE EmployeeAward(
EmployeeAwardID integer PRIMARY KEY AUTOINCREMENT,
EmployeeID integer NOT NULL,
AwardName varchar(100) NOT NULL,
AwardDate datetime NULL,
CONSTRAINT FK_EmployeeAward_Employee FOREIGN KEY (EmployeeID)
REFERENCES Employee(EmployeeID)
)
;

After

Don’t Stop Yet

To follow our quest for SQL knowledge through this collaborative project, follow the #SQLCoOp tag on Twitter.

See you next time!!

On a SQL Collaboration Quest

Four SQL professionals gathered from the four corners of the world to share their SQL knowledge with each other and with their readers: Mickey Stuewe from California, USA, Chris Yates from Kentucky, USA, Julie Koesmarno from Canberra, Australia, and Jeffrey Verheul from Rotterdam, The Netherlands. They invite you to join them on their quest as they ask each other questions and seek out the answers in this collaborative blog series. Along the way, they will also include other SQL professionals to join in the collaboration.

Original Post: On a SQL Collaboration Quest

SQL Advent Calendar – Day 14 – Working With Forms in a Macro in ER Studio Data Architect

AdventCalendar14

First Post in this series: SQL Advent Calendar 2013-Day 1- Placeholders in SQL Prompt Snippets

As I was hanging our family Advent Calendar up, I thought I would make one for you, my readers. I’ll be sharing my scripts from two of my favorite products as well as tips on how to create your own.

Creating a Form

One of the options you have when creating a macro in ER Studio Data Architect (ER Studio DA) is the ability to create a form, known as a dialog. The form can gather information for the macro to use, display information to the user after an action has been performed, or interact with the user while the objects in the diagram are updated.

There are three steps to perform when setting up a form.

  1. Creating the layout of the form, including placing the controls on the form.
  2. Writing the code to pre-populate the controls.
  3. Writing the code to perform actions after the user interacts with the dialog.

Don’t reinvent the wheel

ER Studio DA comes with quite a few working macros. They provide great examples on how to work with the objects and how to work with forms. I would recommend looking at the Definition Editor macro and the Index Naming macro. They both have highly interactive forms which show how to work with the different controls and how to interact with the events of the controls.

Note: While the objects representing the various aspects of the data model are well documented, I have yet to find documentation on the dialog object or the controls that are placed on it. I admit, I have not tried contacting Embarcadero about it yet either.

Let’s look at the code from the Definition Editor macro. The first line below begins the definition of the form, followed by a list of the controls that will appear on the form. The order they are listed in is the sequential order of the controls when you TAB through them. Below, I numbered a few areas to look at.

  1. The first two numbers are the height and width of the form, followed by the name that will appear in the title bar.
  2. The last parameter contains the name of the Sub that you need to write to consume any actions generated by the form. (e.g. When you click the OK button, the MyDialogFunc procedure is triggered.)
  3. At the end of controls that cause an event, such as a button being pushed, you will find a name preceded by a period. This is not a procedure name like we saw with #2 above; it is the keyword you search for in MyDialogFunc. We’ll see this in a minute.
  4. After the definition of the form, a variable is created to represent the form during its life.
  5. Next, the controls that need to be initialized and pre-populated are coded.
  6. Finally, the call that displays the form to the user is made.

 

AdventCalendar2013Day14_Img1

You can manually create and modify the form, or you can use the User Dialog Editor button on the toolbar. Make sure your cursor is in the middle of the code for the form, or you will create another form.

AdventCalendar2013Day14_Img2

When the MyDialogFunc procedure is created, it requires three parameters.

  • DlgItem is the name of the control that caused an event, which caused the MyDialogFunc to be called.
  • Action is the action of the event. (e.g. A button was pushed.)
  • SuppValue contains the data of the DlgItem.

If you were to capture the pushbutton event for the UpdateTable button, you would create an If statement that looks at the DlgItem value when Action = 2.

There are some other functions that are needed while working with the controls. In the screenshot below, you can see DlgValue and DlgText. These allow you to get information from other controls on the form. There are others that you can find in the other macros as well.

AdventCalendar2013Day14_Img3

Can’t leave it alone

I really like this macro, but there was one issue: I use schemas (called owners in ER Studio DA), and I wanted them to be displayed with the table names… So I modified the code.

The first thing I did was widen the list boxes. I like long names, and they just didn’t fit. I used the User Dialog Editor to accomplish this.

The second thing I did was add MyEntity.Owner to the name of the tables. This was done in the procedure called getTables.

 

Note: Tables are referenced differently in the logical model than in the physical model. Make sure you check whether the user executed the macro from the logical or the physical model, and then use the correct object.

 

 

AdventCalendar2013Day14_Img4

 

Previous Post In Series: SQL Advent Calendar – Day 13 – Snippet for New Transactional Procedure

Next Post In Series: SQL Advent Calendar – Day 15 – Naming Default Constraints

SQL Advent Calendar 2013 – Day 9 – Creating Change Tracking Fields (2 of 3)

AdventCalendar09

First Post in this series: SQL Advent Calendar 2013-Day 1- Placeholders in SQL Prompt Snippets

As I was hanging our family Advent Calendar up, I thought I would make one for you, my readers. I’ll be sharing my scripts from two of my favorite products as well as tips on how to create your own.

Second Step Is Creating The Change Tracking Fields

Today I’ll share with you my macro called Add Change Tracking Attributes. This script adds the attributes InsertedBy, InsertedDTS, ModifiedBy, and ModifiedDTS to the selected entities. These attributes are based on the domains that were created by the macro in yesterday’s post, found here. If you add these fields and then decide you don’t want them, you can use my macro called Remove Attribute From Selected Tables, which I blogged about here, to remove them.

'----------------------------------------------------------------------------
' PURPOSE:  	This macro adds Change Tracking Attributes to Selected Entities.
' Author:   	Mickey Stuewe
' Date:     	12/9/2013
'----------------------------------------------------------------------------

Sub Main

	Dim objModel As Model
	Dim objSelObject As SelectedObject
	Dim objEntity As Entity
	Dim objAttribute As AttributeObj
	Dim iCounter As Integer
	Dim dict As Dictionary
	Dim iInsertedDateID As Integer
	Dim iInsertedByID As Integer
	Dim iModifiedByID As Integer
	Dim iModifiedDateID As Integer
	Dim MyDomain As Domain

	' Get the Local Data Dictionary.
	Set dict = DiagramManager.ActiveDiagram.Dictionary

    ' Determine Domains.
    Set MyDomain = dict.Domains.Item("InsertedDTS")
    iInsertedDateID = MyDomain.ID

    Set MyDomain = dict.Domains.Item("InsertedBy")
    iInsertedByID = MyDomain.ID

    Set MyDomain = dict.Domains.Item("ModifiedDTS")
    iModifiedDateID = MyDomain.ID

    Set MyDomain = dict.Domains.Item("ModifiedBy")
    iModifiedByID = MyDomain.ID

	Set objModel = DiagramManager.ActiveDiagram.ActiveModel

	' Iterate through all the selected objects in the current
	' model.
	For Each objSelObject In objModel.ActiveSubModel.SelectedObjects

		'Get the object type - we are only concerned
		'with entities.
		If objSelObject.Type = 1 Then

			' Get the actual entity object with this ID.
			' The model contains the collection of all the
			' entities.
			Set objEntity =  objModel.Entities.Item(objSelObject.ID)

			'Create Tracking fields and set DomainIDs
			Set objAttribute = objEntity.Attributes.Add("InsertedBy", False)
			 SetDefaultDomain objAttribute, iInsertedByID,objEntity.EntityName

			Set objAttribute = objEntity.Attributes.Add("InsertedDTS", False)
			SetDefaultDomain objAttribute, iInsertedDateID,objEntity.EntityName

			Set objAttribute = objEntity.Attributes.Add("ModifiedBy", False)
			SetDefaultDomain objAttribute, iModifiedByID,objEntity.EntityName

			Set objAttribute = objEntity.Attributes.Add("ModifiedDTS", False)
			SetDefaultDomain objAttribute, iModifiedDateID,objEntity.EntityName

			iCounter = iCounter + 1
		End If
	Next objSelObject

	MsgBox "Four new Attributes were added to " & iCounter & " Table(s).", vbOkOnly + vbInformation, "Attributes Added To Tables"

End Sub

Sub SetDefaultDomain(objAttribute As AttributeObj, iDomainID As Integer, strEntityName As String)

	objAttribute.DomainId = iDomainID

	If Len(objAttribute.DeclaredDefault) > 0 Then

		'if the attribute default is bound, then unbind it so that it can be named.
		If objAttribute.DomainId > 0 Then
			'You have to override the bound default value and the default text (name).
			objAttribute.EnableOverride (1,True)
			objAttribute.EnableOverride (2,True)

		End If

		If Len(objAttribute.DefaultName) = 0 Then
			objAttribute.DefaultName = "DF_" & strEntityName & "_" & objAttribute.AttributeName

		ElseIf objAttribute.DefaultName <> "DF_" & strEntityName & "_" & objAttribute.AttributeName Then
			If MsgBox("The name of the Default Constraint does not match our naming standards. Would you like to replace it?" & vbCrLf & _
				vbCrLf & vbCrLf & "Attribute: " & objAttribute.AttributeName & " - Default Name: " & objAttribute.DefaultName & " - Value: " & _
				objAttribute.DeclaredDefault & vbCrLf & " Standardized Name: " & "DF_" & strEntityName & "_" & objAttribute.AttributeName, _
				vbYesNo + vbQuestion, "Standardize Name?") = vbYes Then
					objAttribute.DefaultName = "DF_" & strEntityName & "_" & objAttribute.AttributeName

			End If
		End If
	End If
End Sub

Take a look at the two EnableOverride calls in the SetDefaultDomain procedure. Those two lines are needed to be able to give the default a name, and here is why: this macro creates attributes based off of domains, and the Inserted attributes have defaults. If we don’t name the defaults, then SQL Server will provide names for them…and they are not pretty. In order to override the domain, we have to enable the override on two properties: the bound default value and the default text (the name). Then we are able to provide a name for the default. This step can’t be done in the domain itself, because defaults are a type of constraint, and all constraints need to have unique names.
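To see why the naming matters, here is a minimal T-SQL sketch (the two statements are alternatives, so run one or the other):

-- Unnamed: SQL Server generates a name such as DF__Employee__Insert__2D27B809.
CREATE TABLE dbo.Employee(
EmployeeID int NOT NULL,
InsertedDTS datetime DEFAULT GETDATE() NOT NULL
)
;

-- Named: the default constraint is always DF_Employee_InsertedDTS.
CREATE TABLE dbo.Employee(
EmployeeID int NOT NULL,
InsertedDTS datetime CONSTRAINT DF_Employee_InsertedDTS DEFAULT GETDATE() NOT NULL
)
;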

Tomorrow I’ll show you a Red Gate Snippet used to create the triggers for the tables with these fields.

Previous Post In Series: SQL Advent Calendar 2013-Day 8-Creating Change Tracking Domains (1 of 3)

SQL Advent Calendar 2013 – Day 8 – Creating Change Tracking Domains (1 of 3)

AdventCalendar08

First Post in this series: SQL Advent Calendar 2013-Day 1- Placeholders in SQL Prompt Snippets

As I was hanging our family Advent Calendar up, I thought I would make one for you, my readers. I’ll be sharing my scripts from two of my favorite products as well as tips on how to create your own.

First Step Is The Foundation

Whenever I need to have minimal auditing on a table, I add four fields to the table: InsertedBy, InsertedDTS, ModifiedBy, and ModifiedDTS. The Inserted fields tell me who created the row and when; they are populated by the defaults set on the columns. The Modified fields tell me who last changed the row and when; they are updated by a trigger.
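The trigger itself comes later in this series, but to show the shape of it, here is a minimal sketch that assumes a hypothetical Employee table keyed on EmployeeID:

CREATE TRIGGER dbo.tr_Employee_Update ON dbo.Employee
AFTER UPDATE
AS
BEGIN
	SET NOCOUNT ON;

	-- Stamp the updated rows with who changed them and when.
	UPDATE e
	SET ModifiedBy = SUSER_NAME(),
		ModifiedDTS = GETDATE()
	FROM dbo.Employee AS e
	INNER JOIN inserted AS i ON e.EmployeeID = i.EmployeeID;
END;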

Over the next couple of days I will show you two ER/Studio Data Architect (ER Studio DA) macros and one SQL Prompt Script that will help you create these four fields and the needed triggers.

The first step is making sure that your data model has four Domains. They reside in the Data Dictionary on the Data Dictionary tab. Domains are templates that fields can be created against.

There are two types of Data Dictionaries: local Data Dictionaries and Enterprise Data Dictionaries. A data model can only have one local Data Dictionary, and it always exists. If you are using ER Studio DA’s Repository feature, then you can create an Enterprise Data Dictionary to use across all data models, and a data model can have multiple Enterprise Data Dictionaries. This is a great feature, and I have used it in the past, but it isn’t available if you are not using the Repository.

 

AdventCalendar2013Day8_Img1

 

I’ve created a macro called Create Change Tracking Domains to use the local Data Dictionary.

The macro does the following: if a folder called Change Tracking doesn’t exist in the Data Dictionary, it is created. If any of the four change tracking domains don’t exist, they are created. Either way, all the properties of these four domains are set or reset when the macro is run.

Note: You can modify it to use an Enterprise Data Dictionary, or you can see if I have time to add it to the macro for you.

Below, in the code, you can see the properties that are set for each domain (data type, name, definition, nullability, and default).

'----------------------------------------------------------------------------
' PURPOSE:  	This macro adds four domains to aid in creating Change Tracking
'				fields.
' Author:   	Mickey Stuewe
' Date:     	12/9/2013
'----------------------------------------------------------------------------

Sub Main

	Dim dict As Dictionary

	' Get the Local Data Dictionary.
	Set dict = DiagramManager.ActiveDiagram.Dictionary

	'If the Change Tracking folder does not exist, then add it.
	If (dict.DomainFolders.Item("Change Tracking") Is Nothing) Then
		dict.DomainFolders.Add("Change Tracking","Domains")
	End If

    ' if Domain doesn't exist, then create it.
	If (dict.Domains.Item("InsertedDTS") Is Nothing) Then
		dict.Domains.Add("InsertedDTS", "InsertedDTS")
	End If

	With dict.Domains.Item("InsertedDTS")
		.AttributeName = "InsertedDTS"
		.Datatype="DateTime"
		.DeclaredDefault = "GETDATE()"
		.Definition = "This is the date the row that was inserted."
		.DomainFolder = "Change Tracking"
		.Nullable = False
	End With

    ' if Domain doesn't exist, then create it.
	If (dict.Domains.Item("InsertedBy") Is Nothing) Then
		dict.Domains.Add("InsertedBy", "InsertedBy")
	End If

	With dict.Domains.Item("InsertedBy")
		.AttributeName = "InsertedBy"
		.Datatype = "nvarchar"
		.DataLength = 255
		.DeclaredDefault = "SUSER_NAME()"
		.Definition = "This is the system user who created the row."
		.DomainFolder = "Change Tracking"
		.Nullable = False
	End With

	 ' if Domain doesn't exist, then create it.
	If (dict.Domains.Item("ModifiedDTS") Is Nothing) Then
		dict.Domains.Add("ModifiedDTS", "ModifiedDTS")
	End If

	With dict.Domains.Item("ModifiedDTS")
		.AttributeName = "ModifiedDTS"
		.Datatype="DateTime"
		.Definition = "This is the date the row was modified. It is updated via a trigger."
		.DomainFolder = "Change Tracking"
		.Nullable = True
	End With

 	' if Domain doesn't exist, then create it.
	If (dict.Domains.Item("ModifiedBy") Is Nothing) Then
		dict.Domains.Add("ModifiedBy", "ModifiedBy")
	End If

	With dict.Domains.Item("ModifiedBy")
		.AttributeName = "ModifiedBy"
		.Datatype = "nvarchar"
		.DataLength = 255
		.Definition = "This is the system user who modified the row. It is updated via a trigger."
		.DomainFolder = "Change Tracking"
		.Nullable = True
	End With

	Set dict = Nothing
End Sub

Tomorrow I’ll share the macro that will add these four fields to selected tables.

Previous Post In Series: SQL Advent Calendar – Day 7 – Verify All Tables and Fields Have a Definition

Next Post in Series: SQL Advent Calendar 2013 – Day 9 – Creating Change Tracking Fields (2 of 3)

SQL Advent Calendar – Day 7 – Verify All Tables and Fields Have a Definition

First Post in this series: SQL Advent Calendar 2013-Day 1- Placeholders in SQL Prompt Snippets

As I was hanging our family Advent Calendar up, I thought I would make one for you, my readers. I’ll be sharing my scripts from two of my favorite products as well as tips on how to create your own.

Oops, I missed one.

I try to remember to add definitions as I create my tables so that I don’t forget later, but sometimes I miss one…especially definitions for attributes. Here is a great macro for ER/Studio Data Architect that finds all the Entities and Attributes that are missing definitions. It even has a feature to colorize the tables that are missing definitions. I call it Verify Definitions For Selected Objects.

Note: Right now it only checks definitions for Entities and Attributes. I am happy to expand it if you need the other objects.

Option Explicit
'----------------------------------------------------------------------------
' PURPOSE:  	Creates a list of Entities and Attributes that are missing
'				definitions.
' Author:   	Mickey Stuewe
' Date:     	9/7/2013
'----------------------------------------------------------------------------
Sub Main
	'Defines the dialog box
	Begin Dialog UserDialog 680,322,"Verify Definitions for All Objects", .MyDialogAction
		Text 20,7,360,14,"Entities and Attributes that need Definitions",.Text1
		TextBox 20,28,500,287,.txtList,1
		PushButton 540,28,130,21,"Check",.CheckForDefinitions
		OKButton 540,49,130,21
		CheckBox 540,84,130,14,"Colorize Entities",.chkColor
	End Dialog

	Dim dlg As UserDialog

	'Calls the dialogbox
	Dialog dlg, -2
End Sub

Sub CheckForDefinitions()
    Dim objDiagram As Diagram
	Dim objModel As Model
	Dim objSubModel As SubModel
	Dim objSelObject As SelectedObject
	Dim objEntity As Entity
	Dim objAttr As AttributeObj

	Set objDiagram = DiagramManager.ActiveDiagram
	Set objModel = objDiagram.ActiveModel
	Set objSubModel = objModel.ActiveSubModel

	'Clears list
	DlgText("txtList","")

	' Iterate through all the selected objects in the current
	' model.
	For Each objSelObject In objSubModel.SelectedObjects
		Select Case objSelObject.Type
			Case 1 'Entity
				If objModel.Entities(objSelObject.ID).Definition = "" Then
					Call AddToList("Entity", objModel.Entities(objSelObject.ID).Owner & "." & objModel.Entities(objSelObject.ID).EntityName)
					If DlgValue("chkColor") = 1 Then
						objSubModel.EntityDisplays.Item(objModel.Entities(objSelObject.ID).TableName).BackgroundColor = RGB(243,222,248)
					End If

				End If
				For Each objAttr In objModel.Entities(objSelObject.ID).Attributes
					If objAttr.Definition = "" Then
						Call AddToList("Attrib", objModel.Entities(objSelObject.ID).Owner & "." & objModel.Entities(objSelObject.ID).EntityName & "." & objAttr.AttributeName)
						If DlgValue("chkColor") = 1 Then
						objSubModel.EntityDisplays.Item(objModel.Entities(objSelObject.ID).TableName).BackgroundColor = RGB(243,222,248)
						End If
					End If
				Next

			Case 16 'View
				MsgBox "It's time to write code for looking at Definitions for Views."
			Case 54 'Function
				MsgBox "It's time to write code for looking at Definitions for Functions."
		End Select
	Next objSelObject

End Sub

Sub AddToList(strObject As String, strObjectName As String)
	'Adds Entity or Attribute to list.
	DlgText("txtList",DlgText("txtList") & strObject & ": " & strObjectName & vbCrLf )

End Sub

'This is called for all actions in the dialog box. I will be trapping for the button being clicked.
Private Function MyDialogAction(DlgItem As String, Action As Integer, SuppValue&) As Boolean

	If Action = 2 Then ' Value changing or button pressed
		Rem MyDialogAction = True ' Prevent button press from closing the dialog box

		If DlgItem = "CheckForDefinitions" Then
			CheckForDefinitions
			MyDialogAction = True
		Else
			MyDialogAction = False
		End If
	End If
End Function

 

AdventCalendar2013Day7_Img1

 

Previous Post In Series: SQL Advent Calendar – Day 6 – What’s In All These Indexes?

Next Post In Series: SQL Advent Calendar 2013 – Day 8 – Creating Change Tracking Domains (1 of 3)

SQL Advent Calendar 2013 – Day 2 – Macro to Remove Common Fields in ER Studio Data Architect

First Post in this series: SQL Advent Calendar 2013-Day 1- Placeholders in SQL Prompt Snippets

AdventCalendar02

As I was hanging our family Advent Calendar up, I thought I would make one for you, my readers. I’ll be sharing my scripts from two of my favorite products as well as tips on how to create your own.

Behind Door Number Two

ER Studio Data Architect by Embarcadero (ER Studio DA) has the ability to create macros, which makes me absolutely happy. Why? Because I don’t like repetitive work. If I can create a macro, then I can have the computer do the repetitive work while I go get a peppermint mocha.

One day I was working on a macro that created fields for multiple tables. I realized during the development process that I also needed a macro to delete fields for multiple tables. So I created the macro below. It has come in handy many times.

'----------------------------------------------------------------------------
' PURPOSE:  	This macro removes a common attribute from the entities that are selected.
' Author:   	Mickey Stuewe
' Date:     	9/3/2013
' version:      1.0
'----------------------------------------------------------------------------

Sub Main

Dim objModel As Model
Dim objSelObject As SelectedObject
Dim objEntity As Entity
Dim iCounter As Integer

Begin Dialog UserDialog 440,98,"Continue?" ' %GRID:10,7,1,1
	Text 20,21,250,28,"Are you sure you want to remove a field from the selected tables?",.lblOwner
	OKButton 300,21,120,28
	CancelButton 300,56,120,28
	Text 20,56,80,14,"Field Name:",.Text1
	TextBox 100,56,170,14,.txtFieldName
End Dialog

Dim dlg As UserDialog

If Dialog(dlg) = -1 Then

	Set objModel = DiagramManager.ActiveDiagram.ActiveModel

	' Iterate through all the selected objects in the current
	' model.
	For Each objSelObject In objModel.ActiveSubModel.SelectedObjects

		'Get the object type - we are only concerned
		'with entities.
		If objSelObject.Type = 1 Then

			Set objEntity =  objModel.Entities.Item(objSelObject.ID)

			objEntity.Attributes.Remove(dlg.txtFieldName)

			iCounter = iCounter + 1
		End If
	Next objSelObject

	MsgBox dlg.txtFieldName & " was removed from " & iCounter & " Table(s).", vbOkOnly + vbInformation, "Attributes Removed From Tables"
End If

End Sub

To use this macro, follow the steps below.

  1. Add the macro to the library. (Tomorrow I’ll be expanding on this step.)
  2. Add the macro to the shortcut menu for Tables.
  3. Select several tables that have a common field you would like to remove.
  4. Run your new macro.
  5. Watch the unwanted fields go away. (When I do this to 30 tables at one time, I actually feel giddy. True story.)

Note: You can download this macro from my Script Library under Resources. It’s called Remove Attribute From Selected Tables.

Next Post In Series: SQL Advent Calendar 2013 – Day 3 – Organize Before All The New Toys Come In

SQL Tidbits: No.1 – Outputting from ER/Studio Data Architect Directly to SQL Server Management Studio

I’ve been tossing around some ideas for my blog. One of them is to provide a quick tech tip… or a SQL Tidbit. The idea is to keep me writing until I get used to blogging EVERY week. Hopefully in a few months I’ll have more SQL Tidbits than there are grape leaves in this picture.

So let’s get started with the first SQL Tidbit.

In the beginning…

I’ve been using ER/Studio Data Architect for a few years now. If you’re not familiar with it, it’s used for modeling and maintaining database schemas. I love this product, but I’m not a big fan of the default application (Universal ISQL) that the change script is sent to, mostly because Ctrl+A doesn’t work in the query window and I find it clunky.

At my previous company, one of my coworkers created an application that ran in the background. When it detected that a change script was ready to be deployed, it would capture it and open it in SSMS.

But there is an easier way

When I started at my current job, I was setting all the defaults that I like in ER/Studio Data Architect and I stumbled upon a setting I didn’t know existed. This setting lets YOU pick the editor that the change scripts are sent to. I changed the path to SSMS, and I didn’t have to see the default application any more. (WIN!)

To get to the setting, do the following.

1. Click on the Tools/Options menu item.

2. Click on the Tools tab on the right hand side of the dialog box.

3. Change the ISQL Path setting.

 

And that’s a wrap

In the next SQL Tidbit I’ll share a script for cleaning up temporary SQL objects.
