Archive for T-SQL

T-SQL Tuesday #47: SWAG And A Hit

Kendal Van Dyke (b|t) is hosting this month’s T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009.  The topic this month is a fun one, just like last month. Kendal wants us to brag about our favorite SWAG.

Wow, my favorite SWAG. Only one? Can’t do that. So I will give you my top 3.

Number 3 on the list

Tim Ford (b|t) gives the best All Inclusive Package of SWAG when you partake in his SQL Cruise conference (b). When I went on SQL Cruise this past January courtesy of SQL Sentry (b) (Thanks guys for paying for my classes!), we received the coolest fabric cooler. Inside this cool bag was more awesome SWAG. There was a beach towel, several gifts from various vendors, and the coolest little multi-tool. (I unfortunately learned the hard lesson that multi-tools can't be part of carry-on luggage. Sad Panda.)

I use this bag whenever I need a simple cooler, which is every SQL Saturday. I have so many food allergies that I usually need to bring something for me to munch on. This bag does the trick.

Number 2 on the list

The PASS Summit 2012 backpack. I love this backpack. It's sitting next to me right now while I type this post at the Las Vegas airport. This backpack is comfortable to wear and full of great pockets. My absolute favorite pocket is the little one right under the handle. It's great for stashing my phone and wallet.

This bag is my laptop's home. Timmy, my Think Geek monkey, gets to ride on the side of the bag, and my laptop gets to ride in the center. I can quickly store all my odds and ends for my speaking needs.

 

And Number 1 on the list

When I spoke at my first SQL Saturday in Silicon Valley (b), the speakers weren’t given speaker shirts, but were given jackets. The jackets were made by Port Authority and are absolutely wonderful.

Being that Southern California doesn’t “really” ever get cold, this jacket is now my new “winter” jacket…which means I wear it a couple of times a year.

 

Thanks for all the fish

Thanks go out to Kendal Van Dyke for hosting this month’s T-SQL Tuesday blog party. Please visit his website at http://www.kendalvandyke.com.

T-SQL Tuesday #46 – And I shudder

Rick Krueger (b|t) is hosting this month’s T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009.  The topic this month is a fun one. He has asked us to write about a Rube Goldberg type of solution (aka a hack job) we created at some point in our careers.

Like many of you, I worked at a start-up back in 2000, but I wasn't a database developer back then. I was a web developer who specialized in the business layer, which I wrote in VB and later VB.NET. It never occurred to me to consult with the DBA to see if my idea was a good one from a database point of view. I can tell you that I shudder at the mere thought of the "brilliant" solution I came up with back then…It was actually a great idea…until you put it in a database and queried it.

The Plan

The company I worked for was building a shopping cart from the ground up. We needed a way to keep track of the state of each item in an Order and display that information at the Order level. My idea was to keep track of each Order Item's state with a single letter, and we would string them all together into one field at the Order level so that we could query for Orders with Items in a particular state. (This is where we all shudder.)

When I thought of the idea, I thought there would only be a couple of letters: R for Returned, O for Ordered, C for Canceled, etc. As we started going through all the permutations, we found a ton of exceptions. I think by the time I handed over responsibility for this one field, we had almost 10 different designations.

The Flaw

Since (back then) I could read execution plans about as well as I could read Japanese (which means not at all), I had no idea of the impact of my design. I didn't know that indexes on that field would become pretty much useless, since a LIKE operator would have to be used to find a particular part of the concatenated field.
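To illustrate (with made-up table and column names, not the actual schema from back then), finding every Order that contained, say, a Returned item meant a wildcard search against the concatenated field, which an index can't seek on:

-- Hypothetical example: ItemStates holds one letter per Order Item, e.g. 'OORC'.
-- The leading wildcard forces a scan instead of an index seek.
SELECT
	o.OrderID
	,o.ItemStates
FROM
	dbo.[Order] AS o
WHERE
	o.ItemStates LIKE '%R%';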

A better approach to this same problem would be to leave the designations at the item level and, for reporting, concatenate the values at the Order level.
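A rough sketch of that alternative, again with made-up names: the state lives on the Order Item row, where it can be indexed and filtered normally, and it only gets strung together when a report asks for it (FOR XML PATH being the concatenation trick of choice in that era):

-- Filtering stays sargable at the item level.
SELECT DISTINCT oi.OrderID
FROM dbo.OrderItem AS oi
WHERE oi.ItemState = 'R';

-- Concatenation happens only in the reporting query.
SELECT
	o.OrderID
	,(SELECT '' + oi.ItemState
	  FROM dbo.OrderItem AS oi
	  WHERE oi.OrderID = o.OrderID
	  FOR XML PATH('')) AS ItemStates
FROM
	dbo.[Order] AS o;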

Second Verse, Same as the First

Since I never learned at the first company that my idea was flawed from a database perspective, I pitched the same plan at the next company…where we implemented it…again. I have since learned my lesson and advocate to all .NET developers that working with the DBA or DBD from the beginning helps achieve a better application.

Thanks for all the fish

Thanks go out to Rick Krueger for hosting this month’s T-SQL Tuesday blog party. Please visit his website at http://www.dataogre.com.

Fall Speaking Engagements

A year ago I had no intentions of speaking this year. I didn’t think I was mentally ready for public speaking once again. Boy was I wrong. Not only was I ready, but I’ve become addicted to speaking at various SQL Events. I even found a SQL Saturday to speak at on my family vacation in Louisville, Kentucky. Between September and October I will be speaking at 5 venues and moderating two Women in Technology (WIT) panels. That will be a total of 8 sessions. Here is a bit about each of them.

SQL Saturday #249 takes place in San Diego, CA on Saturday, September 20th. This year, instead of a pre-con the day before, Red Gate will be hosting a half day called SQL in the City Seminar.

Dev Connections takes place in Las Vegas, NV between September 30th and October 4th at Mandalay Bay. I’m very excited about my two brand new presentations.

SQL in the City takes place in three different cities in October. I will be speaking at two of them. I’ll be in Pasadena, CA on October 9th and I’ll be in Charlotte, NC on October 14th the week of the PASS Summit.

PASS Summit takes place in Charlotte, NC between October 15th and 18th. This is the most amazing conference for the SQL community. While I’m not speaking at it this year, I am participating in other ways including being a buddy to a new attendee or two.

SQL Saturday #237 also takes place in Charlotte, NC on Saturday October 19th.

  • This is my last SQL Saturday of the year and it is the “BI Edition”, so I had to do my session, Scalable SSRS Reports Achieved Through the Powerful Tablix.

I’m looking forward to these sessions and to meeting new people as well as reconnecting with my old friends. I hope to find you in my sessions and that you have the time to introduce yourself. I really enjoy meeting new people and sharing the knowledge that I have.

SQL on my friends.

T-SQL Tuesday #45 Summary–Follow the Yellow Brick Road

This is the summary of blog posts for T-SQL Tuesday #45 – Follow the Yellow Brick Road. I was the lovely host this month with the topic being Auditing. As always, I enjoyed all the posts. It reminds me of what Forrest Gump once said:

 “It’s like a box of chocolates. You never know what you’re going to get.”

So, let’s meet up with Dorothy to find out what kind of auditing related advice and stories she found on her journey.

TSQLTuesday45 - YellowBricks

Summary

Jason Brimhall had a wonderful post about using the information that SQL Server automatically gathers for us. He used the sys.FN_TRACE_GETINFO function to find out who executed the DBCC SHRINKFILE command in his environment, and when.

Rob Farley wrote a thought-provoking post about how you should think about the auditing data you do have, and how you shouldn't make excuses when you don't have it.

Boris Hristov wrote a piece about the impact of external auditing entities on our activities. In this case, it was a story about unintentionally scaring the client about database security levels and how that could have impacted a SOX audit. Side note: I always enjoy Boris' enthusiasm about T-SQL Tuesday. (Keep it up, Boris. 🙂 )

Chris Yates wrote a great summary piece about what to audit and a list of various tools to use to accomplish the auditing.

Steve Jones wrote a wonderful piece on why auditing is important and how he caught a "cowboy coder".

Steven Ormrod wrote an excellent piece on how to use a DDL Trigger to not only find out who modified the database, but from what IP address and application they did the deed. This is definitely one to bookmark.

Glenda Gable shares how she learned about writing triggers that audit. She then goes on to tell us how she had an opportunity (and seized it) to learn and implement Change Data Capture (CDC).

Jeffrey Verheul wrote about taking the initiative at a new company and creating a baseline on the system health of the 70 databases he was responsible for… and it paid off in a relatively short amount of time.

Dave Green wrote a thought-provoking post about what to do with the audit data during a data migration. I particularly enjoyed this post, since I get to ask my clients those very same questions next week. (Can we say perfect timing?)

Robert L Davis wrote about a type of auditing that I hadn't even considered, and I'm happy he did: a new feature in SQL Server 2012 that automatically audits the execution of your SSIS packages. He goes on to share how he was able to use the feature.

Mickey Stuewe (that would be me), wrote about a simple way to audit at the data level. She also made a plea at the end of her post to all the women bloggers out there. She asked them to contact Adam Machanic (b|t)  about being a T-SQL Tuesday blog party host.

…And That’s A Wrap

Thank you to the 11 bloggers who participated this month. We couldn't have T-SQL Tuesday without you. Thanks also to Adam Machanic, who put this brilliant idea in action and makes sure it continues each month. … Only 14 more days until the next invitation goes out.

T-SQL Tuesday #45–Follow the Yellow Brick Road

This month I'm the lovely host of the T-SQL Tuesday blog party, which Adam Machanic (b|t) started in December of 2009.  The topic this month is Auditing. I chose this topic because auditing, in some form, is needed for all databases (IMO).

Was it the lion, tiger or bear? Oh my!

One of the databases I used to manage had a schema where each table had four fields to keep track of who originally created the record and who last modified it. The dates of those two events were also stored. These four fields were populated by the .NET application that used the table.

TSQLTuesday45 - Table
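The screenshot shows the table design; a minimal sketch of that pattern might look like this (the table name comes from the trigger later in the post, while the non-audit column and the data types are my assumptions):

CREATE TABLE dbo.ClientOrder
(
	ClientOrderID INT IDENTITY(1,1) NOT NULL PRIMARY KEY
	,OrderDate DATETIME NOT NULL
	-- The four audit fields populated by the application.
	,InsertedBy NVARCHAR(128) NULL
	,InsertedDts DATETIME NULL
	,ModifiedBy NVARCHAR(128) NULL
	,ModifiedDts DATETIME NULL
);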

We found, over time, that this solution had a couple of holes.

1. Values were missing when someone inserted or updated data directly in the database.
2. Values were missing when the .Net programmer forgot to add code to populate the fields.

Luckily, the solution was a quick and easy one.

First, defaults were set on the InsertedBy and InsertedDts fields, SUSER_NAME() and GETDATE() respectively. This provided the user’s account that was used to access the table and the current date.
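A sketch of those defaults on the table above (the constraint names are mine):

-- Capture the login and current date automatically when the application omits them.
ALTER TABLE dbo.ClientOrder
	ADD CONSTRAINT DF_ClientOrder_InsertedBy DEFAULT (SUSER_NAME()) FOR InsertedBy;

ALTER TABLE dbo.ClientOrder
	ADD CONSTRAINT DF_ClientOrder_InsertedDts DEFAULT (GETDATE()) FOR InsertedDts;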

Second, an update trigger was created to populate ModifiedBy and ModifiedDts. This allowed the application to insert values if they were provided, but default values were available when they weren’t. More importantly, when someone modified the data directly, their login was captured.

CREATE TRIGGER dbo.trg_ClientOrder_iu ON dbo.ClientOrder
	AFTER UPDATE
AS 
	SET NOCOUNT ON 

	UPDATE dbo.ClientOrder 
	SET
		ModifiedDTS = ISNULL(i.ModifiedDTS, GETDATE())
		,ModifiedBy = ISNULL(i.ModifiedBy, SUSER_NAME())
	FROM
		dbo.ClientOrder AS tb
		INNER JOIN inserted AS i ON tb.ClientOrderID = i.ClientOrderID
GO

Here you can see that when I inserted the first two records, my login was captured. On the third record I provided values for InsertedBy and InsertedDts.

TSQLTuesday45 - Data1
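Inserts along these lines would produce that result (the values are made up; the first two rows rely on the defaults, the third supplies its own audit values):

-- Rows 1 and 2: InsertedBy and InsertedDts come from the defaults.
INSERT INTO dbo.ClientOrder (OrderDate) VALUES (GETDATE());
INSERT INTO dbo.ClientOrder (OrderDate) VALUES (GETDATE());

-- Row 3: explicit audit values are kept as provided.
INSERT INTO dbo.ClientOrder (OrderDate, InsertedBy, InsertedDts)
VALUES (GETDATE(), 'AppUser01', '20130801');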

Next I updated the first record without providing a ModifiedBy or ModifiedDts. On the second row, I did provide a ModifiedBy and ModifiedDts with my update.

TSQLTuesday45 - Data2
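The matching updates would look something like this (again, made-up values); the first leaves the trigger to fill in the Modified fields, and the second supplies them:

-- Row 1: the trigger fills in ModifiedBy and ModifiedDts.
UPDATE dbo.ClientOrder
SET OrderDate = GETDATE()
WHERE ClientOrderID = 1;

-- Row 2: provided values survive because of the ISNULL checks in the trigger.
UPDATE dbo.ClientOrder
SET
	OrderDate = GETDATE()
	,ModifiedBy = 'AppUser01'
	,ModifiedDts = '20130802'
WHERE ClientOrderID = 2;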

This simple solution gives you visibility into who changed the data and when. While it is not a comprehensive solution, it answered questions that we had over the years.

Thanks for all the fish

Usually this is where I thank the host, but as Spock would say, “That would be self-serving.” So instead I am making a plea to all the amazing female bloggers out there. You know who you are. I want you to ask Adam Machanic (b|t) if you can host the T-SQL Tuesday blog party. Why am I making this plea? I’m glad you asked.

Being a data geek, I went through all the past T-SQL Tuesday invitations, which Steve Jones (b|t) so graciously keeps up to date. (Thanks Steve!) I found that out of the 45 invitations, only FIVE hosts were women.  The other amazing fact was that three of us didn’t host until this year!

  • Jes Schultz Borland (b|t)
  • Erin Stellato (b|t)
  • Jen McCown (b|t)
  • Wendy Pastrick (b|t)
  • And yours truly

Ladies, that is only an 11% representation. Please, please, please email Adam Machanic (b|t) and ask him if you can host. The rules are here.

Ok, I’m getting off my soap box. 🙂

T-SQL Tuesday #43: Give Me a Key Lookup Operator for $1200, Please

My good friend Rob Farley (b|t) is hosting this month’s T-SQL Tuesday blog party. The party was started by Adam Machanic (b|t) in December of 2009.  The topic this month is on Plan Operators that are used in execution plans to tell us how the Optimizer is going to run our query statements.

What is a Key Lookup Operator?

The Optimizer uses indexes to retrieve the fields needed from a table. If fields are missing from the index being used, then the Optimizer has to go back to the Clustered Index to get them. This has to be done for every row the index returns. This action is noted in the execution plan by the Key Lookup Operator. A RID Lookup Operator is used instead of the Key Lookup Operator when the table is a Heap rather than having a Clustered Index.

Show Me the Money

For my example I used the AdventureWorks2008R2 database. I ran the following query and looked at the execution plan.

SELECT
	c.CustomerID
   ,c.PersonID
   ,c.StoreID
   ,c.TerritoryID
   ,c.AccountNumber
   ,s.BusinessEntityID AS StoreID
   ,s.Name AS StoreName
   ,s.SalesPersonID
   ,st.Name AS TerritoryName
   ,st.CountryRegionCode
   ,st.[Group]
   ,st.SalesYTD
   ,st.SalesLastYear
   ,st.CostYTD
   ,st.CostLastYear
FROM
	Sales.Customer AS c
	JOIN Sales.Store AS s ON c.StoreID = s.BusinessEntityID
	JOIN Sales.SalesTerritory AS st ON c.TerritoryID = st.TerritoryID
WHERE
	c.StoreID IN (1284, 994, 1356, 1282, 992, 1358, 1280)

 

TSQLTuesday43 - KeyLookup

Two indexes were used to retrieve all the fields needed from the Sales.Customer table. The ix_Customer_StoreID index was missing the TerritoryID field so the Optimizer had to go to the PK_Customer_CustomerID Clustered Index to retrieve it. If you add the cost of both operators, then 66% of the cost of the query was used to retrieve fields from the Sales.Customer table.

For reference I removed the Clustered Index to show you what the execution plan would look like when a Heap is involved.

TSQLTuesday43 - RIDLookup

 

Since there was already a good index for the Customer table, I added TerritoryID to the INCLUDE part of the index script. This turned the index into a covering index. A covering index is an index that contains all the fields from a table that are needed by a query statement. The INCLUDE part of an index allows extra fields to be part of the index without the overhead of the data being sorted. Any fields that are part of predicates or filters should be part of the index key; all other fields from the table should be part of the INCLUDE. Be cautious, though: don't throw the whole kitchen sink in there. Those fields still take up space.

TSQLTuesday43 - Index
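The screenshot holds the actual script; roughly, the rebuilt index would look like the following (the key column and the extra included columns are my assumption, based on the columns the query above reads):

-- Rebuild the existing index so it carries every Customer column the query needs.
-- CustomerID rides along automatically as the clustered key.
CREATE NONCLUSTERED INDEX ix_Customer_StoreID
	ON Sales.Customer (StoreID)
	INCLUDE (PersonID, TerritoryID, AccountNumber)
	WITH (DROP_EXISTING = ON);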

When I ran the execution plan again, I saw that the Key Lookup Operator was removed and the total cost to retrieve the fields from the Customer table was now reduced to 12%. This would also be true if the table was using a Heap instead of a Clustered Index.

TSQLTuesday43 - CoveringIndex

The Bottom Line

When you see a Key Lookup Operator or a RID Lookup Operator, look to see if it makes sense to modify the corresponding index to be a covering index.

Thanks for all the fish

Thanks go out to Rob Farley for hosting this month’s T-SQL Tuesday blog party. Please visit Rob’s blog at http://sqlblog.com/blogs/rob_farley/default.aspx.

Shameless Plug for Grant Fritchey’s (b|t) Book: SQL Server Execution Plans, Second Edition

Yup, this is a shameless plug for one of my FAVORITE books. Grant did an outstanding job on this book. He goes through the majority of the execution plan operators, discussing what they represent. He also goes into how best to read an execution plan and how changing your query can affect the operators that are shown in the execution plan. I highly recommend adding this book to your collection if you don’t already have it. This book is available from Red Gate. You can download a FREE eBook, buy a hard copy from Amazon, or if you are lucky you can pick up a copy from a Red Gate event.

 

Using Set Theory Instead of ISNULL To Filter Data

I have written dozens upon dozens of reports for my users. I love the challenge of finding the data and wrangling it into a report in a way that doesn’t take a hundred years to run when they decide to return a year’s worth of data. One of the common requests that I get is to provide a parameter that will allow them to choose a single client or all clients. At first blush this is a very simple request and can be accomplished by using the ISNULL function.

ISNULL - ISNULL Code
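The screenshot shows the report query; the predicate at the heart of it looks something like this (the table and column names here are representative, not the actual report schema):

DECLARE @ClientID INT = NULL;	-- NULL means "return all clients"

SELECT
	ClientID
	,ClientName
FROM
	dbo.Client
WHERE
	ClientID = ISNULL(@ClientID, ClientID);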

Unfortunately there are performance implications using the ISNULL function. The ISNULL function is non-sargable. This means that when the function is used as part of a predicate, it can’t utilize the necessary index in an optimal way to filter the data. In this example, the Execution Plan shows that a full index scan was used to return one row. This equated to 80 Logical Reads for my dataset. This is the same number of Logical Reads whether one row was returned or all rows were returned. Eighty Logical Reads may not be that big a deal, but what about on a table that has hundreds of thousands of rows?

ISNULL - ISNULL Execution Plan
Execution Plan

ISNULL - ISNULL Reads 1
Statistics on the Table I/O Tab in SQL Sentry Plan Explorer Pro

 

An Alternate Universe

There is an alternative way of accomplishing this same request, and it uses Set Theory to solve the problem. I can accomplish the same request by separating the ISNULL logic into two different SQL statements and using UNION ALL to join the two result sets into one. Even though I'm using two different SQL statements, only one will return data during the execution of the logic.

ISNULL - Union All Code
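Again the screenshot carries the real query, but the shape of it is roughly this (same representative names as above):

SELECT
	ClientID
	,ClientName
FROM
	dbo.Client
WHERE
	@ClientID IS NOT NULL
	AND ClientID = @ClientID	-- sargable seek when a single client is requested

UNION ALL

SELECT
	ClientID
	,ClientName
FROM
	dbo.Client
WHERE
	@ClientID IS NULL;			-- full result set when no client is specified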

The first SELECT statement returns rows when the @ClientID variable has a value other than NULL. The second SELECT statement returns rows when @ClientID is NULL. By using UNION ALL instead of UNION we forgo the task of checking if there are duplicate rows in the second SELECT statement.

The Execution Plan now handles both cases, and the Logical Reads change based on the value of @ClientID. There are still 80 Logical Reads when @ClientID is NULL because the whole table is being returned, but the Logical Reads are reduced to two when @ClientID is set to a single client number.

 

ISNULL - Union All Execution Plan 1
Execution Plan when @ClientID = 7890

ISNULL - Union All Reads 1
Statistics when @ClientID = 7890

ISNULL - Union All Execution Plan 2
Execution plan when @ClientID IS NULL

ISNULL - Union All Reads 2
Statistics when @ClientID IS NULL

 

In this example I was able to use UNION ALL to filter on one or all clients, but it isn't always the best solution. If the table being filtered fits on one page, then a table scan will occur regardless of how the predicate is set up. In that case, the ISNULL statement is easier to maintain, so it would be the better solution.

Conclusion

It is always best to take a look at the Execution Plan of a SQL statement to see if there are any Index Scans on filtered data. If there are, you should look to see if there is another, set-based approach to solve the problem.

Speaking Engagements–Engaging Speaking

71 Vette

This past September (2012) I attended my first SQL Saturday in San Diego. After the SQL Saturday event I went to the after party, where I had the opportunity to network with some wonderful people, one of them being Benjamin Nevarez (b|t). He asked me if I had thought about speaking at a SQL Saturday. I told him that I wasn't considering it for another year. He told me, "Why wait? Speak at the next SQL Saturday. There will be one in Orange County in the spring."

I thought the idea was insane. I hadn't done any technical speaking in years, like over 15 years…and I'd never written my own content. OK, once I wrote my own content, but that was also over 15 years ago.

But he got me thinking…

By November I had decided I would submit an abstract to TWO SQL Saturdays: one in Orange County, and the other in San Diego in the fall. By December I had bought myself a laptop and had decided to speak at FIVE events in 2013. It's the end of March, and guess what? I've already spoken twice, I will be speaking again on Thursday, and then I finally get to speak at the Orange County SQL Saturday. I added up all the possible engagements and I have a possible NINE events for the year (4 User Group Meetings, 4 SQL Saturday Events, and if I'm a lucky ducky the coup de grace… PASS Summit). I've also just agreed to organize and moderate a Women in Technology (WIT) panel for the Orange County SQL Saturday. I'm very excited to have a panel of women talk about how to get our youth involved in STEM programs (Science, Technology, Engineering, and Mathematics).

I’m addicted

I've enjoyed myself so much speaking at these events. Yes, I was nervous, but it is so gratifying to share knowledge, to help solve problems, and to watch the light bulb go on when attendees see how they can leverage their newfound knowledge.

Where to find me this year

If you would like to hear me speak about SQL Server Reporting Services, you can find me at these events.

T-SQL Tuesday #37 – RIGHT JOIN, LEFT JOIN, raw, raw, raw

T-sql Tuesday

This month's T-SQL Tuesday blog party is being hosted by Sebastian Meine, PhD (blog). The topic is on JOINs. I've chosen to blog about how to rewrite a RIGHT JOIN I found this week in a stored procedure. The query was simple, but it was difficult to read without creating a data model and understanding the relationships of the tables. I decided to rewrite it to make it easier to maintain in the future. I have created a sample data model to demonstrate the problem and the solution. The data model has a table for Sales Reps, a table for Regions which the Sales Reps belong to (optionally), a list of Clients, and a linking table which links the Clients and the Sales Reps.

The query that was created returns all the Clients and any associated Sales Reps with their regions, even if the Sales Rep is not assigned to a region. The original programmer used a RIGHT JOIN to join the SalesRep table to the ClientSalesRep table. They also put the predicate for the SalesRegion table after the SalesRep table. While the Optimizer has no problem reading this query, I had to stand on my head to figure it out.

SELECT
   c.ClientID
   ,c.ClientName
   ,sr.Region
   ,srep.FirstName
   ,srep.LastName
FROM
   dbo.Client AS c
   LEFT JOIN dbo.ClientSalesRep AS cus ON c.ClientID = cus.ClientID
   LEFT JOIN dbo.SalesRegion AS sr
   RIGHT JOIN dbo.SalesRep AS srep ON srep.SalesRegionID = sr.SalesRegionID
										ON cus.SalesRepID = srep.SalesRepID
GO

I rewrote the query using only LEFT JOINS and each table had its own predicate. I found the LEFT JOINS made it easier to read and didn’t give me a headache.

SELECT
   c.ClientID
   ,c.ClientName
   ,sr.Region
   ,srep.FirstName
   ,srep.LastName
FROM
   dbo.Client AS c
   LEFT JOIN dbo.ClientSalesRep AS cus ON c.ClientID = cus.ClientID
   LEFT JOIN dbo.SalesRep AS srep ON cus.SalesRepID = srep.SalesRepID
   LEFT JOIN dbo.SalesRegion AS sr ON srep.SalesRegionID = sr.SalesRegionID

GO

I populated the tables with a million rows to see if the Optimizer would treat these queries differently. It didn't. They had the same query plan, the same number of reads, and the same statistics, but the second query was easier to read.

Thanks go to Sebastian for hosting T-SQL Tuesday this month. Check out Sebastian’s blog, because he is blogging about JOINS all month.

The ROW_NUMBER Function As An Alternate To The MAX Function

It has taken a month to fuss over my new blog, but I finally made my first SQL entry. Since I’m excited about the upcoming SQL Pass conference, I thought I would show a fictitious problem about employees and their interests in SQL Conferences.

Problem: You are given two tables. The first table contains employees. The second table contains all the SQL Conferences each employee has been interested in along with the date they showed interest in the conference and whether or not they are still interested. You are asked to find the last SQL Conference that was added for each employee. Only conferences the employees are still interested in should be included, and only one conference per employee should be listed. The returned data should be ordered by the employee’s last name and first name.

Employee and Interest Data Model

The first solution that came to mind was to use the MAX function on the InterestAddDate field to find the last added interest. There are two issues with this approach, though.

1. In order to get the Activity field returned, the Interest table has to be joined a second time on the MAX(InterestAddDate).
2. Multiple rows will be returned if the employee had an interest in two SQL conferences on the same date. While this could be a valid result set, in this case only one activity should be returned.


WITH CTE_InterestsByMax
AS
(
      SELECT
         EmployeeID
         ,MAX(InterestAddDate) AS LastInterestAddDate
      FROM
            dbo.Interest AS i
      WHERE
            isActive = 1
      GROUP BY
            EmployeeID
)
SELECT
      e.FirstName + ' ' + e.LastName AS EmployeeName
      ,i.InterestAddDate
      ,i.Activity
FROM
      CTE_InterestsByMax AS im
      JOIN dbo.Interest AS i ON im.LastInterestAddDate = i.InterestAddDate
                                                AND im.EmployeeID = i.EmployeeID
      JOIN dbo.Employee AS e ON im.EmployeeID = e.EmployeeID
ORDER BY
      e.LastName
      ,e.FirstName

Solution: To address these two issues, I used a Common Table Expression (CTE) and the ROW_NUMBER function. This function numbers each row with a unique sequential number based on the OVER clause. Inside the OVER clause, I order the data by the InterestAddDate field in descending order. Since I want to find the last SQL Conference of interest for each employee, I add a PARTITION BY on the EmployeeID field to the OVER clause. This causes the ROW_NUMBER function to start over for each EmployeeID. Since I'm not using an aggregate function, I can return all the data from the Interest table that I need.

In the next part of the query, I join the CTE to the Employee table and add a WHERE clause. Since I ordered each partition in descending order, I know that the first row of each partition will have a RowIndex of 1. I can now filter my data by RowIndex = 1.

WITH CTE_InterestsByRow_Number
AS
(
      SELECT
         i.EmployeeID
         ,i.InterestAddDate
         ,i.Activity
         ,ROW_NUMBER() OVER (PARTITION BY i.EmployeeID ORDER BY i.InterestAddDate DESC) AS RowIndex
      FROM
         dbo.Interest AS i
      WHERE
         i.IsActive = 1
 )
SELECT
      e.FirstName + ' ' + e.LastName AS EmployeeName
      ,i.InterestAddDate
      ,i.Activity
FROM
      CTE_InterestsByRow_Number AS i
      JOIN dbo.Employee AS e ON i.EmployeeID = e.EmployeeID
WHERE
      rowindex = 1
ORDER BY
      e.LastName
      ,e.FirstName

When I looked at the logical reads for these two queries, the query using the MAX function had twice as many logical reads as the query with the ROW_NUMBER function. When I looked at the Execution Plan for both queries, I found the query using the MAX function had a higher Query Cost relative to the batch. For my first run with the data, I used 20 Employees and 40 Interests. For the second run, I used 1,000 Employees and 4,000 Interests. I found that the Query Cost for the query using the MAX function increased with the larger dataset.

Execution Plan
