
T-SQL Bitwise Matching Logic – #TSQL2sDay

A good friend of mine, Rick Krueger (@DataOgre|blog), is hosting this month's T-SQL Tuesday blog party. The T-SQL Tuesday blog parties were started by Adam Machanic (@AdamMachanic|blog) back in December of 2009. This month's topic is Rube Goldberg Machines. Rick asked us to look through our closets and dig up a skeleton: something we made T-SQL do that might make other DBAs cringe a bit. After hearing this, I knew exactly what I was going to blog about.

Bitwise Matching!

I remember implementing this feature and shaking my head when I came up with the idea of using bitwise matching. The requirement I was given was to match two entities based on assigned lookup values. This would not have been that bad, except the lookup values were not static; they were entered dynamically through a web UI. When I first pitched the idea I used a dating site as a way to explain the logic, so in this blog post I will use that same concept.

Concept

Within a dating site, one person is matched to another based on activities they both enjoy. Activities can be grouped into different categories. In this example we have two categories: Exercise and Things to Do. People must have at least one matching activity in each category to be considered a match. We are going to try to match Bob with another person.

People and their activities

  • Bob enjoys Walking, Biking, and Camping.
  • Sally enjoys Walking and Biking.
  • Nancy enjoys Camping.
  • Beth enjoys Walking, Running, Golfing, and Camping.

The following T-SQL creates a few temporary tables we will use to implement the bitwise matching logic.

-- Create the People that can be matched.
CREATE TABLE #Person (
   PersonID smallint
 , Name nvarchar(25));
INSERT   INTO #Person
         (PersonID, Name)
VALUES   (1, N'Bob'), (2, N'Sally'), (3, N'Nancy'), (4, N'Beth');
 
-- Create the Activities that people may have in common.
CREATE TABLE #Activities (
   ActivityID smallint
 , Activity nvarchar(25)
 , CategoryID smallint);
 
INSERT   INTO #Activities
         (ActivityID, Activity, CategoryID)
VALUES   (1, N'Walking', 1), (2, N'Biking', 1), (3, N'Running', 1), (4, N'Yoga', 1),        
	 (5, N'Movies', 2), (6, N'Golf', 2), (7, N'Camping', 2);
 
-- Create the Weak Entity Table to store each Person's Activities.
CREATE TABLE #PersonActivities (
   ActivityID smallint
 , PersonID smallint);
 
INSERT   INTO #PersonActivities
         (PersonID, ActivityID)
VALUES   (1, 1), (1, 2), (1, 7),
         (2, 1),(2, 2),
         (3, 7),
         (4, 1),(4, 3),(4, 6),(4, 7);

Bitwise Logic

Using the activities, we can assign each activity a bitwise value by raising 2 to the power of its ActivityID.
Example:

Activity   ActivityID   CategoryID   Bitwise Value
Walking    1            1            2^1 = 2
Biking     2            1            2^2 = 4
Running    3            1            2^3 = 8
Yoga       4            1            2^4 = 16
Movies     5            2            2^5 = 32
Golf       6            2            2^6 = 64
Camping    7            2            2^7 = 128
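
To sanity-check these values against the sample data, a quick query (a sketch I am adding here, not part of the original matching logic) can compute the bit value that the matching query will assign to each activity:

-- Compute the bit value used by the matching query for each activity.
SELECT   a.Activity
       , a.CategoryID
       , BitValue = POWER(2, a.ActivityID)
FROM     #Activities a
ORDER BY a.CategoryID
       , a.ActivityID;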

If we sum the bitwise values per person and category, we can use the bitwise AND operator (&) to determine whether any matches exist. Example:

Summed Bitwise Value   2^3   2^2   2^1   2^0
6                       0     1     1     0
12                      1     1     0     0
4 (= 6 & 12)            0     1     0     0
The value of 6 represents Walking and Biking, and the value of 12 represents Biking and Running. The intersection of the two is Biking, which gives a matched value of 4. Because we have a matched value, the result of the AND is greater than 0. Using this logic we can implement the following query.
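
The & operator itself is easy to test in isolation; for example (a quick illustration, not from the original post):

-- 6 (Walking + Biking) AND 12 (Biking + Running) intersect on Biking.
SELECT MatchedValue = 6 & 12;   -- returns 4, which is greater than 0, so we have a match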

WITH  PeopleSums
        AS (
            SELECT   p.Name
                   , p.PersonID
                   , a.CategoryID
                   , BitWiseSum = SUM(POWER(2, pa.ActivityID))
            FROM     #Person p
            JOIN     #PersonActivities pa ON p.PersonID = pa.PersonID
            JOIN     #Activities a ON a.ActivityID = pa.ActivityID
            GROUP BY p.Name
                   , p.PersonID
                   , a.CategoryID
           )
   SELECT   ps2.Name
   FROM     PeopleSums ps
   JOIN     PeopleSums ps2 ON ps.PersonID != ps2.PersonID
                              AND ps.BitWiseSum & ps2.BitWiseSum <> 0
                              AND ps.CategoryID = ps2.CategoryID
   WHERE    ps.Name = 'Bob'
   GROUP BY ps.Name
          , ps2.Name
   HAVING   COUNT(*) = 2;

This query uses a CTE to calculate and then summarize the bitwise values, grouped by person and category. We then self-join the CTE using the bitwise AND operator (&), keeping rows where the result is anything but zero. This shows us who Bob matches; the table below illustrates the result.

Persons that Match Bob   Category 1        Category 2
Sally                    Walking, Biking   -
Nancy                    -                 Camping
Beth                     Walking           Camping

To make sure we have a match in each category, we count the matching rows and ensure the count equals the number of categories we currently have. And voilà: Beth is a match for Bob, because Beth is the only candidate with matches in both categories.

LAG() and LEAD() Analytical Window Functions

I have been finding more and more uses for the OVER clause recently, and I figured it was about time I did a blog post on it. The OVER clause defines a window of data that can be aggregated and returned alongside the details of the original result set. A set of window functions can be used with the OVER clause to generate row numbers, ranking values, averages, and previous or next values in a set. Getting the previous and next values in a set of data is a problem I ran into recently. This blog post describes the LAG and LEAD analytic functions, two window functions that were introduced in SQL Server 2012.
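
As a quick illustration of the OVER clause on its own (a minimal sketch against AdventureWorks2012, separate from the example that follows), aggregates and row numbers can be returned right alongside the detail rows:

-- Row numbers and a per-customer average alongside the detail rows.
SELECT   soh.PurchaseOrderNumber
       , soh.CustomerID
       , soh.TotalDue
       , RowNum = ROW_NUMBER() OVER (PARTITION BY soh.CustomerID ORDER BY soh.OrderDate)
       , CustomerAverage = AVG(soh.TotalDue) OVER (PARTITION BY soh.CustomerID)
FROM     Sales.SalesOrderHeader soh;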

Recently I was tasked with calculating the difference between two values that existed in different rows. My first thought was to self-join the table and compare the two results. Using AdventureWorks2012, I ran the following query; the results are shown below.

SELECT   soh.PurchaseOrderNumber
       , soh.OrderDate
       , soh.CustomerID
       , PreviousTotal = soh1.TotalDue
       , CurrentTotal = soh.TotalDue
       , NextTotal = soh2.TotalDue
FROM     Sales.SalesOrderHeader soh
JOIN     Sales.SalesOrderHeader soh1 ON soh1.SalesOrderID = soh.SalesOrderID - 1
JOIN     Sales.SalesOrderHeader soh2 ON soh2.SalesOrderID = soh.SalesOrderID + 1;

[Image LagLeadImg1: result set showing PreviousTotal, CurrentTotal, and NextTotal for each order]

This result set shows the previous and next values for the TotalDue column. That got me halfway to what was needed to perform the difference calculations, but I needed the results for a given customer rather than for all customers. I updated the query to accept a customer ID, and the results are shown below.

DECLARE @CustomerID int = 30117;
SELECT   soh.PurchaseOrderNumber
       , soh.OrderDate
       , soh.CustomerID
       , PreviousTotal = soh1.TotalDue
       , CurrentTotal = soh.TotalDue
       , NextTotal = soh2.TotalDue
FROM     Sales.SalesOrderHeader soh
LEFT JOIN Sales.SalesOrderHeader soh1 ON soh1.SalesOrderID = soh.SalesOrderID - 1
                                         AND soh1.CustomerID = soh.CustomerID
LEFT JOIN Sales.SalesOrderHeader soh2 ON soh2.SalesOrderID = soh.SalesOrderID + 1
                                         AND soh2.CustomerID = soh.CustomerID
WHERE    soh.CustomerID = @CustomerID;

[Image LagLeadImg2: result set for CustomerID 30117 with PreviousTotal and NextTotal returned as NULL]

This was a problem. Because I was using SalesOrderID to perform the self-joins, and the customer's records were not sequential, I was not getting any of the previous or next records.

LEAD()/LAG() OVER (PARTITION BY)

Instead of going down the path of craziness, I decided to eliminate the self-referencing joins and opted to use the OVER clause with LEAD and LAG to solve the problem. Using the following query and output, we can explain the LEAD(), LAG(), and OVER (PARTITION BY) statements.

DECLARE @CustomerID int = 30117;
SELECT   PurchaseOrderNumber
       , OrderDate
       , CustomerID
       , CurrentTotalDue = TotalDue
FROM     Sales.SalesOrderHeader soh
WHERE    soh.CustomerID = @CustomerID;

[Image LagLeadImg3: result set for CustomerID 30117 showing PurchaseOrderNumber, OrderDate, CustomerID, and CurrentTotalDue]

The blue box highlights the CustomerID column; this is the column we want to partition by.

The red box highlights the PurchaseOrderNumber and OrderDate columns; these are the columns we want to order the window by.

The green box highlights the CurrentTotalDue column; this is the column we will pass to the LEAD and LAG window functions.

I updated the query to use the LEAD and LAG window functions and the results are shown. The red highlights show the results of the LEAD function and the green highlights show the results of the LAG function.

DECLARE @CustomerID int = 30117;
SELECT   PurchaseOrderNumber
       , OrderDate
       , PreviousTotal = LAG(TotalDue) OVER (PARTITION BY CustomerID ORDER BY OrderDate , PurchaseOrderNumber)
       , CurrentTotal = TotalDue
       , NextTotal = LEAD(TotalDue) OVER (PARTITION BY CustomerID ORDER BY OrderDate , PurchaseOrderNumber)
FROM     Sales.SalesOrderHeader soh
WHERE    soh.CustomerID = @CustomerID;

[Image LagLeadImg5: result set showing PreviousTotal (LAG) and NextTotal (LEAD) alongside CurrentTotal]
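
Since the original task was to calculate the difference between values in different rows, one way to finish the job (a sketch, assuming the goal is the current total minus the previous total) is to subtract the LAG value directly:

DECLARE @CustomerID int = 30117;
SELECT   PurchaseOrderNumber
       , OrderDate
       , CurrentTotal = TotalDue
       , PreviousTotal = LAG(TotalDue) OVER (PARTITION BY CustomerID ORDER BY OrderDate, PurchaseOrderNumber)
         -- NULL on the customer's first row, since there is no previous order to subtract.
       , Difference = TotalDue - LAG(TotalDue) OVER (PARTITION BY CustomerID ORDER BY OrderDate, PurchaseOrderNumber)
FROM     Sales.SalesOrderHeader soh
WHERE    soh.CustomerID = @CustomerID;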

As you can see, the syntax for writing a query with LEAD and LAG is not overly complex. In a future blog post I will dive deeper into using the OVER clause.

Standard T-SQL UNPIVOT

In prior blog posts I talked about a standard PIVOT query and a dynamic PIVOT query. It is only fitting that I talk about how to implement an UNPIVOT query. As DBAs we commonly unpivot data, turning it into normalized data for storage in a relational data store. Understanding how to use the UNPIVOT operator is a great skill to have as a DBA. Just like the PIVOT operator, the UNPIVOT operator was introduced with SQL Server 2005. In this blog post I will show how we can use the UNPIVOT operator.

I got the idea for this blog post from my online grade book. I noticed that the data is displayed as a denormalized set of data. As an instructor, I thought it would be interesting to see the average, sum, minimum, and maximum assignment scores for each student. Below is an example of the online grade book display.

StudentID Assignment1 Assignment2 Assignment3 Assignment4
101 90 100 95 100
102 70 65 40 80
103 100 100 100 100

I found that writing a simple query to get the sum (adding each column) and the average (adding each column and dividing by the number of columns) was not too bad. However, finding the minimum and maximum proved to be a bit tricky. Sure, we could implement a CASE WHEN statement, but with each additional column the complexity and possibility of error increase significantly, and maintainability dang near goes out the window. In comes UNPIVOT to save the day: after unpivoting the data, we simply use the AVG(), SUM(), MIN(), and MAX() functions.
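
For comparison, here is roughly what the CASE WHEN approach looks like for just the maximum (a hypothetical sketch to show the pain, not code from the grade book); every new assignment column means another comparison in every branch:

-- Maximum score per student without UNPIVOT.
SELECT   StudentID
       , MaxScore = CASE WHEN Assignment1 >= Assignment2 AND Assignment1 >= Assignment3 AND Assignment1 >= Assignment4 THEN Assignment1
                         WHEN Assignment2 >= Assignment3 AND Assignment2 >= Assignment4 THEN Assignment2
                         WHEN Assignment3 >= Assignment4 THEN Assignment3
                         ELSE Assignment4
                    END
FROM     #ClassGrades CG;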

Let’s take a look at the basic structure of the UNPIVOT query. Using the UNPIVOT operator, you specify a value column and a data column, then list each of the columns you want to unpivot into those new columns. The new value and data columns become selectable columns in the query, as shown in the following code.

SELECT   [AnyColumn]
       , [DataColumn]
       , [ValueColumn]
FROM     SomeTable ST 
UNPIVOT ( ValueColumn FOR DataColumn IN (UnPivotCol1, UnPivotCol2, UnPivotCol3) ) AS UNPVT;

Applying this UNPIVOT syntax to our data above yields the following query.

SELECT   StudentID
       , Assignment
       , ASMT.Score
FROM     #ClassGrades CG 
UNPIVOT ( Score FOR Assignment IN (Assignment1, Assignment2, Assignment3, Assignment4) ) AS ASMT

This query returns the following normalized view of the data:

StudentID Assignment Score
101 Assignment1 90
101 Assignment2 100
101 Assignment3 95
101 Assignment4 100
102 Assignment1 70
102 Assignment2 65
102 Assignment3 40
102 Assignment4 80
103 Assignment1 100
103 Assignment2 100
103 Assignment3 100
103 Assignment4 100

Now that we have this data in a normalized result, applying SQL functions such as AVG() and SUM() is relatively simple. Below is an updated version of the query using the AVG(), SUM(), MIN(), and MAX() functions, grouping by StudentID.

SELECT   StudentID
       , Average = AVG(ASMT.Score)
       , MaxScore = MAX(ASMT.Score)
       , MinScore = MIN(ASMT.Score)
       , Total = SUM(ASMT.Score)
FROM     #ClassGrades CG 
UNPIVOT ( Score FOR Assignment IN (Assignment1, Assignment2, Assignment3, Assignment4) ) AS ASMT
GROUP BY StudentID

This query returns the following view of the data:

StudentID Average MaxScore MinScore Total
101 96 100 90 385
102 63 80 40 255
103 100 100 100 400

As you can see, the PIVOT and UNPIVOT operators are not overly complex and are definitely great T-SQL tools for any DBA to keep in their back pocket. Below is the entire script with test data included.

IF OBJECT_ID('tempdb..#ClassGrades') IS NOT NULL 
   DROP TABLE #ClassGrades;
 
CREATE TABLE #ClassGrades (
   StudentID int PRIMARY KEY
                 NOT NULL
 , Assignment1 int NULL
 , Assignment2 int NULL
 , Assignment3 int NULL
 , Assignment4 int NULL);
 
INSERT   INTO #ClassGrades
         (StudentID, Assignment1, Assignment2, Assignment3, Assignment4)
VALUES   (101, 90, 100, 95, 100),
         (102, 70, 65, 40, 80),
         (103, 100, 100, 100, 100); 
 
SELECT   StudentID
       , Assignment
       , ASMT.Score
FROM     #ClassGrades CG 
UNPIVOT ( Score FOR Assignment IN (Assignment1, Assignment2, Assignment3, Assignment4) ) AS ASMT
 
SELECT   StudentID
       , Average = AVG(ASMT.Score)
       , MaxScore = MAX(ASMT.Score)
       , MinScore = MIN(ASMT.Score)
       , Total = SUM(ASMT.Score)
FROM     #ClassGrades CG 
UNPIVOT ( Score FOR Assignment IN (Assignment1, Assignment2, Assignment3, Assignment4) ) AS ASMT
GROUP BY StudentID;

Rules of Engagement – NoSQL Graph Databases

Well, Rick Krueger (blog | @DataOgre) and I are back at it again. We have decided to pair up and do a talk about a rather controversial topic in the SQL Server community: NoSQL. Now, we are not jumping ship and saying that NoSQL is the way of the future, because that is simply not true. Relational databases have been around since the 70s and still outperform NoSQL solutions at most tasks. The idea that we are going to seed is that these NoSQL solutions may have a place in enterprise applications. We are not going to say you should use SQL Server or a NoSQL solution exclusively; rather, we are going to claim Polyglot Persistence. Polyglot Persistence is the idea of using multiple data stores to solve multiple problems, hence Rules of Engagement: NoSQL is SQL Server’s Ally. In a previous blog post I discussed Key Value Stores and Column Stores; in this blog post I will be discussing another version of the NoSQL solutions, Graph Databases.

Graph Databases

Graph databases are data structures that consist of nodes, properties, and edges. In RDBMS terms, a node would be a record in a table, a property would be the data within the record, and an edge would be similar to the relationship between tables. However, edges can store properties too; these properties describe the relationship between the nodes.

In RDBMS land, writing queries that join many large tables, especially self-referencing tables, can degrade performance and can be rather costly, even with proper index usage. This is where graph databases really shine: a graph database can traverse nodes using the edges and the properties within the edges. Using a graph database to find relationships between highly interconnected data is very fast. Finding data that would take multiple joins or a recursive query in an RDBMS is a simple task with a graph database.

You can visualize a graph database as a flattened RDBMS table structure with named connections. Looking at the relational tables below, we see a simple car buying and selling database. This database has a list of cars that people currently own and the types of cars they are looking for.

[Image GraphDB_DBStructure: relational schema for the car buying and selling example]

If we were to flatten and denormalize the structure, we would get something that looks like this.

[Image GraphDB_GraphStructure: the same data modeled as Person and Car nodes connected by edges]

Here the Car node is the CarType and Manufacture tables denormalized, the Person node is the Person table, and the edges that connect these two nodes are the Request and Cars tables. Depending on the data you are looking for, both of these structures have their merits. Let's say you need to write a query that returns a list of cars for a given owner who is willing to trade with another owner. This is a simple query for both structures. But what if there is no direct match, yet there is a match using a third party for a three-way trade? Or maybe there needs to be a fourth or fifth party for a four- or five-way trade? That query gets very complicated in an RDBMS, and the performance at each level progressively degrades. This is where a graph database really stands out: because the emphasis is on the edges between the nodes, the query stays simple and will give you the performance your application desires.
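
To give a feel for how quickly this escalates on the relational side, a recursive CTE along these lines would be needed just to walk the trade chains. This is a rough sketch only; the Cars and Request tables come from the diagram above, but the column names (PersonID, CarTypeID) are my assumption:

-- Hypothetical columns: Cars(PersonID, CarTypeID) = cars owned, Request(PersonID, CarTypeID) = cars wanted.
WITH TradeChain AS (
    -- Anchor: person A owns a car that person B wants.
    SELECT   c.PersonID AS StartPersonID
           , r.PersonID AS NextPersonID
           , ChainLength = 1
    FROM     Cars c
    JOIN     Request r ON r.CarTypeID = c.CarTypeID
                          AND r.PersonID <> c.PersonID
    UNION ALL
    -- Recurse: the last person in the chain owns a car the next person wants.
    SELECT   tc.StartPersonID
           , r.PersonID
           , tc.ChainLength + 1
    FROM     TradeChain tc
    JOIN     Cars c ON c.PersonID = tc.NextPersonID
    JOIN     Request r ON r.CarTypeID = c.CarTypeID
                          AND r.PersonID <> c.PersonID
    WHERE    tc.ChainLength < 5   -- cap the search at five-way trades
)
SELECT   StartPersonID
       , ChainLength
FROM     TradeChain
WHERE    NextPersonID = StartPersonID;   -- chains that loop back to the start form a viable trade ring

Each additional party in the trade means another level of recursion and another set of joins, which is exactly the cost the graph model avoids by traversing the edges directly.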

Graph Database Implementations

One of the most recognized graph database implementations is Neo4j, which is currently made available by Neo Technologies.

“Neo4j is a robust (fully ACID) transactional property graph database. Due to its graph data model, Neo4j is highly agile and blazing fast. For connected data operations, Neo4j runs a thousand times faster than relational databases.” – http://www.neo4j.org/

Neo4j is available for most platforms and comes with a WebAdmin management console for viewing the graph database. Below are some screen captures of the WebAdmin utility showing a dashboard for overall database health and some ways to view the data inside the database.
[Images Neo4JDashboard, Neo4JDataBrowser, Neo4JDataViewer: the WebAdmin dashboard, data browser, and data viewer]

Polyglot Persistence, SQL Server and Graph Databases

Recently Facebook announced a graph search feature in their application.

“With graph search you can look up anything shared with you on Facebook, and others can find stuff you’ve shared with them.” – https://www.facebook.com/about/graphsearch

This use of a graph database within their application is the idea behind polyglot persistence. Their primary data could be stored in any data structure, such as an RDBMS, while their graph search feature would be backed by a graph database.

Conclusion

I am not recommending that you rewrite your applications to make room for a NoSQL solution; I am simply suggesting that you, as the DBA (keeper of data), should keep an open mind to other possibilities. If there is another solution out there that could assist SQL Server, or is maybe a better fit, exploring that avenue might result in a better data solution. As always, a round peg will fit into a square hole if you have a big enough hammer.

Rules of Engagement – NoSQL Column Data Stores

Well, Rick Krueger (blog | @DataOgre) and I are back at it again. We have decided to pair up and do a talk about a rather controversial topic in the SQL Server community: NoSQL. Now, we are not jumping ship and saying that NoSQL is the way of the future, because that is simply not true. Relational databases have been around since the 70s and still outperform NoSQL solutions at most tasks. The idea that we are going to seed is that these NoSQL solutions may have a place in enterprise applications. We are not going to say you should use SQL Server or a NoSQL solution exclusively; rather, we are going to claim Polyglot Persistence. Polyglot Persistence is the idea of using multiple data stores to solve multiple problems, hence Rules of Engagement: NoSQL is SQL Server’s Ally. In a previous blog post I discussed Key Value Stores; in this blog post I will be discussing another version of the NoSQL solutions, Column Data Stores.

Column Data Stores

Column data stores and relational databases share some concepts, like rows and columns. However, column data stores do not require columns to be defined, nor is space allocated for each column. A column is simply a key value pair, where the key is an identifier and the value stores data related to that key. Columns can be added without all of the overhead incurred in a relational database. A column can exist for every row, for some rows, or for only one row, and a row can have many different columns or just one. Because the primary focus of column data stores is the column, performing aggregations or returning column data becomes very fast. The image below depicts a sample column store with the concept of column families, which are groupings of like column data; this concept of grouping helps with more complex queries.

[Image ColumnDataStore: a sample column store with column families]

The image above shows a simple example of what a column's key values could look like, however they can store much more. The columns themselves can store rows and columns; thinking of this as a table within a table would be correct. In a relational database we would use a secondary table linking back to the primary table with a foreign key. The example below shows the comparison between a relational model and a column store model.

[Image ColumnDataStoreAdvanced: comparison of a relational model and a column store model]

The big advantage here is that we are bringing related data closer together, which makes data retrieval fast. However, trying to aggregate this type of data becomes difficult and requires a MapReduce job.

Just like key value stores, two big benefits of column stores are horizontal scaling and the lack of a schema requirement. However, with column stores, having a good understanding of the tables and key structure before development helps in implementing a useful data store.

Column Data Stores Implementations

One of the most recognized column data stores comes from the Hadoop world: HBase. Apache HBase is the database that sits on top of Hadoop and HDFS (the Hadoop Distributed File System). HBase was designed to handle big data: millions of records with millions of columns. This is something you should keep in mind when selecting a data store; if you have big data, then looking at a data store solution like HBase is worth the effort. One of the biggest benefits I have seen is the ability to store all the data that we would normally have to archive in an RDBMS. Here is an example: think of an automobile company like GM. They have been selling and servicing cars for years. That is big data! In an RDBMS we would eventually have to archive data to keep costs down, but with an HBase solution we simply add more nodes to the Hadoop cluster and let HDFS work its magic. Imagine being able to go back and get every service record for a car built in the 50s; this is what HBase and Hadoop can give you.

Polyglot Persistence, SQL Server and Column Data Stores

An example would be an enterprise application that stores US census data. Here again we have big data. Let's say the requirements are that we need to store the past 50 years of census data, but only the past 3 years need to be returned in real time; the rest of the data can be returned over time, perhaps in a queued report. We could store all 50 years of data in an RDBMS, however that could get rather costly and would most likely impact performance. Instead, we could store the past 3 years in an RDBMS and push the remaining data into a NoSQL solution, scaling horizontally as needed. This hybrid data solution is polyglot persistence, and best of all it solves the problem at hand.

Conclusion

I am not recommending that you rewrite your applications to make room for a NoSQL solution; I am simply suggesting that you, as the DBA (keeper of data), should keep an open mind to other possibilities. If there is another solution out there that could assist SQL Server, or is maybe a better fit, exploring that avenue might result in a better data solution. As always, a round peg will fit into a square hole if you have a big enough hammer.