Unbeknownst to them, Southwest has a SQL Saturday hookup!

Being the Southwest fanbois that we are, I wanted to bring y’all this PSA 🙂

Winglet!

Everyone loves winglets. OK, maybe not everyone.

They’ve got a sale going on through the end of the day on October 28 (tomorrow), Pacific Time, where you can get flights for $30 if you’re going < 450 miles. Prices go up from there, but they’re still pretty good.

The windows for the flights are December 1-15 and January 4-February 16. There are FIVE (5) whole SQL Saturdays within those two windows!

  • Dec 4, #61 in DC
  • Jan 15, #62 in Tampa
  • Jan 22, #45 in Louisville
  • Jan 29, #57 in Houston
  • Feb 5, #60 in Cleveland

Sooooo, if you’re wanting to hit up more SQL Saturdays or are speaking at one of those and have to foot the bill yourself, this might be a slightly cheaper way to pull it off.

…You know, if you don’t mind being stuffed in the back of a 737-700.

Edit: Forgot to put the direct link to their page that lists all of the covered cities/destinations: clicky

T-SQL Tuesday #11: The Misconception of Active/Active Clustering

TSQL Tuesday

October's TSQL2sDay: Hosted by Sankar Reddy

I wasn’t expecting my inaugural T-SQL Tuesday post to happen this month… until just yesterday, when I found myself helping explain to someone what an Active/Active SQL cluster is and what it isn’t. Ah hah! I figured this topic would already be covered fairly well, and sure enough, I found a couple of really good articles/posts by others on it. I will reference those in a bit, but first I can cover some more general ground.

The misconception I was explaining yesterday morning is the same one that I’ve explained before and have seen others explain as well: An Active/Active SQL Server Cluster is not a load-balancing system. It is, in fact, purely for High-Availability purposes. If what you are really after is true load-balancing, unfortunately, you will have to go elsewhere, to something like Oracle RAC.

When I say “true load-balancing”, I mean being able to load-balance a specific set of data across multiple pieces of hardware. What you can do with Active/Active clustering is load-balance a set of databases across multiple pieces of hardware. This is useful if you have a SQL Instance with multiple very heavily-utilized databases—one or more of these DBs could be pulled out to its own instance on separate hardware. This is not done without potentially serious tradeoffs, however, which will be covered in a bit.

The underlying reason for this is that Windows Clustering is only designed for HA purposes. With its “shared nothing” approach to clustering, any given resource within the cluster (say, the storage LUN that holds your accounting software’s MDF file) can only be controlled by one node of the cluster at a time. If the accounting DB’s home LUN is currently attached to the first node in a cluster, it’s going to be pretty hard for the second node to service requests for that database.
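(Side note: if you ever want to double-check which physical node a clustered instance is currently living on, a quick sanity check from T-SQL looks something like this. This assumes SQL Server 2005 or later for the DMV.)

-- Which Windows node is this clustered instance currently running on?
SELECT  SERVERPROPERTY('ServerName')                  AS InstanceName,
        SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode,
        SERVERPROPERTY('IsClustered')                 AS IsClustered;

-- The nodes in the cluster configuration for this instance
SELECT NodeName
FROM sys.dm_os_cluster_nodes;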

OK, what is Active/Active Clustering, then?

I like to think of it as two Active/Passive clusters that happen to live on the same set of hardware. To illustrate:

Active-Passive Cluster Example 1

This example cluster is a two-node Active/Passive cluster that contains a single SQL Server Instance, CLUS1\ACCOUNTING. This Instance has its own set of drive letters for its DB, Log, and TempDB storage (say, E:, F:, and G:). If both nodes are healthy, Node2 never has anything to do. It sits around twiddling its bits, just waiting for that moment when a CPU melts down in Node1, and it can take over running the ACCOUNTING instance to show how awesome it is.

Active-Passive Cluster Example 2

This is another two-node Active/Passive cluster, running a single instance, CLUS2\TIME (you know, because internal time tracking software is that important). Like CLUS1\ACCOUNTING above, it has its own set of drive letters for its storage needs (M:, N:, and O:). In this case, the SQL Server Instance is on Node2, leaving Node1 with nothing to do.

What would we get if we mashed these two clusters together?

Active-Active Cluster Example

We would get this, an “Active/Active” SQL cluster. It’s logically the same as the two previous clusters: it just so happens that instead of the second node doing nothing, it is running another independent instance. Each instance still has its own set of drive letters (E:, F:, and G: plus M:, N:, and O:). Just like in the “Accounting Cluster” above, if Node1 were to fail for some reason, the CLUS1\ACCOUNTING instance would be started up on Node2, this time, alongside CLUS2\TIME.

So, is doing this a good idea?

Maybe. Above, I talked about how a single-instance Active/Passive cluster with more than one heavily-used DB could be re-architected into an Active/Active cluster, with some of the heavy DBs on an instance living on one node and the rest on a second instance living on the other node. This gives you High Availability and spreads the load between two servers, right? It does, but what happens if one of the nodes fails and the two instances “clown-car” onto the remaining node? Are there enough CPU & RAM resources on the remaining server to support the combined load?

That illustrates one of the major things to consider when thinking about deploying an Active/Active cluster. Tim Ford (Blog | @sqlagentman) has an article over at MSSQLTips that goes into detail about making this decision.

In addition, Aaron Bertrand (Blog | @AaronBertrand) has a sweet little piece of automation to help manage resources when trying to squeeze the maximum number or size of clowns into a single car without someone’s foot sticking out in the breeze. Of course, it’s something that you hope never actually gets used, but it’s a good way to wring the most performance out of an Active/Active cluster.
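I won’t reproduce Aaron’s automation here, but to give a flavor of the kind of thing such a script does, here’s a rough, hand-rolled sketch (not his code, and the numbers are completely made up for a hypothetical 32 GB node): when both instances end up clown-car’d onto one box, knock each instance’s max server memory down so they split the RAM instead of fighting over it.

-- Run against EACH instance after a failover lands both on one node.
-- 14336 MB apiece is a made-up split for a hypothetical 32 GB server;
-- size it for your own hardware, and raise it back after failback.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 14336;
RECONFIGURE;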

Conclusion

I don’t know if this is the best way to explain this concept, but I like it. The Active/Passive cluster concept is usually understood, so I believe it is a nice stepping stone to get to the truth about Active/Active clusters. As with everything, there are tradeoffs and gotchas that need to be considered when designing server architectures like this, and the above-mentioned others have gone a long way to cover those topics.

There we have it, my first T-SQL Tuesday post. Hopefully it is useful and not from too far out of left field 😉

Alright, So Maybe I Get the Cloud…

In general, I’m a cynical, Negative Poo-Poo Head. In the last few years, I’ve become more and more this way when it comes to technology trends. My first reflex is to blame the increased time I spend working with developers day-to-day and my perception of how development is susceptible to Flavor-of-the-Week-itis. I don’t think that’s really the cause, though; I think I’m just getting more cynical as I get older in general. I know, this isn’t necessarily a good thing, but that’s a different blog post. The good news here is that I can at least still turn it off when I need to.

Anyway, the Cloud and my negative poo-pooness.

I’ve basically spent more time making fun of Cloud Computing than I have learning about it. “Oh, it’s the dumb terminal/mainframe paradigm all over again”, “There’s something wrong with this whole thing when you can buy a shipping container full of servers, put it out back, and call it a ‘private cloud’”, “It’s just the latest buzzword/flavor-of-the-week”, on and on. Honestly, even though Brent, Buck, and others have been saying it’s the Way of the Future™, I didn’t buy it.

Well…I think I’m on board now.

What changed my mind?

I don’t think I can nail it down to one thing, but I can point to a couple things that have been going on lately that played into this change of heart.

Last Friday was the Nashville SQL Users Group Meeting. Presenting was Brian Prince (Twitter | Blog (lookit dat Gamerscore)) on SQL Azure. I was pretty interested in this topic because I wanted to actually learn something about this stuff that I make fun of. OK, that and I was a little curious to see what all the fuss is about.

Although I had to leave early due to an unfortunately-scheduled meeting, I did get to see most of Brian’s presentation. I was impressed with a lot of what I saw. I was impressed by the shipping containers full of servers that make up open-air datacenters. I was impressed by how each container can have its own genset backup power sitting next to it, how the servers inside live at 95° F, are cooled primarily by what basically amount to Swamp Coolers, and pretty much never get touched once they’re racked up. During this part of the presentation, the Sysadmin in me was clamoring to give up the DBA thing and figure out how to get a job building/supporting this stuff, because it looks just that awesome…

Brian then went into the components of the Azure platform, where SQL Azure sits in it, and how it is supported, replicated, and made available. We also got to see how you actually use the system, along with some good arguments about DBs that we all have in our environments that would be good candidates to move to the cloud. The prices seem pretty good (I haven’t done any extensive calculations) when compared with running small but still mission-critical DBs in-house… It all actually sets up a pretty good argument.

All of this good info and eyes-on examples went a long way to getting me over whatever I was afraid of or didn’t buy about the system.

Buck has been going on about this whole thing for a while, and even put his money where his mouth is and is moving over to the Azure team. At first, I thought that was a loss for the SQL Community, but I think with my improving outlook, I don’t feel that way anymore. I believe it will be for the better, and I can’t wait to hear more from him in the future.

These are a couple of the things that have been going on lately that are beginning to change my mind about the cloud. I’m looking forward to learning more about it and considering it as an implementation option in the future.

Time to stare up at the night sky again

Well, it didn’t take long for me to break from writing good, useful SQL posts and drop into general geekery.

Long story short, the Earth is closer to Jupiter right now than it will be again until 2022, which makes Jupiter very bright in the night sky. If you go outside and look tonight (which you should), you will see a bright thing in the sky to the southeast of the Moon. That’s Jupiter.

So, that’s fairly cool. But there’s more!

Get some binoculars and look again. Depending on how good your eyes are and how powerful your binocs are, you will see three or four of Jupiter’s four big “Galilean” moons. They’re running in a plane from lower left to upper right. Won’t be able to see any rings without a telescope, but the moons are pretty cool. At midnight it should be more-or-less overhead, which is when everything will be the brightest & easiest to see.

That's no moon...there's four of them (clicky for big)

The pic I took here was with Tammy’s Pentax K10D on a tripod with the crappy 200mm telephoto that we have. I wasn’t even going to do this until @CanSpice said that he had good results, so out I went. Focus ring doesn’t go far enough over to make this really sharp, so between that and, you know, the whole “we’re moving” part, this is as good as I can get with the equipment we have.

Spaceweather.com has some ridiculous pictures posted from other people who have real equipment, along with some more info.

Uranus (heh*) is also in the vicinity (only one degree off), but I wasn’t able to find it–not enough power. All of us (Sun, Earth, Jupiter, Uranus) are lined up in a row, which is why everything is so close together.

I would expect that there will be another few nights when Jupiter is really visible, so if you don’t see this tonight, all hope is not completely lost.

* I’m sorry, I still, and probably will always, snicker at this

“Well I made a mistake today”

Getting mails from a Developer that start like this almost always leads to awesome. This turned out to be one of the times when it wasn’t as awesome as it could have been, but did give me the opportunity to spread some knowledge (which I don’t get to do very often, because, well, I’m not that smart).

This situation was the old “oops, WHERE clauses are a good idea with DELETE statements.” The good news is that this was in Development, so it wasn’t a giant fire. Although I didn’t see the message right when it came in, I did see it in time to get to it before that night’s backup ran (we just keep one backup file in Dev and overwrite it every evening). I probably could have pulled from Production or Test instead of restoring a 140+ gig DB for a 299 row table, but we’ve got the space, more IO than God, and it was a Friday night when nothing out of the ordinary was going on. Table restored, life goes on.
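For the curious, the recovery itself was nothing fancy. Here’s a rough sketch of the approach (every database, file, and table name below is made up for illustration): restore the nightly backup side-by-side under a different name, then copy the missing rows back into the live table.

-- Restore last night's Dev backup under a new name
-- (logical file names and paths are hypothetical).
RESTORE DATABASE DevDB_Recover
FROM DISK = N'D:\Backup\DevDB.bak'
WITH MOVE N'DevDB_Data' TO N'E:\Recover\DevDB_Recover.mdf',
     MOVE N'DevDB_Log'  TO N'F:\Recover\DevDB_Recover.ldf',
     STATS = 10;

-- Copy the deleted rows back into the live table.
-- (If the table has an IDENTITY column, SET IDENTITY_INSERT ... ON first.)
INSERT INTO DevDB.dbo.SomeTable (SomeColumn1, SomeColumn2)
SELECT SomeColumn1, SomeColumn2
FROM DevDB_Recover.dbo.SomeTable;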

Actually, there were a couple points that I was able to make with this situation.

First: Tell your DBA when things go bad!

In our situation, with the backup file getting overwritten every night, if a Developer makes a mistake like this, they have to let us know before 8:00 the day of in order for us to be able to do anything about it. The guys/gals have to first realize something bad happened, and then get to us right away in order to recover. If they sit on it until the next day, it is too late.

Second: BEGIN TRAN is your best friend.

When running DML, manually start and end transactions. Sure, SQL Server’s default autocommit behavior is nice and easy and means you don’t have to think about transactions, but it can become your worst enemy very quickly. All it takes is either missed highlighting before mashing F5 or an unfortunately-placed closing paren.

BEGIN TRAN? (Skip this paragraph if you already know.) By default, SQL Server runs in autocommit mode. This means that even though you don’t type it out, when you run a statement, SQL Server wraps it in a transaction, runs your stuff, and then commits it for you. By manually starting a transaction with BEGIN TRAN in front of your UPDATE, DELETE, or whatever, you take control of this instead of letting the engine do it for you. That means you can run your statement(s), check the results, and then COMMIT or ROLLBACK yourself. In short: manual transaction control.

This one takes some diligence, because it’s easy to be complacent. I’m doing a simple little UPDATE statement, I didn’t make any mistakes, everything will be fine. Of course you think that—you wouldn’t run any statements that you didn’t think were right, would you? This is why you have to tell yourself to type BEGIN TRAN every time. It only takes once to really ruin your day.
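To make the habit concrete, here’s the basic pattern I’m talking about (the table and key names are made up):

BEGIN TRAN;

DELETE FROM dbo.Orders
WHERE OrderID = 12345;

-- @@ROWCOUNT reflects the DELETE that just ran; make sure it matches
-- what you expected (here, exactly one row).
SELECT @@ROWCOUNT AS RowsDeleted;

-- If it looks right:
COMMIT TRAN;
-- If it doesn't:
-- ROLLBACK TRAN;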

OK, Third: COMMIT TRAN until it throws an error

This is another tip that I learned from our senior DBA on probably my first or second day on the job. Basically, when you commit your user transaction, keep trying to commit it until SSMS reports an error (trying to commit a transaction when there isn’t one open). Why? Glad you asked!

Create a table & put a couple rows of data in it:
CREATE TABLE TransTest (
    ID      INT         IDENTITY(1,1),
    Name    VARCHAR(20) NOT NULL
)

INSERT INTO TransTest (Name)
SELECT 'Smythee'

INSERT INTO TransTest (Name)
SELECT 'Bob'

Next, say you want to delete Bob from the table. Bob was never any fun anyway, was he? Because you’re heeding the above advice, you are going to wrap this simple one-row delete in a Transaction. You run the following:

BEGIN TRAN

DELETE FROM TranTest
WHERE Name = 'Bob'

Whoops, you fat-fingered the table name (TranTest instead of TransTest) and didn’t notice until you ran it, and it threw an “Invalid object name” error.

Fix it and run it again:

BEGIN TRAN

DELETE FROM TransTest
WHERE Name = 'Bob'

This runs OK, you double-check the contents of the table, everything looks fine.

Next step is to run COMMIT TRAN. That runs without error, and you go on your merry way.

But there’s a problem: run SELECT @@TRANCOUNT and see what you get. You should see one transaction still open. Why is that?

When the first batch was run, a transaction was opened. Because the typo’d table name couldn’t be resolved until run time (deferred name resolution), the BEGIN TRAN executed before the DELETE bombed, so that transaction is still there. When you fix the statement and run BEGIN TRAN again, you now have a nested, second transaction. Running a single COMMIT decrements the transaction count, but it doesn’t actually make your changes permanent, and it leaves one transaction open. Because that outer transaction still holds the locks from your DELETE, it will block other statements looking to operate on the TransTest table.
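If you want to see the nesting behavior by itself, a quick scratch-window demo makes it obvious:

BEGIN TRAN      -- @@TRANCOUNT = 1
BEGIN TRAN      -- @@TRANCOUNT = 2 (nested)
COMMIT TRAN     -- @@TRANCOUNT = 1 (nothing is actually committed yet)
COMMIT TRAN     -- @@TRANCOUNT = 0 (now the work is really committed)
COMMIT TRAN     -- error: COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION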

Moral of the story? Mash F5 on COMMIT TRAN until SSMS throws an error.

What was I talking about again?

Oh right, our poor developer.

In the mail I sent back to him, I commended him for being smart about letting us know right away when a mistake was made, as it allowed us to actually get the data back (or mostly so). I also recommended manual transactions, because they can save your tail.

I don’t know if he’ll take the advice to heart, but he at least has the tools available to him now if he wants to use them.