Looking Ahead to 2011

For as much as I write tasks/to-do items down, am unhappy when I don’t get things done on-time, and really enjoy learning and doing new things, lists of Goals sure rub me the wrong way!

2011 Task

Just don't call them "goals"

I’ve always been that way and I haven’t been able to figure out what my deal is. At first, I thought it was the idea of someone else dictating to me what I should do, but I feel this way when I cook up “goals” for myself, too. It’s not being averse to work, because when it’s fun, it doesn’t matter. Fortunately, just about everything related to IT that isn’t writing .NET code is fun to me, so it’s not that, either.

If nothing else, I at least understand that I’m in the minority here, and can play along at work (err, “play along” sounds a lot more negative than I mean it to be) and understand that Goals make the world go ‘round. I’ll fill out the tool, put them on my Task list for myself (maybe even put “Goals” in the name), and everyone will be happy at the end of the day.

At any rate, since this can’t be a “goals” post ( 😉 ), it’s just going to be some random mumblings about where I want to go next year.

2011: The last full year

OK, I don’t really think the world is going to end in 2012, because Nobody expects the Spanish Inquisition! and Matthew 24:36. You’re right, that was dumb. Let me try again.

2011: I pick a direction and [maybe] stick to it

The biggest thing coming up next year for me is career-related.

The whole reason I switched from being a Sysadmin to this DBA thing is because of Business Intelligence. I’m doing the plain-Jane DBA partly because it was there and partly because I saw it as a way to get my foot in the door of “real” data land. For the most part, that has worked out quite well, and, as it turns out, I like doing this full-time, too!

What this means is I need to decide which way I want to go with my life: Do I want to stay a full-time DBA or do what I can to follow what SQLChicken (blog | @SQLChicken) has done, and switch over to full-time BI work (he actually reminded me the other night that Pragmatic Works is hiring, but I said that the travel wouldn’t work for me. On the other hand, if I could drive or fly a little airplane around TN, then we’d need to talk 😀 )? There’s the potential for a heavy BI opportunity at my current job, but I don’t yet know how that is going to play out. It might be a good way to test the waters a bit, but I won’t know that for at least a month or two.

Regardless of how that goes, I need to get some new things going on job-wise soon, because I’m so over SQL 2000, I don’t even know what to say about it.

Other Data Stuff

SQL-related travel is something that we should be doing more of. Neither of us has a very good (read: all but nonexistent) travel budget at work, so we have to foot the bill for this on our own. Yes, we know.

We couldn’t swing PASS this year, and that wasn’t any fun. I’ve got SQL Rally on the calendar, but whether we make it remains to be seen. It would be much easier for us to pull off, plus it can easily be driven to (we’re done flying Part 121 ops for right now, thanks to the TSA). There are some SQL Saturdays that aren’t terrible to get to, and those are, of course, good events in their own right.

The biggest problem we have with travel these days (and this goes for all travel, not just SQL travel) is the dogs, since we haven’t been able to figure out what to do with them since moving down here. Taking them along is almost impossible (lodging at destination), and boarding them at a place that doesn’t just leave them in crates all day gets real expensive real fast. We could get an RV, I guess, but we’d quickly become that couple at events, hahahaha. Still working on this.

Next year, I want to do more experimentation/exploration/goofy stuff with SQL Server at home. I’ve got a couple instances running now, but I’d like to branch into playing with pre-release stuff. This would be easier with a hardware budget, but I’m already pushing that envelope as it is. Part of what’s holding me back in this department right now is the simple fact that I don’t really know what to do with myself if I don’t have big DBs with a lot of users hitting them to really make things work. Basically, I don’t function well when I don’t have a problem to solve :-)…I’m going to work on either fixing that next year or figuring out a way around it.

Big Bookmark, Skinny Book

I'm going to have bookmark problems

I’m going to try to read more next year. I have no idea how I’m going to pull this off, because I read stupidly slow, so it takes me forever to get through anything. I’ve cleaned up my backlog of SQLServerCentral dailies so they’re not so overwhelming, and I’m cutting my losses on some older SQL books that I haven’t made it through. I don’t think I can do anything about my blog feeds, because so many of you guys out there write good and useful content (unlike me).

I hope to get a quick win on the book front by reading Tom LaRock’s (blog | @SQLRockstar) book, since it’s thinner than pretty much everything else I have around here. In fact, it might be a little too thin (see pic). I’m hoping that a quick win there will help me build some momentum. Maybe I’ll magically be able to read faster, too.

Other/Misc

The first other topic is this Blog. I need to spend more time and effort on it, especially when it comes to writing good technical content. So far, I haven’t done a good job at that. That’s partially due to my job situation, but I could be doing more anyway. I hate setting arbitrary numerical goals, but I think I’m going to do it here: 26 posts. That’s an average of one every two weeks, and I should be able to pull that off. I’m not going to get overly detailed and say that “20 of them need to have good technical content”, I’m only going to go for an overall number.

I used to bicycle a fair amount. It hasn’t been the same since I was in college, where I put 30-35 miles a week on the bike just going to class & work. Back then I did a fair amount of mountain biking, too (which is what I’m really in it for), and spent a couple years running races in the Indiana statewide series, DINO. The last few years have been really lacking in that department, both causing and exacerbated by me being fat.

Next year, I’m going to fix that & spend more time on the bike. I’ve already started to do some work on the trainer, and when it warms up, I’m going to move it to the road around the house. We live almost as far away from good bike trails as we did in IN, but I’m going to make an effort to not let that be in my way as much as it has been in the past.

This talk of biking has made me realize that I’m going to need some form of case or other padding for my new phone. The one time I went on a trail this year saw me doing an awesome, completely unintentional half backflip on the bike. I wound up flat on my back, still clipped into the pedals, holding the bicycle up in the air above me. The problem is that I carry my phone in my Camelbak; although the old Samsung i760 lived through that without a problem, I’m pretty sure the Focus would be broken in half. Soooo, need to do something about that.

My Non-Y2K-Compliant Logbook

The final thing for next year is flying. I got my Logbook out recently and thumbed through it. There’s some good stuff in there. The last time I flew was February 2005. I don’t even have the tail number of the airplane or the hours of the last two flights. Just have the landings (2 for one, 1 for the other). I know they were both in one/some of the Skyhawks at Lafayette Aviation, Tammy was with me for one of them, and I had to wait to take off on the single landing one because the visibility was crap**.

Anyway, since then, I’ve waffled back-and-forth between missing flying and being OK with how things are, because flying is freakin’ expensive. But, with the TSA thing and me more and more wanting to get back into it, I want to start flying again next year if I can find the money for it. This does mean that I am supposed to work on my Instrument rating (so I can fly in the clouds, basically), because I promised myself that a number of years ago. So… there’s the last thing for next year: Get Instrument Ticket if I can find the room in the budget.

There we are. 2011. Please don’t hold me to this stuff, because, you know, they’re not goals; just stuff I want to get done 😀

**That one was actually sorta fun, because it was late in the afternoon and the visibility was crap due to a fairly dense haze layer. We finally took off (oh hey, this is the one with Tammy. This means she was only with me for one landing. Hahaha, stay tuned for awesome stories.), flew around the pattern, annnnd then turned final, which happened to be directly into the sun. Bam, couldn’t see anything. I mean, seriously, in-flight visibility was maybe 1/2 mile, and I was on a mile final. I grumped about this to the controller as I flew runway heading, knowing that it would eventually show up ahead, and he was all, “Yeah, the last guy said it was pretty bad.” Yeah, thanks for the heads-up, pal.

Help, Comcast’s DNS is sucking again!

This past Sunday, if you lived in parts of the Midwest (I know IN & IL were on the list, but I don’t remember who else) and have Crapcast Comcast as your ISP, you might have had problems surfing the tubes, because they were having trouble with their DNS servers again. News of this spread through Twitter (at least that’s how I heard about it, since I wasn’t affected directly), in a bizarre twist of news-spreading similar to people sending email at work to let everyone know that the mail server is down… Obviously, with smartphones & such, this is still a good way to spread such information, but it still strikes me as a bit funny 😉

(I’m not going to go into what DNS is here. I assume most people reading this are already going to know what it is, what it’s for, and why it’s important, so I’m going to skip this whole part.)

The main way to take care of this is to set your machine to use other DNS servers. The usual suspects recommended during times like this are Google’s two public servers, aptly named google-public-dns-a and google-public-dns-b.google.com (-c resolves to a server, but its IP isn’t as easy to just remember (more on that in a sec). Or, it’s a Google honeypot for smarty pants 😀 ). There are also OpenDNS’s servers, which apparently do a bunch of other stuff, like redirecting you to the right place when you misspell something. I don’t really know how well that actually works; this is just what their Wikipedia page says.

Google’s own page explaining how to set these up is a good reference for what to do if you don’t know. They’re pretty good instructions, so knock yourself out there. These servers’ IPs are 8.8.8.8 and 8.8.4.4, which are fairly easy to remember (and type), making it easy to set up on machines on-the-fly, which is important if you’re looking to set this up while DNS is down and you can’t get to google.com…

Here’s what I prefer to do

Google’s instructions are fine to set up a machine or two, but what if you’ve got half a dozen boxes in your house and/or you want a bit of a more robust solution? Glad you asked!

Most likely, the machines on your home network are serviced by your router’s built-in DHCP server. At the same time it provides the machines with IP addresses to use, it also provides them with one or two DNS servers. It should be getting these addresses from your ISP in basically the same way—it’s a DHCP client instead of a server on its external-facing port.

Obviously, under normal circumstances, your ISP is going to be providing their own DNS servers for this use, as it allows them to load custom search pages and stuff if they want to. (Did Comcast ever go through with doing that?) If your router supports it, however, you can override these settings, and that is where the more robust solution to this DNS problem comes in.

Somewhere in your router’s settings is probably a section called “Basic Setup”, “Basic Settings”, or something similar. This section will include things like the router’s name, authentication settings in case your ISP requires that, and possibly more advanced things like MAC address spoofing. What you’re looking for here is a section that refers to “Static” or “Manual” DNS servers like these:

WRT54G Settings

Old Linksys Settings screen

Netgear Settings Screen

These are a couple of the routers that I have available to look at. The first one is an old Linksys WRT54G (if you aren’t actively using one of these at home…well…you have my sympathy) and the second one is our travel router, some random Netgear thing. These are the general settings areas that you’re looking for.

The main idea here is to put Google’s DNS servers’ IPs into those manual configuration boxes. You could forgo your ISP’s servers altogether, put in both of Google’s servers, and call it a day. For some reason, though, that strikes me as something that isn’t the best idea. I don’t have a real good reason for it; it’s just one of those things that doesn’t feel right. What I would do is get one of your ISP’s DNS servers and put that in as your primary, then one of Google’s servers as the secondary (or tertiary, if you have an option for 3).

That strikes me as the safer option, but there is one thing to keep in mind: if your ISP ever changes their DNS servers, since your settings are manually configured, that change won’t be picked up automatically like it otherwise would be. Obviously, if you’ve got one of Google’s servers in there, things will still work, but name resolution may be a bit slower if your machine picks the now-broken IP first and has to wait for that to time out before it hits the second server. This is just something to weigh; I don’t know that there’s a good reason to not just put Google’s DNS servers in there and go on with life, other than it’s almost definitely an “unsupported” configuration in your ISP’s eyes.

The reason I advocate this method of setup is simple: When you do this, these addresses will be passed on to your workstations when they get IPs to use from the router. This will happen for every machine that connects to your network, so you only need to set this up once to use these other servers.

While you’re in here messing around, it is important to leave the router set to get its external IP via DHCP (or whatever else your ISP has told you it needs to be set to). If you hard-code the external IP instead, things may work OK for now, but at some point in the near future, it’s probable that things will stop working altogether. If your router can’t manually set DNS servers without also taking the IP address off of Automatic/DHCP, then you won’t be able to set DNS up this way, and you’ll need to use Google’s instructions to set up each of your machines individually.

Nutjob solution

Or, forget all this, run your own DNS server, point it at the Root Hint servers, and thumb your nose at Comcast’s inability to do simple things like keep some DNS servers happy.

I have no idea who would do such crazy things, though <.< >.>

Here Be Dragons: New Cell Phone

Cell phones in my family go way back…Pretty much as far back as is possible in the US:

Ameritech Phone

1984 called...

I don’t remember anymore if that was my dad’s first or second phone. I have no idea what the deal with the ammo box is. I do remember that the handset clip would get bolted to that little bracket on the lid & then you could leave the box sitting in the middle of the floor of his pickup and the phone would be right there. Yeah. AMPS analog. 3-watt transmitter. Good thing the antenna was on the roof. Those were the early Ameritech days. The phone wouldn’t even get a signal at the house—he had to get up towards Rensselaer (Indiana; look at a map, see the boondocks where I’m from) before he’d be able to use the thing.

‘Course, back then everyone called them “car phones.” It stayed in the car, and at one point was even hardwired to the vehicle so that if he wasn’t in the truck at the time, it would honk the horn if someone rang. That seems almost mind-blowing to me now. Once the whole 3 watt thing started to go away, things got smaller, and they started to come in little nylon bags (“bag phones”) that were more self-contained, and at least had little rubber duck antennas directly attached to them. It was on a BNC connector, so you could plug in the roof antenna if you didn’t like having what was probably still a whole watt radio broadcasting from the general vicinity of your right knee.

Anyway, in the mid-90s, I was assigned a Motorola DPC 650 by my parents when I started to drive (I found out that Tammy had a 650 at about the same time I did). That thing was still a beast, and pretty much never left the truck. That phone began my love affair with Motorola flip phones served by our old friend GTE Communications. Through high school, I went through a number of StarTAC 7860 and -68s (I wore the phone, so I broke a lot of antennas, and other bad things happened). To this day, I sometimes wish for the simplicity of a StarTAC.

In summer of 2000, while I was in college, I got my own account and number with GTE. That brought the horrid piece of crap that was the T720. I don’t even want to talk about it. Still have the phone though; it’s pretty funny to fire it up and look at it now. It had a one-line display on the outside so you could see who was calling (OOOOO!). I also seem to recall you could install 3rd party applications on it, but I don’t remember much about that. Somewhere in here GTE became Verizon, of course. I can still hear James Earl Jones say, “Welcome to Verizon Wirelesssss…”

Things looked up after that: Motorola V710. This thing is honestly probably my favorite phone of all time. It was solid, its radio was great, and had ridiculously awesome call quality. It had a big display on the inside, a decent-sized one on the outside, and really good battery life. My first one of those met an untimely end when I wound up running over it with my pickup. While it was open. Face down. Did I mention that it was just at the right place that the front right tire of the pickup sat ON the phone for a few hours? The fun thing about that is that the phone still worked (!), and I used it like that for a few weeks before it decided it was done. Yay phone insurance!

The Smartphone Years

Because I was addicted to my job at the time, and liked the idea of getting mail in my pocket, I bought a Samsung i760 in December 2007. That is a Windows Mobile 6.1 device with a slide-out keyboard. I bought that phone because I wanted my phone to be just another Outlook client. I didn’t want to have to fight with some third-party Exchange connector and Blackberries were right out from the get-go. I have a giant rant about RIM and how I cannot comprehend how so many companies rely on their system for mail, but that’s a different story.

This was a big departure for me for a couple reasons: it was the first phone I had ever owned that wasn’t a Motorola flip phone, and it was also my first smartphone. I was pretty worried about this at the time. As it turned out, though, everything was great. OK, except for the call quality. This was definitely a smartphone first. The radio wasn’t all that good, either. However, within about 10 minutes of having it home, I was scrolling (with its stylus, hahahaha) through my Exchange mailbox, which I thought was the coolest thing since sliced bread at the time. You know what else I liked about the phone? Windows Mobile 6.1. There, I said it.

Three years later, I jump off a cliff…

After ten years with the same Verizon account, phone number, and I’m quite certain, voicemail message… I switched to the company that carries Ma Bell’s bastardized name and a completely unproven phone OS on a device that doesn’t have a hardware keyboard. This could go terribly.

Yes, I bought a Samsung Focus with Windows Phone 7. Tammy and I both got one (ATT BOGO Black Friday deal). This was brought on by our house (The Osburn Hideaway) being in a bizarro black hole where there is no Verizon coverage, Tammy and I wanting to combine to a family plan, and, well, ATT having WP7. Because at the end of the day, I’m just that big of a fanboi.

Everyone on Twitter knows that BrentO just loves Windows Phone 7. I do agree with just about everything that he says about it. We’re taking huge gambles that someone at Microsoft has this dev team’s throttle on the floor and all of the shortcomings the device has will be taken care of soon (I’m not going to talk about anything specific, because I have the same grumps that everyone else has, and all of those people are better writers than I am). The rumored sales numbers don’t look all that good so far, but I don’t know that anyone is all that surprised by that. If, after the first major update comes out and possibly another round of hardware, those numbers don’t start to go up…then I’m going to be pretty worried. I don’t expect iPhone-like numbers, as that device changed the freaking world, and it’s the likely-not-to-be-beaten incumbent.

So, that’s my cell phone story. I don’t expect to write about this much more, because like I said above, I’m really not smart enough to come up with anything new here on my own, so you’ll be able to see what I’m going through by everyone else writing about the OS and poo-pooing on its poor adoption rate. I mean, unless something really bad happens and I bail early.

…and if this doesn’t work out and that happens, I’ll… <deep breath>…probably get an iPhone.

T-SQL Tuesday #12: Why are DBA Skills Necessary?

TSQL Tuesday 12

T-SQL Tuesday #12 hosted by Paul Randal

T-SQL Tuesday is being hosted this month by the great Paul Randal (blog | @PaulRandal). This is awesome, because it’s Paul, but also not so awesome, because it means Paul is guaranteed to read my stuff. I’m not nervous about this at all, I don’t know what you’re talking about.

I had a couple ideas for this, and if I had gotten started on this sooner, I probably would have written two different posts. Too many things didn’t work out right for that, so there’s only this one. Hopefully I didn’t choose poorly. Here we go…

So…Why are DBA skills necessary?

Well…because they are!

I mean… Are car mechanic skills necessary? Are pilot skills necessary? Are business skills necessary?

OK, you’re right, it depends. It depends on whether you’re planning on rebuilding a Rochester carburetor and having the engine idle afterwards. Or landing a 747 on an actual runway and not bending anything. Or being the CEO of a multinational company and having it continue to grow, prosper, and make money.

I don’t think those are extreme examples. Just like the above, if you want to build, support, and continue to improve a highly available, scalable, well-performing database system, you need DBA skills to make that happen. Sure, even if you don’t know what you’re doing, you might get lucky and find a washer in the carb’s air horn that had locked the air valves on the secondary bores closed, the removal of which led to unprecedented quantities of burning gasoline, but you can’t run on luck forever.

But…Why?

All these things are hard. I know that rebuilding a Quadrajet is hard, because those things are a pain in the ass. I know that landing a 747 is hard, because it’s a giant airplane that isn’t slow and might have lots of people in the back. OK, technically I don’t know that it’s hard to run a multinational corporation, but I’m pretty sure it is.

Know what else is hard? Databases. Databases are hard.

This isn’t about platforms or anything like that. Databases are hard when they’re Oracle DBs or when they’re MySQL DBs. Databases are hard when they’re Access “databases”, although in a different way (and in a way that’s not hard if your name is Brent Ozar).

OK, why are databases hard?

They require concentration. They require a wide range of technical skills from all parts of IT to accomplish successfully. They require you to know where to look and who to ask when there are things that you don’t know. They require being able to deal with the pressure put on you when things aren’t going right.

Perhaps more so than other areas of IT, when things aren’t going right, lots of people aren’t happy. Databases contain one of the most important assets of their owners: their data. Having this data safe and available in a timely fashion is a requirement if that data is going to be useful at all. Being able to troubleshoot and fix problems while juggling those unhappy people isn’t always an easy task.

Databases are hard because there are a lot of places for things to go wrong, and a DBA needs to be able to deal with all of those. There’s data modeling, which can cause never-ending problems when done incorrectly. There are server administration tasks, which can have fundamental performance impacts. There is a need for security skills to keep that all-important data safe from all kinds of bad guys. The list goes on.

Why is this point missed?

I don’t know. Wish I did, pretty sure I could make a lot of money 😉

The problem isn’t with us. For the most part, IT folks already know that DB work is hard. We don’t need to convince ourselves. All manner of IT management and/or business owners need to know that DB work is hard, just like most other forms of IT work. This is where the problem is.

It seems like DBs get short-changed a lot, doesn’t it? When things start out, the little Access DB works fine for the few users that there are. As the business grows, either a Dev or a Sysadmin who knows a bit about what they’re doing gets a hold of a SQL Server license and migrates the system over. This is completely fine for small to even medium-sized shops. Hardware and SQL itself will run really well out of the box for the vast majority of applications out there.

The problem is when you cross that line. The above-mentioned migrated Access app will run fine for probably a long time, but the next thing anyone knows, it’s five years down the road and the blocking is so bad in the poorly designed & maintained database that the users just know to get coffee across the street when they’re doing certain tasks because it’s going to be a while.

Pain like that can be avoided by having those DBA skills around from the very beginning. They don’t have to be FTEs. They don’t even need to be dedicated resources. They are, however, necessary skills, and every business that has a database (that should be a pretty high percentage) should have someone available to take care of these tasks at least on a part-time basis, even if it’s that Sysadmin who wants to learn the right way to do at least a few things.

Business owners might not think they need to spend the money now, and that very possibly may be correct. However, if they don’t, they need to at least know that the need will be coming someday. It’s only going to be more expensive later, and hopefully that day won’t be a time when a disk has died and the backup job hasn’t been working properly for three weeks 🙁

Maintenance Plans: They don’t like it when DBAs leave

Discovered some…interesting…information about Maintenance Plans a few weeks ago when we had a DBA leave. I don’t think it warrants being called a “bug”, but it is behavior that I don’t think is ideal, and it definitely can cause issues for us and others. The good news here is that I did get a workaround nailed down (thanks to the help of Random Internet Guy), although we haven’t decided whether or not we’re going to use it.

How this all started

Recently, we had one of our DBAs leave. (Incidentally, this is the first time ever that I’ve had this happen to me and also the first time one has left this company.) After he left at the end of his last day, I pulled all of his access to the Production systems. We knew that any Agent jobs that he happened to own would start failing, and sure enough, there were a few failures that evening and over the course of the weekend.
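If you want a heads-up on which jobs are about to start failing before someone walks out the door, a quick check of job ownership does the trick. This is a sketch that assumes sysadmin-level access to msdb:

```sql
-- List SQL Agent jobs and their owners so that jobs owned by a departing
-- DBA can be found (and reassigned) before they start failing.
SELECT j.name                   AS job_name,
       SUSER_SNAME(j.owner_sid) AS owner_login,
       j.enabled
FROM msdb.dbo.sysjobs AS j
ORDER BY owner_login, job_name;
```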

One of these failures turned into a little more of an involved situation: the Maintenance Plan he had set up on our only production SQL 2008 box failed. Being used to SQL 2000, we expected this to be an easy fix, and life would go on. Except it wasn’t.

Toto, I’ve a feeling this isn’t SQL 2000 any more

The first attempt to fix this was to change the owner of the SQL Agent Job to ‘sa’. That’s our standard job owner, for consistency and because it makes the job impervious to people leaving. The job still failed, although the error was different this time. Obviously the Agent Job owner had some role in all of this, but it wasn’t the whole story, as things weren’t fixed.
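For reference, the owner change itself is a one-liner via the documented msdb proc (the job name below is made up; substitute your own):

```sql
-- Reassign the Agent job to 'sa' so that it survives personnel changes.
EXEC msdb.dbo.sp_update_job
    @job_name         = N'MaintenancePlanJob.Subplan_1',
    @owner_login_name = N'sa';
```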

After changing the Job owner, this is the error that occurred:

Executed as user: <SQL Service acct>, [blah blah blah] Started: 11:25:38 AM
Error: 2010-10-06 11:25:39.76  Code: 0xC002F210  Source: <not sure if this GUID is secure or not> Execute SQL Task
Description: Executing the query “DECLARE @Guid UNIQUEIDENTIFIER EXECUTE msdb..sp…” failed with the following error: “The EXECUTE permission was denied on the object ‘sp_maintplan_open_logentry’, database ‘msdb’, schema ‘dbo’.”. Possible failure reasons: Problems with the query, “ResultSet” property not set correctly, parameters not set correctly, or connection not established correctly. End Error
Warning: 2010-10-06 11:25:39.79  Code: 0x80019002  Source: OnPreExecute
Description: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. [blah blah blah]
Error: 2010-10-06 11:25:40.58  Code: 0xC0024104  Source: Check Database Integrity Task
Description: The Execute method on the task returned error code 0x80131501 (An exception occurred while executing a Transact-SQL statement or batch.). The Execute method must succeed, and indicate the result using an “out” parameter. End Error
Error: 2010-10-06 11:25:40.64  Code: 0xC0024104  Source: <another GUID>
Description: The Execute method on the task returned error code 0x80131501 (An exception occurred while executing a Transact-SQL statement or batch.). The Execute method must succeed, and indicate the result using an “out” parameter. End Error
Warning: 2010-10-06 11:25:40.64  Code: 0x80019002  Source: OnPostExecute
Description: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. [blah blah blah] End Warning
DTExec: The package execution returned DTSER_FAILURE (1). Started: 11:25:38 AM Finished: 11:25:40 AM Elapsed: 2.64 seconds. The package execution failed. The step failed.

A Permissions error in MSDB? Huh?

I dug a little, and the short of it is that the Maintenance Plan’s connection string was still set to the departed Admin’s sysadmin account (a SQL login). All of the work that the MP was trying to do, it was still attempting as the other DBA. This, of course, isn’t going to get very far.

Maintenance Plans have connection strings?

This seems like a dumb question to ask, but it’s one of those things that I had glossed over in my head and not actually thought about. The answer: Sure they do! MPs are basically SSIS packages these days, and of course those have connection strings/settings. MPs are no different. Seems like a “DURRR” now, but hindsight, yadda, yadda.

This connection string is controlled by the “Manage Connections” button in Maintenance Plan toolbar. If you never mess with this when you create an MP, it uses an auto-generated connection to the Local Server, using whatever name you used to connect to it (there’s another topic right there, but someday I’ll talk about some quirks with Aliases I’ve run into, and I’ll address that then). This auto-gen’d connection also uses the same auth credentials you are currently using. If you only use Windows Auth, you are stuck with that; if in Mixed Mode, a SQL auth account is an option, and you can set it to another account. Of course, you would need to know that account’s password.
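I’m not aware of a supported way to query the connection string directly, but since MPs are stored as SSIS packages in msdb, you can peek at the underlying package XML, connection manager and all. A read-only sketch (the plan name is made up, and this isn’t a documented interface):

```sql
-- Look at the SSIS package XML behind a Maintenance Plan. packagedata is
-- stored as image, hence the double cast to get readable XML.
SELECT p.name,
       CAST(CAST(p.packagedata AS varbinary(max)) AS xml) AS package_xml
FROM msdb.dbo.sysssispackages AS p
WHERE p.name = N'MyMaintenancePlan';
```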

To fix our MP’s connection string, I could change it to my own credentials, but that of course doesn’t actually fix anything, and will leave an even bigger mess for whoever’s left if/when I leave or responsibilities change. I can’t set it to Windows Auth, because when you select that, it validates the auth right then and there. Since we don’t have Windows Auth SA accounts, that won’t work for us: the AD account that I’m logged on with doesn’t have enough rights. Something else needed to be done.

At this point I realized that this is a fair-sized problem and we can’t be the first shop to run into this; I mean, SQL 2005/2008 have been out for a while, and I’m sure lots of DBAs have left shops that heavily utilize MPs. Time to ask the Senior DBA.

Going nowhere fast

For as common as I expected this to be, I wasn’t finding much helpful information. There were some references to this error message, and even someone in the same situation I was in, but most of what I found were dead-end threads.

At one point, I thought for sure that @SQLSarg had it. This blog post contains the revelation that MPs have Owners! His main point of changing the MP owner is to change the owner that the associated Agent Job(s) are set to when you save the MP. Even though I had already changed the job owner to a valid account, I hoped that the MP’s owner had something to do with the credentials that were used in the Plan’s connection string.

Of course, it didn’t. No behavior change.

Aside: Even though it didn’t help with what I was trying to fix at the time, I believe that changing an MP’s owner to a shared or central service account is a good idea. Every time an MP gets modified, the Agent Job’s owner is changed to the same account as the Plan’s Owner. This might not even be you, and when that person leaves the company, you are guaranteed to have execution problems. There’s a Connect item open asking to make it easier to change the owner, so if you like Maintenance Plans and don’t hate yourself, it’s probably a good idea to log in over there and mash the green arrowhead.
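For reference, the owner change SQLSarg describes boils down to an UPDATE against the package catalog in msdb. A hedged sketch; the plan name and service account below are hypothetical, and on SQL 2005 the table is msdb.dbo.sysdtspackages90 instead:

```sql
-- Repoint a Maintenance Plan's owner to a shared service account,
-- so future saves set the associated Agent Job's owner to that account, too.
UPDATE msdb.dbo.sysssispackages
SET    ownersid = SUSER_SID('DOMAIN\svc_dbmaint')  -- hypothetical shared service account
WHERE  [name] = N'MyMaintenancePlan';              -- hypothetical plan name
```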

The more I dug, the more dead-end threads I read, the more almost-but-not-quite blog posts I ran across, the more I became resigned to one simple fact:

It is impossible to set the Connection String Credentials in a Maintenance Plan to an account that you don’t know the password for.

As that started to sink in, I asked #SQLHelp if that were, in fact, true. I got a couple responses that yes, that was the case (I would be specific, but this was a number of weeks ago, and since Twitter sorta sucks for finding things like that and I can’t remember who it was, we’re out of luck). Although not the news I was looking for, I was glad to know that the deal was pretty much sealed—we were going to need to figure out something else to do.

One last gasp

Since I don’t feel like I’m doing everything I can while troubleshooting a problem if I don’t grab for at least one straw, I decided to investigate something that was described in one of the dead-end threads that I had found while searching.

This thread on eggheadcafe is a conversation between “howardwnospamnospa”* and Aaron Bertrand, who we all know and love. This one “howardw” was in a pretty similar boat to what I was in, and was really close to getting it to work. He was only thwarted because of an apparent quirk when an Instance is in Windows Auth-only mode.

In the last post in the thread, he reports that when he assigns the Local System account (“NT AUTHORITY\SYSTEM”) as owner of the MP, and sets the connection properties of the plan to use Windows Auth, the plan runs. To attempt to duplicate this, I started with an existing MP on a Dev instance that I had set up (therefore my SQL auth account was the MP owner and the user listed in the Connection Properties) and proceeded with the following steps:

  1. Using the statements in SQLSarg’s post, I changed the owner of the MP to Local System.
  2. I added my Domain account to the Instance with SA rights.
  3. I opened the MP in question with SSMS, switched its connection to use Windows Auth (while logged in with the Domain account from Step 2), and saved it.
  4. I checked the MP’s auto-gen’d Agent Job to confirm that its owner was NT AUTHORITY\SYSTEM.
  5. Using a connection with my Domain account, I pulled my SQL account’s SA rights. Previously, this would have made the MP sub-plan fail.
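The T-SQL side of those steps might look something like this; the plan and login names are hypothetical, and Step 3 happens in SSMS rather than in script:

```sql
-- Step 1: make Local System the Maintenance Plan's owner
UPDATE msdb.dbo.sysssispackages
SET    ownersid = SUSER_SID('NT AUTHORITY\SYSTEM')
WHERE  [name] = N'MyMaintenancePlan';  -- hypothetical plan name

-- Step 2: temporarily grant the Domain account SA rights
EXEC sp_addsrvrolemember @loginame = 'DOMAIN\MyAccount', @rolename = 'sysadmin';

-- (Step 3 is done in SSMS: open the plan, switch its connection
--  to Windows Auth while logged in as the Domain account, and save.)

-- Step 4: confirm the auto-gen'd Agent Job is now owned by NT AUTHORITY\SYSTEM
SELECT j.[name] AS job_name, SUSER_SNAME(j.owner_sid) AS job_owner
FROM   msdb.dbo.sysjobs AS j;

-- Step 5: pull the SQL account's SA rights back out
EXEC sp_dropsrvrolemember @loginame = 'MySqlLogin', @rolename = 'sysadmin';
```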

I ran the job and, holy cow, it worked! 😀

That’s good, right?

Ehhhh, maybe. Setting the owner to Local System seems pretty dirty. I haven’t thought of a way this would be a security problem, but it still feels like a stupid hoop to jump through. For us, there’s the extra hoop of having to add our Domain accounts to the instances every time we want to set up or change connection info on an MP (once Windows Auth is set, it doesn’t re-validate when you save the plan, so you only have to do it that one time), then take them back out when we’re done.

Those bits aside, this workaround (if you will let me get by without calling it a “hack”) does allow setting up Maintenance Plans that will survive the departure of the DBAs who set them up. On a technicality, I achieved my goal.

We’ve talked about this setup very briefly at the office, but haven’t made a final decision on whether or not to go down this road or to abandon MPs for DB maintenance altogether. Hopefully we will get that decision made before someone else leaves.

But at what cost?

The main cost here is a lot of damage to my affinity for Maintenance Plans.

Using MPs in general, love them or hate them, is pretty much a religious debate. Usually I fall on the proponent side of those arguments. I think they’re easy to set up, they’re pretty flexible, they can dynamically pick up new DBs that get added to the system (not always an advantage), you can cook up some really crazy dependencies within them, and I like pictures & flowchart-y things :-).

After this whole ordeal? Pretty sure I’ve changed my mind. The default behavior of MPs breaking as soon as the account that initially set them up is no longer an SA is way too high a price to pay. What happens when you’re a single-DBA shop and you get honked off and bail one day? Near as I can tell, if your account gets restricted as it should, backups will stop working, but there won’t be anyone left to notice!
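If you want to know whether you’re already exposed, it’s cheap to check which Agent Jobs a departing DBA’s login still owns. A sketch, with a hypothetical login name:

```sql
-- Find Agent Jobs still owned by a particular (e.g., departing) login
SELECT j.[name]                  AS job_name,
       SUSER_SNAME(j.owner_sid)  AS job_owner,
       j.enabled
FROM   msdb.dbo.sysjobs AS j
WHERE  SUSER_SNAME(j.owner_sid) = N'DepartedDBALogin';  -- hypothetical login
```

Anything this returns is a job that will start failing (or quietly misbehaving) once that login gets disabled or demoted.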

In lieu of MPs, there’s at least one good option out there that has most all of the desired functionality. While this was going on, @TheSQLGuru posted a link to this set of SPs. I haven’t had time yet to really dive into these, but on the surface, they look pretty good.

Final Comment(s)

This entire thing bugs me. I’m almost certain I’m missing something. This should be a huge problem for lots of people. I feel like I’m missing a detail or it’s because we use SQL logins for the most part or some other thing. But… in all of the testing that I’ve done, bad things happen when the initial creating user gets SA pulled. Period. I can’t get past the feeling that I’m doing something wrong, but I haven’t been able to figure it out yet.

In the meantime… I think Maintenance Plans are evil after all…

* Apparently not to be confused with howardwSOLODRAKBANSOLODRAKBANSO[LODRA] (Eve-Online thing; it’s honestly better if you don’t ask)