Thursday, September 25, 2008
Working with Users
10:36 | Posted by Sean McCown
Ok, so I don't rant in here nearly as much as I could or probably even should, but here's a good one for you.
I got a call from a user who said that he was getting different results when he ran an SP under 2 different user accts. There were no errors, just one was returning results and the other wasn't.
So I got on and ran the SP under his acct and the other acct and I couldn't see any differences. So I looked at the code real quick and didn't see anything weird and then I wrote him to say that nothing was wrong. He insisted that there was still a problem so I deferred troubleshooting to this morning. I got in today and pulled the most recent copy of the SP and started looking at the code. I still didn't see anything that would warrant the different results. I got pulled away for about 30mins and when I came back I got him on chat and we started going through it. I ran the SP under his acct and got an error. Weird... Then I ran it under the other acct and got the same error... Not really that weird considering that I'm looking for different behavior and didn't expect to find it. So now I've at least got something to troubleshoot.
The error said that #table didn't exist, but I could see it being created with a select into. I stepped through the different lines of the SP and it created the #table just fine and returned results with no problem. But when I ran it, it errored out again. What the hell could be causing this thing to error out when it was called as an SP, but not when it's called as adhoc SQL? Then it hit me... and I was right! He had gone in and changed the SP and taken out the select into line while I had stepped away for that 30mins. I was working with old code.
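For context, the pattern at the heart of this one looks something like the sketch below (the proc, table, and column names are made up for illustration). The point is that SELECT INTO is what creates the temp table, so pulling that one line out of the proc makes every later reference to it blow up:

```sql
CREATE PROCEDURE dbo.GetOrderSummary
AS
BEGIN
    -- SELECT INTO creates #Orders on the fly; remove this statement
    -- and the SELECT below fails with "Invalid object name '#Orders'".
    SELECT OrderID, CustID, OrderTotal
    INTO #Orders
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(DAY, -30, GETDATE());

    SELECT CustID, SUM(OrderTotal) AS Total30Day
    FROM #Orders
    GROUP BY CustID;
END
```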
It's a shame that things that start out so promising end with such a fizzle. I was hoping to dig into some weird bug that only happened on thurs mornings. Anyway, I'm still not seeing any different behavior and I suspect he's going to find that he's doing something else different.
Oh, and the method I was using to test the different user accts?
EXECUTE AS LOGIN = 'LoginName'; -- set to the login you want to test
SELECT SUSER_NAME(), USER_NAME(); -- ensure you're working under the right user
-- run your code here...
REVERT; -- now set it back to you. It's that easy.
Tuesday, August 12, 2008
Linux Programming on SQL Server
13:34 | Posted by Sean McCown
I've currently got a SQL programmer who really should be a Linux guy instead. He consistently sends me DB releases with directions like this:
1. Open the stored procedure folder and run all the .sql scripts.
(Then I go to the folder and there are 3 dozen files in there.)
2. Add a CustTypeCd column to the Orders table and use Server2.DW.dbo.Orders as a guide on how to create the column.
(Then when I get there, there is no Server2.DW.dbo.Orders. Orders is a view with 12 joins and I have to sift through the code to find which column it is and where it's actually coming from.)
3. Create the CustHistDist table by running the .sql in the Create Table folder.
(When I try to run the script there's no 'USE' statement to tell me which DB it goes in, and when I track the guy down to get the DB, the script fails because the custom data type hasn't been created.)
4. Populate the CustTypeCd column created in step 2.
(But it never says where to populate it from or any hint as to how long it will take or how much data it will populate.)
5. Refresh all views.
(So after looking you'll see that there are dozens if not hundreds of views and I'm supposed to write a cursor during an implementation that writes that SQL for me because I'm not doing them all by hand.)
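And since step 5 offloads the work onto the implementor, here's roughly what that generated SQL ends up looking like — a quick sketch that builds the refresh statements instead of typing dozens of them by hand (run it in the target DB, then execute the output):

```sql
-- Generate an sp_refreshview call for every non-schema-bound view
-- in the current database; copy the output and run it.
SELECT 'EXEC sp_refreshview ''' + SCHEMA_NAME(schema_id) + '.' + name + ''';'
FROM sys.views
WHERE OBJECTPROPERTY(object_id, 'IsSchemaBound') = 0
ORDER BY name;
```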
Now, this isn't something where these types of things occur every now and then. Every release I get from this developer has most of these elements if not all of them. In fact, I actually used the implementation doc I got from him this morning as a template for this post. And of course I kick it back every time and tell him that everything needs to be in one script and that all of his 'USE' statements need to be there, etc.
And for those of you who don't know anything about Linux (which is me too, but I know this much), in Linux when you download software there's rarely an actual installer. You just don't get pretty packaged software on that side of the moon. You quite often have to compile it yourself and even go to different websites and download specific versions of libraries to make it compile and work correctly. It's not an easy task. We in the Windows world are very spoiled. And as much as I complain about the Katmai installs I've been doing lately, I really couldn't imagine having to do something like that on Linux. It's just ridiculous.
So in case my point isn't clear here: when you package your code for ANYBODY to deploy, be nice to them. Don't assume that they have 2hrs to sit there and play with all of your files. Don't assume that they know anything at all about your app. Assume that they just want to hit a button and get on with their lives.
Here are some things to include when you package software:
1. Your contact info, should there be a problem.
2. A single script per server assuming that's possible.
3. 'USE' statements in ALL your code. Use this to move between DBs instead of different scripts.
4. Approx. completion times so the implementor will know how to plan.
5. Verified, tested scripts. Don't make them troubleshoot or call you. Test your code several times before you send it to them.
6. If your release spans multiple servers you might even wrap it up in a .bat file for them so they just have one button to push.
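To illustrate point 3, a single release script can move between DBs on its own, no folder full of files required. A minimal sketch (the DB, table, and view names here are made up):

```sql
-- One script, one server: USE statements carry the implementor
-- from DB to DB instead of separate per-database files.
USE CustDB;
GO
ALTER TABLE dbo.Orders ADD CustTypeCd CHAR(2) NULL;
GO

USE ReportDB;
GO
EXEC sp_refreshview 'dbo.vOrders';
GO
```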
So really the rule is just to realize that the guy running your code isn't there to troubleshoot your code and he's not there to tie up your loose ends. What would it be like if whenever MS released a new version of Office you had to compile this portion, or go to their website for a certain .dll because they just didn't feel like putting it up with the rest of the package? Or what if they said, ok, now manually register all 35 of these .dlls and you're good to go?
Whenever I package SQL code for someone I quite often use PrimalScript to compile it into an exe and it does its thing. So all I have to do is send them a final exe and they click on it and it just runs. I test that exe several times before sending it to them and they really have very little to worry about. THAT'S how you package software.
Watch my free SQL Server Tutorials at:
http://MidnightDBA.ITBookworm.com
Read my book reviews at:
www.ITBookworm.com
Blog Author of:
Database Underground – http://weblog.infoworld.com/dbunderground/
Wednesday, May 21, 2008
MidnightDBA is looking better.
10:16 | Posted by Sean McCown
OK, I told you guys before that I had my new video tutorial site up and running. Well, it's been just thrown together until now. We finally put some effort behind it and now it's not quite so bad. We've tied it into the main site and it's looking pretty ok... of course, you can expect to see it start turning into a real site here before long, but it's clearly coming along.
I've added lots of vids and I'm adding more every week.
Feel free to check it out. It's totally free and you don't even have to sign up or anything.
So visit MidnightDBA today and sign up for the RSS. It'll tell you when I post new content.
Thursday, May 15, 2008
Vids posted
09:01 | Posted by Sean McCown
OK, remember back when I said I was going to be making some vids? Well, I've got them done now. I'm doing more all the time. I average about 3/wk. Sometimes more, sometimes less... that's why it's an average. Anyway, they're posted now so feel free to go take a look at them.
My server's slow, and my connection is ok, but it prob won't take much to max it out. But if you keep trying, you'll get them. I don't know how popular this blog is so I don't know how many will see this and rush out to download the vids. But I'm working on another hosting solution and I'll let you know when I have it ready. It could be some time though, huh.
So let me take a couple mins to explain about these vids. This is me sitting up at night doing Camtasia after everyone's in bed. They're not meant to be these superbly produced movie-type productions. They're just me at the computer talking about whatever topic happens to cross my mind that day. I try to do them in small chunks on specific topics so you can get just the info you're looking for. There's no need to bog you down with crap about backups when I'm talking about SSIS. So I do my best to stay on topic. And I like to throw some practical advice in there whenever possible. And if I took the time to do any real production on these guys, it would take me weeks to get a video out the door. As it stands, I can make the vid, produce it, and post it in about an hour. This way I can get a lot of vids up instead of spending all my time producing. And it's only screencast, so you don't have to look at my face.
Anyway, here's the link. I look forward to criticism. I'm fairly new at this so I'm still kinda getting my rap down. And I take requests, so if any of you have anything you'd like to see, just shoot me an email and I'll do my best to get it done for you.
Also, there's a blog I use to notify you of new vids... I've got a lot of vids up, and only a couple blog entries, so I haven't been really good at sticking with it, but I'll get better.
MidnightDBA
Just click on the SQL Server link there. It doesn't look like a link, but it is.
Thursday, May 08, 2008
Spoil your users
11:40 | Posted by Sean McCown
It happens sometimes when things just don't go as planned. One of the things I always do is to make sure that things I may need in a hurry are ready. Restores are just that kinda thing. I go out of my way to make sure that for any given box, I can access complete restores within just a few secs. I'm using LiteSpeed, so I've written my own code that will create a list of restore statements for me based off of the last full backup and all the logs since then. And I can produce hundreds of lines of restore code in just about 5secs, including opening the script.
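I won't post the LiteSpeed version here, but the native-backup equivalent gives you the idea: query msdb for the last full backup and every log backup taken since, and have SQL build the RESTORE statements for you. This is a sketch — it assumes one backup file per backup set and native (not LiteSpeed) backups:

```sql
DECLARE @db SYSNAME;
SET @db = 'MyDB';  -- the database to restore

DECLARE @LastFull DATETIME;
SELECT @LastFull = MAX(bs.backup_finish_date)
FROM msdb.dbo.backupset bs
WHERE bs.database_name = @db
  AND bs.type = 'D';  -- D = full database backup

-- Full restore first, then every log backup since it, all WITH NORECOVERY.
SELECT 'RESTORE DATABASE ' + @db + ' FROM DISK = '''
     + bmf.physical_device_name + ''' WITH NORECOVERY;'
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily bmf ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = @db
  AND bs.type = 'D'
  AND bs.backup_finish_date = @LastFull
UNION ALL
SELECT 'RESTORE LOG ' + @db + ' FROM DISK = '''
     + bmf.physical_device_name + ''' WITH NORECOVERY;'
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily bmf ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = @db
  AND bs.type = 'L'  -- L = log backup
  AND bs.backup_finish_date > @LastFull;
-- Finish by hand with: RESTORE DATABASE MyDB WITH RECOVERY;
```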
Well, it happened today. My first restore in a while, and it happened to be on a box where the LiteSpeed process had stopped logging to the central repo so I didn't have my usual list of backups to use to create my statements. So there I was building statements by hand, which wasn't too hard because I got lucky and they only needed a few. Then about 15mins later, the manager of that group came up to me and said that I had forgotten to send the email that the restore was done. When I told him that it was because I hadn't started it yet, he was like WHAT? What's taking so long, this kinda thing only takes a couple mins usually.
And that's what you want to hear. You want your users to get spoiled to getting these kinds of things fast. I've since fixed my LiteSpeed glitch and the next time he'll be good to go. But I love it when stuff like this happens because it means I'm doing a good job and the users have come to count on me being reliable and fast.
Thursday, May 01, 2008
At a Loss
09:57 | Posted by Sean McCown
OK, this is where the rant portion of this blog comes in. I'm completely at a loss and I just don't know where to go from here.
I was approached by a dev/admin the other day with a problem on his SQL box. His job isn't kicking off and there are no errors or anything. It's just not being run by the Agent. And when he runs it by hand, the history isn't logged. OK, so I check the usual suspects and don't seem to get any love. So I start asking him... has anything changed, have you added anything, have you deleted anything, etc.
He says no, nothing has changed and it just started having problems Mon. So I dug a little deeper and did some testing and was able to reproduce the problem with ease, which is a huge advantage in something like this. So I made a couple changes that looked like they may fix the problem. So the next day when his job ran he had the same problem. OK, so back to the drawing board.
I looked into a couple things only now it's getting pretty dry, right? The number of things that could cause this problem is dwindling. I managed to make a small change and now there was nothing to do but wait. So the next day came and the same issue was there.
So I went to ask him again... has anything changed, anything at all... I need to know if you've changed the wallpaper or put a txt file out there. No matter how insignificant you think the change is, I need to know about it. What's changed?
Again, I get the answer, nothing has changed.
So I spend another day looking at this and I found that there were a bunch of Agent jobs that had been hung for a long time. And apparently this is happening every day.
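For anyone wanting to check their own boxes, the hung jobs show up with a query along these lines against msdb — a sketch that flags jobs the Agent thinks are still executing from the current Agent session, with no stop time recorded:

```sql
-- Jobs the Agent believes are still running, with their start times.
-- A start_execution_date hours or days old with no stop date is a hung job.
SELECT j.name,
       ja.start_execution_date,
       ja.stop_execution_date
FROM msdb.dbo.sysjobactivity ja
JOIN msdb.dbo.sysjobs j
  ON ja.job_id = j.job_id
WHERE ja.start_execution_date IS NOT NULL
  AND ja.stop_execution_date IS NULL
  AND ja.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions);
```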
I burped the services and all was well. However, the next day the problem came back and so did the hung jobs.
And this guy, with whom I've been working for almost a week, and for whom I've been racking my brains, tells someone else that he wishes he could finally get this issue worked out because it's been happening off and on for months... ever since they moved it to the new box. WHAT? What the shit dude? Didn't I just spend almost a week asking you if anything had changed? And somehow you didn't find it relevant to tell me that you had switched boxes and that this was an ongoing issue?
I'm just at a loss. I don't know how many more ways I can ask you what's changed on the box. What does it seriously take to get you to tell me something major like that?
But this is the nature of our work isn't it? All support people deal with stuff like this, and it seems that even IT people are prone to this as well. I would figure that since he's in IT, and an admin himself, that he would be able to give me at least the large pieces of info like this. So I guess when people have problems with something they're not responsible for, they just turn off their brains or something. I don't know.
Wednesday, April 16, 2008
Red-Gate's Data Generator
09:07 | Posted by Sean McCown
I've been on data generators lately so I'm continuing that again. I've actually gotten a hold of a couple more last night so I'll be doing some write-ups on those as well. If you want to catch up, here are the other postings that started all of this.
Generating Data
Going Red
OK, I was able to get Red-Gate's data generator up and running on my new instance of Yukon last night and I pumped some data through it. And like many of Red-Gate's other tools it's just easy to use. There are a couple things you have to get used to, but it's really no big deal.
So I was able to generate 100,000 rows across a wireless pretty damn fast. In just a few secs really. I wanted to make a quick Camtasia of it for you last night, but time just got away from me. I'll try to get it done tonight and give you a quick walk-through of the product. But it's a nice tool. It doesn't do absolutely everything I'd like it to do just yet, but there's nothing wrong with it. And it's young so I suspect that it'll grow into itself.
Anyway, I haven't really put it through any real paces yet. I've just barely scratched the surface to get the feel for the GUI and how fast it can generate a simple data set. And it's not only fast, it's easy to use. I'm very pleased so far.
Tuesday, April 15, 2008
Going Red
12:31 | Posted by Sean McCown
Like I mentioned in my IW blog last week, I got a hold of Red-Gate’s new data generator and started playing around with it. The first thing I noticed was that it didn’t yet support Katmai. I’m a little confused by that because the libraries should be close enough to be able to make a basic connection so there shouldn’t be anything wrong with it, but alas, it’s unsupported as of yet. I did connect with the PM on the project who showed me their roadmap, which tells me that it should have Katmai support by Q3. No worries, I just put up a Yukon instance so I’ll be starting up my testing soon. Now, I realize that they wanted to make it a simple tool that only connects to SQL Server, so they don’t support ODBC. But the definite advantage to supporting ODBC is that as long as you adhere to basic ANSI you should be able to connect to any DB you have loaded. So in this case, they could support SQL Server directly as well as accepting ODBC, and you could easily connect to a beta copy like Katmai. Still, I get where they’re coming from; I just always want it all.
Now about the roadmap… I saw some interesting things in there.
1. In Q2 SQL Prompt 3.8 will be released, offering a Pro version that incorporates the layout SQL functionality from Refactor. I’ve been trying to find out more info on the future of Refactor, but I’m having trouble hooking up with the PM. I’ll let you know more when I manage to hook up. I don’t know when exactly in Q2, but if they shoot me a press release I’ll let you know and maybe I’ll even get some code to play with as well.
2. Also in Q2 will be SQL Response. This is their new monitoring product. I had no idea that the market could still support a monitoring product, but it apparently can. Personally, I’m really big in the monitoring space so I can’t wait to see what they come up with.
3. Exception Hunter was released back in Dec and I didn’t even know about that one. It sounds interesting though so I’m going to do some reading on it and see what I come up with.
OK, that’s it, but take a look at the roadmap and keep an eye on it for more tidbits. It would be nice if they’d wrap an RSS around it so I don’t have to keep checking.
Wednesday, April 09, 2008
Database Therapy
11:59 | Posted by Sean McCown
A good DBA has to be a bit of a therapist at times. We have to take users who are freaking out and get past their anxiety, past their hurt, and a lot of times past their anger, and get down to the real issue. What I find amusing a lot of times is how the most excited users, the ones who are just freaking out, tend to be the ones in the least amount of trouble. Their problems are usually easy to solve. It's the devs who come up to you nice and calm with just kind of a question to ask you real quick that find out they've done something really stupid and you can't get them out of the hole they've dug.
On slightly the same topic, I had a comical exchange with a dev this morning in trying to explain to him why you can't install a .net assembly in SQL2K, and how SQL can't be compiled into a .dll.
And I'm falling more and more in love with the Katmai group query capabilities. Almost every day I have to deploy code or make some kind of change to all of our marts and being able to hit them all in one shot is just incredible. MS really got this one right.
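For anyone who hasn't tried it, the multi-server query feature works off registered server groups in Katmai's SSMS: open a query window against the group and the batch runs on every member. A minimal sketch of the kind of deployment I mean (the database and procedure names here are made up):

```sql
-- Run from a multi-server query window: right-click a registered
-- server group, choose New Query, and this batch executes on every
-- server in the group. MartDB and the proc name are hypothetical.
USE MartDB;
GO
IF OBJECT_ID('dbo.usp_LoadDailyFacts') IS NOT NULL
    DROP PROCEDURE dbo.usp_LoadDailyFacts;
GO
CREATE PROCEDURE dbo.usp_LoadDailyFacts
AS
    EXEC sp_updatestats;  -- placeholder body for illustration
GO
```

One script, one window, every mart updated in a single shot.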
Thursday, April 03, 2008
The Runaway Instance
08:31 |
Posted by Sean McCown
I recently installed the latest Katmai CTP on my workstation at home. I installed the entire package. I connected to it just fine and worked with it for several days. I then went to hit it with an external client so I could load some data, but it wouldn't connect. I checked and it was set for remote connections, and it was listening on 1433, so that wasn't the problem. I thought there might be some kind of glitch, so I burped the service from the Katmai Config Mgr. The service failed to restart. The message I got was that 1433 was already in use. OK, at least I have a message to troubleshoot now. Well, I've got Pinnacle running on that box and it uses SQL Express. So since I installed Katmai as a default instance as well, it was clear that Pinnacle was stepping on it... even though Yukon was on the box before and it never had issues with Pinnacle.
Anyway though... I stopped the express service and I still couldn't connect. So to make a long story shorter, because I spent a very long time on this, it turns out that the Katmai Config Mgr was falsely reporting the express instance as stopped when it was really still running. When I went into the Windows SCM it was still running. So I stopped it from Windows and all's well.
At least I got some good troubleshooting practice in, huh?
Monday, March 31, 2008
The Untunable Database
14:21 |
Posted by Sean McCown
There are some DBs that just can't be tuned any more than they already are (or aren't). A good example of this is an application that hits a DB and never qualifies any of its queries. They all hit with select * and no where clause. There's really nothing you can do to increase the performance short of just throwing more spindles at it. But that's not really what I'm thinking about right now. What I've got on my mind now is an application and DB that just can't be tuned no matter what you do because the business owners don't see the benefit of making the changes.
I saw that a lot when I first got to my current gig. We had queries doing horrendous things and taking several hours to return and nobody cared. The end users had been running these queries for years and were happy with them. They didn't care that the server was maxed out all the time and that they had to wait 12hrs for a report to return. Now, I don't have to tell you that as a DBA that just drives me insane. Not to mention that it gives me nothing to do. Why am I even here then?
So with that in mind, I had to go a little cowboy on them and just start making minor changes that proved my point. I really can't stress enough that I'm against going cowboy on any DB and I don't care who you are. But there are some instances where it's warranted. You have to get the ball rolling somehow. And how this DB got in such bad shape was definitely their fault, but their current view wasn't. They had just been so used to things working the way they were that they didn't see the need to change. They got their reports more or less when they expected them and even if they had to wait a couple extra hours for them they didn't really mind because they understood the server was busy.
So what I did was just start by indexing a couple huge #tables. I just picked a couple of the worst SPs and added a couple indexes. Then I went in and started commenting out cursors and replacing them with simple join queries. Both of these made a huge difference. Then I just sat back and waited. You really don't want to go too far with something like this. Then when they started noticing that their 12hr queries were coming back in just a few secs, then I had their attention. I was then able to convince them to let me go even further and start really tearing into some of these SPs.
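To give you an idea, here's a rough sketch of both kinds of changes (all table and column names are invented for illustration): an index on a big #table, and a set-based aggregate where a cursor used to be.

```sql
-- Build the big temp table the SP already used.
SELECT c.CustomerID, c.Region, o.OrderTotal
INTO #CustOrders
FROM dbo.Customers c
JOIN dbo.Orders o ON o.CustomerID = c.CustomerID;

-- A clustered index on the column the rest of the SP joins and
-- filters on turns repeated scans of a huge #table into seeks.
CREATE CLUSTERED INDEX cix_CustOrders ON #CustOrders (CustomerID);

-- Where a cursor looped row by row to total up each region, one
-- set-based aggregate does it in a single pass.
SELECT Region, SUM(OrderTotal) AS RegionTotal
FROM #CustOrders
GROUP BY Region;
```

Two small, low-risk changes like these are usually all it takes to turn a 12hr report into a conversation starter.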
And now, for the first time ever, we've got a near-realtime reporting effort in our company. They've come a long way from 'I don't care if it takes 12hrs' to 'I have to have it now'. The problem is they still slip back into their old habits now and then. They currently want to implement an encryption solution that will take around 2mins to return for each report when the solution I suggested returns in about 2secs. And sure, 2mins isn't really going to break the bank, but as those of you who have done a lot of query tuning should know, you have to be hungry for resources. You have to treat every single CPU tick as a drop of water in the desert. If you don't, you'll wake up one day and be in the middle of the same shit you were in before. You have to fight over milliseconds and squeeze every last drop of performance out of every query you can find. That's the only way to run a real tuning effort.
But it's amazing how politics and perception find their way into every aspect of your job.
Wednesday, March 26, 2008
Training Site
12:43 |
Posted by Sean McCown
I just came across a training site I haven't seen before. They've got a couple free vids, but it's hard to really get a feel for the training from just those.
So I'm trying to get a pass to the site so I can review it for you guys and let you know if it's worth it.
I'll keep you posted.
Monday, March 24, 2008
Celebrity DBA Work
11:32 |
Posted by Sean McCown
I've always had a soft spot for celebrities who have their entire lives spilled out in public. It's gotta be tough to have everyone see how good or bad you do, not only in your job but in your personal life as well. And I've often thought that for everything I'm not (rich, well-known, etc.), at least I don't have all my failures made public.
At least, that's the way it is until you have a disaster with one of your DBs. Then everyone wants to come stand over your shoulder and watch you bring it back online. Now the heat is on because you have to remember every last command and parameter in front of the crowd. And knowing that is like trying to stop laughing in church. The pressure is just too great. Some DBAs fold under it. Others just do their jobs like nothing's going on. Me, I clear my desk of onlookers. I had that happen just this morning. I can't stand having people just standing there watching me work. I like to be able to follow a train of thought without worrying about how I come off to my audience. So I always tell them... the sooner you leave, the sooner I can get to work fixing this.
This morning's disaster came in the form of a dev sending me an email telling me his system was down. He then followed up with a trip to my desk. So I asked him... did you come over just to stand over my shoulder and watch me work? Thankfully, he took the hint and left. But it's not always that easy.
I've always been very sensitive about that with other people too. Unless invited, I always try to stay on the backside of the monitor... especially when someone is trying to get something done in a hurry or needs to concentrate.
There are those who don't really mind people watching them. These would be those who present at conferences all the time and are used to it. But I'm not one of them. Anyway though... give your DBAs a break. If you want them to do something for you, don't stand there and watch them. Just leave and they'll call you when it's ready.
Friday, March 21, 2008
Videos Coming Soon
10:25 |
Posted by Sean McCown
I've started making some SQL Server tutorial videos. I've done 3 or 4 so far and I'm going to post them soon. I'm starting with a basic topic and building on it, so this time it's backup. I started with the basic backup syntax and am working up to complicated backup procedures. This is meant to be an inclusive tutorial series that takes someone who doesn't know backups at all through some decent types of procedures. I'll be branching out too; this is just where I started. I'll post links once they're online.
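For the curious, the series starts from something like this (database name and paths are just placeholders):

```sql
-- The simplest full backup, where the first video starts.
BACKUP DATABASE MyDB
TO DISK = 'D:\SQLBackups\MyDB.bak'
WITH INIT, STATS = 10;

-- And the log backup that the later, more involved procedures
-- build on.
BACKUP LOG MyDB
TO DISK = 'D:\SQLBackups\MyDB.trn';
```

From there it's a short hop to differentials, striped backups, and restore sequences.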
Thursday, March 20, 2008
Will the Real Idiot Stand Up
17:11 |
Posted by Sean McCown
As you know, our other DBA recently left, and he just started his new job. I was IMing him today and asked him how it was going. He said the last DBA was an idiot. It's funny, because I can't count the number of times I've said that. I don't think I've ever started a job where the guy before me knew what he was doing.
The question is though, am I really all that good or do I just have an inflated ego? I'd probably have to say it's a bit of both really. I've seen a lot of DBAs who just don't know the simplest things about SQL. I've talked about this several times in both my blogs so I'm not going to harp on it too much right now, but it holds more true every year I'm in this industry.
There's a difference between just doing things differently than the other guy and his systems actually being neglected. Not performing backups or index maint. is bad DBAing. It's not just a different way of doing things. I remember talking to a very senior DBA at a company we all know last year. I was at PASS, come to think of it. And he sat there proudly and told me that they NEVER change their sa passwords on any of their systems. I would love to tell you his reasoning, but I just couldn't get into something like that. But to be proud that you never change your sa password is just asinine. You know what they say, dude: if you look around the room and you can't find the asshole, it's you. The same goes for idiots.
It's hard to measure skill though. Everyone has such different experiences. Things I've come to know well may be completely foreign to a different DBA who's far better than I am at something else. So does it make him an idiot because he doesn't know what I know? Yeah, sometimes. The basics should be covered. Every DBA should know how to back up a system, do maint., and handle basic security. And so often it's these basics that aren't covered.
So now it comes down to simplicity. What makes a really good DBA? I've had several talks with guys all over IT about this same topic, and in almost every session, we've pretty much concluded that you only have to try a little bit to be better than the average guy. The average guy does very very little to further his knowledge or to get really good at his job. Most people just skate by. So if you try just a little bit you can rise above the crowd. That's what I think anyway.
Wednesday, March 19, 2008
SQL Server Done Right
11:23 |
Posted by Sean McCown
This is the perfect topic to go along with what I wrote on my other blog today in The real difference between SQL Server and Oracle.
I just got an email from the producer of the new Kalen Delaney series on SQL Server giving me my press pass into the online content for this series. I've only watched the 1st 9mins so far and already it's exactly what I'm talking about in my other blog. Here's Kalen Delaney who writes one of the most successful series on SQL Server (the other one is by the late Ken Henderson. I still have a hard time saying that), and she's going the extra mile to put her book into a video training series where she explains the concepts herself.
I, like many other people, learn better when things are explained to me than I do from a lifeless page. And Kalen's an experienced teacher, so she has a way of explaining things that makes you just get it. In this video she's already covered security of metadata and the sys schema. She's actually explaining how this stuff fits together from the ground up. That's how it's done. I have no doubt that the rest of the series will contain the same deep-level understanding.
I think I'm going to enjoy this series and I'll try to write up a full report when I'm done. Or maybe I'll just do it as I go along.
OK, so here's the link to the site. You can order the DVD or you can watch it online. It's good stuff. Seriously, go check it out if you haven't.
I was recently chatting with Kalen in email and she told me that this is basically the course she teaches when she's brought into a company to teach a class.
Actually, I didn't mean this to be an official interview, but I'm going to go ahead and paste her email here. I'm sure she won't mind (at least I hope not) and she explains it better than I would anyway. I typically don't post emails without asking first, but she knows who I am and she answered my questions like she was being interviewed, so this one time I'm going to do it. But you'll almost never see me take this liberty.
1. What material will this first DVD cover…
You can get information about my course here: http://www.insidesqlserver.com/Course%20Description%20and%20Outline.htm
The first DVD covers most of what is in Module 1.
2. What format will it take… will it be a group of slides and whitepapers, or screencast instruction by you…
The DVD will be a mixture of live capture of me talking, and screen captures of my slides and my demos.
3. Who all is involved in the project…
I am recording the class that I have been presenting all over the world for the last several years. Chuck Boyce of AskaSQLGuru.com is doing the filming and editing. The business side is being managed by Peter Ward of www.WardyIT.com in Brisbane, Australia.
4. How often can we expect to see a new DVD come out…
Since I have to fly to New York for filming, we are only able to do about one a month. In fact, I am just about to leave for the airport for the second round of filming.
5. What advantage will one have in ordering these over just getting the books…
Different people learn in different ways. If you like to hear and see someone explaining concepts, this can add to the benefit of the books. People pay a lot of money to attend my classes, but since I’m only one person, I can’t offer them that often. The DVDs are a chance for anyone, anywhere to take my class. If you can read and absorb everything in the books on your own, the DVDs might not offer anything more.
So again, here's the link to SQLServerDVD.com.
Tuesday, March 11, 2008
Already a Nightmare
12:38 |
Posted by Sean McCown
I've had to call a vendor for support and it's already not going well. I'm not going to say which vendor, but I'll tell you that I'm having problems with my SQL backups.
Anyway, I went online and filled out the form with all the info... OS, SQL version, etc... then I got the email from the support tech asking me for all this info.
I replied by saying I had already filled that stuff out when I created the ticket so why did I have to do it again? I don't mean to be a difficult customer, but come on. Why do I have to do everything twice?
Friday, March 07, 2008
Hairy Toast
10:20 |
Posted by Sean McCown
Every morning when I leave the house I get in the car, and the first thing I do is throw my jellied toast face down on the floor of my car. Why not? That's where it's going to end up anyway when the asshole in front of me slams on his brakes. Now at least it doesn't piss me off when it happens. Then when I get to work, the first thing I do is run some kind of wild query that fills up all the memory and CPU and locks everyone out of the system. Why not? That's what's going to happen as soon as my favorite report writer logs in.
Some days it just doesn't pay to get out of bed and go through the hassle. Then again, if you've done what you should as a DBA, it shouldn't be that bad. Hopefully you've set up things in your DB that keep things from getting too out of hand. Hopefully you've been able to teach your report writer some basic dos and don'ts.
But what if you're in a new place? Can you still be effective? Of course you can. Again, if you're a good DBA you've collected a nice gaggle of scripts that you take from place to place. Years ago when I was just getting started, I didn't get that. I always thought, why be lazy, just write the damn scripts when you need them. But that's the wrong attitude. It's not laziness, it's practicality. There's just no reason to work out that logic every time.
So if you follow some of your own best practices and set yourself up for success, maybe you can withstand the bad times. But there's nothing you can do about traffic, so I guess you'll have to keep throwing your toast on the floor. Life goes on.
Thursday, March 06, 2008
Installing LiteSpeed
09:24 |
Posted by Sean McCown
For those of you who follow my blog, you know that I've used LiteSpeed for many years now. And one of the things that's always bugged me is the lousy way they run their upgrades. Upgrading LS typically consists of some version of manually deleting the current repository tables and then running the install program, and that's after manually uninstalling your previous version as well. And if you want to save the data in your repo tables, you have to create new ones and copy the data over, or just rename the old tables. Even that isn't easy because of the relationships: if you just rename the tables, you have to rename the relationships too or the new install will fail. And you have to do this for every box you own. What a pain in the ass!!
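If you've never had the pleasure, the rename shell game looked roughly like this (the object names here are invented; LiteSpeed's actual repository objects differ):

```sql
-- Park the old repository table and its constraints out of the way
-- so the new install's CREATE TABLE statements don't collide.
EXEC sp_rename 'dbo.BackupActivity', 'BackupActivity_old';
EXEC sp_rename 'dbo.PK_BackupActivity', 'PK_BackupActivity_old', 'OBJECT';

-- Any foreign keys referencing the table have to be renamed (or
-- dropped) too, or the install fails on the relationships.
EXEC sp_rename 'dbo.FK_BackupActivity_Server',
               'FK_BackupActivity_Server_old', 'OBJECT';
```

Now multiply that by every repo table on every box you own and you see why I'm glad it's gone.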
Well, I have a problem with LS not backing up one of my DBs even though it's been working just fine for almost 2yrs. So I just upgraded from v.4.7 to the latest 4.8 (yes, I know v.5 is coming out soon, but I can't wait). And let me tell you this... it looks like they finally got these upgrade problems fixed. My upgrade went smooth. I didn't have to manually uninstall the old version, I didn't have to play my usual shell game with the repo tables, and I didn't have to burp the service. Life is good.
Now, I'll have to see if it actually fixed my problem. But I'm happy to be able to report something good about LS for the first time in a long time. They've mostly been adding features, and very slowly at that, and haven't been really concerned with our actual problems. Maybe this is a turning point.
Anyway, good job LS guys.
Wednesday, March 05, 2008
Bad Code Management Practices
10:21 |
Posted by Sean McCown
One thing that's as varied as ways to code is how to manage the code itself. Or I guess I should say architect instead of manage, but it all comes down to management.
The 2 major ways are to componentize and to not. And by componentize, I mean taking all the individual components and turning them into small independent chunks that can be called from many sources. A good example is taking a date calculator routine in your SPs and turning it into a function so the SPs just call it instead.
So those are the 2 ways. You either put it inline with the code or you make it into a component and call it from all the modules you need.
Of course, I'm mainly talking about SQL code here, but it really applies to any kind of code I suppose.
It's easy to see the advantages of developing a component solution. This is the DB equivalent of COM, right? But what's not easy to see is what it does for you (or to you) support-wise. What this solution can do is put you in a new form of .dll hell from the old days.
Let's take a simple example...
Let's say that you have an SP in prod that's giving you problems and you want to track down where the degraded performance is coming from. So you open up the SP and start looking at the code. You see that a major component of it is an external SP call. OK, so you open up that SP. Now you see that that SP calls 3 other SPs. And each one of them is calling more SPs. And so on and so on. This is the definition of spaghetti code, isn't it?
I'm actually in the middle of doing this very thing right now. I wanted to stop and blog about it while I was still pissed off. I've been doing this for an hour now and I'm not any closer to a solution than I was when I started. So you can see that while going the COM route can be helpful, you can take it too far. And that's really what happens, isn't it? Devs take a good idea and drive it into the ground until it's so hard to manage it's not even worth having it anymore.
I've been trying to come up with some guidelines for this and it's tough. But roughly, I like to say that if several SPs use the same code, or are calculating the same thing, it's ok to pull it out into COM. The same goes for code that gets changed a lot. If you've got code that changes often and is used in a lot of SPs, by all means put it into its own SP so you only have one place to change it.
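Just to sketch that guideline, here's a hypothetical version of the date calculator example from above, pulled out of the SPs into a single function. The names and the fiscal-year rule are made up for illustration; the point is that the math lives in one place.

```sql
-- Hypothetical example: a fiscal-quarter calculation that several SPs
-- used to duplicate inline, pulled into one function.
CREATE FUNCTION dbo.fnFiscalQuarter (@SomeDate datetime)
RETURNS int
AS
BEGIN
    -- Assumes a fiscal year starting in July; adjust for your shop.
    RETURN ((MONTH(@SomeDate) + 5) % 12) / 3 + 1;
END
GO

-- Now each SP just calls the function instead of repeating the math.
CREATE PROCEDURE dbo.usp_GetOrdersByQuarter @Quarter int
AS
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE dbo.fnFiscalQuarter(OrderDate) = @Quarter;
GO
```

When the fiscal calendar changes, you change the function and you're done, instead of hunting through every SP that buried the calculation inline.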
Anyway, I guess I'll get back to my plate of spaghetti now.
Tuesday, March 04, 2008
sp_Whoville
13:48 |
Posted by
Sean McCown |
Edit Post
This post is dedicated to all those field DBAs who like to call up the prod DBAs and tell them how busy the server is based on the number of spids or short-term blocks returned by sp_who(2).
See, this is the hidden cost of generic logins that have too many rights. Everyone on the planet can read BOL and try to interpret the results. And whatever they mark in their heads as being the sign of a server being too busy is what they're going to call you with. Our users have 2 criteria for a busy server: the number of spids (active or inactive), and how many short-term blocks they see.
Of course I used to try to explain to them that you could bring the server down with a single spid so the number doesn't matter... and that blocks are fine as long as they don't persist. Since I've been here for quite a while though, and none of them have gotten the hint yet, I usually just thank them for letting us know and that we'll get right on it.
One of my favorite analogies used to be that judging the server on the amount of spids is like loading your car up with people and declaring that traffic is really bad today. Somehow that still didn't get the point across.
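If you want to hand those callers something better than a raw spid count, a rough sketch like this shows only the blocking that has actually persisted. It's written against the old sysprocesses view (the 2000/2005-era tooling this post is about), so treat the threshold and the view itself as illustrative.

```sql
-- Rough sketch: instead of counting spids, surface only blocking that
-- has persisted for more than 30 seconds. Uses the legacy sysprocesses
-- view; adjust the threshold for your environment.
SELECT blocked.spid,
       blocked.blocked      AS blocking_spid,
       blocked.waittime     AS wait_ms,
       blocked.lastwaittype
FROM master..sysprocesses AS blocked
WHERE blocked.blocked <> 0           -- currently blocked by another spid
  AND blocked.waittime > 30000;      -- and has been waiting 30+ seconds
```

An empty result set is a much better answer to "the server looks busy" than a count of idle connections.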
Monday, March 03, 2008
Losing a DBA
09:13 |
Posted by
Sean McCown |
Edit Post
It's almost never fun to lose a DBA, but it's a fact of life. People leave jobs, and sometimes jobs leave people. This is another reason why it's really important to not tax your DBAs too heavily. If you've got 2 DBAs and they're both working like dogs, what happens when you lose one? I'll tell you what happens... deadlines start to slip, backups start failing and don't get looked into, maint starts getting behind, security gets relaxed, etc. You want your DBA to have time to do his job and be able to pick up some slack when you lose someone. And it could be quite a while before you get a replacement. Good DBAs are really hard to find and you don't want someone to just warm the seat.
I talked about this recently in my IW blog. Of course, this really only goes for production DBAs, right? I mean, you can work your devs as much as you want. They'll never get a call in the middle of the night because the server's down or because a package failed. So again, DBAs are insurance policies. We're kinda like a clustered server. You don't use the inactive node. It just sits there waiting for something to happen to the primary node. It seems a terrible waste, and managers hate spending that money for a box that just sits there. And while DBAs aren't quite that useless, we really should be used in the right way. So it really is just like a multi-node cluster. You never run the primaries at full capacity, because one day something will happen and one box will have to take on its own workload plus that of a downed node. So if they're all running at 100%, they can't fail over and resume the work. So you run them at 50%... give or take, right?
So again, let your prod people do their prod jobs and don't put them on too many actual projects. After all, that's what prod means.
And yeah, we're losing our other DBA so I'm all alone now. We'll see how it turns out.
Friday, February 29, 2008
More thoughts on Admin Passwords
10:08 |
Posted by
Sean McCown |
Edit Post
OK, I'm also changing this in the original post, but in case you don't think to go back and check.
When I sent my admin password solution to a buddy at MS, he tried it and it didn't work. After a little digging, I discovered that it worked for me because I had a drive mapped to the admin share on my DC. So PSExec was using that IPC to connect. So unless you're lucky enough to have something like that, you prob won't be able to use this solution. However, it's a good backdoor, so it may be a good idea to set up a mapped share on your box anyway just in case something happens and you get shut out of your DC.
On that note, this is a really excellent reason why you never want to run SQL under an admin account, and especially not under a domain admin acct. I've seen that more than I care to and it always comes out bad eventually. It only takes a very basic knowledge of SQL to discover that someone with regular user rights can set up a SQL job to promote their user acct to admin, because the job will run under the context of the agent acct at the OS level. And if you're running your services under an admin acct, they'll have full rights to do whatever they like. And if you're running it under a domain admin acct, then they can promote themselves to domain admin pretty easily by running a .vbs or prob even PowerShell. Pretty much any scripting language will prob do, huh...
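To make the shape of that escalation concrete, here's a sketch of what such a job looks like (job and user names are made up; this assumes the user has rights to create Agent jobs). The point is that the CmdExec step runs as the Agent service account at the OS level, so if that account is an admin, the command succeeds.

```sql
-- Sketch of the escalation path described above: a regular user creates
-- a job whose CmdExec step runs under the Agent service account. If
-- that service account is a (domain) admin, the step can promote the
-- user's own account.
EXEC msdb.dbo.sp_add_job
     @job_name = N'InnocentLookingJob';

EXEC msdb.dbo.sp_add_jobstep
     @job_name  = N'InnocentLookingJob',
     @step_name = N'step1',
     @subsystem = N'CMDEXEC',
     @command   = N'net localgroup administrators DOMAIN\someuser /add';
```

Run the Agent under a low-rights service account and this whole class of trick goes nowhere.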
And that goes for running SQL on a DC under local system. That's the same as domain admin as far as the DC is concerned. So be smart and run your SQL accts with non-admin accts and just remove that from the equation. There are enough ways for internal and external hackers to elevate rights. Don't help them.
Thursday, February 21, 2008
Reset Domain Admin Account
19:25 |
Posted by
Sean McCown |
Edit Post
Last week I somehow forgot the domain admin password in my lab. I tried for 3 days to remember it and I finally came to the realization that I just wasn't going to. So I started looking around on the web for different things I could do. I wasn't really in a hurry cause my domain was running just fine.
I found many methods for changing the local admin acct, but not many for changing the domain admin acct in a windows 2003 domain controller. I did find this method here.
Anyway, I checked with an SE friend of mine on the windows support team at MS and he said that was a good method and told me to run with that one. Not that I minded, but I just didn't want to bring down my DC and take the time to copy all that stuff to floppy, etc. So I kept looking.
I finally had an idea that I just had to share with everybody. It's so simple it's scary.
I used the SysInternals tool psexec.exe. What it does is run programs remotely on the box of your choosing. The problem of course, is that you have to make an admin connection to that box, and if you could do that, you wouldn't have to use psexec to reset the password. So I looked at the help file for psexec and found a parameter that just amazed me. -s is its name.
What that does is tell psexec to connect with the local system acct of the remote box. And since the local system acct has full admin rights, you have the rights you need to change the admin password.
So at a command prompt go to the dir where you have psexec and type the following command:
psexec.exe -s \\machinename cmd
That tells psexec to bring up a command shell on the remote box and run it as the machine's local system acct. So now you've got a shell open on your box just like you were standing in front of the console on the remote box. Now you just type the command for changing the admin password:
net user administrator newpassword
That's it. It's simple, elegant, beautiful, and it really saved my ass.
I hope this helps someone else.
Just for grins, here's a Camtasia I made of the process just to make it even easier. Click here
UPDATE:
Here's an update to my original post. I sent this solution to a friend at MS and he found that it didn't work for him. After a little investigation, I discovered that it worked for me because I had the admin share on my DC mapped on my box and PSExec was using that IPC. You can read more about that here.
Wednesday, February 20, 2008
Good Times
09:44 |
Posted by
Sean McCown |
Edit Post
Wow... I really don't get a chance to write in this blog very often because my company has been blocking the address. For some reason though, they just opened it up so I'll be writing here a lot more. So lookout, I'm back.
About Me
- Sean McCown
- I am a Contributing Editor for InfoWorld Magazine, and a frequent contributor to SQLServerCentral.com as well as SSWUG.org. I live with my wife and 3 kids, and have practiced and taught Kenpo for 22yrs now.