Thursday, May 27, 2010

Good Press is Awesome

Just double-posting for a while till everyone gets the hang of the new location. Don't forget that my blog has officially moved here, and I won't be posting at this location much longer.

I know Jen's already blogged on this, but this is my little contribution to the whole "how cool are we" theme.

Brad McGehee just came to speak at our user group, and to my surprise he thought enough of what we're doing to write a blog post about it.

So I thought I'd take this time to explain a little more, so maybe more of you can start doing it at your user groups too.

I started doing this last year when it really started bothering me how limited the user group model is. You bring in a speaker to give a session on a topic, and next month you do the same thing with a different speaker and a different topic. The problem is that you never have time to go deeper into any of the topics. You barely get to touch on a subject and there's no time to really learn it. So I decided to start up a progressive class before the user group. It lasts like 6-8 months and each session builds off of the previous ones. The downside is that it takes you 6 months to complete the class, but it's better than nothing and it's free. I actually treat it like a real class that you would take at a training center, only I like to think I go into more depth sometimes.

It's a really good model because if you hold it before or after your main user group meeting, the people are coming anyway, so you've got a built-in audience. And you'd be surprised how many people show up to the group who never did before. So anyway, that's all I've got on that. I hope some more of you start doing this, and if you do I'd like to hear how it works for you. And I'd like to give a special shout-out to Brad for recognizing my genius.



Petri for us all

The Petri IT KnowledgeBase just opened up its SQL section and Jen and I are the authors. I know, Jen wrote about this too... what can I say, I have nothing original so I have to steal others' material. Anyway, it's really a nice site they've put together for us, so come check it out. Currently we've got some nice barebones "I know nothing about SQL" articles, so if you're a complete newbie you can start here and we'll explain what all this stuff actually means.
Thursday, May 06, 2010

New Blog Location

Hey everybody... after like 5yrs of blogging here, I've made the decision to move this blog over to MidnightDBA.com.

Jen and I are consolidating our blogs in the same location, so she's moving too. I'm pretty much up and ready to go over there, so update your RSS feeds to point to the new location, because this'll probably be my last post at this address. Maybe I'll double-post for a while, I'm not sure.

Here are the links to our new blogs. They're actually on a Linux server, so the URLs are case-sensitive.

http://www.midnightdba.com/DBARant/
http://www.midnightdba.com/Jen/
Wednesday, May 05, 2010

Data Explosion

Hey everybody... it's been a while since I've blogged, but that's what starting a new gig will do to you. I've had so many new things to rant about here and I'm sure I'll be getting to those in the future, but right now I want to share a funny story with you.

We had a group of devs who were trying to run a fairly simple insert and it kept filling up the disk. And when I say it was filling up the disk, I mean like 50K rows were filling up like 200GB. So they came to me to see if I could fix it. This is where it gets fun.

It really didn't take me too long in my investigation to find that they had the data file set to autogrow by 131,000%. That's right, I said it! 131,000 percent. So now that I'd found the problem, I was able to set it to something more reasonable, shrink the file, and let them get on with their insert.

That got me thinking about what other DBs in the place had similar issues, because I've been hearing about disk space problems here and there. So I wrote a PowerShell script to go out and check the file growth rates on every SQL box, and I found a lot of the same type of stuff. There were a lot of them set for several thousand percent growth, several set for 1MB growth, and everything in between. In general, as a good generic setting to start from, I like to grow my files 1GB at a time. It's a good round number that works in a lot of cases, and then you can snipe the ones that need something else. So I altered the PowerShell script to go out and change the growth rates of all the DB files out there to 1GB. Life is good again and we have the beginnings of a happy shiny environment.
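
If you want to roll your own, here's a minimal sketch of the idea, assuming SMO is available and you keep a c:\ServerList.txt with one box name per line (the file name and path are made up for this example):

[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
get-content c:\ServerList.txt | %{
    $srv = new-object Microsoft.SqlServer.Management.Smo.Server $_
    foreach($db in $srv.Databases){
        # (while you're in here, this is also a handy spot to kill autoshrink:
        #  $db.DatabaseOptions.AutoShrink = $false; $db.Alter())
        foreach($fg in $db.FileGroups){
            foreach($file in $fg.Files){
                $file.GrowthType = "KB"   # grow by a fixed size, not a percent
                $file.Growth = 1048576    # SMO takes the size in KB, so this is 1GB
                $file.Alter()
            }
        }
    }
}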

Oh yeah, and in the same discovery I found 110 DBs set to autoshrink, and I took care of those too.
Friday, April 02, 2010

PowerShell 2.0 Get-Member Enhancements

The other day I blogged on a cool way to alter database file properties.  And I promised I would film the solution for you.  Well, I got home that night and started filming but when I pulled up sqlps I was stopped dead in my tracks by something I couldn’t explain.  The methods to get and set were missing.  I checked my other box and they were there, so what happened?  The only difference was that the box I wrote the blog from was XP and the box I was filming from was Win7.

So I pinged a couple really smart guys at MS (and I cc'd Buck). After a couple of days we have an answer. There's a new feature in PowerShell 2.0 that hides the getter and setter methods from you by default in Get-Member. They're officially calling it an enhancement.

There are 2 new parameters that Get-Member supports that allow you to control what you see: -View and -Force.

-View takes the values Extended, Adapted, Base, and All.

-Force is a switch and doesn't take any values.

However, if you're looking specifically for getter and setter methods, you'll have to use -Force, because -View doesn't display them. Here's what that'll look like:

dir | gm -Force

Now you can see everything. What's interesting though is that by default, PS loads all the methods; it just doesn't show them to you.
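
So side by side (assuming you're sitting in a provider node with some objects in it), the difference looks like this:

dir | gm -View All    # still hides the get_* and set_* methods
dir | gm -Force       # shows everything, getters and setters included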

Here’s the MSDN blog that explains this and so much more.

And I’d like to give a special thanks to Robert Hutchinson at MS for chasing this down for me while Buck and I sat on the sidelines helpless.

Wednesday, March 31, 2010

PowerShell to Alter Database File Properties

Man, I still just get so jazzed when I work with PowerShell. Every time I need to do something long and hard (that was for you, Buck), I look to PowerShell and it lets me just lay back and enjoy the long and hard.

So this time we've got like 85 DB files for our main DB. I know that's a lot, but it's been spread across so many LUNs for so long, and it's just grown and grown as it's filled them up. Now we've finally got a couple really large LUNs and we're trying to consolidate the files. And now that we've moved most of them to a single LUN, we want to stop the autogrowth on most of them so they don't grow anymore before we empty them and drop them. Because what's the point, really, in putting them all on a single LUN if you're not going to get rid of some of them?

The task at hand was to set all of these files (most of them, really) to not autogrow. The choices were to do it in the GUI, or to write a T-SQL cursor to go through all the files and build an ALTER DATABASE command in dynamic SQL for each one. Neither of those is very attractive considering how many files there are, though of the two, the cursor is the better solution. Then I thought of PowerShell. I hadn't done anything quite like this in PowerShell before, but I figured I'd be able to get some love. I'm not going to teach you PowerShell here, or explain the script too much. I'm just gonna give you the code, and if you have any questions you're free to write, but it's pretty straightforward. I'll be making a video on this tonight for those of you who like to see things in action.

PS SQLSERVER:\SQL\MYSQLBox\DEFAULT\Databases\MyDB\filegroups\data\files> gci | where{$_.Name -ne "MyDB_Data"} | %{$_.set_GrowthType("None"); $_.Alter();}

OK, so a quick glance at the important parts.

1. Make sure you're in the right node. Notice I'm in data\files.
2. Here I'm filtering out the one file I don't want to be affected, so the main data file will still be able to grow. I'll talk about that more in a minute.
3. Don't forget the Alter() at the end. Usually whenever an object has an Alter() method, it wants you to use it. If you don't, you'll change the object in your PowerShell session's cache, but you won't see the change persisted to the DB. So you're not really making the change to the object in SQL, just in your PowerShell pipeline. You can tell if something has an Alter() method by typing 'gci | gm' at the cmdline. If there's an Alter() method there, then use it.

OK, you don't have to filter. If you find it too much trouble or if your files are named in such a way that it would be too hard to snipe a single file for some reason, then you can change them all and then just go back to the GUI to change the last one back to what it was.

So there you go. We have a nice one-line script to alter as many files as you need, and you can clearly tack as many file changes on there as you need. And the best part is that the code doesn't really change if you need to do another file operation. All you do is change the method call. Ahhh, the power of PowerShell.
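
For instance, if instead of shutting growth off you wanted to cap those same files at a fixed 1GB growth (just a sketch; the size is an example, and SMO takes it in KB), it's the same line with different calls:

gci | where{$_.Name -ne "MyDB_Data"} | %{$_.set_GrowthType("KB"); $_.set_Growth(1048576); $_.Alter();}
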
Tuesday, March 23, 2010

The xp_CMDShell Curse

I interviewed a couple guys for a DBA position the other day, and I asked one of my usual questions, which is simply to discuss xp_cmdshell and its implications. Without hesitation, I got the same answer twice. Both guys gasped and talked about the evils of xp_cmdshell. So I let them go on, and then presented them with a couple scenarios that we discussed. And again, all of this just makes me think about true evil and how xp_cmdshell just doesn't fit. See, this xp was developed for a reason: it filled a gap in missing functionality, and it does it very well. In fact, it does it so well, and the gap is so big, that xp_cmdshell gained in popularity and became an instant success. And in fact, it was used too much, and this is where it got its bad reputation. Don't hold that against xp_cmdshell though. Just because everyone recognizes its usefulness doesn't mean that it's bad. The problem started because everyone wanted to use it, and the security was opened wide, allowing damage to be done.

But it’s not evil;  it’s just misused.  If you keep it locked down there’s no reason why you can’t have a fairly nice relationship with xp_cmdshell.  The same goes for cursors.  Cursors have also fallen prey to the cmdshell curse.  And again, it’s through misuse. So don’t kick either of these out of your shop because so many other devs are stupid.  Use them both wisely and they’ll do good things for you.

Wednesday, March 10, 2010

People Leave Jobs

One thing I see from time to time is that when someone leaves a gig they've been at for a long time, their workmates get upset. It's like they're personally offended that the guy is choosing to work somewhere else. Look, I know it's easy to build relationships with the people you work with, and it's easy to bond with people you spend so much time with. Hell, you might even say you've been in battle together. What you have to realize though is that people have to do what's good for them. I can almost promise you the guy's not leaving to spite you. He's probably just tired of one thing or another. Maybe he's not getting paid what he thinks he should. Maybe the new job is just closer to home. Maybe the new job allows more training or flexibility. Whatever the reason, it's almost certain it's not to piss you off.

I know what it’s like because I went through that many years ago.  I got a couple guys I worked with that I got along with great and we all worked very well together.  Coming to work wasn’t a chore at all.  Then the first cool dude left.  I was crushed.  How could he do that to me?  After all we’ve meant to each other!!  Then the next one left, and then the next.  Finally I was all alone.  I want along this way until I went to my next job and found a couple guys I really liked (you know what I’m talking about, huh).  Then I left that gig because I got something better and I labored over whether to leave those guys or not.  Finally I decided that taking care of my family was more important.  Then I got it.  Those other guys weren’t leaving ME, they were just moving on to a better gig. 

So really guys, if someone you work with leaves for another gig, wish him well.  He’s only doing what’s best for him.  And if you really are a friend, that’s how you’ll look at it.

SSIS Email Chain

Today's blog is actually an email chain between me and a user. It's only a single question and reply, but I think it's good info.

Q:
Hey, would you agree that a monthly load process is better served as an SSIS – even if you have to push/pull from text files for now – than as a series of SPs or DLLs?

A:
if you're staying on the same box for the load then SPs can be an attractive offer because they're very fast and the memory stays in sql and can be managed quite well... if you move that process to ssis, and ssis is on the same box, then you have to allocate memory away from sql to run the pkg and house the data while in the buffer...



if ssis is on another box, but the data is still being moved to different dbs on the same box... so if the data is being moved from server1.db1 to server1.db2 and ssis is on server2, then you don't have to fight sql for memory, but now you incur the network cost of moving the data from the box, and then back to it...



if you're moving between boxes, then yeah, ssis is a better choice, because in SPs you have to manage linked servers or openrowset to make that happen and that's not cricket...



however, what makes ssis attractive in the single box scenario is that it handles errors easier and alerting is much richer and easier to come by... you can also more easily fix problems in the data itself and it's easier to extend... so if your requirements change and you need to switch it to another box, or if you need to send a copy somewhere else, etc, then that's much easier in ssis... ssis also gives you parallelism that you can't do in sps... you can load several tables at once in ssis where they have to be serialized in sps...



a good compromise in the single box scenario is to keep things moving like they are if possible, and where not stupid... so if they've already got an sp that inserts data into the 2nd db, then just call that sp from ssis... this way you get the insert and select process in sql where it belongs, and the workflow and error handling of ssis, and everything else that goes with it... if a single sp inserts several tables serially though, i'd create several sps and call them individually in ssis tasks... even if you have to keep them serialized (like for PK/FK issues)... because now you can even add things like checkpoints and individual error flows to your pkg steps and have a richer experience...



these things are meant to allow your process to grow and change much easier than is possible with an sp... so if one of the DBs gets too big and has to move to another box, it's a simple matter in ssis... but in an sp, that means you have to create and manage a linked server, which is just one more thing and they come with their own problems...



as for the DLLs... these processes NEVER belong in a DLL... not only do you have to steal memory from sql, but it's not optimized for data flow tasks... it's just a plain DB connection... you can get parallelism by calling the windows threading model, but you have to manage those threads manually and mistakes can be made... and that takes extra memory, etc... not only that, but the code isn't available for admins to fix problems, and making changes to compiled code can be dangerous, right... so no, that's not a good idea...



does this answer... i realize you prob wanted more of a yes, do this, but it's not that simple...
Tuesday, March 09, 2010

Deleting Production Data

While talking to a group of devs late last week the topic of deleting prod data came up.  Since I’m the big MVP of the group they wanted to know what the official word from on-high is.  The question is simple, and the answer is even simpler and frankly it’s one of those that was solved so long ago it just doesn’t hold any interest for me anymore.  I’ll hash it out here again though because I know there are plenty of you out there who haven’t solved this problem yet. 

The question is, what’s the best way to delete gazillions of rows out of a prod table.  The issue is with both concurrency  and rollback.  The problem is that if you do a huge delete in a single transaction you don’t leave any room for anyone else to access the table during that time.  So your delete can bring a DB to a complete halt if you don’t do it right.  As well, you put your system at risk should something go wrong and the transaction has to rollback.  So why would the transaction have to rollback?  Well, there are a couple reasons really.  The first, and most likely, is that it could fill up the log and with no more log the transaction can’t continue.  The other possibility is that the server or the service gets restarted because someone didn’t know you were doing a big operation.  And while I’m at it, there is one more logical possibility.  Someone could get impatient with the operation and stop it without realizing that it has to rollback.  This is actually a scenario that I’ve seen played out many times and the result is always the same.  The system has to wait for at least as long as it was going.  And what’s even better is when the DB doesn’t come back online right away, that same user will bounce the service again and again. 

And of course this counts for updates as well… in SQL2K this was accomplished like this:

SET ROWCOUNT 1000

While (1=1)
Begin
   Begin Transaction

   --Delete stmt here--

   if @@RowCount = 0
   Begin
      Commit Transaction
      Break
   End

   Commit Transaction
End

SET ROWCOUNT 0 --don't forget to set it back when you're done

However, this doesn’t work in SQL2K5 and above where you have to do something more like this:

set nocount on

while 1=1
begin
   delete top(1000)
   from table1
   where conditions…

   if @@rowcount = 0
      break
end

So ok, that’s how you batch delete/update in prod.  Only under very controlled and specific circumstances should you ever do these types of ops in a single batch.

Wednesday, February 10, 2010

What Makes PowerShell Awesome?

I was asked to write a quick blog on what makes PS so awesome. And while it's a huge topic with many many examples, I'll give you a couple just to get you going on how cool it is.

1. It makes it really easy to add things to your process that you never could before. Let's say, for example, that you want to make a change to your DB, but you want to back up the schema for an object or a set of objects first. This is so easy in PS it's not even funny. The code to script out objects is:

gci | %{$_.Script()}

That's it!!! And all you have to do is add a filter if you want to script only a certain schema, or a certain object (I'll show you a quick filter in a minute). And the code doesn't change if you want to script tables, views, functions, SPs, etc. All you have to do is change the node of the tree you're in... and for that you manage it like DOS. So to script out tables you would type this:

cd SQLSERVER:\SQL\servername\DEFAULT\Databases\MyDB\Tables
Then just run the script code above.
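
And here's what a simple filter might look like, say to script out only the dbo schema (a sketch; the schema name is just an example):

gci | where{$_.Schema -eq "dbo"} | %{$_.Script()}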

AND to script out the SPs, you just change the context to StoredProcedures like this:

cd ..
cd StoredProcedures

That will put you in:
SQLSERVER:\SQL\servername\DEFAULT\Databases\MyDB\StoredProcedures

Then just run the same script method from above.
So you can see that scripting objects is wicked easy. Now you can back up the schema of a table before you mess with it, or automate the scripting of SPs before you deploy a new version so you can easily roll back.

2. Now, since scripting things like we did above will just print the script out to the screen, you'll want to do something more permanent with them. Have you ever tried to write to a txt file from T-SQL? You know, you either have to mess with xp_cmdshell or go to one of the sp_OA procedures and make those external calls, right? Well, PS solves that problem. So let's script out the objects from above and save them to a file.

gci | %{$_.Script()} | out-file c:\MyTables.txt

That's it!!! So now you can save those things to a file and you've got them to use again.

That's not all you can do with files. You can also get stuff back out of them pretty easily. Let's say you have a list of servers you want to connect to, to check that the services are running, and you want to turn on the ones that aren't. I'm not going to show you the code for all of those operations here, but to use a txt file as a list of servers, all you have to do is read the file and iterate through the items. So here I'll just get the list of servers from a file and print them to the screen, because what you do with them isn't nearly as important here as being able to get them easily.

get-content c:\ServerList.txt | %{$_}

So really, I just listed the servers to the screen one at a time.
The % character is an alias for ForEach-Object. So for each item in that txt file, print it to the screen. Easy, huh?

And just for completeness, you can easily work with services by calling the Get-Service cmdlet, or the WMI classes for remote boxes.
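
Putting the file trick and the services together, something like this would do it (a sketch; the service name assumes a default instance, and Get-Service's -ComputerName parameter is new in PS 2.0):

get-content c:\ServerList.txt | %{
    $svc = get-service -ComputerName $_ -Name "MSSQLSERVER"
    # ServiceController.Start() fires against the remote box
    if($svc.Status -ne "Running"){ $svc.Start() }
}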

3. Getting info for objects has never been easier. Let's say you want to get all of the sizes for all of the tables in the DB and do something based on the size. While we're not going to do the entire script, I'll show you how to get the size for all the tables.

First, switch to the tables node.

cd SQLSERVER:\SQL\servername\DEFAULT\Databases\MyDB\Tables

Then run this code:

gci | %{$_.DataSpaceUsed}

That'll just print them out to the screen, or you could pipe them to a txt file like above, or you could continue working with them in the pipeline and manipulate certain tables based off of whatever you like.
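
For example, to pick out just the tables over 1GB (a sketch; DataSpaceUsed reports in KB, so 1048576 is the line):

gci | where{$_.DataSpaceUsed -gt 1048576} | %{$_.Name}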


There are so many more things you can do with it to make your DBA job so much easier that it would make your head spin. I don't know how we ever got along without it.

Ok, so before I go I'm gonna do just one more because I do this one a lot.
I'm not going to give the code right now, but I'll tell you about it.

Let's say you've got a new employee and you need to add his acct to a Windows group on 30 boxes, and you need to add him to SQL groups and give him access as well. So he's got to be able to TS into these 30 boxes and have certain rights in all the DBs on these servers.

With PS I can create a list of servers and either keep them in a table, or put them in a txt file and iterate through them and add the user both to Windows groups, and to SQL groups in a single script. So when's the last time you were able to do that without logging into all the servers individually yourself? And it's really not that hard.
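
Just to give you the flavor though, a stripped-down sketch might look something like this (the domain, account, group, and role names are all made up; it assumes ADSI for the Windows side and SMO for the SQL side):

[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
get-content c:\ServerList.txt | %{
    $server = $_

    # Windows side: add him to a local group through ADSI
    $group = [ADSI]"WinNT://$server/Remote Desktop Users,group"
    $group.Add("WinNT://MyDomain/NewGuy")

    # SQL side: create the login and put it in a server role
    $srv = new-object Microsoft.SqlServer.Management.Smo.Server $server
    $login = new-object Microsoft.SqlServer.Management.Smo.Login $srv, "MyDomain\NewGuy"
    $login.LoginType = "WindowsUser"
    $login.Create()
    $srv.Roles["dbcreator"].AddMember("MyDomain\NewGuy")
}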

So PS is well worth your time getting to know. If you need help getting started with resources, then ping me and I'll be glad to hook you up.

I've got some really good stuff on PS on MidnightDBA so go out there and check them out.

Here's an overview vid I did back in 2008:
http://midnightdba.itbookworm.com/VidPages/PowerShellOverview/PowerShellOverview.aspx

And you can see most of my other PS vids on the powershell tab of this page:
http://midnightdba.itbookworm.com/Admin.aspx

Good Luck.
Tuesday, February 09, 2010

A Timeless Lesson

You think I’d learn after all this time in DBs not to take the data for granted.  Here I’ve been troubleshooting a complicated query because I had some anomalous results when it turns out that the data wasn’t categorized correctly and I spent all that time on the query for nothing.

I’m telling you this not only to remind you to not take your data for granted, but to show you that no matter who you are (like I’m anyone special) you can make these types of mistakes.  So don’t be so hard on yourself if it happens to you because truth be told, I still do far worse on a regular basis.

We also posted a new DBAs@Midnight vid here:

It’s all about how to break up with your boss when you find a new gig.

http://midnightdba.itbookworm.com/VidPages/DBAsAtMidnightBreakUpWithBoss1of4/DBAsAtMidnightBreakUpWithBoss1of4.aspx

Wednesday, February 03, 2010

Ken’s Legacy

Some of you may remember an obscure little man who used to take a shot at writing now and then. His name was Ken Henderson. And he wrote (among others) a small series of books called the Guru's Guide to SQL Server. He also wrote a little tome on SQL internals. Now, Ken isn't with us anymore, but his legacy lives on. I remember sitting in Ken's office at MS a few times talking about his vision for what he wanted his books to be. His goal was to teach at several levels. What he wanted was for the beginners and the experienced alike to get something out of his books. And he succeeded very well. I can still go back and read some of the Guru's Guide books and understand things I didn't get before. I remember saying that to Ken once and he just smiled. Then he said, finally somebody gets it. I want people to be able to grow into my material.

So what brought all this on? Well, as I do sometimes, I was looking through one of my favorite blogs to kinda keep up on things. I typically go through the back posts to make sure there's nothing I missed the first time, or to see if there's anything I've grown into since reading it last. And by looking at the writing style, and the intent behind it, I see that this author has the same principles that Ken held. He not only tells you the most intimate details of the inner workings of SQL, but he does it in a way that allows you to learn at different levels. I don't even think he does it on purpose, because like Ken, he's just being himself: a true teacher.

And I’m not sucking up. The only reason I’m writing about this is because it really hit me tonight because I just finished going over an old chapter of Ken’s before I went through the blog.

So since I read his material the same way I read Ken's, I'm naming Paul Randal the new Ken Henderson. Paul, you make me proud to be your friend.

Wednesday, January 27, 2010

Experts are Sharp

You know, I was thinking just this morning about the last round of interviewing I did to find a new DBA at work. And that of course got me thinking about some of the interviews I've done in the past. There are a few that really stick out. The ones that stick out right away are the candidates who didn't know anything, yet claimed they had so much experience and were so good that they didn't have to be bothered with memorizing every little thing anymore.

This astounds me because all the experts I know are really sharp and on top of their game.  So what these guys are telling me is that they’re so good they don’t have to demonstrate even the most basic knowledge of SQL because they’ve transcended above that?  If that’s the case then my mother’s 100x the DBA any of us will ever be because she doesn’t know the first thing about it.

I remember this one guy especially. He claimed both on his resume and in person to be an expert in query tuning. He said, I've never found anyone who's my equal at tuning queries. So armed with that bit of knowledge, I set about quizzing him on the basics. I mean after all, you have to just get the basics out of the way, right? I asked him if he had ever worked with exec plans. He said of course, you don't tune queries without them. I said, that's what I think, but I just wanted to make sure we were on the same page. And I then asked him how expert his knowledge was of exec plans. He said he was a very deep expert and didn't know anyone with his knowledge. Wow, now I'm getting a little nervous, right?

So I started with the basics.  What’s the difference between an index scan and an index seek?  Well, I’m not sure the exact difference, but I know you want to get rid of one of them.  OK, which one?  I can’t remember.  Um, ok.

So what’s a bookmark lookup (this was back when SQL2K was stull ubiquitous)?  I’ve seen it before, but I’m not sure what it does.

We went back and forth like that a couple more times and I finally broke down and told him that there was no evidence that he had ever tuned a query because he didn’t even have basic knowledge of exec plans.  I asked him what he was basing his claim of being an expert on.  That’s when he let me have it.  Look, I’m an enterprise DBA and I don’t have to know every piddling definition you dig up out of BOL.  Maybe someday when you’re at the level I am you’ll understand.

Um… ok, I’d say we’re done, huh? 

So like I said, I was thinking about that this morning, and while I can't keep up with everything, and nobody can, I like to think that I've got a lot of the basics covered. And the real experts certainly know their stuff. Go ahead and see how many people would still follow Kalen if you asked her how big a SQL page is and she couldn't answer. And how many people do you think would follow Paul Randal if he couldn't tell you what DBCC CheckDB() was for?

It just doesn’t hold water.  So for those of you out there thinking you’re all the Pooh, go test yourself and see how much knowledge you really have.  You may find out you’re not as hot as you thought.

Monday, January 11, 2010

Bad Guys

Here I am on a conf call while shooting bad guys with my baby boy.

Wednesday, January 06, 2010

Valid Opinions

A friend of mine was in a meeting yesterday and he called me right after because he was upset. Apparently he and someone else had conflicting opinions about how to do something that neither one of them had ever done before... so they were both essentially guessing.

He was mostly upset because the other guy's opinion was being treated as just as valid as his, and so far it's brought the group to a standstill while they figure this out.

I told him that there's no way both opinions are equally valid. In some instances that may be the case, but in this one, I happen to know who the other guy is, and I know that when it comes to SQL brains he definitely got the short end of the stick. So I'm saying it outright: not all opinions are equally valid. My mother-in-law likes to put her two cents in when we're sitting around talking computers. She doesn't know the first thing about it, but she likes to give her opinion anyway. So when I ask Jen's brother something, his answer is worth more to me than her mother's. The same holds true here, right? Just because someone is in a group and was invited to a meeting doesn't mean they have an equal say in things.

Here's another really good example... let's say there's a weird recovery scenario, or even a corruption scenario, that you come across. And you come up to me and Paul Randal at a conference and ask us what our opinions are. I'd say that Paul's opinion in this case is worth far more than mine. Sure, I'm a qualified DBA and I've been around the block more than a few times, but Paul WROTE CheckDB, so I think he carries a little more weight than I do. Even if it's something neither of us has heard of before, I'd still take his guess over mine.

So no, I'm not one of those who believes that everyone's say is as important as everyone else's. Hell, I don't even believe that everyone should have a say. Some people are just not qualified to give opinions on a topic. In a restaurant you don't see the dishwasher getting a say in what specials go on the board, and in a dojo you don't see white belts chiming in on black belt promotions. So why is it that everyone in IT thinks they deserve an equal say just because they were invited to the meeting?
