MUCH better than Vista.
If you're a Microsoft employee and the goons from ITG(*) are trying to rip your favorite OS from your cold, dead hands and make you run Vista, don't - use Server 2008 instead.
There's no LUA/UAC/this idiotic thing that asks you whether you really intend to do what you just asked your computer to do. ipconfig works in a normal shell, instead of demanding one with elevated privileges. It takes literally half as many clicks/keystrokes to do everything.
There are manifestly fewer bugs. It did not fall apart within the first couple of weeks of use. And it seemed much snappier than Vista (although I did not do any real performance benchmarks).
Here is a blog post that has instructions on how to configure Server 2008 into a workstation: http://blogs.msdn.com/vijaysk/archive/2008/02/11/using-windows-server-2008-as-a-super-desktop-os.aspx.
And this one claims that it's 20% faster than Vista, but does not give details on how this number was arrived at: http://vista.blorge.com/2008/03/11/windows-server-2008-is-20-faster-than-vista/.
(*) ITG - Microsoft's Information Technology Group. It has changed names multiple times over the years, but its essence stayed the same - these are the people who prevent developers from doing their jobs in the name of security. But that probably merits a separate post...
I have a lot of storage in my house - not counting client computers, there is approximately 9TB of usable (after RAID) redundant space. Most of my data lives on this storage - software, music, pretty much everything.
Every time I buy anything, I immediately copy it to the server and put the originals in a big box in the basement, where they will stay until the day the BSA people show up at my door and demand proof of license :-).
My own data is replicated to multiple servers - approximately 300GB of home videos and digital photos, plus other, smaller stuff that accumulated over the years.
Storing stuff requires storage. Storing terabytes of things requires redundant storage. Over the years, I sampled a few RAID-5 solutions, both at home and at work.
There are two major problems with RAID controllers.
(1) While they protect you from a disk failure, they do not protect you from the failure of the controller itself.
Since all of these controllers use proprietary metadata, stored on the disks themselves, to describe the RAID array, they are not interchangeable - you can't take a bunch of disks that you used in RAID mode on an LSI controller and move them to an Adaptec one (while preserving the data that's on them, that is).
In fact, there's no guarantee that you can move disks between the controllers from the same manufacturer. Or between controllers with different versions of firmware. Or...
So 5 years from now, when the RAID card fails, one can very easily be stuck trying to find an exact replacement for a controller that has been out of production for the last 4.5 years. eBay, anyone?
(2) Software that accompanies these cards is often crappy.
The UI is almost always some atrocious Java program obviously written by a contractor in 2 days right before the product shipped, rife with misspellings and terrible English usage.
I have had multiple problems where midrange cards corrupted data when used on machines with more cores than the manufacturers originally expected.
So unless you test the disk failure scenario right upfront, BEFORE you put any data on it, you may well discover that (a) the recovery mode does not work, or (b) because of an unintuitive UI, you did something that wiped your disks instead of recovering them.
And good luck finding drivers when you upgrade to the new OS. And since you can't easily move the disks to a new controller... see (1).
Luckily, the Microsoft Server family has a software RAID subsystem (I am sure Linux has something similar, but coming from Microsoft, I am more familiar with Windows software).
To use it, you first make your disks dynamic; then you can combine multiple disks into RAID-0, 1, or 5 volumes. Volumes of different types can share the same set of physical disks, so, for example, parts of disks 1 and 2 plus the whole of disk 3 can contribute to a RAID-5 volume, while the remainder of disks 1 and 2 forms a RAID-0 temp volume.
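For the curious, such a layout can be created from the command line with diskpart rather than the Disk Management GUI. Here is a rough sketch - the disk numbers and volume size are assumptions, and you would obviously adjust them (and triple-check them!) for your own machine:

```shell
rem Hypothetical diskpart script; disk numbers 1-3 and sizes are examples.
rem Run "diskpart /s thisfile.txt", or type the commands interactively.
select disk 1
convert dynamic
select disk 2
convert dynamic
select disk 3
convert dynamic
rem RAID-5 volume using ~500GB from each of the three disks
create volume raid disk=1,2,3 size=500000
rem Striped (RAID-0) temp volume from the remaining space on disks 1 and 2
create volume stripe disk=1,2
format fs=ntfs quick
assign letter=t
```

Note that converting a disk to dynamic is easy, but converting it back to basic requires wiping it, so this is not something to experiment with on disks that hold data.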
The advantage of soft RAID is that it's hardware-independent. You can take all these disks, shove them into any other computer (running Server), and it will still be a RAID volume with all your data intact. The same disk packs can be read on both Server 2003 and Server 2008.
It's an insanely cool idea. Unfortunately, in Server 2003, it was coupled with atrocious implementation.
Soft RAID-5 on Server 2003 is slow. Glacially slow. On my servers that feature dual Xeon 5130s (4 cores per server), with a very decent server motherboard (Supermicro X7DVL-E), and fairly decent mid-range SATA controllers, it was barely doing 20MBps writes, and sometimes would drop to 10MBps for extended periods of time. That on disks that are individually capable of 300MBps transfer rate.
RAID-5 works by partitioning disks into chunks, then combining chunks from N disks to get N-1 chunks' worth of data and 1 chunk of parity. Which means that to write a single sector to the volume, the RAID has to read the corresponding chunks from the other disks (or just the old data and the old parity), compute the new parity, and write 2 sectors - one data and one parity.
A really terrible implementation would not cache the results of this read, so if the next sector needs to be written, it would repeat all the operations anew, instead of reusing the results of the previous reads.
The only way I can explain the RAID-5 write speed on Server 2003 is that it was this very terrible implementation, although I don't know for sure - an alternative explanation is that maybe they had sleep cycles in there :-).
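Since the parity math is just XOR, the whole scheme fits in a few lines. Here is a minimal sketch in Python (my own illustration, obviously not the actual Windows implementation) showing both the rebuild property and why a sane implementation only needs the old data and old parity to update one chunk:

```python
from functools import reduce

def parity(chunks):
    """Parity chunk = byte-wise XOR of the given chunks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

# A 3-disk stripe: two data chunks plus their parity.
d0, d1 = b"\x0f\x0f", b"\xf0\x01"
p = parity([d0, d1])

# Any single lost chunk can be rebuilt by XOR-ing the survivors:
assert parity([d1, p]) == d0

# Read-modify-write: updating one data chunk needs only the old data
# and the old parity (new parity = old parity ^ old data ^ new data),
# not a re-read of the entire stripe:
new_d0 = b"\xaa\xbb"
new_p = parity([p, d0, new_d0])
assert new_p == parity([new_d0, d1])
```

So even a modest implementation that caches the chunks it just read should amortize most of the extra I/O on sequential writes - which makes the Server 2003 numbers all the more puzzling.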
So when Server 2008 came out, I could not wait to install it and check out its soft RAID implementation. I installed it first on my media server, and then on my data server.
Overall, I was quite impressed. Of course my expectations were very low to begin with because of Vista, but this thing was closer to Server 2003 than it was to Vista. I hit a few bugs right upfront - it hard hung once within a couple of days of installation, and then lost a set of disks (but recovered after reboot).
I am not quite ready to blame it on the OS itself, though, because I had added an unknown RAID controller to the machine, and it is more than likely that buggy drivers are to blame. The second server, which did not have that controller, did not (yet) exhibit this behavior.
Since then, it has been relatively quiet, and everything functioned the way it was supposed to.
The drivers for SATA controllers from Server 2003 worked on Server 2008. The chipset drivers for the motherboard did not. I found that the chipset support for Server 2008 is still quite scarce.
What is unquestionably a bug in Server 2008 is that on RAID volumes the performance counters for logical disks are completely broken - everything except disk free space and idle time reads 0 even when the volume is reading or writing at full speed.
But most importantly, its soft RAID implementation is way faster than Server 2003's. I have copied a few terabytes of data so far, and on writes it does a sustained throughput of ~80MBps - 4 times the peak performance Server 2003 could muster. The reads (comparing files between two servers) almost saturate a 1Gbps network.
So far, this thing gets my stamp of approval :-).
It remains to be seen whether it has the long-term stability to last between Windows Update reboots (just in case, I preserved the original Server 2003 installations). I will report on this in a couple of months if everything goes well, earlier if it does not.