Sunday, October 19, 2008

How to build a server for your house

Five years ago there came a moment in my life when I realized that I was being overwhelmed by the amount of data in our household.

We had just bought a digital camera, and started accumulating tons of digital images that needed reliable storage. There was also software on CDs that kept getting lost; various random bits of data - code, text, Word documents - that spanned the last ten years and occasionally needed retrieving; and mp3 files.

Meanwhile, the number of computers in the household kept increasing. My wife and I had laptops; my daughters had their own PCs. All were networked using a Windows CE-based router which also served as a print, web, and small file server.

After a brief attempt at using USB drives, I quickly came to the conclusion that they were not the answer. There were several problems.

First, it is impossible to tell if the data on a drive is good. There were bit errors introduced by copying. A bad sector in the middle of a file would suddenly make it unreadable, but you wouldn't know until you tried. And of course, occasionally the whole drive would go bad. At least the latter problem is easier to discover, if not fix.

Second, I ended up with lots of them, fast. Because I wanted to protect against hardware failures, I would usually create multiple copies of the data. Then I would forget where the data was, and how many copies I had. Most of the time I would fail to update all the copies. Towards the end I started treating the disks as write-once media, creating a new backup every time I remembered to do it, on the theory that at least one of the copies would survive a disk/sector/bit error.

Finally, USB drives are slow. I don't know where they got the 480 megabit figure - I was getting 10 megabytes per second (100 megabits, if you account for the transport overhead), and even that only when the disk was directly connected to the PC, and there was only one disk. Plug a couple into a USB hub, and the transfer rate quickly drops by half.

So when Microsoft shipped Small Business Server 2003 and gave a copy of it to every employee of the Windows team, I pounced on the opportunity, bought a second-hand workstation from one of the PC-Recycle shops that sprang up around Microsoft, and built my first home server.

It turned out that building various servers for homes quickly became my hobby. In the next five years I've tried almost every possible variant, from dirt-cheap Fry's motherboard/CPU combinations to super-expensive Xeon/Supermicro performance monsters, from SBS 2003 to Server 2008. I've even helped start (as the very first developer, and then dev manager) the Windows Home Server project.

All in all, between home and work I've built perhaps 20 servers. I have 4 of them in my basement right now, totaling roughly 10TB of usable protected storage, doing my web site and email hosting.

The write-up below documents what I learned as a result.

The only caveat is that the software section is entirely Windows-based. Since I work at Microsoft, I can buy our own software for drastically discounted prices at the company store. If I didn't, perhaps there would be an incentive for me to look at what's available in the Linux world.

Or maybe not - one can buy SBS 2003 for around $300 these days, and this comes with Exchange, a domain controller, a file server, and everything else in one package conveniently wrapped in one installer. Would it be worth it for me to learn Linux to avoid spending $300? I don't know.

Anyway, let's get to the meat!


There are three priorities that you should follow when building a home server.

First, it must be reliable.

Unlike a personal computer, a home server is always on, and is rebooted extremely infrequently. As a result, it is prone to weird errors that are usually invisible on a typical personal computer - such as spurious bit flips in memory and during data transfers.

When my home servers were cheap Fry's motherboard/CPU combos, they would hang every month or so, and would produce bit flips during data transfers - usually one per 100-200GB of files copied. It was then that I got into the habit of diffing the source and destination every time I copied files around - and in many cases I discovered that I was lucky I checked.
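This habit is easy to automate. Here's a minimal sketch in Python - the helper names `file_digest` and `diff_trees` are mine, not part of any tool mentioned here - that hashes every file in a source tree and compares it against its copy in the destination:

```python
import hashlib
from pathlib import Path

def file_digest(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def diff_trees(source, destination):
    """Compare every file under `source` against its copy under
    `destination`. Returns relative paths that are missing or differ."""
    source, destination = Path(source), Path(destination)
    bad = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = destination / rel
        if not dst.is_file() or file_digest(src) != file_digest(dst):
            bad.append(rel)
    return bad
```

A single flipped bit anywhere in a file changes its digest, so this catches exactly the kind of silent copy corruption described above.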

Therefore, a server motherboard and ECC memory are absolutely required. Say no to cheap desktop hardware - in most cases the motherboard should be in the $200+ range, and make sure it actually supports ECC memory before buying it.

Second, it must be able to accommodate many internal hard drives.

While you can add external drives, there are several problems with that. First, external drives usually cannot be combined into fault-tolerant arrays, which is an option you will most certainly want (more on this below). Second, they make the system fragile - they fall off, and the cables spuriously disconnect. Third, they create an unsightly - and, more importantly, hard-to-navigate - mess of cables and power supplies that makes the server impossible to move.

Definitely get a full tower case - not a mini- or mid-tower. Make sure that there are fans near the internal 3.5" hard drive bays, so the drives are properly cooled. Check the 5.25" bays - it is best if there are no rails in these slots. Smooth walls, where the equipment is held in place only by screws, are the most versatile: many SATA enclosures do not have grooves and won't fit into multiple 5.25" bays without modifying the case.

Do buy a SATA enclosure that fits the case - you will want a minimum of 11 drives per server, and there are no cases with 11 internal drive bays.


Also, hard drives are power-hungry, especially on boot when they spin up. The power consumption of my server (which has 11 drives) jumps to 500 watts on boot, before receding to around 270 watts in steady state. If 500 watts does not seem like much, consider that most of this power is consumed on the 12V rails, whereas many modern power supplies are optimized for maximum wattage on the 5V rail - the video card.

A 750W PSU is barely enough for 9 drives - get a minimum of 850W for 11, and 1200W if you plan to have 16. Do pay attention that enough power is available on the 12V rails - allow 40W per drive.
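As a back-of-the-envelope check, the 40W-per-drive rule can be turned into a quick calculation. This is my own sketch - the flat 250W base load for the motherboard, CPU, and fans and the 25% headroom factor are assumptions, not numbers from any PSU datasheet:

```python
def recommended_psu_watts(n_drives, watts_per_drive=40, base_watts=250, headroom=1.25):
    """Rough PSU sizing: spin-up draw for the drives (40W each, mostly on
    the 12V rails) plus an assumed flat allowance for motherboard, CPU,
    and fans, with 25% headroom on top."""
    peak = n_drives * watts_per_drive + base_watts
    return int(peak * headroom)
```

For 9, 11, and 16 drives this yields roughly 760W, 860W, and 1110W - in the same ballpark as the figures above.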

And do get a brand name - this tends to be one of the more important components for the reliability of the system. I have used CoolMax and Thermaltake, and both were quite reliable.

Third, the server should be QUIET and power-efficient.

These two things go together.

Drives generate a lot of heat, and heat takes many fans to dissipate. Pick the quietest fans (12cm or above) that are still powerful: if the temperature rises inside the case, the CPU fan (which is small, and therefore has to spin at higher RPM) tends to sound like a rocket engine. Even if the server is in the basement, you probably do not want it to be the most audibly prominent thing there. And forget the closet, unless it has proper ventilation :-).

Do get a CPU with decent power management. Currently, the 3000-series Xeons seem to offer a decent tradeoff between power consumption and compute power.

Also keep in mind that a big server might dissipate ~250-300W of heat. This is equivalent to 1/2 of a portable oil heater, operated on low. And it's always on. Check the electricity prices in your neighborhood :-).
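To put a number on that electricity bill, here is a quick sketch - the 10 cents per kWh rate is an assumption; substitute your local rate:

```python
def annual_energy_cost_dollars(avg_watts, cents_per_kwh=10.0):
    """Yearly cost of a device that is always on:
    watts -> kWh per year -> dollars."""
    kwh_per_year = avg_watts * 24 * 365 / 1000
    return kwh_per_year * cents_per_kwh / 100
```

A server averaging 270W comes out to about 2,365 kWh, or roughly $237 per year at that rate.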

What is less important?
Rejoice - there ARE a few things you do NOT need to care about. They are usually the things you care most when building a desktop machine.

Unless you plan to run a virtual farm, CPU computing power is not important. But CPU I/O bandwidth is. Most desktop processors optimize for the former. Many server processors optimize for the latter.

As of this writing, I would recommend the Xeon 3xxx series. They are relatively cheap, have great power efficiency, and more than enough juice to run any Microsoft server OS. They do not support dual-socket configurations - but two cores are plenty for serving files.

Obviously, you don't care about graphics - a built-in video adapter is available on most server motherboards, and it's more than enough.

More controversially, I do not recommend spending money on hardware-based RAID controllers.

(A short aside about RAID.

There are 3 popular modes of combining multiple hard drives into one storage array for speed and/or reliability. All three can be accomplished in software - with the storage driver doing the work - or in hardware - with the controller card doing the work.

RAID0 places half of the data on one hard drive, and half on another, in stripes. As a result, you are writing your data to two disks simultaneously, effectively at twice the speed. But if either drive fails, all your data is gone - the probability of array failure is twice the probability of failure of a single drive. Because the task can be accomplished by scheduling two DMA transfers in parallel, the best hardware implementation is not any faster than a software one.

RAID1 places identical copies of the data on both hard drives. If one fails, the other survives, and the array can be repaired by substituting a new drive for the failed one. RAID1 arrays are very reliable - but you pay twice for protecting the data. Hardware and software speeds are the same, for the same reason as above.

The best way to visualize RAID5 is to imagine a system of 9 drives where the first 8 store 1 bit of every byte, and the 9th stores the XOR of all the bits. If any of the data drives fails, its contents can be regenerated from the other 7 and the XORs (and if the XOR drive fails, the mask can be regenerated from the data). Note that, excluding the cost of generating the mask, such a system could be 8 times faster at transferring the data. This is not how a real RAID5 system works - the data is striped, and the XOR is computed from the corresponding stripes - but it's close enough. Real RAID5 arrays can consist of any number of disks greater than two - N-1 drives store the data, and the Nth drive stores the XORs (again, it's more complicated in reality, but we'll skip the gory details).

RAID5 protects data for much less than RAID1, because of N drives only one is given up for parity. For this reason alone you will want to use RAID5 in your home server. Given a powerful CPU, a good software implementation can give hardware RAID5 a run for its money. As we will see below, the quality of the software RAID5 stack in Windows Server varies.)
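The XOR trick at the heart of RAID5 is easy to demonstrate. The sketch below illustrates the principle only - real implementations stripe the data and rotate parity across the drives - and the function names are mine:

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def make_parity(data_blocks):
    """The 'XOR drive' contents: the parity of all data blocks."""
    return xor_blocks(data_blocks)

def rebuild_lost_block(surviving_blocks, parity):
    """A lost data block is the XOR of everything that survived,
    including the parity block (x ^ x cancels out)."""
    return xor_blocks(surviving_blocks + [parity])
```

Because XOR is its own inverse, XORing the surviving blocks with the parity makes every intact block cancel out, leaving exactly the lost one - which is why losing any single drive is recoverable, and losing two is not.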

All versions of Windows Server support software RAID5. I am sure Linux supports it as well. Note that if your RAID controller is cheaper than $400, chances are that its RAID5 implementation is in software, not hardware, anyway.

The most important consideration about hardware RAID is that it protects you from a disk failure, but not from the failure of the controller itself. Because the format of the data on hard disks managed by hardware RAID depends on the RAID card, the RAID controller cannot be substituted with a different model. So if your controller dies, so does your data if you can't get an exact replacement - which is very likely if it dies after a few years.

For this reason I prefer using software RAID - it is standard, it is hardware independent, and if your computer dies, you can put the disks in another computer and the array will be recognized.

I typically have 2 arrays of 5 disks each per server, plus one small (200GB) system disk.

Speaking of disks - a larger number of smaller disks gives you better reliability and better price/performance than a few very big, expensive disks. I usually go for whatever is available around $100, which is typically two sizes below the current maximum.

For example, as of this writing the biggest available disk is 1.5TB. 1TB retails for ~$150, and 750GB drives can be had for ~$90. 750GB is what I'd use. For just below $1,000 in drives, you'd get 6TB of usable protected space (10 drives in two 5-disk RAID5 arrays, 3TB of usable storage per array).
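The arithmetic behind that figure, as a sketch (the function names are mine; the prices are the 2008 figures quoted above):

```python
def raid5_usable_tb(drives_per_array, arrays, drive_tb):
    """Each RAID5 array gives up one drive's worth of space to parity."""
    return arrays * (drives_per_array - 1) * drive_tb

def drive_cost_dollars(drives_per_array, arrays, price_per_drive):
    """Total spent on array drives (the system disk not included)."""
    return arrays * drives_per_array * price_per_drive
```

Two 5-disk arrays of 750GB drives at $90 apiece: 6.0TB usable for $900.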

The memory. You don't need to stuff your computer with memory. For any reasonable home usage, 2-4GB is enough. In a corporate environment where hundreds of people may be using the system concurrently this is obviously not true, but for a home, it really is enough.

So in short, hardware-wise the rules are simple: a big case, a big power supply, a server motherboard, 2-4GB of ECC memory, one midrange Xeon CPU, and a simple, run-of-the-mill SATA controller card with as many cheap disks as possible in software RAID5 arrays.


Again, this only covers the Windows options. I do not know much about what's available on Linux.

Windows Server 2008


  • Works really well as a file server.

  • RAID5 implementation is very good.

  • Lots of useful data-management features.

  • Fast.

  • Secure.

  • Available to current MSDN subscribers at no incremental cost. You keep your license key even if your subscription ends.


  • If you don't work at MSFT, it's $800, plus CALs.

Windows Server 2003


  • Cheaper than Server 2008 - can be had for $500.


  • Software RAID5 implementation is crappy. Expect ~15MBps throughput.

  • $500 is still a lot of money.

Windows Home Server

Full disclosure: I was the first dev and then the dev manager for the v1 of WHS.

This is an interesting product, and its target market is, well, homes in need of a server. It is relatively cheap ($140 at Newegg, although that is an OEM version, which means that once registered, it is tied to the motherboard). It comes with a great backup solution, a UI custom-made for home server scenarios, and a nice remote access portal.

Its weakness is the storage system.

It is a conceptual variant of RAID1 - you can have some of your files duplicated. If you have a lot of data, though, you're paying twice for its protection - for me, that would not just be prohibitively expensive, it would be physically impossible to cram that many disks into the server chassis. The recommended way to expand Windows Home Server is via external USB disks, but (see above) I do not recommend doing that, because it makes the physical installation fragile.

And because the UI and all other subsystems in Windows Home Server are tied to its custom storage management solution, you can use native Server 2003 software RAID5, but you would then have to keep those disks on the side and leave them unconfigured in the Windows Home Server UI. You would not be able to put backups on them.

To do this, install Windows Home Server on a computer with a single big hard drive. Do not create any shares, and do not put any data on the shares created for the users. Then, AFTER the installation, add more disks and configure them in RAID5 arrays using the native Server 2003 UI. Leave these disks unconfigured in the Windows Home Server UI. You will also need to use the native UI to share them out.

Another way to use Windows Home Server is to put it on one big, hardware-managed RAID5 system. You do have to pay more for the hardware and be mindful of a possible controller failure, but given the substandard performance of the 2003 software RAID, you will win on performance.

A relatively small disadvantage of Windows Home Server compared to the other options is its inability to join a domain, or be a domain controller. This had to be done to justify the much lower price.

Small Business Server 2003

This is the product with potentially the best price/performance ratio that has ever come out of Microsoft. It is a full implementation of Server 2003 with Exchange server, SharePoint, and a nice remote access portal, and you can buy a retail version of it for a mere $300 (search for Small Business Server 2003 Standard; be careful - a lot of the options are CAL packs, not the software itself; find the RETAIL version - OEM copies are only marginally cheaper, but tied to the motherboard once you register them).

It comes with an installer that sets it up as a domain controller and configures its components. If the PC has two network interfaces, it can even be used as a router! It serves as a DHCP server, has a DNS proxy, and even has a VPN server built in, so you can access your home network from the outside.

And yes, you do get a real domain for your home. Your user accounts and passwords are now centralized, and any user can log in to any machine in the household using the same name and password. It's really cool!

In addition, you can host your own web site, your own email, share the calendar with the family members (Microsoft Outlook is included), etc. Highly recommended!

Two gotchas.

First, do not put any data on the system hard disk (the whole disk, not just the partition): the write cache on that disk is turned off because of the Active Directory database, so accesses are SLOOOOOOOW.

Second, SBS is restricted to being a domain controller: it cannot join any other domain.


I use the following relatively simple rules to ensure that my data is healthy.

  1. I ALWAYS diff source and destination on multi-hundred-gigabyte data transfers. Bit flips do happen when you're dealing with a lot of data. A single bit flip can render a ZIP file useless.

  2. I keep all data on RAID5 arrays. I keep multiple copies of truly irreplaceable data, such as digital pictures, videos, and documents. There is one server for such data where two RAID5 arrays are mirrored by hand (and are periodically diffed).

  3. Once every few months, I back up the irreplaceable data to a large drive and store it offsite.

  4. The truth is always on the servers. All other computers may periodically keep a cache, but all modifications are regularly transferred to the servers.

If you've made it this far, congratulations! You can now build servers for your home for fun and profit, and avoid all the mistakes I've made when building mine :-)!


Anonymous said...

Another option is the open source home server from

It is actually far more powerful to use than those listed. It even configures a VPN for you so you can securely access your home network from anywhere. Furthermore, it is driven through an easy to use web browser interface!

Take a look:

Dave said...

Also, Windows Server 2008 Standard Edition is available to students for free through Microsoft's DreamSpark program.

dzembu said...

Nice write up Serge! :-)

1) What are you store on your servers ? How many GB each group of files takes ? ( Like home video , ripped video ...)

2) Do you really need all these? Do you have time to watch these videos?

3) Will you ever edit home DV ?

4) Do you know that there are Netflix and instant View ( with now + 2500 new recent flicks ) ?

4) Apple TV and rentals ?

5) Fotki, Flicker, YouTube?

6) Gmail ?

7) How you "check" data integrity ?

8) What is your "off-site" plan ( still USB disk - how much info you really "protect" this way ?) ?

9) How much ENERGY your server farm consume and how much it cost per/year.

10) How much is h/w s/w cost overall ?

11) What SERVICES are you Servers run ?

12) Why do you need to VPN home?

13) What is wrong with 100$ per year unlimited Web Hosting ?

( you can drop me e-mail if you do not like to share )

Sergey Solyanik said...

To Dzembu:

You're arguing about owning vs. renting. There's nothing wrong with renting model. There're people who buy cars and there're people who rent them - it's all about personal preference.

I like owning my data. The argument about cost does not make any sense, because there's no way to price "fun" :-).

dzembu said...

If you like to own your data I would suggest building a good "cage" around your server farm. While ugly boxes are less attractive compare to laptops as I found out, I would not push the luck too much :-)

BadTux said...

Agree on your hardware specs, other than your Xeon. I prefer the Opteron here because while it doesn't have the CPU grunt of the latest Xeons, it has better I/O bandwidth with an appropriate multi-bus server motherboard. That makes a significant difference with software RAID1 writes in particular, where you can put the mirror drives on different I/O busses and your server OS can drive both busses at the same time. And of course don't forget gigabit networking everywhere. Even N-spec wireless is much slower than hardwires straight to your server through a gigabit switch.

I'm not sure that, with 1TB drives being so inexpensive, I'd want a case with as many drives as yours. My current case will hold 8 drives. That's 4 terabytes if I use mirrored RAID, which is pretty much all that I trust with the Linux RAID stack (I've had no end of problems with the Linux software RAID5 in the past). But then, I don't rip high-def videos or anything like that, so 4 terabytes is plenty. Still, even an 8-drive case is a big chunk.

Regarding external USB, Linux will RAID those together, but as you mention, it is *slow*. So I don't do that. Only reason for external USB is for off-site backups.

Agree on the plain old SATA vs. hardware RAID. My experience is that software RAID, at least under Linux, is faster than "hardware" RAID and more reliable, presumably because the open source Linux RAID drivers have more eyes looking at them for problems than the closed-source internal firmware on these "hardware" RAID cards (which also have a slower CPU than the main CPU on most systems today). My motherboard came with four SATA ports and I added a four-port SATA card. Things scream.

Regarding Linux, Linux has one big advantage -- it just runs and runs and runs like the friggin' energizer bunny. You don't have to worry about which disk the system is on either, because if you plug 4 gigs of memory into your server, you just make one small slice your system partitions and then LVM the rest into one big filesystem for your data. I suggest using XFS as the filesystem, SGI designed XFS for big multi-terabyte storage and it shows, compared to other filesystems for Linux it is much faster for large filesystems and large files (such as video files). It does have some quirks that make it less than ideal for your system partitions (specifically its bad habit of re-ordering writes so that metadata gets written before the actual data, resulting in files with block of zeros at their end if the system power fails at the wrong time), so leave those as ext3.

In the past I worked on the design of a 48 terabyte Linux-based network file server for a company that shall not be named, and the above pretty much reflects 40 million dollars worth of experience poking and prodding Linux to handle that much data. But it works just as well on my own home personal server as it did on that six-figure-costing network appliance. When you use LVM and the Linux XFS filesystem on your home server today, you're getting the benefit of tens of millions of dollars worth of development time by dozens of companies that have been poking and prodding that filesystem to serve the needs of *big* Linux-based servers, yet you're not having to pay a penny for it. What a deal!

Sergey Solyanik said...


Based on what you say (that you don't trust Linux RAID5), Linux becomes not free in this scenario.

If you need to get 4TB of space on Linux and you have to use 8 1TB drives in RAID10, this same thing can be done on Windows using RAID5 and 5 1TB drives. So you win 3 drives - at $140/drive, it's $420. SBS 2003 at around $300 is cheaper than that.

Software RAID5 implementation on Windows is very reliable. I've used it for years now, with great success.

BadTux said...

My opinion of RAID5 on Linux is jaundiced because of my experiences dealing with it on a 48TB system. We had to do a *lot* of hacking on it to make it reliable under all circumstances. That was where a part of the $40M I mentioned went to -- fixing the Linux RAID5 implementation.

That said, most of our changes did make it into the current Linux kernel, and our result was bullet-proof. None of our customers ever lost any data. But I'm extremely conservative and all those bad experiences tend to make me gunshy around Linux RAID5. I never personally have lost any data, but there was one time on my personal server when, due to failed hardware, I had to go hack RAID headers to get the generations correct so I could assemble my RAID5 array (they'd gotten out of sync during a crash, I had to get two out of the three to agree on a generation in order to get things assembled). The upside is that you can do that kind of hacking with Linux -- if you know what you're doing, it's virtually impossible to lose data with Linux. The downside is that sometimes you end up doing so until someone with money decides to fix whatever is broken.

alexandroid said...

Wow, thanks for the detailed description! I was thinking about setting up something like that but smaller. But now it scares me even more. =)

Gosh, why with all these technologies and software we still have to do so much work? I just want a box of hard drives which would be quiet and power-effective, blip when some of drives fail, connect to my wifi router and give me a piece of mind. Ideally, available from internet.

That's it! I do not want to install Windows Server, think about hardware specs, prices, shop for coolers etc...

So far I am stuck with using 2 USB drives and robocopy to occasionally mirror files from a laptop. And couple times a year I offload photos and videos on DVDs (best ones go on Picasa, others I will simply give to my grandchildren one day to play with).

For the most critical and portable data I copy it on JungleDisk and hope their AmazonS3 servers will be reliable...