NAS options for a home network

BoredSysAdmin

Audioholic Slumlord
Drives do not need to "support" RAID. They have no idea what is being written to them and they don't care. Those Red drives are probably more marketing than anything, but supposedly they're built more robustly for continual access in a business setting - an unlikely need for a media server. Any disk drive faces potential failure, including those, so you should decide what lengths you want to go to in order to minimize that risk.
Look up TLER. Without it, and without a controller that tolerates long error-recovery pauses, RAID + regular off-the-shelf drives = disaster.

I do agree to a point: enabling TLER on regular Green/Black drives used to be possible, until WD blocked it. It costs them practically nothing to enable, but they love the extra money they can charge for it.
 
AcuDefTechGuy

Audioholic Jedi
For now... I think I have about 3.1TB of total media data... music, movies, TV shows. I don't anticipate adding all that much in the next year or so, as I really haven't even watched 75% of what I've already got.

Hell... even two of those units still seem like a good deal.

I believe I have room for 2... maybe 3 more internal drives in my home PC, which is a newish HP. Just to make things easy... setting convenient backups aside... should I just add 3TB drives to my box and call it a day?

Also, I'm wondering how you see the two drives in the Iomega. Does it show up as one location on the network, as opposed to separate drive letters if I just add HDDs?
Antec 9 case $90
Gigabyte MB $60
Intel dual core CPU $60
CPU cooler $28
Corsair 4GB RAM $20
Cooler Master 500W PSU $38
Seagate Barracuda 3TB HDD $150

So a 6TB NAS would cost you about $600 to build, a 9TB about $750, and a 12TB about $900.
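For anyone sanity-checking those totals, the arithmetic is just the base parts plus $150 per 3TB drive (a quick Python sketch using the prices listed above):

```python
# Rough DIY NAS pricing from the parts list above.
base_parts = {
    "Antec 9 case": 90,
    "Gigabyte MB": 60,
    "Intel dual core CPU": 60,
    "CPU cooler": 28,
    "Corsair 4GB RAM": 20,
    "Cooler Master 500W PSU": 38,
}
drive_price, drive_tb = 150, 3  # Seagate Barracuda 3TB

base = sum(base_parts.values())  # $296 before any drives
for n_drives in (2, 3, 4):
    print(f"{n_drives * drive_tb}TB raw: ~${base + n_drives * drive_price}")
# Prints: 6TB raw: ~$596, 9TB raw: ~$746, 12TB raw: ~$896
```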
 
BMXTRIX

Audioholic Warlord
I have 4 Thecus N4100PRO units at my home that I use to store my BD collection on. About 20TB of storage space so far.

It's worth saying that the RAID component was extremely desirable to me, and their RAID boxes allow additional storage space to be added on the fly if necessary. I haven't tried that myself, though!

So, about $300 for their fully built gigabit RAID box, and then you add hard drives of your own choosing. I went with 2TB Samsung F4 drives and have had excellent results for the better part of a year now.

What I like is that I can serve up my BD movies and TV shows across my network to my Dune HD players - two of them at the same time without a problem. I can rip my BD collection straight to the RAID across my network without hiccups. They have other media-server components built in which I have not played with, but overall their product has been extremely reliable for me, which is something I really appreciate. I just bought another unit recently and expect a lot from that one as well.
 
itschris

Moderator
For now... and I realize I'll need to drop the big bucks down the road if I continue storing a bunch of media...

But for now... would you spend $300 on the 4TB Iomega unit, or would you spend the same amount buying 2 or 3 larger hard drives to drop into my existing PC?
 
BoredSysAdmin

Audioholic Slumlord
For now... and I realize I'll need to drop the big bucks down the road if I continue storing a bunch of media...

But for now... would you spend $300 on the 4TB Iomega unit, or would you spend the same amount buying 2 or 3 larger hard drives to drop into my existing PC?
See reviews on Newegg.com - iomega 34481 2TB StorCenter ix2-200 Network Storage
http://www.amazon.com/Iomega-StorCenter-ix2-200-Network-Attached/product-reviews/B002SG7MEG/ref=dp_top_cm_cr_acr_txt?ie=UTF8&showViewpoints=1


IMO it's simple - you get what you pay for...
I'd say just get more, larger drives. And down the road, either get a decent NAS (usually about $500-600 without hard drives) or build your own killer ZFS box :)
I have all my larger media stored on a WD 2TB Green drive and it's fast enough for me.
 
BMXTRIX

Audioholic Warlord
For now... and I realize I'll need to drop the big bucks down the road if I continue storing a bunch of media...

But for now... would you spend $300 on the 4TB Iomega unit, or would you spend the same amount buying 2 or 3 larger hard drives to drop into my existing PC?
Get the drives and put them in your PC, don't waste your money on some hack Iomega piece of junk.

There are decent RAIDs out there with serious muscle to drive multiple drives in a proper RAID configuration. They hold at least 4 drives, and anything less won't get you up to a RAID 5 setup. But if you're only going to 3-4TB, then the Iomega isn't going to be a RAID, it will just be a bunch of disks (JBOD) to you, and that's no different at all than running a bunch of drives inside your PC.

I went that route - got a bunch of drives, put them in my PC. I've got a standalone hard drive sitting on my network... I wasn't happy, not even close, until I dropped the cash for a decent RAID.

It's the difference between a cheap Polk subwoofer and a decent SVS. The price difference, build quality, and final performance are all there. As an A/V person, that's one analogy you really should understand right away. :D

Of course, the choice is still yours.
 
Lordhumungus

Audioholic
This is just my personal opinion, but I'm not a big fan of hardware-based RAID in the home sector. RAID 5 in particular is overkill when current 2-3TB drives can burst transfer at 130MB/s and the user may want to easily and efficiently upgrade the array later. Further, the heat, power, and expensive hardware really make it a professional solution.

I think a software-based solution is far more elegant. I personally use FlexRAID with 15 HDDs and I can easily add or remove drives whenever I want. I can have as many redundancy drives as I want, and if anything breaks I don't have to 1) spend ages rebuilding the array while praying that nothing takes a dive during the rebuild, or 2) worry about having identical hardware to what the array was originally created on. If everything but the HDDs in my server suddenly disappeared, I could pull out each HDD and get the data off individually. If I have more simultaneous failures than I have redundancy drives, no problem: I only lose the data on the individual dead drives, not the entire array.

Overall I just think software RAID is unbeatable in pretty much every aspect except pure, on paper, performance.
 
BMXTRIX

Audioholic Warlord
Overall I just think software RAID is unbeatable in pretty much every aspect except pure, on paper, performance.
I will say that my one concern is that, if you look at the website for the software-based RAID you listed, it doesn't talk about the hardware requirements up front.

Even for someone who's into all of this, it will take some digging through forums and asking some pretty stupid newbie questions, especially if you are looking for something with some expandability. Can I just use my existing PC and case? Do I need to buy a whole new PC? What software do I need to be running to get it up and working?

These are exactly the types of questions that should be answered up front on their website, but they tend to focus more on their product than on the backbone their product runs on, and that concerns me. Just some 'examples' would be great to see.

If I were a more low-end user, then perhaps a few drives in my existing PC would be fine. But if the RAID is something of a resource hog, it may not play nice while I'm trying to actually USE my PC. That would be frustrating.

If I need to buy a whole new PC, that may make sense if I need a lot of room for expansion or have a specific goal. If I can use an old PC (and have one), that may work just fine as well.

I've very seriously considered, and am still considering, this exact track, as I really do want a 20+ drive RAID setup at some point. So when I finish setting up my 5th Thecus (yes, five!), I will likely try something like this.

But when I'm wandering around in the dark with complex solutions and very few plain-English explanations of decent products, the Thecus really has been a great product for me. Its manual was on a CD, but it was about 100 pages, with Windows screenshots walking me through every step of the setup. It came with software that did what it was supposed to do (awesome!), and even the firmware updates were easy. Most of all, the units stand on their own: they don't steal my PC's resources, and they have been working well.

Both solutions seem good, but there is definitely a bit of a learning curve on the software solution, I think.
 
Lordhumungus

Audioholic
I will say that my one concern is that, if you look at the website for the software-based RAID you listed, it doesn't talk about the hardware requirements up front.

Even for someone who's into all of this, it will take some digging through forums and asking some pretty stupid newbie questions, especially if you are looking for something with some expandability. Can I just use my existing PC and case? Do I need to buy a whole new PC? What software do I need to be running to get it up and working?

These are exactly the types of questions that should be answered up front on their website, but they tend to focus more on their product than on the backbone their product runs on, and that concerns me. Just some 'examples' would be great to see.

If I were a more low-end user, then perhaps a few drives in my existing PC would be fine. But if the RAID is something of a resource hog, it may not play nice while I'm trying to actually USE my PC. That would be frustrating.

If I need to buy a whole new PC, that may make sense if I need a lot of room for expansion or have a specific goal. If I can use an old PC (and have one), that may work just fine as well.

I've very seriously considered, and am still considering, this exact track, as I really do want a 20+ drive RAID setup at some point. So when I finish setting up my 5th Thecus (yes, five!), I will likely try something like this.

But when I'm wandering around in the dark with complex solutions and very few plain-English explanations of decent products, the Thecus really has been a great product for me. Its manual was on a CD, but it was about 100 pages, with Windows screenshots walking me through every step of the setup. It came with software that did what it was supposed to do (awesome!), and even the firmware updates were easy. Most of all, the units stand on their own: they don't steal my PC's resources, and they have been working well.

Both solutions seem good, but there is definitely a bit of a learning curve on the software solution, I think.
The answer is that there are no hardware or software requirements AT ALL. It can sit on top of ANY Windows or Linux environment and is completely compatible with whatever hardware you already have. If it works in your OS, it works in FlexRAID. Further, you can break apart the array at any time and the data is totally readable. You can also build the array out of any existing drives/data without losing what is already there.

I can't say for sure, but I believe the reason the info on the site is a little sparse is that it was originally a completely free solution with a lot of community involvement via forums. Only recently has it gone commercial, after many people realized just how flexible and powerful it is. This also lets the developer dedicate more full-time resources to the project.
 
Hocky

Full Audioholic
This is just my personal opinion, but I'm not a big fan of hardware-based RAID in the home sector. RAID 5 in particular is overkill when current 2-3TB drives can burst transfer at 130MB/s and the user may want to easily and efficiently upgrade the array later. Further, the heat, power, and expensive hardware really make it a professional solution.

I think a software-based solution is far more elegant. I personally use FlexRAID with 15 HDDs and I can easily add or remove drives whenever I want. I can have as many redundancy drives as I want, and if anything breaks I don't have to 1) spend ages rebuilding the array while praying that nothing takes a dive during the rebuild, or 2) worry about having identical hardware to what the array was originally created on. If everything but the HDDs in my server suddenly disappeared, I could pull out each HDD and get the data off individually. If I have more simultaneous failures than I have redundancy drives, no problem: I only lose the data on the individual dead drives, not the entire array.

Overall I just think software RAID is unbeatable in pretty much every aspect except pure, on paper, performance.
I don't have any experience with FlexRAID, but I have a couple of Drobos, which operate on a somewhat similar principle. The only thing I can tell you about the Drobo is that I would never use them for anything, lol. Unreliable, poor performance, and overall a nuisance. Clearly you've had good experiences, so maybe it is a good product - I'll take a closer look at it some time.

The bottom line is that it just comes down to what your tolerances for data loss and performance are. That is why I have 6TB of real data in use at home for under $1,000, while in my office the same 6TB of real data costs $225,000 in hardware alone.
 
Lordhumungus

Audioholic
I don't have any experience with FlexRAID, but I have a couple of Drobos, which operate on a somewhat similar principle. The only thing I can tell you about the Drobo is that I would never use them for anything, lol. Unreliable, poor performance, and overall a nuisance. Clearly you've had good experiences, so maybe it is a good product - I'll take a closer look at it some time.

The bottom line is that it just comes down to what your tolerances for data loss and performance are. That is why I have 6TB of real data in use at home for under $1,000, while in my office the same 6TB of real data costs $225,000 in hardware alone.
The data loss portion is covered by having as many or as few redundancy drives as you want. For example, if you have a total of 15 HDDs, with 12 for storage and 3 dedicated to parity, the array will handle 3 simultaneous drive failures and still be able to rebuild. If more drives fail than you have redundancy drives, you only lose the data on those drives rather than the whole array. Now obviously you are still limited by real life (i.e. it doesn't make your PC fire/flood/tornado-proof), but other than alternate media like magnetic tape and/or off-site backup, I don't see the data being a whole lot safer.

As for performance, since the ~130MB/s burst speed of most current 2TB+ drives more or less saturates Gigabit Ethernet, I don't see a reason to have the drives in a hardware array. It's also nice that you can use standard WD Green drives without worrying about TLER, since the entire array doesn't have to spin up to access data.
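A quick back-of-the-envelope check on that claim (ignoring protocol overhead, which only lowers the ceiling further):

```python
# Gigabit Ethernet carries 1 Gb/s; divide by 8 for bytes.
gbe_ceiling_MBps = 1_000_000_000 / 8 / 1_000_000  # 125 MB/s raw
drive_burst_MBps = 130  # the ballpark figure quoted above

print(f"GbE ceiling: {gbe_ceiling_MBps:.0f} MB/s, drive burst: {drive_burst_MBps} MB/s")
print("Single drive can fill the pipe:", drive_burst_MBps >= gbe_ceiling_MBps)
```

So a single modern drive really can saturate the link, which is why striping for speed buys little on a gigabit home network.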

One thing I will say is that I have not tried their real-time RAID implementation; I only run the snapshot RAID once every 2 weeks or so. I have heard that the real-time RAID is somewhat slower for writing data due to the overhead. That being said, in most home NAS environments changes aren't being made frequently enough to warrant real-time RAID protection anyway.

I'm curious about your experience with the Drobo. I've always heard really good things about them, but never tried them out due to the cost.
 
Hocky

Full Audioholic
The data loss portion is covered by having as many or as few redundancy drives as you want. For example, if you have a total of 15 HDDs, with 12 for storage and 3 dedicated to parity, the array will handle 3 simultaneous drive failures and still be able to rebuild. If more drives fail than you have redundancy drives, you only lose the data on those drives rather than the whole array. Now obviously you are still limited by real life (i.e. it doesn't make your PC fire/flood/tornado-proof), but other than alternate media like magnetic tape and/or off-site backup, I don't see the data being a whole lot safer.
What does the disk overhead look like? Is it basically a RAID 5 style parity + hot spares if chosen? Or is there additional disk overhead per drive?

As for performance, since the ~130MB/s burst speed of most current 2TB+ drives more or less saturates Gigabit Ethernet, I don't see a reason to have the drives in a hardware array. It's also nice that you can use standard WD Green drives without worrying about TLER, since the entire array doesn't have to spin up to access data.
Sustained throughput is a lot less important to me than IOPS. An average 2TB drive is actually very slow.

One thing I will say is that I have not tried their real-time RAID implementation; I only run the snapshot RAID once every 2 weeks or so. I have heard that the real-time RAID is somewhat slower for writing data due to the overhead. That being said, in most home NAS environments changes aren't being made frequently enough to warrant real-time RAID protection anyway.
How long does it take to create the "snapshot" and with what kind of data change rate?

I'm curious about your experience with the Drobo. I've always heard really good things about them, but never tried them out due to the cost.
It was nothing but bad. It was slow and crashed constantly. When it did come up, it would often lose its shares and/or data. To be fair, this was in a corporate environment, but we weren't exactly working it hard. Drobo's support team was worthless, of course, but it is what it is when you're supporting a product that just doesn't work, I think. I have 2 big-dollar Drobo Pros sitting on a desk that haven't been used for more than 3 months total. They're not even worthy of being a backup target for me.
 
Lordhumungus

Audioholic
The drives are not aggregated in any way other than appearing as a single drive if you choose to pool them, so the performance is equal to whatever the individual drive's actual performance is. As for the snapshot, the initial build and any rebuild after a failure are what take a long time, and that time increases pretty much linearly as you add more storage (referred to as Data Risk Units) or more parity (referred to as Parity Protection Units). If I remember correctly, it takes somewhere in the neighborhood of 2 hours per drive, but I don't recall for sure. The snapshot itself is pretty quick, though, as it only covers changed data, which usually isn't much for a home server/NAS.

I'd recommend checking out this Wiki page to get a quick idea of the principles it uses.
 
Hocky

Full Audioholic
The drives are not aggregated in any way other than appearing as a single drive if you choose to pool them, so the performance is equal to whatever the individual drive's actual performance is.
To be in any way redundant, the drives HAVE to be aggregated so that parity can be distributed. Based on the little detail available, it seems to be basically a RAID 5 deployment, but on top of the native file system, which allows the drives to be independently readable. I wonder: if you pull a live disk out of that RAID and insert it into another computer, would the parity data just be held in a hidden file somewhere on the disk? I would think it must be, in order to work within the native file system. Regardless, it is an interesting tool.
 
Lordhumungus

Audioholic
To be in any way redundant, the drives HAVE to be aggregated so that parity can be distributed. Based on the little detail available, it seems to be basically a RAID 5 deployment, but on top of the native file system, which allows the drives to be independently readable. I wonder: if you pull a live disk out of that RAID and insert it into another computer, would the parity data just be held in a hidden file somewhere on the disk? I would think it must be, in order to work within the native file system. Regardless, it is an interesting tool.
Negative, Ghost Rider - that's the old hardware RAID way of thinking :) The parity data is not contained on each disk but on a single drive (or more, if you so choose), the Parity Protection Unit. Therefore, only the disk being accessed in the storage pool has to spin up. The exception would be the real-time RAID implementation, in which case it would need to read/write the storage drive as well as the parity drive.
 
Hocky

Full Audioholic
Negative, Ghost Rider - that's the old hardware RAID way of thinking :) The parity data is not contained on each disk but on a single drive (or more, if you so choose), the Parity Protection Unit. Therefore, only the disk being accessed in the storage pool has to spin up. The exception would be the real-time RAID implementation, in which case it would need to read/write the storage drive as well as the parity drive.
What you're describing is still the "old" style of RAID - it's just RAID 4 instead of RAID 5. No one really uses RAID 4 because it acts the same as RAID 5 but performs slower.
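For anyone following along, here is a toy sketch of the dedicated-parity idea under discussion: one parity drive holding the XOR of the data drives, RAID 4 style. It is only an illustration - FlexRAID computes parity at the data/file level rather than the block level, and supporting more than one parity drive takes more advanced erasure coding than plain XOR:

```python
from functools import reduce

# Three "data drives", each holding one equal-sized block.
data_drives = [b"movie", b"music", b"photo"]

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Dedicated parity drive: the XOR of every data block.
parity = reduce(xor_blocks, data_drives)

# Lose drive 1, then rebuild it by XORing the survivors with parity.
survivors = [d for i, d in enumerate(data_drives) if i != 1]
rebuilt = reduce(xor_blocks, survivors, parity)
assert rebuilt == b"music"
print("Recovered:", rebuilt)  # Recovered: b'music'
```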
 
Lordhumungus

Audioholic
What you're describing is still the "old" style of RAID - it's just RAID 4 instead of RAID 5. No one really uses RAID 4 because it acts the same as RAID 5 but performs slower.
This is basically true if you are using the real-time RAID engine with only a single parity disk, except that it is data-level and not block-level parity. Where it differs is the ability to add as many parity disks as you wish to account for failures. For example, you can add 12 parity disks, which will allow for 12 simultaneous drive failures.

Something else that is nice about this solution is that it allows ANY type, style, and size of storage to be part of the pool. The only limitation is that the parity drive's capacity has to be equal to or greater than that of the largest storage drive in the pool. So, for example, you can make a pool out of a 2TB SATA drive, a 4GB USB flash drive, and a 60GB PATA drive formatted to FAT32, as long as the parity drive is 2TB or larger.
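That sizing rule is simple enough to state in code. A hypothetical sanity check (the function name is mine, not FlexRAID's; the sizes mirror the example above):

```python
def pool_is_valid(data_drive_gb: list[float], parity_drive_gb: float) -> bool:
    # The parity drive must be at least as large as the biggest data drive,
    # since it has to hold parity covering every byte of that drive.
    return parity_drive_gb >= max(data_drive_gb)

# 2TB SATA + 4GB flash + 60GB PATA, with a 2TB parity drive: OK.
print(pool_is_valid([2000, 4, 60], parity_drive_gb=2000))  # True
# The same pool with a 1TB parity drive would leave the 2TB drive unprotected.
print(pool_is_valid([2000, 4, 60], parity_drive_gb=1000))  # False
```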
 
Hocky

Full Audioholic
Yeah, regardless of the technicalities, media serving is fairly low-intensity for the disks, and that makes this system pretty attractive. I have a handful of 2TB drives lying around - I will plug them in and try it out.
 
itschris

Moderator
Yeah, regardless of the technicalities, media serving is fairly low-intensity for the disks, and that makes this system pretty attractive. I have a handful of 2TB drives lying around - I will plug them in and try it out.
I've been doing a lot of reading on this stuff. It appears the main downside to building your own PC-based NAS is power consumption, i.e. it takes a lot more juice to run a PC 24/7 than a four-bay mission-specific NAS box.

The thing is, how much are we talking? I can't imagine the electricity being so much higher that it covers the cost of a decent NAS, which runs about $400-ish on average for a four-bay model before you even add hard drives.
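Rough numbers, just to frame the question (the wattages and the $0.12/kWh rate are assumptions, not measurements):

```python
# Annual electricity cost of a box running 24/7: watts -> kWh -> dollars.
RATE_PER_KWH = 0.12          # assumed utility rate
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH

pc_watts, nas_watts = 100, 30  # assumed idle draw: full PC vs. 4-bay NAS
print(f"PC:  ${annual_cost(pc_watts):.0f}/yr")   # ~$105
print(f"NAS: ${annual_cost(nas_watts):.0f}/yr")  # ~$32
```

At roughly $70-75 per year saved, it would take over five years of electricity savings alone to recoup a $400 NAS, which supports the intuition above.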

I think I'd rather build a standalone box with, say, two 2 or 3TB drives and just add as I go along. I'm not even sure I want a RAID 5 setup. Most importantly, I think having one large combined volume would be more suitable and easier to manage. I don't want this set of stuff on the D: drive, the other set on the E: drive, etc.

As far as backup goes... once I get this set up, the bulk of my inventory probably won't change all that much. I think I'd rather just have an alternative backup strategy... which I'll need anyway, even if I go RAID 5.
 
