RAID5 is often very unfriendly to consumer drives, especially "green" drives. Sleep mode and spin-up delays can cause the controller to falsely assume a drive has died. Have you heard of any such issues with this technology?
Yes, and they will also excessively park and un-park the heads unless you use WDIDLE3 to increase the idle time before parking. The excessive parking causes "hitching" as the drive spins back up in a single-drive setup, can cause a RAID drop-out as well, and adds excessive wear if the drive is accessed frequently. (Never use a green drive as a boot drive without changing this setting.) I'm serious in saying you need to use WDIDLE3 ASAP, because the green drives park the heads after a ridiculously short idle period. Like less-than-a-minute short.
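To put that wear in perspective, here's a rough back-of-the-envelope sketch. The cycle rating and access pattern below are assumptions for illustration (WD Greens are commonly quoted at around 300,000 load/unload cycles; one park/unpark per minute models light background activity with the stock short idle timer):

```shell
#!/bin/sh
# Rough wear estimate for an aggressive head-parking timer.
# Assumed values, for illustration only:
PARKS_PER_HOUR=60      # one park/unpark cycle per minute of light activity
RATED_CYCLES=300000    # commonly quoted load/unload rating for WD Greens

hours=$(( RATED_CYCLES / PARKS_PER_HOUR ))
days=$(( hours / 24 ))
echo "Rated load/unload cycles exhausted in ~${hours} hours (~${days} days)"
```

On Linux you can watch the real number climb via the SMART attribute Load_Cycle_Count (e.g. `smartctl -A /dev/sdX`, with `/dev/sdX` as a placeholder for your drive).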
The other setting you need to fix is TLER (the amount of time the drive is given to go back and retry/fix a read error before reporting it). Sadly, WD removed this unadvertised setting from their green drives sometime after the 1 TB model (I'm not sure whether the 1.5 TB drives still have it; I know the 2 TB don't). Without lowering this timeout to ~7 seconds you will experience drives dropping from the array. The tool to change it is called WDTLER; you can find guides and info on Google.
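WDTLER itself is a DOS boot tool, but on drives that still expose SCT Error Recovery Control (the standardized version of TLER), the same ~7 second timeout can be set at runtime with smartmontools on Linux. A sketch, with `/dev/sdX` as a placeholder for your drive; the values are in tenths of a second, and on many drives the setting does not survive a power cycle:

```shell
# Set read and write error-recovery timeouts to 7.0 seconds (70 tenths)
smartctl -l scterc,70,70 /dev/sdX

# Read back the current setting to confirm the drive accepted it
smartctl -l scterc /dev/sdX
```

If the drive reports that SCT ERC is unsupported (as the later Greens do), no software tool will help, which is exactly the problem described above.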
Another thing to consider: I ran a software RAID5 on an nvidia chipset for a couple of years with four of Seagate's "enterprise" hard drives. Rebuild time for the ~750 GB array (4x250 GB) was around 12-14 hours. It did perform well enough read-wise, though, at about 160 MB/s average sustained read.
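As a sanity check on those rebuild numbers: a rebuild has to rewrite one full 250 GB member, so you can back out the effective rebuild rate from the figures above (the 13-hour midpoint is my assumption):

```shell
#!/bin/sh
# Effective rebuild rate implied by the numbers in this post.
MEMBER_MB=$(( 250 * 1000 ))    # one 250 GB member, in MB
REBUILD_SECS=$(( 13 * 3600 ))  # ~13 h, midpoint of the 12-14 h range

rate=$(( MEMBER_MB / REBUILD_SECS ))  # integer MB/s
echo "Effective rebuild rate: ~${rate} MB/s"
```

That works out to only a few MB/s, far below the 160 MB/s sustained read, which is typical when the controller throttles the rebuild to keep the array usable in the meantime.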
I honestly don't know what else to recommend other than just using software to clone/back up drives for the safety of redundancy, because the green drives will cause you problems if you can't change the two settings I mentioned above.