Hello,
Although I am phrasing this as a question, it is probably more of an observation, though I would be happy for someone better versed in pool design/layout to confirm it. If true, then with a limited number of drives, mirrors are not as wasteful of space as one might otherwise think.
As an interim solution at work, I have built some FreeNAS appliances to act as a data warehouse DB store, because the existing storage is ~7 years old and lacks the grunt to drive the new DB. Mission accomplished on that front: performance has been good. Unfortunately, all I have had to work with are Dell R730s (not the XD variant), so I am limited to 8 x 2.5" slots.
As I was told that the access pattern was likely to be random, I went for 8 x 2TB Samsung SSDs in mirrored vdevs, although I now suspect the workload is a lot more sequential in nature, mainly judging from the ARC requests (prefetch data).
As the procurement process for the new storage system is going to take a while, I have been asked whether the current interim solution could also host a second, smaller data warehouse DB (about 50% of the size of the first). I suggested that it most likely could, as I believed 2 x raidz1 vdevs, or even a single raidz2, would probably offer good enough (possibly better) IO given the largely sequential access. The theory being that 4 x 2-way mirrors of 2TB drives give ~8TB of usable space, whereas 2 x 4-drive raidz1 vdevs give ~12TB. Ideal, I thought.
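For context, the back-of-envelope capacity arithmetic I was working from is sketched below. It only counts drives lost to mirroring/parity and ignores all ZFS overhead, so treat it as rough rather than anything a tool produced:

Code:
# Rough usable-capacity arithmetic for 8 x 2TB drives, ignoring all ZFS
# overhead (metadata, slop space, padding).  Purely illustrative.

DRIVE_TB = 2
DRIVES = 8

# 4 x 2-way mirrors: half of the raw space is usable
mirrors_usable = (DRIVES // 2) * DRIVE_TB        # 4 vdevs x 2TB = 8TB

# 2 x 4-drive raidz1 vdevs: each loses one drive to parity
raidz1x2_usable = 2 * (4 - 1) * DRIVE_TB         # 2 vdevs x 6TB = 12TB

# Single 8-drive raidz2: two drives lost to parity
raidz2_usable = (DRIVES - 2) * DRIVE_TB          # 12TB

print(mirrors_usable, raidz1x2_usable, raidz2_usable)   # 8 12 12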
Initial testing looked promising, with performance likely to be adequate, so I went ahead and replicated the currently used datasets to the new appliance (which would then replace the original after a storage cutover).
Horror! The datasets consume about 1.44x as much space on the two raidz1 vdevs as on the four mirrors, pretty much negating the apparent increase in size:
Code:
Mirrors:
NAME                                                             USED  AVAIL  REFER  MOUNTPOINT
ssdpool8k/obi/mcoradata_obipre_12c_ssd8k@OBIPRE-20180126-KEEP    978M      -   533G  -
ssdpool8k/obi/mcoradata_obipre_12c_ssd8k@AftFEB18_pre.620OBI        0      -   533G  -
ssdpool8k/obi/mcoradata_obipre_12c_ssd8k@Post_Feb18_PSUOBI          0      -   533G  -
ssdpool8k/obi/mcoradata_obipre_12c_ssd8k@POST.FEB18.PSU.OBI         0      -   533G  -
ssdpool8k/obi/mcoradata_obipre_12c_ssd8k@PRE_ETL-20180201       8.33G      -   536G  -
ssdpool8k/obi/mcoradata_obipre_12c_ssd8k@POSTetl-20180205        120G      -   519G  -
ssdpool8k/obi/mcoradata_obipre_12c_ssd8k@Kayes_test             15.9G      -   530G  -
ssdpool8k/obi/mcoradata_obipre_12c_ssd8k@auto-20180403.0300-2d  7.10G      -   531G  -
ssdpool8k/obi/mcoradata_obipre_12c_ssd8k@auto-20180404.0300-2d  6.59G      -   531G  -
ssdpool8k/obi/mcoradata_obipre_12c_ssd8k@auto-20180405.0300-2d   169M      -   531G  -

Raidz1:
NAME                                                             USED  AVAIL  REFER  MOUNTPOINT
pressd01/obi/mcoradata_obipre_12c_ssd8k@OBIPRE-20180126-KEEP    1.35G      -   772G  -
pressd01/obi/mcoradata_obipre_12c_ssd8k@AftFEB18_pre.620OBI         0      -   772G  -
pressd01/obi/mcoradata_obipre_12c_ssd8k@Post_Feb18_PSUOBI           0      -   772G  -
pressd01/obi/mcoradata_obipre_12c_ssd8k@POST.FEB18.PSU.OBI          0      -   772G  -
pressd01/obi/mcoradata_obipre_12c_ssd8k@PRE_ETL-20180201        11.9G      -   777G  -
pressd01/obi/mcoradata_obipre_12c_ssd8k@POSTetl-20180205         173G      -   751G  -
pressd01/obi/mcoradata_obipre_12c_ssd8k@Kayes_test              22.5G      -   767G  -
pressd01/obi/mcoradata_obipre_12c_ssd8k@auto-20180403.0300-2d   10.2G      -   769G  -
pressd01/obi/mcoradata_obipre_12c_ssd8k@auto-20180404.0300-2d   9.49G      -   769G  -
pressd01/obi/mcoradata_obipre_12c_ssd8k@auto-20180405.0300-2d       0      -   769G
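If it helps anyone check my thinking, below is the allocation arithmetic I suspect explains the ~1.44x. It is only a sketch, and the assumptions are mine: ashift=12 (4KiB sectors), the 8K recordsize implied by the dataset names, a 4-drive raidz1 vdev, and my reading of how zfs list deflates raidz space using the best-case (128K-block) parity ratio:

Code:
import math

# My assumptions, not output from any tool:
SECTOR = 4 * 1024        # ashift=12 -> 4KiB sectors
RECORD = 8 * 1024        # recordsize=8K (datasets are named *_ssd8k)
WIDTH, PARITY = 4, 1     # one 4-drive raidz1 vdev

def raidz_alloc_sectors(psize):
    """Sectors allocated for one block on raidz: data sectors, plus one
    parity sector per row of (WIDTH - PARITY) data sectors, rounded up
    to a multiple of (PARITY + 1) as padding."""
    data = math.ceil(psize / SECTOR)
    rows = math.ceil(data / (WIDTH - PARITY))
    total = data + rows * PARITY
    return math.ceil(total / (PARITY + 1)) * (PARITY + 1)

# An 8K record: 2 data sectors + 1 parity + 1 pad = 4 sectors (16K) on disk.
small_alloc = raidz_alloc_sectors(RECORD)

# As I understand the accounting, zfs list deflates allocated space by the
# ratio a full 128K block would achieve (32 data sectors out of 44 allocated
# on a 4-wide raidz1), so small blocks get "charged" more than their size.
big_data = (128 * 1024) // SECTOR                 # 32
big_alloc = raidz_alloc_sectors(128 * 1024)       # 44
charged = small_alloc * big_data / big_alloc      # ~2.9 sectors per record

print(charged * SECTOR / RECORD)                  # ~1.45x vs the mirror pool

That comes out at roughly 1.45x, close enough to the ~1.44x above that I suspect the small recordsize on raidz1 is the culprit rather than anything broken in the replication.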
So, to get any effective extra space, I believe I would have to configure the pool as a single 8-drive raidz1 (7 data drives + 1 parity, which offends my sense of symmetry).
Would some kind and knowledgeable soul please confirm my theory is correct, and that I haven't done something daft?
Thanks a lot :)