
List:       pfsense-discussion
Subject:    Re: [pfSense] Soekris Net5501 SSD
From:       Karl Fife <karlfife@gmail.com>
Date:       2016-05-19 19:34:16
Message-ID: 0a0eef99-7526-76e3-16df-a01b4d94fbd3@gmail.com

Noteworthy differences between the 3700 and 3500 series, and a nod to 
the 710:

Both the 3500 and 3700 have capacitor-backed write cache, so power 
events are unlikely to be cataclysmic, but the 3700 series has roughly 
42x better write endurance than the 3500.

Intel publishes that the 80GB 3500 is good for 45 TB written (same as 
the 3510).  By contrast, they publish that the 100GB 3700 is good for 
1874 TB.  This is apparently due to a suite of technologies called HET 
(High Endurance Technology), which includes differences in both the 
silicon and the controller.  The older 710 series shares this HET 
technology (and the capacitor-backed write cache), but the 710's I/O is 
slower and its price correspondingly lower, possibly making the 710 a 
better value for anyone chasing marginally higher reliability.
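
To put those ratings in service-life terms, here is a quick 
back-of-the-envelope sketch (Python; the 20 GB/day write load is purely 
an assumed figure for illustration, not a measured pfSense number):

    # Rough service-life estimate from Intel's published endurance ratings.
    # The 20 GB/day write load is an assumption for illustration only.
    endurance_tbw = {"S3500 80GB": 45, "S3700 100GB": 1874}  # TB written, per data sheet
    assumed_gb_per_day = 20  # hypothetical write load for a pfSense box
    for model, tbw in endurance_tbw.items():
        years = (tbw * 1000) / assumed_gb_per_day / 365
        print(f"{model}: ~{years:,.0f} years at {assumed_gb_per_day} GB/day")

At that assumed rate the 80GB 3500 pencils out to roughly six years and 
the 100GB 3700 to a couple of centuries, so for a pfSense-class write 
load either is likely ample.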

Neither the 3510 nor the 710 has what Intel calls End-to-End Data 
Protection, which appears to be parity on steroids.  Opinions welcome 
on this, but I would be surprised if a datacenter-grade SSD's 
resistance to bit rot did not already far exceed that of a CF card.  
As such, the 710 seems like it may be an affordable little corner in 
the realm of SSD overkill for a pfSense install.
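
Tangentially, for anyone who would rather check real-world wear than 
trust the data sheets, smartmontools can read the drive's own counters.  
A minimal sketch (assuming smartctl is installed, the drive is 
/dev/ada0, and the drive exposes the attribute names Intel drives 
typically report; adjust all of that for your hardware):

    # Print the wear and host-write SMART attributes from an SSD.
    # Assumes smartctl is installed and the target drive is /dev/ada0;
    # attribute names vary by vendor and model.
    import subprocess
    out = subprocess.run(["smartctl", "-A", "/dev/ada0"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Media_Wearout_Indicator" in line or "Host_Writes" in line:
            print(line)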

-Karl

On 5/18/2016 1:35 PM, Steve Yates wrote:
> The Intel S37xx is their data center line right?  We've had some weird
> stuff in Windows and Linux servers get fixed by drive firmware updates.
> There have been multiple updates since fall 2015.  Weird as in the
> Intel software in Windows showed both drives in a RAID 1 failed, though
> Windows could still read and write to that drive letter.  Based on the
> Linux errors I suspect the drives were temporarily dropping out and/or
> taking too long to access.
>
> That said, I know you were asking for real world experience, but Intel
> does list reliability and drive write life specs for their SSDs if you
> open the PDFs on their site.  They do list compressed read and write
> speeds for some drives so be careful what table you're reading.
> --
> 
> Steve Yates
> ITS, Inc.
> 
> -----Original Message-----
> From: List [mailto:list-bounces@lists.pfsense.org] On Behalf Of Karl Fife
> Sent: Wednesday, May 18, 2016 1:18 PM
> To: pfSense Support and Discussion Mailing List <list@lists.pfsense.org>
> Subject: Re: [pfSense] Soekris Net5501 SSD
> 
> Ed, you said it well here:  "wear leveling work is in SATA and DOM"
> 
> I think this is an important point, because if I understand correctly,
> there is nothing inherent to DOM or SATA that makes them more or less
> suitable to the excellent implementations we've seen of
> over-provisioning, wear-leveling etc. in the other storage form
> factors.  As you say though, that's where the work is taking place, so
> if you want it, DOM and SATA appear to be the devices to use.  Funny
> how that works, but it appears to be market forces only, not
> technology, which informs this detail.
> Thanks too for the info on the Soekris 6501.  I have one in the field,
> also with an mSATA module.  I'm really glad I didn't try to upgrade
> that in place, or I might be talking Ethel the 60-year-old office
> manager through router-resurrection.  Fun.  You just saved my bacon.
> Thanks for that.
> In the realm of SSDs, I have been using Intel S37xx's as ZFS intent
> log accelerators for as long as they've been available.  Great
> devices.  Some installs have seen many terabytes of writes per week
> for years without issue.  For a pfSense install, it's an absurd amount
> of overkill.  Still, as you say, 'pro grade' SSDs are a mere $50, so
> 'pro' SSDs start to become an economical choice.
> In particular, I see the Intel S35x0 ~80GB for $60.  Do you know if
> the reliability is in the same league as the S3700 series?  If so, it
> would be an easy choice given the high cost of downtime in a remote
> install.  Any experience with that series of devices in particular?
> Thanks a lot Ed.  Your input was exactly what I was looking for!
> -Karl
> 
> On 5/18/2016 10:11 AM, ED Fochler wrote:
> > Karl,
> > 	There are numerous other similar answers to be found, but here's mine:
> > 
> > Get away from CF if you can.  The modern performance and wear
> > leveling work is in SATA and DOM; those are better devices.  Abandon
> > NanoBSD and just find the miscellaneous checkbox to put /tmp and
> > /var in RAM.  That's the bulk of the benefit without the separate
> > distribution, although even that is seldom necessary any more.
> > My Soekris 6501 still doesn't like the upgrade to pfSense 2.3 on
> > mSATA, but I'm running one from a SATA disk on 2.3 just fine.  This
> > problem seems Soekris specific, but my summary is still that SATA
> > seems to be where the support is.  And with SSD, I don't see any
> > benefit to staying away from SATA even if you are allergic to
> > spinning disks.  Market forces have made 100GB SSDs available for
> > less than $50, and that's some wild over-provisioning for an install
> > that is happy in < 4GB.  You can get a nice Intel or "pro" Samsung
> > for a little more if you want more insurance against having to visit
> > those devices.  I'm generally a fan of SSDs with metal cases for
> > heat dissipation.
> > 	ED.
> > 
> > > On 2016, May 17, at 6:09 PM, Karl Fife <karlfife@gmail.com> wrote:
> > > 
> > > I have about 15 Net5501's OR Lanner FW-7541D's in the field running
> > > embedded/Nano on CF cards.  There's not enough space on a 1GB CF to
> > > upgrade to v2.3.  Of course I can upgrade to larger CF cards,
> > > however the eventual phase-out of NanoBSD makes me wonder if it's
> > > better to install a SATA SSD (or SATA DOM) which would possibly
> > > eliminate the need to re-re-factor storage in the near future (e.g.
> > > with the release of v2.4, and the phase-out of NanoBSD:
> > > https://doc.pfsense.org/index.php/Upgrade_Guide#Planning_for_the_Future )
> > > 
> > > Question:
> > > I'd like to ask what solid-state storage others are using on
> > > non-NanoBSD installs.  If running the "full" version of pfSense, is
> > > it sufficient 'simply' to use a quality wear-leveling SATA DOM, or
> > > is it recommended to use something with even better write
> > > endurance?  I wouldn't have thought the pfSense write load is high,
> > > but blog posts from early adopters of SSDs + pfSense seem to have
> > > run into write endurance problems.  SSDs have improved greatly,
> > > especially in terms of wear-leveling, over-provisioning etc.
> > > What's a recommended non-disk drive for full pfSense these days?
> > > 

_______________________________________________
pfSense mailing list
https://lists.pfsense.org/mailman/listinfo/list
Support the project with Gold! https://pfsense.org/gold


