
Crucial launches MX100 SSD, uses 16 nm NAND flash for aggressive pricing

Quizzical Member Legendary Posts: 25,350

I noted recently on another thread that Crucial and Mushkin are the two SSD vendors that have done the most to push prices down on quality drives.  Most SSD launches aren't terribly scintillating; the performance numbers basically don't matter past verification that the drive isn't terrible, and Crucial easily met that threshold three generations ago.

But I think this one is notable for several reasons:

1)  It's the first SSD with 16 nm NAND flash.  As compared to previous generation 20 nm chips, a back-of-the-envelope computation is that this takes only 64% of the die space for the same capacity--meaning that if wafer cost is the same, they can pack 56% more capacity into a given cost.  There are a lot of process node reasons why the 64% figure could be off, and wafer cost on a new node probably isn't the same.  Regardless, die shrinks are what drive prices down, and this is the first SSD on the new generation of process nodes.
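
If you want to see the arithmetic, here's the back-of-the-envelope version (treating the node names as literal feature sizes, which real processes only approximate):

# Rough die-area scaling going from 20 nm to 16 nm NAND.
# Assumes area scales with the square of the feature size; actual
# process nodes only approximate this, as noted above.
old_node, new_node = 20.0, 16.0
area_ratio = (new_node / old_node) ** 2   # 0.64: same capacity in 64% of the die space
extra_capacity = 1 / area_ratio - 1       # ~0.56: ~56% more capacity per wafer at equal cost
print(f"area ratio: {area_ratio:.2f}, extra capacity per wafer: {extra_capacity:.0%}")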

2)  The pricing is notably aggressive.  New products often launch at a premium price that settles down a ways later.  Here, Crucial is starting at an MSRP of $110 for 256 GB and $225 for 512 GB.  That's competitive with the cheapest 240 GB and 480 GB drives on the market, respectively, while offering the latest features.

3)  The catch is--well, there isn't one, really.  The drive fares well in AnandTech's performance consistency measurements, which were the last benchmark still catching performance problems in a lot of SSDs.  Crucial claims idle power consumption of a mere 0.1 W.  Brand new SSD launches used to have problems fairly often, but it has been a long time since there was a major flop there.

It's worth noting that Crucial is the consumer brand name for Micron, which owns half of IMFT, one of the major NAND flash producers.  That naturally gets them early access to NAND flash, which can help in writing firmware to use the new flash sooner.  I sometimes think that Crucial pushes SSDs as a way to sell more NAND, and that the latter is what they're really interested in.  Regardless of whether that is the case, the new MX100 is priced to sell an awful lot of NAND.

A lot of people seem to tout the Samsung 840 EVO as a budget SSD.  The TLC NAND certainly puts it in that market, but the price tag really doesn't.  For example, Newegg has a Shell Shocker deal on the 250 GB drive right now--at $135, that's $25 more than the 256 GB Crucial MX100.

Comments

  • syntax42 Member Uncommon Posts: 1,378
    Process shrinks on SSD memory chips usually result in reduced durability.  I would like to see this new 16 nm flash compared to others in a destructive test.
  • Dihoru Member Posts: 2,731
    -points up- What syntax said... I heard 20 nm had a durability decrease compared to the generation before it.

  • Quizzical Member Legendary Posts: 25,350

    The general tendency has been toward fewer write cycles per cell but larger capacities in a given die space.  I'm not sure if there is any technical reason for this, but judging by the write endurance numbers that various vendors have claimed, it looks like for an x nm process node, the number of writes you get is proportional to x--so yes, older, larger process nodes get you more writes.  But the amount of data you can store in a given die space is proportional to 1/x^2--and yes, there are technical reasons for this one.  So on net, the amount of data that you can write before wearing out a drive at a given price tag tends to increase as you go to new process nodes, but just not as fast as the capacity of the drive increases.
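
    If you want to see how those two trends net out, here's a toy sketch; the constants are made up, and only the proportionalities above come from the vendor endurance claims:

    # Toy model of the claim above: writes per cell ~ x, capacity per unit
    # die space ~ 1/x^2, so total data written per unit die space ~ 1/x.
    # k_writes and k_capacity are arbitrary constants; only the trend matters.
    def writes_per_die_area(x_nm, k_writes=200.0, k_capacity=1e6):
        writes_per_cell = k_writes * x_nm             # smaller node, fewer write cycles
        capacity_per_area = k_capacity / (x_nm ** 2)  # smaller node, more bits per area
        return writes_per_cell * capacity_per_area

    for node in (34, 25, 20, 16):
        print(f"{node} nm node: {writes_per_die_area(node):,.0f} (arbitrary units)")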

    Regardless, the amount of writes you can do with one of these drives is still enormous.  You could install and delete a large game every single day, and you'd have several years before you had to worry about wearing the drive out from the writes.  Furthermore, going from MLC to TLC has a much larger impact on write endurance than a die shrink, so if write endurance becomes a problem, I'd bet on TLC SSDs--such as the Samsung 840 EVO--as being the first place it surfaces.  Indeed, that's one reason why I've been hesitant to recommend TLC SSDs, recommending MLC alternatives for the same price--even if they happened to be somewhat slower.

    Ultimately, NAND flash is probably going to go 3D, with many cells stacked on top of each other.  Samsung is already shipping NAND flash with cells stacked 32 high, and IMFT and Hynix plan to start production on their own 3D NAND later this year.  Everyone seems to expect that it will be a couple of years before anyone can do this cheaply enough to compete with traditional methods, though; the process geometries are much larger to start.

  • Ridelynn Member Epic Posts: 7,383

    Also keep in mind that SSDs have aggressive self-testing to detect write-induced cell degradation, and a certain large amount of overprovisioning to compensate for cells that may get flagged as degraded.

    As the density goes up, that extra space that is overprovisioned and not normally used as "drive capacity" gets cheaper as well. So even if the write durability per cell goes down, the overprovisioning can be adjusted to compensate for that, and you won't necessarily see any meaningful change in the overall durability of the drive.
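
    As a made-up illustration of that trade (all numbers invented, just to show the accounting):

    # Invented numbers only: a drive's raw write budget is roughly
    # raw capacity * write cycles per cell. More overprovisioning can
    # offset fewer cycles per cell without shrinking user capacity.
    def raw_write_budget_tb(user_gb, overprovision, cycles_per_cell):
        raw_gb = user_gb * (1 + overprovision)
        return raw_gb * cycles_per_cell / 1000

    older_node = raw_write_budget_tb(240, 0.07, 3000)  # hypothetical 20 nm class drive
    newer_node = raw_write_budget_tb(240, 0.28, 2600)  # hypothetical 16 nm drive, more spare cells
    print(f"older node: {older_node:.0f} TB, newer node: {newer_node:.0f} TB of raw writes")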

  • Dihoru Member Posts: 2,731
    Originally posted by Quizzical

    The general tendency has been toward fewer write cycles per cell but larger capacities in a given die space.  I'm not sure if there is any technical reason for this, but judging by the write endurance numbers that various vendors have claimed, it looks like for an x nm process node, the number of writes you get is proportional to x--so yes, older, larger process nodes get you more writes.  But the amount of data you can store in a given die space is proportional to 1/x^2--and yes, there are technical reasons for this one.  So on net, the amount of data that you can write before wearing out a drive at a given price tag tends to increase as you go to new process nodes, but just not as fast as the capacity of the drive increases.

    Regardless, the amount of writes you can do with one of these drives is still enormous.  You could install and delete a large game every single day, and you'd have several years before you had to worry about wearing the drive out from the writes.  Furthermore, going from MLC to TLC has a much larger impact on write endurance than a die shrink, so if write endurance becomes a problem, I'd bet on TLC SSDs--such as the Samsung 840 EVO--as being the first place it surfaces.  Indeed, that's one reason why I've been hesitant to recommend TLC SSDs, recommending MLC alternatives for the same price--even if they happened to be somewhat slower.

    Ultimately, NAND flash is probably going to go 3D, with many cells stacked on top of each other.  Samsung is already shipping NAND flash with cells stacked 32 high, and IMFT and Hynix plan to start production on their own 3D NAND later this year.  Everyone seems to expect that it will be a couple of years before anyone can do this cheaply enough to compete with traditional methods, though; the process geometries are much larger to start.

    If my understanding of what you said is correct, then to get similar endurance you'd have to go higher in storage capacity, so a 20 nm 120 GB drive would last longer than a 16 nm 120 GB drive, am I correct? (Not taking TLC or MLC into account here; I'd rather go with proven metrics, i.e. quality of SSD provider, over theoretical ones.)

  • Ridelynn Member Epic Posts: 7,383


    Originally posted by Dihoru
    If my understanding of what you said is correct, then to get similar endurance you'd have to go higher in storage capacity, so a 20 nm 120 GB drive would last longer than a 16 nm 120 GB drive, am I correct? (Not taking TLC or MLC into account here; I'd rather go with proven metrics, i.e. quality of SSD provider, over theoretical ones.)


    Each cell has a limited number of write cycles - this is true. As the dies get smaller, that number goes down.

    SSD firmware does a couple of things to combat this. First, it has an allocation pattern that mixes your data up across cells and spreads the wear around, so that the first few cells don't get hit with all the write cycles and wear out early. That is called "wear leveling." Second, it either checks or counts (depending on the firmware) each cell, and once a cell is degraded, it flags that cell as exhausted and stops using it. Now, you don't want the size of your drive to shrink over time, so SSD manufacturers actually put in extra cells to compensate - that's called "overprovisioning." Your 120 GB SSD actually has something on the order of 128 GB of raw cells in it (more on some drives), but the drive only exposes 120 GB at a time and uses the extra space in the wear leveling algorithm; then as cells get flagged, that comes out of the overprovision stash, not your usable drive space.

    So here's where the smaller die size comes in: a smaller die size reportedly lowers write endurance, but it also makes a cell much cheaper to produce (or rather, the density of cells on a wafer goes up, and the cost per cell goes way down). Wear leveling and overprovisioning already compensate for weak cells; since cells are cheaper, you just throw more of them in there for the wear leveling and overprovisioning to work with, and you don't see any change in overall drive life.

    A 16 nm cell won't last as long as a 20 nm cell - that is true.

    But a drive based on 16nm cells could have a similar lifespan as one with 20nm cells -- depending on how the manufacturer programs the firmware. I know in Samsung drives you can actually change the amount of overprovisioning.
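
    If it helps, here's a very stripped-down toy version of the wear leveling + overprovisioning idea above (real firmware is far more involved, and every number here is invented):

    # Toy flash translation layer: always write to the least-worn good block,
    # retire blocks that hit their cycle limit, and keep user capacity constant
    # by drawing on an overprovision pool of spare blocks.
    class ToySSD:
        def __init__(self, user_blocks=100, spare_blocks=12, cycles_per_block=3000):
            self.limit = cycles_per_block
            self.user_blocks = user_blocks
            self.wear = {b: 0 for b in range(user_blocks + spare_blocks)}
            self.retired = set()

        def write_one_block(self):
            good = [b for b in self.wear if b not in self.retired]
            if len(good) < self.user_blocks:
                raise RuntimeError("worn out: spare pool exhausted")
            block = min(good, key=lambda b: self.wear[b])  # wear leveling
            self.wear[block] += 1
            if self.wear[block] >= self.limit:
                self.retired.add(block)  # flagged as exhausted, a spare covers for it
            return block

    Running write_one_block() in a loop, the drive keeps presenting 100 good blocks until the 12 spares are used up; more spares (or tougher cells) push that point further out.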

  • Quizzical Member Legendary Posts: 25,350
    Originally posted by Dihoru
    Originally posted by Quizzical

    The general tendency has been toward fewer write cycles per cell but larger capacities in a given die space.  I'm not sure if there is any technical reason for this, but judging by the write endurance numbers that various vendors have claimed, it looks like for an x nm process node, the number of writes you get is proportional to x--so yes, older, larger process nodes get you more writes.  But the amount of data you can store in a given die space is proportional to 1/x^2--and yes, there are technical reasons for this one.  So on net, the amount of data that you can write before wearing out a drive at a given price tag tends to increase as you go to new process nodes, but just not as fast as the capacity of the drive increases.

    Regardless, the amount of writes you can do with one of these drives is still enormous.  You could install and delete a large game every single day, and you'd have several years before you had to worry about wearing the drive out from the writes.  Furthermore, going from MLC to TLC has a much larger impact on write endurance than a die shrink, so if write endurance becomes a problem, I'd bet on TLC SSDs--such as the Samsung 840 EVO--as being the first place it surfaces.  Indeed, that's one reason why I've been hesitant to recommend TLC SSDs, recommending MLC alternatives for the same price--even if they happened to be somewhat slower.

    Ultimately, NAND flash is probably going to go 3D, with many cells stacked on top of each other.  Samsung is already shipping NAND flash with cells stacked 32 high, and IMFT and Hynix plan to start production on their own 3D NAND later this year.  Everyone seems to expect that it will be a couple of years before anyone can do this cheaply enough to compete with traditional methods, though; the process geometries are much larger to start.

    If my understanding of what you said is correct, then to get similar endurance you'd have to go higher in storage capacity, so a 20 nm 120 GB drive would last longer than a 16 nm 120 GB drive, am I correct? (Not taking TLC or MLC into account here; I'd rather go with proven metrics, i.e. quality of SSD provider, over theoretical ones.)

    If you're comparing a 20 nm 120 GB drive to a 16 nm 120 GB drive and everything else is just as well optimized around the NAND flash for both, then yeah, it would take longer to wear out the NAND flash on the 20 nm drive.  But it could easily be a difference between 50 years and 40 years--long enough that the drive will probably die of something else first either way.

    But my argument is that that's looking at things the wrong way.  If you're comparing a 25 nm 120 GB drive to a 16 nm 240 GB drive, the latter will probably give you more writes before it wears out.  It will probably also be cheaper to build--and hence cheaper eventually at retail.  And that's even ignoring that you get double the capacity.
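
    Rough numbers, using the same rule of thumb as before (write cycles per cell roughly proportional to the node size, with an invented 3,000-cycle baseline at 25 nm):

    # Invented 3,000-cycle baseline at 25 nm; only the ratio matters.
    cycles_25nm = 3000
    cycles_16nm = 3000 * 16 / 25                   # ~1,920 cycles per cell
    writes_25nm_120gb = 120 * cycles_25nm / 1000   # ~360 TB of raw writes
    writes_16nm_240gb = 240 * cycles_16nm / 1000   # ~461 TB of raw writes
    print(writes_25nm_120gb, writes_16nm_240gb)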

    Also, Crucial is only using 16 nm NAND flash for their 256 and 512 GB versions of the MX100.  The 128 GB version uses the older 20 nm NAND flash--and at $80, really isn't that good of a deal.

  • finefluff Member Rare Posts: 561
    So I just put one of these in my laptop, but is there any utility I can use for maintenance purposes? I use a Samsung 830 in my desktop and that has the "Samsung SSD Magician," but as far as I know Crucial has not made a similar utility.
  • syntax42 Member Uncommon Posts: 1,378
    Originally posted by naami
    So I just put one of these in my laptop, but is there any utility I can use for maintenance purposes? I use a Samsung 830 in my desktop and that has the "Samsung SSD Magician," but as far as I know Crucial has not made a similar utility.

    From skimming over this document, it looks like the SSD Magician only sets some registry settings and other things you can do manually in Windows.  It does nothing to make the drive perform better, nor does it do anything for maintaining the drive.  It only changes OS settings to make the OS utilize the drive better.  You could completely skip those settings and you wouldn't likely notice much of a difference.

    You might even be able to use that software without a Samsung SSD since it doesn't do anything to the drive.

    Crucial's SSDs come with Acronis True Image for migrating your OS and resizing partitions.

  • syntax42 Member Uncommon Posts: 1,378
    Originally posted by Quizzical
    Originally posted by Dihoru
    Originally posted by Quizzical

    The general tendency has been toward fewer write cycles per cell but larger capacities in a given die space.  I'm not sure if there is any technical reason for this, but judging by the write endurance numbers that various vendors have claimed, it looks like for an x nm process node, the number of writes you get is proportional to x--so yes, older, larger process nodes get you more writes.  But the amount of data you can store in a given die space is proportional to 1/x^2--and yes, there are technical reasons for this one.  So on net, the amount of data that you can write before wearing out a drive at a given price tag tends to increase as you go to new process nodes, but just not as fast as the capacity of the drive increases.

    Regardless, the amount of writes you can do with one of these drives is still enormous.  You could install and delete a large game every single day, and you'd have several years before you had to worry about wearing the drive out from the writes.  Furthermore, going from MLC to TLC has a much larger impact on write endurance than a die shrink, so if write endurance becomes a problem, I'd bet on TLC SSDs--such as the Samsung 840 EVO--as being the first place it surfaces.  Indeed, that's one reason why I've been hesitant to recommend TLC SSDs, recommending MLC alternatives for the same price--even if they happened to be somewhat slower.

    Ultimately, NAND flash is probably going to go 3D, with many cells stacked on top of each other.  Samsung is already shipping NAND flash with cells stacked 32 high, and IMFT and Hynix plan to start production on their own 3D NAND later this year.  Everyone seems to expect that it will be a couple of years before anyone can do this cheaply enough to compete with traditional methods, though; the process geometries are much larger to start.

    If my understanding of what you said is correct, then to get similar endurance you'd have to go higher in storage capacity, so a 20 nm 120 GB drive would last longer than a 16 nm 120 GB drive, am I correct? (Not taking TLC or MLC into account here; I'd rather go with proven metrics, i.e. quality of SSD provider, over theoretical ones.)

    If you're comparing a 20 nm 120 GB drive to a 16 nm 120 GB drive and everything else is just as well optimized around the NAND flash for both, then yeah, it would take longer to wear out the NAND flash on the 20 nm drive.  But it could easily be a difference between 50 years and 40 years--long enough that the drive will probably die of something else first either way.

    But my argument is that that's looking at things the wrong way.  If you're comparing a 25 nm 120 GB drive to a 16 nm 240 GB drive, the latter will probably give you more writes before it wears out.  It will probably also be cheaper to build--and hence cheaper eventually at retail.  And that's even ignoring that you get double the capacity.

    Also, Crucial is only using 16 nm NAND flash for their 256 and 512 GB versions of the MX100.  The 128 GB version uses the older 20 nm NAND flash--and at $80, really isn't that good of a deal.

    I'm currently using an OCZ Agility 3 SSD.  I have been using the drive a little over 2 years with some heavy gaming.  I bounce around a lot between games and uninstall/install games more often than some people.  Crystal Disk Info reports the drive health to be 77%.  I'm guessing that is based on the manufacturer's guaranteed writes per block and my total writes to the drive.  Based on that information, I can expect the drive to last about 10 years before blocks start failing.
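
    Here's the arithmetic behind that guess, assuming the health counter falls roughly linearly with writes (which is itself a guess):

    # If 23 health points were used in a little over two years of this workload,
    # project forward linearly. Purely a back-of-the-envelope estimate.
    years_so_far = 2.2
    health_points_used = 100 - 77
    projected_life = years_so_far * 100 / health_points_used
    print(f"~{projected_life:.0f} years at this pace")   # roughly 10 years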

    I have read some destructive SSD test articles which showed many SSDs getting double or even triple the number of manufacturer guaranteed writes before they start to fail.  While I can expect my drive to last ten years, it may actually last twenty.  However, I'm definitely not going to hang onto it that long.  I would be surprised if my computer lasts ten years before being replaced.  Newer and better storage technology could be out by then.  Even if we don't have faster storage, we will at least have cheaper storage per byte and I will likely have a need for the increased capacity.

    My point is that very few people may actually push their SSDs to failure.  As long as the drives have at least a ten-year life for heavy gaming use, I will continue to recommend others build their systems with a SSD.

  • Quizzical Member Legendary Posts: 25,350
    Originally posted by syntax42

    I'm currently using an OCZ Agility 3 SSD.  I have been using the drive a little over 2 years with some heavy gaming.  I bounce around a lot between games and uninstall/install games more often than some people.  Crystal Disk Info reports the drive health to be 77%.  I'm guessing that is based on the manufacturer's guaranteed writes per block and my total writes to the drive.  Based on that information, I can expect the drive to last about 10 years before blocks start failing.

    I have read some destructive SSD test articles which showed many SSDs getting double or even triple the number of manufacturer guaranteed writes before they start to fail.  While I can expect my drive to last ten years, it may actually last twenty.  However, I'm definitely not going to hang onto it that long.  I would be surprised if my computer lasts ten years before being replaced.  Newer and better storage technology could be out by then.  Even if we don't have faster storage, we will at least have cheaper storage per byte and I will likely have a need for the increased capacity.

    My point is that very few people may actually push their SSDs to failure.  As long as the drives have at least a ten-year life for heavy gaming use, I will continue to recommend others build their systems with a SSD.

    I agree with your main point, but I'd like to emphasize that using SSDs until they die is a bad idea, for about the same reasons that doing that with hard drives is also a bad idea.  Losing data is a much bigger problem than merely having to replace a part.

    We really don't know if SSDs would realistically last 10 years under typical consumer use.  USB flash drives don't seem to last me that long, even with fairly little activity.  There haven't been good SSDs on the market for 10 years yet, and the typical lifespan of the early Intel and Indilinx drives might not be at all similar to the typical lifespan of today's latest.

  • Ridelynn Member Epic Posts: 7,383

    A USB flash drive gets subjected to a harsher environment than a typical SSD does, as well.

    Or at least, I know if I were a USB drive I would rather be installed inside a ventilated computer case rather than thrown on a keychain, carried around in a pocket, run through the laundry, left out in a car in the winter/summer, chewed on by the dog, constructed in extreme bulk at a cut-rate factory by children using tears and unicorn blood for solder, etc...

    That, and expecting any drive to actually last 10 years is unrealistic, SSD or not. A traditional HDD I would ~expect~ to last 5 years, and I would seriously consider replacing it after 3 years. Quiz is right - having a drive outright fail is bad - you don't want to get to that point, but on the flip side, you don't want to waste money unless you have to, so I can understand both sides of the equation. Right now, with the lack of historical data, I have no reason to treat SSDs any differently than HDDs with regard to lifespan, for good or bad, and for me, data loss (or the potential of it) is often worth much, much more than the price of a drive.

    The benchmark for longevity for the SSD is the traditional HDD. And if you can exceed that, all the better, although we won't know if SSDs can actually do that for some time.

  • syntax42 Member Uncommon Posts: 1,378
    Originally posted by Ridelynn

    A USB flash drive gets subjected to a harsher environment than a typical SSD does, as well.

    Or at least, I know if I were a USB drive I would rather be installed inside a ventilated computer case rather than thrown on a keychain, carried around in a pocket, run through the laundry, left out in a car in the winter/summer, chewed on by the dog, constructed in extreme bulk at a cut-rate factory by children using tears and unicorn blood for solder, etc...

    That, and expecting any drive to actually last 10 years is unrealistic, SSD or not. A traditional HDD I would ~expect~ to last 5 years, and I would seriously consider replacing it after 3 years. Quiz is right - having a drive outright fail is bad - you don't want to get to that point, but on the flip side, you don't want to waste money unless you have to, so I can understand both sides of the equation. Right now, with the lack of historical data, I have no reason to treat SSDs any differently than HDDs with regard to lifespan, for good or bad, and for me, data loss (or the potential of it) is often worth much, much more than the price of a drive.

    The benchmark for longevity for the SSD is the traditional HDD. And if you can exceed that, all the better, although we won't know if SSDs can actually do that for some time.

    It wouldn't surprise me if the electronic components fail before the memory components show failed blocks.

    I have washed a few  MicroCenter flash drives in the laundry.  They worked fine after going through the dryer.

  • Vrika Member Legendary Posts: 7,888
    Originally posted by syntax42
    I'm currently using an OCZ Agility 3 SSD.  I have been using the drive a little over 2 years with some heavy gaming.  I bounce around a lot between games and uninstall/install games more often than some people.  Crystal Disk Info reports the drive health to be 77%.  I'm guessing that is based on the manufacturer's guaranteed writes per block and my total writes to the drive.  Based on that information, I can expect the drive to last about 10 years before blocks start failing.

    Can I ask, what "Total Host Writes" does CrystalDiskInfo show for your SSD?

    I've got an OCZ Agility 3 120GB that shows total host writes of 31,173 GB, and disk health status is still 98%.

     
  • TheLizardbones Member Common Posts: 10,910
    Originally posted by Quizzical
    Originally posted by syntax42

    I'm currently using an OCZ Agility 3 SSD.  I have been using the drive a little over 2 years with some heavy gaming.  I bounce around a lot between games and uninstall/install games more often than some people.  Crystal Disk Info reports the drive health to be 77%.  I'm guessing that is based on the manufacturer's guaranteed writes per block and my total writes to the drive.  Based on that information, I can expect the drive to last about 10 years before blocks start failing.

    I have read some destructive SSD test articles which showed many SSDs getting double or even triple the number of manufacturer guaranteed writes before they start to fail.  While I can expect my drive to last ten years, it may actually last twenty.  However, I'm definitely not going to hang onto it that long.  I would be surprised if my computer lasts ten years before being replaced.  Newer and better storage technology could be out by then.  Even if we don't have faster storage, we will at least have cheaper storage per byte and I will likely have a need for the increased capacity.

    My point is that very few people may actually push their SSDs to failure.  As long as the drives have at least a ten-year life for heavy gaming use, I will continue to recommend others build their systems with a SSD.

    I agree with your main point, but I'd like to emphasize that using SSDs until they die is a bad idea, for about the same reasons that doing that with hard drives is also a bad idea.  Losing data is a much bigger problem than merely having to replace a part.

    We really don't know if SSDs would realistically last 10 years under typical consumer use.  USB flash drives don't seem to last me that long, even with fairly little activity.  There haven't been good SSDs on the market for 10 years yet, and the typical lifespan of the early Intel and Indilinx drives might not be at all similar to the typical lifespan of today's latest.

     

    I would fall into the category of "slow adopter", upgrading my PC only when the games I want to play just don't work any longer, no matter what settings I pick.  I've never used a hard drive until it's reached "end of life".  They have either failed quickly, or end up on the stack of hard drives in my basement.  If SSDs have a lifespan comparable to HDDs, then it doesn't seem like their total lifespan would be a real issue for most gamers, given their tendency to have faster upgrade cycles than regular users.

     

    I can not remember winning or losing a single debate on the internet.

  • nbtscan Member Uncommon Posts: 862

    I might have to look into one of those new Crucial drives.  I originally bought a 128GB Intel SSD a couple years ago, but games are getting much larger in size now and I can only manage to fit maybe 2 or 3 installed games + my OS before it's reaching the point where I'm not comfortable with how much space is left on it.

    Crucial/Micron has been a respectable brand for well over a decade so I wouldn't have any qualms about picking up one of their drives.  I think the technology has matured enough at this point.

  • syntax42 Member Uncommon Posts: 1,378
    Originally posted by Vrika
    Originally posted by syntax42
    I'm currently using an OCZ Agility 3 SSD.  I have been using the drive a little over 2 years with some heavy gaming.  I bounce around a lot between games and uninstall/install games more often than some people.  Crystal Disk Info reports the drive health to be 77%.  I'm guessing that is based on the manufacturer's guaranteed writes per block and my total writes to the drive.  Based on that information, I can expect the drive to last about 10 years before blocks start failing.

    Can I ask, what "Total Host Writes" does CrystalDiskInfo show for your SSD?

    I've got an OCZ Agility 3 120GB that shows total host writes of 31,173 GB, and disk health status is still 98%.

    Maybe my power-on hours have an effect on it.  Total writes for me show 5017 GB.

     

    ----------------------------------------------------------------------------
    CrystalDiskInfo 5.6.2 (C) 2008-2013 hiyohiyo
                                    Crystal Dew World : http://crystalmark.info/
    ----------------------------------------------------------------------------
     (1) OCZ-AGILITY3
    ----------------------------------------------------------------------------
               Model : OCZ-AGILITY3
            Firmware : 2.15
       Serial Number : Redacted
           Disk Size : 240.0 GB (8.4/137.4/240.0/240.0)
         Buffer Size : Unknown
         Queue Depth : 32
        # of Sectors : 468862128
       Rotation Rate : ---- (SSD)
           Interface : Serial ATA
       Major Version : ATA8-ACS
       Minor Version : ACS-2 Revision 3
       Transfer Mode : SATA/600
      Power On Hours : 5739 hours
      Power On Count : 933 count
          Host Reads : 7573 GB
         Host Writes : 5017 GB
         Temparature : 30 C (86 F)
       Health Status : Good (77 %)
            Features : S.M.A.R.T., APM, 48bit LBA, NCQ, TRIM
           APM Level : 00FEh [ON]
           AAM Level : ----

     

  • Vrika Member Legendary Posts: 7,888
    Originally posted by syntax42
    Originally posted by Vrika
    Originally posted by syntax42
    I'm currently using an OCZ Agility 3 SSD.  I have been using the drive a little over 2 years with some heavy gaming.  I bounce around a lot between games and uninstall/install games more often than some people.  Crystal Disk Info reports the drive health to be 77%.  I'm guessing that is based on the manufacturer's guaranteed writes per block and my total writes to the drive.  Based on that information, I can expect the drive to last about 10 years before blocks start failing.

    Can I ask, what "Total Host Writes" does CrystalDiskInfo show for your SSD?

    I've got an OCZ Agility 3 120GB that shows total host writes of 31,173 GB, and disk health status is still 98%.

    Maybe my power-on hours have an effect on it.  Total writes for me show 5017 GB.

     ....

      Power On Hours : 5739 hours

      Power On Count : 933 count

    Can't be power on hours either, mine is about the same. And my power on count is 3 times as large as yours.

    Maybe it has something to do with:

    Raw Read Error Rate: 94

    Retired Block Count: 100

     
  • Ridelynn Member Epic Posts: 7,383

    I've got a Crucial C300 that I got near release date (mid-2010). It has almost 10,000 hours on-time. CrystalDisk isn't showing me read/write amounts (maybe I'm doing it wrong). It's at "60%" health, whatever that means. This is my boot volume, and has been for quite some time. I do have an older SSD around somewhere, but this was the replacement, and this one is due to be replaced "soon" - I've been holding out longer than normal looking at just building a new system outright.

    I have an M4 in the same system, only around ~5000 hours, 100% health.
    And a standard Toshiba HDD, maybe 6 months old, 1800 hours on it, with a "Caution" flag based on reallocated sector count.

    I did notice you can set the thresholds for CrystalDisk, so I think the 60% and the warnings are more or less arbitrary, generic values based on some SMART data, and may not really have much basis in anything. Then again, I'm not terribly familiar with this program; I've just been poking around with it this afternoon.

  • syntax42 Member Uncommon Posts: 1,378

    I did some reading on various sites to get some insight on the SSD life expectancy issue.

    It appears the life remaining on the drive is calculated by the drive itself.  It can vary between manufacturers, models, and even firmware versions.  

    From what I have learned about SSDs, there are three values to watch for:  current pending sectors, reported uncorrectable sectors, and reallocated sectors.  The threshold for the first two is zero.  If they increase, it is time to replace the drive.  The reallocated sector count depends more on the size of your drive and the over-provisioning area.  A general rule of thumb would be to replace the drive at your earliest convenience if it goes over 1000 reallocated sectors.
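
    If you want to automate that check, here's a sketch using smartctl from smartmontools; attribute names vary by vendor, so the three below are just the common smartmontools labels, and the thresholds are the rules of thumb above:

    import subprocess

    # Thresholds from the rules of thumb above; adjust the attribute names
    # if your drive reports them differently.
    WATCH = {
        "Current_Pending_Sector": 0,    # any increase above 0: replace the drive
        "Reported_Uncorrect": 0,        # any increase above 0: replace the drive
        "Reallocated_Sector_Ct": 1000,  # rough ceiling before replacing at your convenience
    }

    def check_smart(device="/dev/sda"):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            parts = line.split()
            if len(parts) >= 10 and parts[1] in WATCH:
                raw_value = int(parts[9])
                verdict = "consider replacing" if raw_value > WATCH[parts[1]] else "ok"
                print(f"{parts[1]}: {raw_value} ({verdict})")

    check_smart()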
