M.2 drives affecting PCIe performance?

laxie Member RarePosts: 1,118
edited July 2017 in Hardware
If this comes across as confused and nonsensical, I apologize in advance.

I recall reading that occupying multiple PCIe slots can actually limit how much bandwidth the individual PCIe slots get. From my understanding, some motherboards limit the PCIe speed of the fastest slot if other slots are occupied. Is that correct?

If M.2 drives use PCIe slots, does this mean they will reduce the performance of my graphics card's PCIe slot? If I have my GTX 1080 in one slot and buy two M.2 drives for the other PCIe slots, do I have to worry about impacting my graphics card's performance?

This does not make sense to me at all - am I mixing up different things in my mind?

Comments

  • Renoaku Member EpicPosts: 3,157
    edited July 2017
    Um, I believe it can, because your CPU is likely limited to 16 PCIe lanes. If you add a drive that uses some of that bandwidth, I believe it could cause slower performance with a 1080 Ti, because its reads and writes would take some of that x16 bandwidth. Most CPUs and i7s only have 16 lanes; the exception I'm aware of is the newer socket 2066 models, which go up to 44 lanes, can handle x16/x16 SLI, and have bandwidth to spare for a PCIe SSD.

    I am not 100% sure on this, but I know that if I used two 1080 Tis on my current system it would be a waste of $900, because they would drop to x8/x8, which means each card gets 50% of the bandwidth. If I instead bought an i7 with 28 PCIe lanes, or an i9 with 44, I could run two 1080 Tis at x16/x16, throw in an SSD, and it would all be fine.

    Personally, I wouldn't buy the 28-lane part; I would go with the 44 lanes for dual SLI plus a drive, unless I didn't care about running SLI.

    https://www.newegg.com/Product/Product.aspx?Item=N82E16819117795&ignorebbr=1

    Look on the CPU's spec page where it says "Max Number of PCI Express Lanes" (there's a rough sketch of what those lane counts mean in bandwidth terms at the end of this post).

    Personally, I would just go with a regular SSD connected to one of the internal SATA III ports, add regular hard drives alongside it, and move your personal files (Downloads, Users, and so on) over to those other drives. That way you don't waste writes on the SSD.

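    To put rough numbers on the x8/x8 split and lane counts above, here is a minimal Python sketch of theoretical PCIe 3.0 bandwidth per link width. The per-lane figure follows from 8 GT/s with 128b/130b encoding; real-world throughput is lower, so treat this as an illustration rather than a benchmark.

    # Theoretical PCIe 3.0 bandwidth per direction, ignoring protocol overhead.
    GT_PER_LANE = 8.0                        # PCIe 3.0: 8 gigatransfers/s per lane
    ENCODING_EFFICIENCY = 128 / 130          # 128b/130b line coding
    GB_PER_LANE = GT_PER_LANE * ENCODING_EFFICIENCY / 8   # ~0.985 GB/s per lane

    def pcie3_bandwidth_gbs(lanes):
        """Theoretical one-direction PCIe 3.0 bandwidth in GB/s."""
        return lanes * GB_PER_LANE

    for width in (16, 8, 4):
        print(f"x{width}: {pcie3_bandwidth_gbs(width):.2f} GB/s theoretical")

    # Prints roughly: x16: 15.75, x8: 7.88, x4: 3.94 -- so two cards running
    # x8/x8 each keep about 7.9 GB/s, half the single-card x16 figure.

    Whether the drop from x16 to x8 actually matters in games is a separate question; as the chart later in this thread suggests, the practical impact at x8 is usually small.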
  • Gdemami Member EpicPosts: 12,342
    edited July 2017
    laxie said:
    If this comes across as confused and nonsensical, I apologize in advance.

    I recall reading that occupying multiple PCIe slots can actually limit how much bandwidth the individual PCIe slots get. From my understanding, some motherboards limit the PCIe speed of the fastest slot if other slots are occupied. Is that correct?

    If M.2 drives use PCIe slots, does this mean they will reduce the performance of my graphics card's PCIe slot? If I have my GTX 1080 in one slot and buy two M.2 drives for the other PCIe slots, do I have to worry about impacting my graphics card's performance?

    This does not make sense to me at all - am I mixing up different things in my mind?
    1) M.2 is just a form factor; it does not mean the drive must use PCIe (plenty of M.2 drives are SATA).

    2) The graphics card uses CPU lanes pretty much exclusively; the SSD will likely use lanes provided by the chipset.
  • Quizzical Member LegendaryPosts: 25,355
    edited August 2017
    There are only so many PCI Express lanes coming off of the socket or chipset.  Which things share lanes with which others depends on how the motherboard is laid out.  If you get something like Threadripper that has a ton of PCI Express lanes, a reasonable motherboard will let you connect everything you reasonably want.

    The more mainstream consumer platforms are more likely to have some contention, as 16 lanes for a video card and 4 each for two m.2 slots is 24 PCI Express lanes right there.  It's also possible that if you connect two m.2 drives, they'll be competing with each other for bandwidth and not each have their own full x4 connection, and my guess is that that's more common than splitting lanes off of the x16 connection for the video card.  That's just a guess, however, and could easily be wrong.
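
    As a quick illustration of that lane budget, here is a small sketch that adds up what a hypothetical build would like against the 16 CPU lanes a mainstream platform exposes for slots. The device list and lane counts are a made-up example, not a specific board.

    # Hypothetical example: lanes the devices would like versus the 16 CPU
    # lanes a mainstream platform exposes for slots.
    cpu_lanes = 16

    wanted = {
        "GTX 1080 (x16 slot)": 16,
        "M.2 NVMe SSD #1": 4,
        "M.2 NVMe SSD #2": 4,
    }

    total = sum(wanted.values())
    print(f"Lanes requested: {total}, CPU lanes available: {cpu_lanes}")
    if total > cpu_lanes:
        print("Shortfall: some devices have to hang off the chipset, "
              "or the x16 slot drops to x8.")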
  • Cleffy Member RarePosts: 6,412
    You need to read the board and CPU/chipset specifications, as it is case by case. Most motherboards treat M.2 drives like any other PCIe device and split lanes based on which slots are populated. I know that something like Ryzen has a dedicated x4 link just for M.2, and the X370 chipset allows an additional x4 by disabling two SATA connections. Then it has an x16 dedicated just to the GPU that can run as 2 x8 with two GPUs. This will be noted in any motherboard's documentation.
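
    The kind of sharing described here is usually spelled out in the manual as "if connector A is populated, ports B and C are disabled." Below is a toy sketch of how you might keep track of that while planning a build; the connector names and rules are invented for illustration, not taken from any real manual.

    # Toy model of manual-style sharing rules: populating one connector
    # disables or degrades others. Names and rules are invented.
    sharing_rules = {
        "M2_2": ["SATA_5", "SATA_6"],     # second M.2 slot borrows two SATA ports
        "PCIE_2": ["PCIE_1 runs at x8"],  # populating slot 2 halves slot 1
    }

    populated = ["M2_2", "PCIE_2"]

    lost = [item for conn in populated for item in sharing_rules.get(conn, [])]
    print("Consequences of this configuration:", lost)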
  • Ridelynn Member EpicPosts: 7,383
    In real-world use, I've rarely found PCIe bandwidth/saturation to be a huge bottleneck. You have to drop down to x4 on the GPU before you see much impact on the card. This is an older chart (from here), but I think it still holds up fairly well.

    [chart: GPU performance scaling across PCIe link widths]
    Is it theoretically possible that an M.2 drive on direct PCIe could impact GPU performance? Sure, you can contrive some situation where it does. But I don't think it would be terribly commonplace, if found anywhere in the real world at all. Streaming high-bitrate video to the M.2 controller at over a GB/s (you might hit it with an uncompressed 4K stream) while SLI/CrossFire gaming? Running a high-volume production database while crunching GPU AI calculations?
  • Quizzical Member LegendaryPosts: 25,355
    Many motherboards have the PCI Express x16 connection set up such that it is paired with another PCI Express slot.  If you put something in the secondary slot, you get an x8 connection to both slots.  If the secondary slot is empty, the primary slot gets the full x16 bandwidth.  If the primary slot is empty, the secondary slot can still only get x8 bandwidth because that's all that it's wired for.

    It is possible to have a PCI Express x16 connection set up such that two slots share its bandwidth and either can use just about all of the x16 bandwidth even if the other slot has something in it that merely isn't using very much bandwidth at the moment.  That's far more expensive to build, however, so it's almost never done.

    For what it's worth, PCI Express 3.0 x16 gives you 16 GB/s theoretical bandwidth, but real-world measured bandwidth generally tops out at around 10 GB/s, even in simple synthetic cases of copy a bunch of data and don't do anything else.  You have to jump through some hoops to even get that 10 GB/s, so you could run into meaningful problems from PCI Express data transfers while using far less bandwidth than that, even.

    Some programs can overwhelm a PCI Express 3.0 x16 connection such that the GPU is mostly waiting for data to come in and out.  Games tend not to need all that much bandwidth, though, as stuff gets buffered on the GPU.  If you run out of video memory so that the game has to constantly shuffle things in and out as they get used, you can get a huge PCI Express bottleneck in a hurry.

    The real fix to that is more video memory or turning down settings, not more PCI Express bandwidth.  This is much like saying that if you're running out of system memory and paging to disk constantly, the real fix is getting more system memory, not getting a faster SSD to make paging to disk less painful.
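
    To make the video-memory point concrete, here is a rough back-of-the-envelope sketch using the ~10 GB/s measured figure mentioned above. The spill sizes and the 60 fps frame budget are illustrative assumptions, not measurements.

    # Rough arithmetic: how long does shuffling spilled textures over the bus
    # take, compared with a 60 fps frame budget? Numbers are illustrative.
    effective_bw_gbs = 10.0        # ~real-world PCIe 3.0 x16 throughput, GB/s
    frame_budget_ms = 1000 / 60    # ~16.7 ms per frame at 60 fps

    for spill_mb in (64, 256, 1024):
        transfer_ms = spill_mb / 1024 / effective_bw_gbs * 1000
        print(f"{spill_mb:4d} MB per frame takes ~{transfer_ms:6.1f} ms "
              f"of a {frame_budget_ms:.1f} ms frame budget")

    # Even a couple hundred MB of traffic per frame eats the whole frame time,
    # which is why more VRAM or lower settings is the real fix, not more bus.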
  • Quizzical Member LegendaryPosts: 25,355
    Torval said:
    It does depend on the board, but newer boards shouldn't have the problem because PCIe is split on two different buses.

    Here is an easy to read chart from PugetSystems. https://www.pugetsystems.com/labs/articles/Z170-H170-H110-B170-Q150-Q170---What-is-the-Difference-635/

    It looks like the Z170 chipset is basically a splitter chip that has 4 GB/s theoretical bandwidth to connect it to the CPU.  If you want to connect an m.2 SSD and get 3 GB/s of bandwidth, it's possible, at least with a sufficiently fast SSD.  If you want to connect three m.2 SSDs and get 3 GB/s of bandwidth from each of them, you can't do all of that at once.  But one SSD could use all of that bandwidth while the other two are idle.

    If you wanted to get full bandwidth to three m.2 SSDs all at once, that would still be possible if you split the processor x16 connection into x8-x4-x4 and used two of the x4s for m.2 SSDs.  But needing to do that would be pretty rare for consumer use, so I'd expect motherboards to rarely to never set it up that way.
  • Quizzical Member LegendaryPosts: 25,355
    Torval said:
    Quizzical said:
    Torval said:
    It does depend on the board, but newer boards shouldn't have the problem because PCIe is split on two different buses.

    Here is an easy to read chart from PugetSystems. https://www.pugetsystems.com/labs/articles/Z170-H170-H110-B170-Q150-Q170---What-is-the-Difference-635/

    It looks like the Z170 chipset is basically a splitter chip that has 4 GB/s theoretical bandwidth to connect it to the CPU.  If you want to connect an m.2 SSD and get 3 GB/s of bandwidth, it's possible, at least with a sufficiently fast SSD.  If you want to connect three m.2 SSDs and get 3 GB/s of bandwidth from each of them, you can't do all of that at once.  But one SSD could use all of that bandwidth while the other two are idle.

    If you wanted to get full bandwidth to three m.2 SSDs all at once, that would still be possible if you split the processor x16 connection into x8-x4-x4 and used two of the x4s for m.2 SSDs.  But needing to do that would be pretty rare for consumer use, so I'd expect motherboards to rarely to never set it up that way.
    I'm still learning about it, but it does seem that way. If I'm understanding it right, for Z170 there are 16 CPU lanes and 20 chipset lanes, and those are separate buses with the ability to split them up. Everything on the chipset lanes shares that bandwidth, and everything on the CPU lanes shares that.

    I'm piecing together that Puget article with some of the comments from the Reddit thread, specifically this comment from euvie:

    Note that while an M.2 SSD won't suck bandwidth from anything attached to the 16 PCIe 3.0 lanes from the CPU, the DMI link between the z170 and the CPU is more limited. So everything attached to the z170 effectively shares the bandwidth of 4 PCIe 3.0 lanes. This includes M.2, SATA, network adapters, USB, etc.

    Which isn't an issue in practice unless you're trying to RAID PCIe SSDs off of the z170.

    I may be misinterpreting the total lane counts and how things are physically and logically split. It's not something I've done much reading about yet.
    Think of it as: the CPU has 16 dedicated PCI Express lanes coming off of it, and it can use them as a single x16 or split them as x8/x8 or x8/x4/x4.  However it splits them, they're dedicated lanes that don't share any bandwidth.  If you split it as x8/x8, both connections can use the full x8 bandwidth at the same time, but neither can use more than an x8 connection even if the other is idle.

    The chipset has 20 PCI Express lanes coming off of it that have dedicated access to the chipset, but the entire chipset only has an x4 connection to the CPU.  (The connection from the chipset to the CPU isn't truly PCI Express, but that doesn't matter for this comparison.)  Thus, everything coming off of the chipset has to share that x4 connection to the CPU.  If you want one SSD to use the full x4 bandwidth and nothing else is using any bandwidth at all, it can.  But if you want two SSDs to both use the full x4 bandwidth at once, they can't because they share the x4 bandwidth to the CPU.  They can get their data to the chipset just fine, but the chipset can't get it all to the CPU fast enough.

    If the SSDs were plugged into the x16 connection from the CPU, split as x8/x4/x4 with all three of those used for SSDs, then they could each have their own dedicated connection from that just fine and all use their full bandwidth simultaneously.  But going through the chipset, they could all use perhaps 1 GB/s at once, or any one could use its full bandwidth while the other two are idle, but they can't all use full bandwidth at once as they'll overwhelm the single x4 connection from the chipset to the CPU.

    Thus, if Jean-Luc were to decide to take three of his SSDs, put them in M.2 slots, and push them all at once, it would probably top out at around 3 GB/s of reads total.  For consumer use, that's plenty.  For some enterprise uses, it's not.  A new AMD Epyc CPU would allow you to have a single-socket server with perhaps 30 or so M.2 SSDs all using their full bandwidth simultaneously, at least if the system memory the data comes from or goes to isn't overwhelmed, as it has enough bandwidth for that many dedicated PCI Express x4 connections to the CPU.
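
    Here is a minimal sketch of that contention: each SSD could reach about 3 GB/s on its own x4 link, but everything behind the chipset shares one roughly-x4 DMI 3.0 uplink to the CPU, so aggregate throughput is capped. The drive speeds and the even sharing are assumptions for illustration.

    # Three NVMe SSDs behind the chipset, all reading at once. Each has its
    # own x4 link to the chipset, but the chipset shares one ~x4 DMI uplink
    # to the CPU. Numbers are assumptions for illustration.
    dmi_uplink_gbs = 3.9                 # ~theoretical PCIe 3.0 x4 equivalent
    ssd_speeds_gbs = [3.0, 3.0, 3.0]     # what each drive could do on its own

    demanded = sum(ssd_speeds_gbs)
    aggregate = min(demanded, dmi_uplink_gbs)
    per_drive = aggregate / len(ssd_speeds_gbs)

    print(f"Drives want {demanded:.1f} GB/s, the uplink delivers {aggregate:.1f} GB/s")
    print(f"Shared evenly, each drive gets ~{per_drive:.2f} GB/s instead of 3.0 GB/s")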
  • Quizzical Member LegendaryPosts: 25,355
    edited August 2017
    Quizzical said:
    Thus, if Jean-Luc were to decide to take three of his SSDs, put them in M.2 slots, and push them all at once, it would probably top out at around 3 GB/s of reads total.
    Depends on the motherboard and chipset.
    That is certainly true, and it's why I had to say "probably".  But I'd be surprised if you've got a Skylake motherboard where using an M.2 slot means you can't get full bandwidth to the video card.  It's certainly possible and even easy to build such a motherboard, but it makes little sense for consumer use.

    If you got a dedicated RAID card that can handle m.2 slots (I assume that such a thing exists, though I don't know of any) and plugged it into the x16 slot instead of a video card, then you'd be able to push all of the SSDs at once.  Performance in some games would suffer, however.
  • Quizzical Member LegendaryPosts: 25,355
    Not hard to verify with Aida 64 for instance...

    Video Card:
    Device Description    Gigabyte GeForce GTX 980 Ti Video Adapter
    Bus Type    PCI Express 3.0 x16

    SSD:
    Device Description    Phison PS5007 NVMe SSD Controller
    Bus Type    PCI Express 3.0 x4

    For now at least, everything is ok ;)
    Z270X has 24 PCI Express lanes anyway, and 30 HSIO lanes.
    Last line of the chart is interesting too.

    [TechPowerUp chart comparing Intel 100-series and 200-series chipset I/O]

    "Thirdly, and this could be of more relevance to PC enthusiasts, the 200-series chipsets have more downstream (general purpose) PCI-Express gen 3.0 lanes. The chipsets have 14 downstream PCIe lanes; compared to 10 on the 100-series chipsets. The LGA1151 processor has 16 PCI-Express gen 3.0 lanes it sets aside for graphics, and four lanes that go to the chipset as physical layer of the DMI 3.0 chipset bus. This means motherboard designers can cram in additional bandwidth-heavy onboard devices such as Thunderbolt and USB 3.1 controllers; additional M.2 slots, or just more PCIe slots with greater than x1 bandwidth. This takes the platform's total PCIe lane budget to 30, compared to 26 on the 100-series chipset motherboards."
    No matter how many lanes you have coming off of the chipset, DMI 3.0 means you still have essentially an x4 connection from the chipset to the CPU, and everything coming off of the chipset has to share that bandwidth.  The chipset does mean that, for example, you could have three M.2 drives each with their own x4 connection to the chipset, and the one that is active can use all of that bandwidth to the CPU if the others are idle.  That's plenty good enough for nearly all consumer use, which is why Intel designed it that way.

    But if you try to put three m.2 drives in RAID 0 on that motherboard, you're likely to be disappointed with the results.  The HEDT and server platforms exist for a reason.
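
    As an aside, the link check quoted above with Aida 64 can also be done on Linux by reading the PCIe attributes that sysfs exposes per device. This is a minimal sketch and assumes a Linux system; devices that don't report a link width are skipped.

    # Read negotiated PCIe link width/speed from sysfs (Linux only).
    # Not every PCI device exposes these attributes, so skip the ones that don't.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        width_file = dev / "current_link_width"
        speed_file = dev / "current_link_speed"
        if not width_file.exists():
            continue
        try:
            width = width_file.read_text().strip()
            speed = speed_file.read_text().strip()
        except OSError:
            continue   # some devices deny reads or report nothing useful
        print(f"{dev.name}: x{width} @ {speed}")

    A graphics card should report x16 (or x8 if the slot is shared) and an NVMe drive x4, much like the Aida 64 output quoted above.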