
We are now officially living in a 64-bit world

Quizzical Member Legendary Posts: 25,348
AMD just announced the end of video drivers for 32-bit operating systems:

https://www.anandtech.com/show/13520/amd-ceases-32-bit-graphics-driver-development

Nvidia had already ceased support of 32-bit operating systems earlier this year:

https://www.anandtech.com/show/12191/nvidia-to-cease-driver-development-for-32bit-operating-systems

That means that there are no longer any supported GPUs on 32-bit operating systems.  Some game developers have already felt free to ignore the existence of 32-bit operating systems; any game that requires more than 4 GB of memory certainly does.  But this pretty much makes it official that 32-bit PC gaming is dead.

Comments

  • mmolou Member Uncommon Posts: 256
    Games with only 32-bit clients and no 64-bit client beg to differ.
    It is a funny world we live in.
    We had Empires run by Emperors, we had Kingdoms run by Kings, now we have Countries...
  • Scot Member Legendary Posts: 22,955
    mmolou said:
    Games with only 32-bit clients and no 64-bit client beg to differ.
    Those rebels!
  • Ridelynn Member Epic Posts: 7,383
    mmolou said:
    Games with only 32-bit clients and no 64-bit client beg to differ.
    Good thing 32-bit clients run on x64
  • Quizzical Member Legendary Posts: 25,348
    My point is not that all 32-bit executables must immediately vanish.  Rather, it is that if a game developer wants to go exclusively 64-bit, use more than 2 GB of memory in their game process, and not care that it can't run on a 32-bit operating system, it's now pretty safe to do so.  A decade ago, trying that would have killed the game.

    The transition has been gradual, of course.  For a game developer to go 64-bit only a week ago wasn't substantially different from doing so today.  But the end of GPU driver support for 32-bit operating systems is about as clear of a milestone on that path as we're going to get, so I thought I'd highlight it.
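    To make the "64-bit only" decision concrete, here is a minimal, purely illustrative Java sketch (my own, not from the thread; the class name is made up and the sun.arch.data.model property is HotSpot-specific) of how a launcher could bail out on a 32-bit environment before trying to allocate a large heap:

    ```java
    // Hypothetical launcher check -- a sketch, not any real game's code.
    public class BitnessCheck {
        public static void main(String[] args) {
            // "sun.arch.data.model" reports the JVM's pointer size on HotSpot ("32" or "64");
            // "os.arch" is a standard property (e.g. "x86", "amd64").
            String dataModel = System.getProperty("sun.arch.data.model", "unknown");
            String osArch = System.getProperty("os.arch", "unknown");

            boolean looks64Bit = "64".equals(dataModel) || osArch.contains("64");
            if (!looks64Bit) {
                System.err.println("This game requires a 64-bit operating system and a 64-bit JVM.");
                System.exit(1);
            }
            System.out.println("64-bit environment detected (" + osArch + "), continuing...");
        }
    }
    ```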
  • Cleffy Member Rare Posts: 6,412
    edited October 2018
    But what about people playing on a tablet with an Intel GPU? Certainly Intel didn't drop support. Of course, it's also a completely irrelevant group.
  • Quizzical Member Legendary Posts: 25,348
    Cleffy said:
    But what about people playing on a tablet with an Intel GPU? Certainly Intel didn't drop support. Of course, it's also a completely irrelevant group.
    As best as I can tell, Intel's modern driver support is only for the integrated GPU in Skylake and later, plus the AMD GPU in Kaby Lake-G.  Meanwhile, for Broadwell and later, they never had 32-bit OS support for any GPUs.  The only 32-bit GPU drivers I can find on Intel's site look like they're legacy drivers for GPUs that aren't getting the modern updates, but only an occasional security fix.

    So among the Intel GPUs on active support, the only reason why they haven't yet dropped support for 32-bit is that they never supported it in the first place.
  • TheScavenger Member Epic Posts: 3,321
    What prehistoric caveman still games on a 32bit system?

    My Skyrim, Fallout 4, Starbound and WoW + other game mods at MODDB: 

    https://www.moddb.com/mods/skyrim-anime-overhaul



  • Quizzical Member Legendary Posts: 25,348
    edited October 2018
    TheScavenger said:
    What prehistoric caveman still games on a 32bit system?
    For starters, most of the ones still running Windows XP, even though it's long since off support.  But not enough for AMD or Nvidia to care about, which is the point.
  • TheScavenger Member Epic Posts: 3,321
    Quizzical said:
    TheScavenger said:
    What prehistoric caveman still games on a 32bit system?
    For starters, most of the ones still running Windows XP, even though it's long since off support.  But not enough for AMD or Nvidia to care about, which is the point.
    What games would they be able to play on Windows XP, besides old, archaic games from the past? Steam got rid of Windows XP support, WoW got rid of Windows XP support, and I'm sure many others are following suit and dropping Windows XP support.

    I guess some people might have super old PCs that can't run new games anyway?

    But I don't really know what other use such an old PC would have. You wouldn't even be able to make good YouTube videos on a PC that old (and no one is going to watch someone play on a junk PC), so no YouTube career either. Most art programs, at least the good ones, need a pretty decent PC to run, so digital art is out too.

    I think those stuck on Windows XP are probably mostly seniors who can't adapt, or people in very poor countries (and I'll concede it's a shame for them, since upgrading can cost tens of thousands of dollars when converted to USD, while in the US it would be about $1k for a really decent setup).

    But it would be so limiting, because at that point the PC probably sucks anyway. New PCs have long since ditched XP and at best ship with Windows 7 if not Windows 10, and again, you can't do YouTube videos or much else on a bad PC. So I don't know what use it would have, except for seniors and for poor, undeveloped countries.

    Maybe I'm missing something, but all the stuff I personally use needs 64-bit to be efficient. Photoshop? Needs a pretty decent PC or it's too slow. Sony Vegas Pro for YouTube? Again, needs a good PC. High-quality gaming videos? Again, a good PC, and all the newer games really need a 64-bit system to run.

  • Ridelynn Member Epic Posts: 7,383
    x64 doesn't really have anything to do with speed. It's all about maximum supported memory access.

    PCs with <= 4GB of RAM are still sold today all the time.

    I do agree it would be silly to buy a brand new i9 and just install 2GB of RAM though.
  • Quizzical Member Legendary Posts: 25,348
    Ridelynn said:

    I do agree it would be silly to buy a brand new i9 and just install 2GB of RAM though.
    As best as I can tell, you can't do that.  DDR4 memory modules don't seem to come smaller than 4 GB.
  • Ridelynn Member Epic Posts: 7,383
    Quizzical said:
    Ridelynn said:

    I do agree it would be silly to buy a brand new i9 and just install 2GB of RAM though.
    As best as I can tell, you can't do that.  DDR4 memory modules don't seem to come smaller than 4 GB.
    See? Silly.
  • Cleffy Member Rare Posts: 6,412
    edited October 2018
    You can process more data 64-bits at a time than 32-bits at a time. A 64-bit float is also more precise than a 32-bit float. Of course a 64-bit float really isn't something necessary for consumer applications. It's really an edge case for certain types of precise computations
  • Renoaku Member Epic Posts: 3,157
    Great news, can we get 128-bit?
  • Vrika Member Legendary Posts: 7,888
    Renoaku said:
    Great news, can we get 128-bit?
    I imagine we'll make the move as soon as we start building systems with more than 10 000 000 000 GB of memory.
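    For anyone who wants to sanity-check that figure, a 64-bit address space covers

    2^64 bytes = 18,446,744,073,709,551,616 bytes ≈ 1.8 × 10^10 GB (about 16 exabytes),

    so "more than 10 000 000 000 GB" is roughly where the 64-bit ceiling sits.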
     
  • Quizzical Member Legendary Posts: 25,348
    Cleffy said:
    You can process more data 64-bits at a time than 32-bits at a time. A 64-bit float is also more precise than a 32-bit float. Of course a 64-bit float really isn't something necessary for consumer applications. It's really an edge case for certain types of precise computations
    Programmers commonly use doubles (64-bit floating-point) just so that they don't have to stop to think about whether a float will be sufficient precision.  And then sometimes find a creative way to do something stupid so that a double isn't sufficient precision, either.
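    To make the precision point concrete, here is a small illustrative Java snippet (my own sketch, not from the post) showing a whole number a float can't hold exactly, plus one way even a double quietly loses additions:

    ```java
    public class PrecisionDemo {
        public static void main(String[] args) {
            // 16,777,217 = 2^24 + 1 is not representable as a 32-bit float
            // (a float has a 24-bit significand), but a 64-bit double holds it exactly.
            float f = 16_777_217f;
            double d = 16_777_217d;
            System.out.println(f == 16_777_216f); // true: the float rounded to 2^24
            System.out.println(d == 16_777_217d); // true: the double is exact

            // The "creative way to do something stupid": add a tiny value to a huge one.
            // 0.001 is far below the spacing between adjacent doubles near 1e17,
            // so every single addition rounds away to nothing.
            double big = 1e17;
            for (int i = 0; i < 1_000; i++) {
                big += 1e-3;
            }
            System.out.println(big == 1e17); // true: all 1,000 additions vanished
        }
    }
    ```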
  • MadFrenchie Member Legendary Posts: 8,505
    Quizzical said:
    Cleffy said:
    You can process more data 64-bits at a time than 32-bits at a time. A 64-bit float is also more precise than a 32-bit float. Of course a 64-bit float really isn't something necessary for consumer applications. It's really an edge case for certain types of precise computations
    Programmers commonly use doubles (64-bit floating-point) just so that they don't have to stop to think about whether a float will be sufficient precision.  And then sometimes find a creative way to do something stupid so that a double isn't sufficient precision, either.
    I can second the fact that doubles are seen as a default.  I've started taking classes towards an IT degree and the material and professor always use doubles whenever integers aren't sufficient.

    It spent like one paragraph explaining the difference between float and double, then discarded float completely.

  • Phry Member Legendary Posts: 11,004
    no doubt in a few years time, when we're discussing the benefits of 128 bit Processors etc. we'll be looking back on 64 bit systems with nostalgia, probably necropost it too  ;)
  • Scot Member Legendary Posts: 22,955
    Phry said:
    no doubt in a few years time, when we're discussing the benefits of 128 bit Processors etc. we'll be looking back on 64 bit systems with nostalgia, probably necropost it too  ;)
    My understanding is that 64 was never a priority for gaming, I guess at least it means we have bigger drives and better chips, even if only to handle it! Roll on 124 Bloatbit. :)
  • Quizzical Member Legendary Posts: 25,348
    Scot said:
    Phry said:
    no doubt in a few years time, when we're discussing the benefits of 128 bit Processors etc. we'll be looking back on 64 bit systems with nostalgia, probably necropost it too  ;)
    My understanding is that 64 was never a priority for gaming, I guess at least it means we have bigger drives and better chips, even if only to handle it! Roll on 124 Bloatbit. :)
    The move from 8-bit to 16-bit to 32-bit was driven by the need to efficiently process larger numbers for a wide variety of reasons.  In a sense, the move from 32-bit to 64-bit was also driven by that, but there, the dominant reason to need 64-bit numbers was memory addressing.

    It is sometimes said that you can't address more than 4 GB of memory with a 32-bit CPU.  That's not actually true, as you can do it by chaining together computations.  For example, a Sega Master System had 8 KB of system memory with an 8-bit CPU, which is a lot more than the 256 bytes that you can directly handle with 8-bit computations.  What is true is that you can't do it efficiently, and needing to do several computations to get a memory address every time you want to access memory will cause a huge performance hit.

    A lot of things would eventually want more than 4 GB of memory, but servers were the first thing that really drove it.  You might reasonably think of 64 GB of memory as being a lot today, and for a desktop or laptop it is, but it's really not very much for a server.  Today, you can easily get 192 GB per socket even with cheap 16 GB modules, or as much as 2 TB per socket with some fancier server stuff, and the reason you can get it is that there are customers that need it.  When AMD launched the Athlon 64 about 15 years ago, 1 GB was a lot in a desktop, but 4 GB wasn't nearly enough for a lot of servers.

    There is already a need for larger than 64-bit computations for a variety of purposes.  It's just a lot less common than needing 64-bit in cases where 32-bit isn't enough.  For now, this is handled by chaining together 64-bit instructions.  There are a lot of programming tools to do this for you under the hood, such as Java's BigInteger class.

    If applications that need 128-bit computations become very common, then we'll see a rise of 128-bit CPUs to handle those computations.  But until then, it's more efficient to just make cores better at 64-bit instructions and handle larger numbers by chaining together multiple instructions, as that gives better performance in most of the things that people do.
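    As a small illustration of that chaining, here is a short Java sketch (mine, not from the post) using the BigInteger class mentioned above; a plain 64-bit long wraps around, while BigInteger stitches together fixed-width operations under the hood so wider values just work, only more slowly:

    ```java
    import java.math.BigInteger;

    public class WideArithmetic {
        public static void main(String[] args) {
            // A 64-bit long overflows silently once it passes 2^63 - 1.
            long maxLong = Long.MAX_VALUE;   // 9,223,372,036,854,775,807
            System.out.println(maxLong + 1); // wraps to -9,223,372,036,854,775,808

            // BigInteger handles 128-bit (and wider) values by chaining smaller operations.
            BigInteger twoTo128 = BigInteger.ONE.shiftLeft(128);
            System.out.println(twoTo128);    // 340282366920938463463374607431768211456
            System.out.println(twoTo128.multiply(twoTo128).bitLength()); // 257 bits: the product is 2^256
        }
    }
    ```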
  • Ridelynn Member Epic Posts: 7,383
    Scot said:
    Phry said:
    no doubt in a few years time, when we're discussing the benefits of 128 bit Processors etc. we'll be looking back on 64 bit systems with nostalgia, probably necropost it too  ;)
    My understanding is that 64 was never a priority for gaming, I guess at least it means we have bigger drives and better chips, even if only to handle it! Roll on 124 Bloatbit. :)
    Fun fact.

    Nintendo 64 was a 64-bit gaming system, and it released in 1996. Granted, subsequent consoles were mixed: Nintendo went back to 32-bit PPC for their next consoles, PS1 was 32-bit, PS2 was sort of 128 bit, and current consoles from MS/Sony are all x86-64.

    guess it kinda goes to show 64-bit in and of itself is just a means to an end, it doesn’t really do much by itself
  • Quizzical Member Legendary Posts: 25,348
    Quizzical said:
    Cleffy said:
    You can process more data 64-bits at a time than 32-bits at a time. A 64-bit float is also more precise than a 32-bit float. Of course a 64-bit float really isn't something necessary for consumer applications. It's really an edge case for certain types of precise computations
    Programmers commonly use doubles (64-bit floating-point) just so that they don't have to stop to think about whether a float will be sufficient precision.  And then sometimes find a creative way to do something stupid so that a double isn't sufficient precision, either.
    I can second the fact that doubles are seen as a default.  I've started taking classes towards an IT degree and the material and professor always use doubles whenever integers aren't sufficient.

    It spent like one paragraph explaining the difference between float and double, then discarded float completely.
    A lot is driven by what the hardware you're using is built to do.  On a 64-bit x86 CPU, if doing computations with floats isn't really any faster than doing them with doubles, then why not just use doubles?  That avoids potential issues where floating-point rounding errors would cause trouble if using floats but not doubles.

    Besides, floating-point data types are algebraically weird, as neither addition nor multiplication is associative.  Very few people really want to know or care what they're doing, or just how big the rounding errors are.  More bits in the mantissa gives you smaller rounding errors, which is sometimes a good thing and sometimes irrelevant, but pretty much never a bad thing.

    When you run into situations where it does make a big performance difference, then maybe you use floats.  This could be because you have a large number of them and doubles take twice as much memory.  The difference between 4 bytes of memory for a variable and 8 doesn't matter if it's just one variable, but if you have an array of a billion of them, the difference between 4 GB and 8 GB might.  Or if you're doing those computations on a system heavily optimized for 32-bit, such as a GPU, then you don't use doubles unless you're forced to or you're careless.
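    A quick illustrative Java sketch (mine, not from the post) of both points, the non-associativity and the 4 GB vs. 8 GB arithmetic for a billion elements:

    ```java
    public class FloatQuirks {
        public static void main(String[] args) {
            // Floating-point addition is not associative: grouping changes the answer.
            float a = 1e20f, b = -1e20f, c = 1.0f;
            System.out.println((a + b) + c); // 1.0 : a and b cancel first, then the 1 survives
            System.out.println(a + (b + c)); // 0.0 : the 1 is absorbed into -1e20 and lost

            // The memory argument: a billion floats vs. a billion doubles.
            long n = 1_000_000_000L;
            System.out.println(n * 4 / 1_000_000_000 + " GB of floats");  // 4 GB
            System.out.println(n * 8 / 1_000_000_000 + " GB of doubles"); // 8 GB
        }
    }
    ```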
  • MadFrenchie Member Legendary Posts: 8,505
    Quizzical said:
    Quizzical said:
    Cleffy said:
    You can process more data 64-bits at a time than 32-bits at a time. A 64-bit float is also more precise than a 32-bit float. Of course a 64-bit float really isn't something necessary for consumer applications. It's really an edge case for certain types of precise computations
    Programmers commonly use doubles (64-bit floating-point) just so that they don't have to stop to think about whether a float will be sufficient precision.  And then sometimes find a creative way to do something stupid so that a double isn't sufficient precision, either.
    I can second the fact that doubles are seen as a default.  I've started taking classes towards an IT degree and the material and professor always use doubles whenever integers aren't sufficient.

    It spent like one paragraph explaining the difference between float and double, then discarded float completely.
    A lot is driven by what the hardware you're using is built to do.  On a 64-bit x86 CPU, if doing computations with floats isn't really any faster than doing them with doubles, then why not just use doubles?  That avoids potential issues where floating-point rounding errors would cause trouble if using floats but not doubles.

    Besides, floating-point data types are algebraically weird, as neither addition nor multiplication is associative.  Very few people really want to know or care what they're doing, or just how big the rounding errors are.  More bits in the mantissa gives you smaller rounding errors, which is sometimes a good thing and sometimes irrelevant, but pretty much never a bad thing.

    When you run into situations where it does make a big performance difference, then maybe you use floats.  This could be because you have a large number of them and doubles take twice as much memory.  The difference between 4 bytes of memory for a variable and 8 doesn't matter if it's just one variable, but if you have an array of a billion of them, the difference between 4 GB and 8 GB might.  Or if you're doing those computations on a system heavily optimized for 32-bit, such as a GPU, then you don't use doubles unless you're forced to or you're careless.
    Yeah, I could see a case for using floats wherever workable in really, really large, computation-heavy programs, but for the amount of computation most applications perform, it doesn't seem like it would make any noticeable difference in the memory required.

  • Scot Member Legendary Posts: 22,955
    Ridelynn said:
    Scot said:
    Phry said:
    no doubt in a few years time, when we're discussing the benefits of 128 bit Processors etc. we'll be looking back on 64 bit systems with nostalgia, probably necropost it too  ;)
    My understanding is that 64 was never a priority for gaming, I guess at least it means we have bigger drives and better chips, even if only to handle it! Roll on 124 Bloatbit. :)
    Fun fact.

    Nintendo 64 was a 64-bit gaming system, and it released in 1996. Granted, subsequent consoles were mixed: Nintendo went back to 32-bit PPC for their next consoles, PS1 was 32-bit, PS2 was sort of 128 bit, and current consoles from MS/Sony are all x86-64.

    guess it kinda goes to show 64-bit in and of itself is just a means to an end, it doesn’t really do much by itself
    I take back some of my never ending slagging-off of consoles. :)
  • Scot Member Legendary Posts: 22,955
    "In school that's fine. In real world it's not really."

    A truism from Torval. :)