Star Citizen - 64 bit

Comments

  • 13lake13lake Member UncommonPosts: 719
    edited January 2017
    https://www.youtube.com/watch?v=TSaNhyzfNFk

    Everyone please watch this video.

    Almost everyone in this thread is either discussing the wrong thing, or not understanding exactly what the point is, ...
  • WizardryWizardry Member LegendaryPosts: 19,332
    I am not sure what this is supposed to prove?
    You can have a continuous visual without even moving. You could have a skybox with the surrounding textures moving; it would look like you are moving, and the average gamer would think they are moving while never really moving.
    I am sure the pros, the guys working with visual graphics every day, have the tech and know-how to pull off easy fake visuals, similar to how people nowadays still think there is such a thing as 3D video graphics.

    There is also time-lapse technology that I am sure can do amazing stuff; basically, anyone such as Chris with connections in the movie industry would have lots of tricks to pull off stuff you think is really happening.

    Again, without totally knowing what you guys are talking about, it is possible to not move a single inch within the game but THINK you are moving.

    Never forget Three Mile Island, and never trust a government official or company spokesman.

  • MaxBaconMaxBacon Member LegendaryPosts: 7,766
    edited January 2017
    Who the hell cares about this? If the game does the job it needs to do, that's what matters.

    The peeps who were just preaching around, accusing the developers of being liars and claiming there's no 64-bit, etc... It's hilarious that CIG's Ben Parry had a go at (guess who!) to call him out on the 64-bit-lies BS at Frontier.

    And that went as far as someone (guess who!) making a vague legal threat. Unbelievable!


    From that drama here's a quote from CIG's Dev:

    what I've tried, repeatedly, to explain here: After converting a module, some calculations make more sense to be done in 32-bit. If anything, the game currently does too much at 64-bit, because the guys doing the conversion erred on the safe side, so I'm always keeping my eye open for stuff that could be moved back to 32. So yeah, I'm pretty dang certain that no module (that deals with positions in the first place) has been left non-64'd. Just to be sure I wasn't leading you up the garden path on this one, I checked the AI code. Sure enough, it does its positioning in 64-bit.
  • MaxBaconMaxBacon Member LegendaryPosts: 7,766
    edited January 2017
    Wizardry said:
    Chris with connections in the movie industry would have lots of tricks to pull off stuff you think is really happening.
    So Chris Roberts' connections with the movie industry mean that he could fake movement in an online game like Star Citizen? So you're moving but not moving, and seeing other players moving (even on Quantum Jumps!) is all just one illusion?

    Okay... O.o
  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    CrazKanuk said:
    I cannot believe someone spent 23 hours doing this to prove someone wrong.

    The joke is on them.
    Why focus on the negative implications?
    Or has that become standard practice regarding anything SC-related?

    This person has proved it can be done! The game tech has progressed to the point where it is possible to do this. Great news!

    New gameplay possibilities have opened up.

    And as an aside, some loudmouthed detractors have been "corrected"... :D

    Because it's fricking ridiculous, that's why.
    Going out of their way to film a ship flying through space for 24 hours, with the side effect of correcting a "loudmouthed detractor" whose opinion is not meant to be worth anything in the first place, is pretty damn ironic.

    New gameplay possibilities have not been opened up. You can do this in Elite but nobody does it because what is the point? It adds absolutely nothing to the game because the time and distances involved are too big for anybody to bother with.

    I do think it's great that the tech supports space of this size.

    To be fair, he spent 24 hours proving something. There are plenty of people, present parties included, who have probably spent 24 hours or more discussing the game without proving anything. Honestly, if there were 100 people who could spend 24 hours to prove just 1 thing about the game, anything worth debating could likely be resolved in a matter of 1 day. That doesn't seem like a waste of time to me, in the context of SC.
    He spent 22 hours to prove nothing.
    In WoW you can change zones and coordinates seamlessly, without transition. Theoretically you can tack on unlimited zones and have a quadrillion kilometers to run - does that mean WoW is pushing 64-bit values to the GPU? I guess it doesn't.
    As I said before, take 3 32-bit vectors and you have 100 billion kilometers in every direction with a precision of 10 cm (without the need for a local zone coordinate system, though it's almost the same).
    There's no need to push world coordinates of objects to the GPU.  That doesn't scale well to thousands of threads, so a GPU isn't good at handling global coordinates anyway, unless your game world is extremely small.  Rather, you handle it on the CPU and compute relative coordinates (this object is this far away from the camera), and that's what you give to the GPU.
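    To make that concrete, here is a minimal C++ sketch of the approach being described (the type and function names are illustrative, not taken from any actual engine): world positions stay in doubles on the CPU, and only the camera-relative offset, cast down to floats, is ever handed to the GPU.

```cpp
#include <cstdio>

// 64-bit world-space position, kept on the CPU only.
struct WorldPos { double x, y, z; };

// 32-bit camera-relative position, the only thing the GPU ever sees.
struct Float3 { float x, y, z; };

// Subtract in double precision, then narrow to float.  The difference is
// small whenever the object is anywhere near the camera, so the cast loses
// nothing that matters for rendering.
Float3 toCameraSpace(const WorldPos& object, const WorldPos& camera) {
    return { static_cast<float>(object.x - camera.x),
             static_cast<float>(object.y - camera.y),
             static_cast<float>(object.z - camera.z) };
}

int main() {
    WorldPos camera { 149597870700.0, 0.0, 0.0 };    // ~1 AU from the origin
    WorldPos ship   { 149597870950.5, 12.25, -3.0 }; // 250.5 m ahead of the camera

    Float3 rel = toCameraSpace(ship, camera);        // what you'd upload per frame
    std::printf("relative: %.2f %.2f %.2f\n", rel.x, rel.y, rel.z);
}
```
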
  • Turrican187Turrican187 Member UncommonPosts: 787
    MaxBacon said:
    Who the hell cares about this? If the game does the job it needs to do, that's what matters.

    The peeps who were just preaching around, accusing the developers of being liars and claiming there's no 64-bit, etc... It's hilarious that CIG's Ben Parry had a go at (guess who!) to call him out on the 64-bit-lies BS at Frontier.

    And that went as far as someone (guess who!) making a vague legal threat. Unbelievable!


    From that drama here's a quote from CIG's Dev:

    what I've tried, repeatedly, to explain here: After converting a module, some calculations make more sense to be done in 32-bit. If anything, the game currently does too much at 64-bit, because the guys doing the conversion erred on the safe side, so I'm always keeping my eye open for stuff that could be moved back to 32. So yeah, I'm pretty dang certain that no module (that deals with positions in the first place) has been left non-64'd. Just to be sure I wasn't leading you up the garden path on this one, I checked the AI code. Sure enough, it does its positioning in 64-bit.
    Need more info: which part of positioning? Are they rendering the 64-bit positions to the GPU and making everyone buy Radeon cards? Did they invent 64-bit vectors? Or do they do the positioning with 64-bit floats and rescale them to a vector?
    Also, please source your quotes, otherwise they are useless.

    When you have cake, it is not the cake that creates the most magnificent of experiences, but it is the emotions attached to it.
    The cake is a lie.

  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    Since the original post pretty much derailed the entire discussion, I might as well join in.

    There are actually some 64-bit computations that GPUs are quite good at.  These include the simple bitwise logical operations (xor, or, and), as those can be done as two 32-bit operations.  64-bit addition is commonly fast, too, as is 64-bit shift and rotate, at least on GPUs that are good at the 32-bit versions of them.

    The whole "some GPUs are terrible at double precision" bit is specifically talking about floating point multiplication, fma, and possibly addition.  There, you can't just do a 64-bit operation by chaining together a few 32-bit operations.  Some top-end GPUs built for it have dedicated silicon to do 64-bit fma fast, though this is generally disabled in Radeon and GeForce cards anyway.
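    As a rough illustration of why the integer cases are cheap (plain CPU-side C++, not actual GPU shader code, and the names are made up for the example), here is 64-bit addition built from two 32-bit additions plus a carry, which is essentially what 32-bit ALUs end up doing:

```cpp
#include <cstdint>
#include <cstdio>

// A 64-bit unsigned value held as two 32-bit halves, the way a 32-bit ALU sees it.
struct U64Pair { uint32_t lo, hi; };

// 64-bit addition as two 32-bit additions plus a carry: this is why hardware that
// is fast at 32-bit integer math can also be reasonably fast at 64-bit adds.
U64Pair add64(U64Pair a, U64Pair b) {
    U64Pair r;
    r.lo = a.lo + b.lo;                       // low halves, may wrap around
    uint32_t carry = (r.lo < a.lo) ? 1u : 0u; // wrapped => carry into the high half
    r.hi = a.hi + b.hi + carry;
    return r;
}

int main() {
    U64Pair a { 0xFFFFFFFFu, 0x00000001u };   // 0x1FFFFFFFF
    U64Pair b { 0x00000002u, 0x00000000u };   // 2
    U64Pair s = add64(a, b);                  // expect 0x0000000200000001
    std::printf("0x%08X%08X\n", s.hi, s.lo);
}
```
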
  • MaxBaconMaxBacon Member LegendaryPosts: 7,766
    edited January 2017
    Need more info: which part of positioning? Are they rendering the 64-bit positions to the GPU and making everyone buy Radeon cards? Did they invent 64-bit vectors? Or do they do the positioning with 64-bit floats and rescale them to a vector?
    Also, please source your quotes, otherwise they are useless.
    You can go ask him if you care to know. Ben Parry is usually in the forum's Q&A sections, but you can reach him on Frontier; he must be more free than usual now that (guess who!) got banned from Frontier again. source be here

    That discussion on the 64-bit drama went on for like a month, so it's very spread out. At the end, (guess who!) got frustrated enough to even make a vague legal threat against that CIG employee (who also did dev work on Elite).
  • Turrican187Turrican187 Member UncommonPosts: 787
    edited January 2017
    Quizzical said:
    Since the original post pretty much derailed the entire discussion, I might as well join in.

    There are actually some 64-bit computations that GPUs are quite good at.  These include the simple bitwise logical operations (xor, or, and), as those can be done as two 32-bit operations.  64-bit addition is commonly fast, too, as is 64-bit shift and rotate, at least on GPUs that are good at the 32-bit versions of them.
    [...]
    This is correct, but not feasible for a game engine where you need the exact vector and the other vectors that interact with it.
    For example, physics engine operations (IK, ragdoll, etc.) need to be calculated in real time, where every bone in a simple character is a vector.

    When you have cake, it is not the cake that creates the most magnificent of experiences, but it is the emotions attached to it.
    The cake is a lie.

  • laseritlaserit Member LegendaryPosts: 7,591
    My experience is with flight sims. Flight sims have hit a wall with 32-bit. With the power of today's hardware and the amount of detail and data available through add-ons, flight sims quickly eat up the 4 GB of memory access available to 32-bit. Out-of-memory errors are commonplace in 32-bit simulators with the level of detail in the scenery and the study-level planes available today. X-Plane converted to 64-bit a couple of years ago, and Prepar3D is in the process of converting to 64-bit for their next version.

    I would imagine that SC would face similar issues with 32 bit. 

    "Be water my friend" - Bruce Lee

  • Turrican187Turrican187 Member UncommonPosts: 787
    edited January 2017
    laserit said:
    My experience is with flight sims. Flight sims have hit a wall with 32-bit. With the power of today's hardware and the amount of detail and data available through add-ons, flight sims quickly eat up the 4 GB of memory access available to 32-bit. Out-of-memory errors are commonplace in 32-bit simulators with the level of detail in the scenery and the study-level planes available today. X-Plane converted to 64-bit a couple of years ago, and Prepar3D is in the process of converting to 64-bit for their next version.

    I would imagine that SC would face similar issues with 32 bit. 
    You are confusing a 32-bit engine with a 64-bit engine - all major engines (UE, Unity, Cry [R.I.P.]) are on 64-bit memory allocation and internal operations. Some house-built engines may still be on 32-bit and would have a memory problem when accessing too many assets at a time; this would be due to texture cache, the amount of calculations, etc.
    But SC is claiming that they are passing 64-bit variables to the GPU (using 64-bit vectors)... which will work, but most video cards can compute this at 1/16th speed (the 980 GTX at 1/32).
    They are basically selling a point that all 64-bit engines have as their own invention.

    When you have cake, it is not the cake that creates the most magnificent of experiences, but it is the emotions attached to it.
    The cake is a lie.

  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    edited January 2017
    Quizzical said:
    Since the original post pretty much derailed the entire discussion, I might as well join in.

    There are actually some 64-bit computations that GPUs are quite good at.  These include the simple bitwise logical operations (xor, or, and), as those can be done as two 32-bit operations.  64-bit addition is commonly fast, too, as is 64-bit shift and rotate, at least on GPUs that are good at the 32-bit versions of them.
    [...]
    This is correct, but not feasible for a game engine where you need the exact vector and the other vectors that interact with it.
    For example, physics engine operations (IK, ragdoll, etc.) need to be calculated in real time, where every bone in a simple character is a vector.
    Doing extensive 64-bit floating point computations on a consumer GPU is what isn't feasible.  One of the first things you figure out in a frame is where the camera is going to be for that frame.  Then you figure out where everything else that might need to be drawn is relative to the camera.  It's all close to the camera, so 32-bit computations work fine here, with the possible exception of extremely faraway objects like stars that you want to draw, where rounding errors don't matter much, so 32-bit still works fine.

    You might in some cases need 64-bit positions in world coordinates for both the camera and various objects near it, but then once you subtract, the difference is small, so casting to 32-bit works fine.  The GPU never sees the 64-bit world coordinates, but only the 32-bit values for where things are relative to the camera.  Those numbers are all small enough that the GPU can do whatever it needs with 32-bit values.

    Also, you keep talking about "vectors", but I have no idea what you're talking about and suspect that you don't, either.  There are at least three different notions of vectors that are applicable here:

    1)  the C++ container type
    2)  the data type common to every significant GPU shader/kernel language
    3)  the mathematical objects from linear algebra

    The way you're using the word "vectors" doesn't fit any of those, and seems to be closer to, say, Java's BigInteger class.
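    For illustration, here is a small C++ sketch contrasting those notions (the Float3 name is made up for the example; it is not from CryEngine or any shader language): the resizable std::vector container on one hand, and a fixed three-component, float3-style value with a linear-algebra-style subtraction on the other.

```cpp
#include <vector>
#include <cstdio>

// Notions 2/3: a fixed-size, three-component value - what a shader language would
// call a float3 and what linear algebra calls a vector in R^3.
struct Float3 {
    float x, y, z;
    Float3 operator-(const Float3& o) const { return { x - o.x, y - o.y, z - o.z }; }
};

int main() {
    // Notion 1: the C++ container.  Resizable, heap-backed, general purpose -
    // handy for a list of positions, but not itself a position.
    std::vector<Float3> enemies = { {10.f, 0.f, 5.f}, {-2.f, 1.f, 8.f} };
    enemies.push_back({0.f, 3.f, -4.f});      // containers grow; coordinates don't

    Float3 camera { 1.f, 0.f, 0.f };
    for (const Float3& e : enemies) {
        Float3 rel = e - camera;              // linear-algebra subtraction, per object
        std::printf("%.1f %.1f %.1f\n", rel.x, rel.y, rel.z);
    }
}
```
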
  • ErillionErillion Member EpicPosts: 10,297
    MaxBacon said:
    Who the hell cares about this? If the game does the job it needs to do, that's what matters.

    The peeps who were just preaching around, accusing the developers of being liars and claiming there's no 64-bit, etc... It's hilarious that CIG's Ben Parry had a go at (guess who!) to call him out on the 64-bit-lies BS at Frontier.

    And that went as far as someone (guess who!) making a vague legal threat. Unbelievable!


    From that drama here's a quote from CIG's Dev:

    what I've tried, repeatedly, to explain here: After converting a module, some calculations make more sense to be done in 32-bit. If anything, the game currently does too much at 64-bit, because the guys doing the conversion erred on the safe side, so I'm always keeping my eye open for stuff that could be moved back to 32. So yeah, I'm pretty dang certain that no module (that deals with positions in the first place) has been left non-64'd. Just to be sure I wasn't leading you up the garden path on this one, I checked the AI code. Sure enough, it does its positioning in 64-bit.
    Need more info: which part of positioning? Are they rendering the 64-bit positions to the GPU and making everyone buy Radeon cards? Did they invent 64-bit vectors? Or do they do the positioning with 64-bit floats and rescale them to a vector?
    Also, please source your quotes, otherwise they are useless.
    You know that you COULD ask Ben Parry himself?

    There is an "Ask a Dev" section on the forum for that... and there are thousands of dev answers.


    Have fun
  • CrazKanukCrazKanuk Member EpicPosts: 6,130
    edited January 2017
    CrazKanuk said:
    I cannot believe someone spent 23 hours doing this to prove someone wrong.

    The joke is on them.
    Why focus on the negative implications?
    Or has that become standard practice regarding anything SC-related?

    This person has proved it can be done! The game tech has progressed to the point where it is possible to do this. Great news!

    New gameplay possibilities have opened up.

    And as an aside, some loudmouthed detractors have been "corrected"... :D

    Because it's fricking ridiculous, that's why.
    Going out of their way to film a ship flying through space for 24 hours, with the side effect of correcting a "loudmouthed detractor" whose opinion is not meant to be worth anything in the first place, is pretty damn ironic.

    New gameplay possibilities have not been opened up. You can do this in Elite but nobody does it because what is the point? It adds absolutely nothing to the game because the time and distances involved are too big for anybody to bother with.

    I do think it's great that the tech supports space of this size.

    To be fair, he spent 24 hours proving something. There are plenty of people, present parties included, who have probably spent 24 hours or more discussing the game without proving anything. Honestly, if there were 100 people who could spend 24 hours to prove just 1 thing about the game, anything worth debating could likely be resolved in a matter of 1 day. That doesn't seem like a waste of time to me, in the context of SC.
    He spent 22 hours to prove nothing.
    In WoW you can change zones and coordinates seamlessly, without transition. Theoretically you can tack on unlimited zones and have a quadrillion kilometers to run - does that mean WoW is pushing 64-bit values to the GPU? I guess it doesn't.
    As I said before, take 3 32-bit vectors and you have 100 billion kilometers in every direction with a precision of 10 cm (without the need for a local zone coordinate system, though it's almost the same).

    Yeah, but you're missing the point: he proved something. I would argue that he effectively proved more than what's ever been proven on here, lol. That was the point I was making. It wasn't related to proving anything about the underlying technology. It was more a tongue-in-cheek comment on how pointless all this conversation is and how much time people seem to be spending (probably much more) while proving less. The reality is that people simply aren't listening, they just want to argue on the Internet. If you want logic, go read a book or something, right? lol

    Case in point, I answered the question quite directly with a very probable explanation aaaaaand!!!! Crickets! Actually, not totally crickets, he did post something further down somewhat related which was essentially, "Fuck y'all, I'm taking my ball and going home." Instead of something intelligent like, "Hey, I never thought of it like that. Huh, maybe you're right." 

    Crazkanuk

    ----------------
    Azarelos - 90 Hunter - Emerald
    Durnzig - 90 Paladin - Emerald
    Demonicron - 90 Death Knight - Emerald Dream - US
    Tankinpain - 90 Monk - Azjol-Nerub - US
    Brindell - 90 Warrior - Emerald Dream - US
    ----------------

  • ste2000ste2000 Member EpicPosts: 6,194
    [Image: Eurogamer - "Steam loophole closed after Watch Paint Dry sim snuck onto store"]

  • Turrican187Turrican187 Member UncommonPosts: 787
    edited January 2017
    Quizzical said:
    Quizzical said:
    Since the original post pretty much derailed the entire discussion, I might as well join in.

    There are actually some 64-bit computations that GPUs are quite good at.  These include the simple bitwise logical operations (xor, or, and), as those can be done as two 32-bit operations.  64-bit addition is commonly fast, too, as is 64-bit shift and rotate, at least on GPUs that are good at the 32-bit versions of them.
    [...]
    This is correct, but not feasible for a game engine where you need the exact vector and the other vectors that interact with it.
    For example, physics engine operations (IK, ragdoll, etc.) need to be calculated in real time, where every bone in a simple character is a vector.
    Doing extensive 64-bit floating point computations on a consumer GPU is what isn't feasible.  One of the first things you figure out in a frame is where the camera is going to be for that frame.  Then you figure out where everything else that might need to be drawn is relative to the camera.  It's all close to the camera, so 32-bit computations work fine here, with the possible exception of extremely faraway objects like stars that you want to draw, where rounding errors don't matter much, so 32-bit still works fine.

    You might in some cases need 64-bit positions in world coordinates for both the camera and various objects near it, but then once you subtract, the difference is small, so casting to 32-bit works fine.  The GPU never sees the 64-bit world coordinates, but only the 32-bit values for where things are relative to the camera.  Those numbers are all small enough that the GPU can do whatever it needs with 32-bit values.

    Also, you keep talking about "vectors", but I have no idea what you're talking about and suspect that you don't, either.  There are at least three different notions of vectors that are applicable here:

    1)  the C++ container type
    2)  the data type common to every significant GPU shader/kernel language
    3)  the mathematical objects from linear algebra

    The way you're using the word "vectors" doesn't fit any of those, and seems to be closer to, say, Java's BigInteger class.
    Regarding Vectors:
    A Vector in Cryengine is a C++ struct which contains the XYZ coordinate of the Asset.
    By default it stores 3 floats.
    It defines the XYZ position of the object
    https://en.wikipedia.org/wiki/Cartesian_coordinate_system#Representing_a_vector_in_the_standard_basis

    If they are using 64-bit datatypes for global/world positioning, they have to recalculate it down to something that the GPU (resp. the camera) understands on the fly, which is a very time-consuming process and would lead to lag, falling through geometry, and other strange behaviour ... but the more I think about it ...

    Edit: Maybe you were confused because I said that a bone is a vector, but basically that's what it is; a character's skeleton's bones are just vectors that define the length of the bones, the position of eyes, jaw, fingers, legs, etc. ... they are defined 3D vectors formatted for the physics engine.

    When you have cake, it is not the cake that creates the most magnificent of experiences, but it is the emotions attached to it.
    The cake is a lie.

  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    Quizzical said:
    Quizzical said:
    Since the original post pretty much derailed the entire discussion, I might as well join in.

    There are actually some 64-bit computations that GPUs are quite good at.  These include the simple bitwise logical operations (xor, or, and), as those can be done as two 32-bit operations.  64-bit addition is commonly fast, too, as is 64-bit shift and rotate, at least on GPUs that are good at the 32-bit versions of them.
    [...]
    This is correct, but not feasible for a game engine where you need the exact vector and the other vectors that interact with it.
    For example, physics engine operations (IK, ragdoll, etc.) need to be calculated in real time, where every bone in a simple character is a vector.
    Doing extensive 64-bit floating point computations on a consumer GPU is what isn't feasible.  One of the first things you figure out in a frame is where the camera is going to be for that frame.  Then you figure out where everything else that might need to be drawn is relative to the camera.  It's all close to the camera, so 32-bit computations work fine here, with the possible exception of extremely faraway objects like stars that you want to draw, where rounding errors don't matter much, so 32-bit still works fine.

    You might in some cases need 64-bit positions in world coordinates for both the camera and various objects near it, but then once you subtract, the difference is small, so casting to 32-bit works fine.  The GPU never sees the 64-bit world coordinates, but only the 32-bit values for where things are relative to the camera.  Those numbers are all small enough that the GPU can do whatever it needs with 32-bit values.

    Also, you keep talking about "vectors", but I have no idea what you're talking about and suspect that you don't, either.  There are at least three different notions of vectors that are applicable here:

    1)  the C++ container type
    2)  the data type common to every significant GPU shader/kernel language
    3)  the mathematical objects from linear algebra

    The way you're using the word "vectors" doesn't fit any of those, and seems to be closer to, say, Java's BigInteger class.
    Regarding Vectors:
    A Vector in Cryengine is a C++ struct which contains the XYZ coordinate of the Asset.
    By default it stores 3 floats.
    It defines the XYZ position of the object
    https://en.wikipedia.org/wiki/Cartesian_coordinate_system#Representing_a_vector_in_the_standard_basis

    If they are using 64-bit datatypes for global/world positioning, they have to recalculate it down to something that the GPU (resp. the camera) understands on the fly, which is a very time-consuming process and would lead to lag, falling through geometry, and other strange behaviour ... but the more I think about it ...

    Edit: Maybe you were confused because I said that a bone is a vector, but basically that's what it is; a character's skeleton's bones are just vectors that define the length of the bones, the position of eyes, jaw, fingers, legs, etc. ... they are defined 3D vectors formatted for the physics engine.
    For someone to create a C++ struct and call it a "vector" would be the worst abuse of terminology I've seen in a while.  Fortunately, it looks like you're just confused and the Cryengine did no such thing:

    https://www.cryengine.com/sdk/5.1.0/cpp_api/classcry_1_1vector.html

    cry::vector is just a subclass of std::vector.  I'm not sure why they felt the need to subclass it, but it is a legitimate thing to do.  As a templated class, it doesn't have a default base type, whether floats or anything else.  But it's also not the most sensible data structure to use for storing the position of a single object.  The main point to C++ vectors is that you can easily resize them, which is something that you absolutely don't want to do for coordinates in a game world.  Resizing a vector doesn't mean as in going from floats to doubles (which they can't do, anyway), but as in going from a vector of 3 elements to one of 4.

    To define the location of an object, you customarily need not just x, y, and z coordinates, but also the orientation of the object.  Perhaps the most mathematically natural way to describe them for computational purposes is as a 3-dimensional vector (in the linear algebra sense) and a 3x3 matrix, respectively.  Rotation matrices are necessarily orthogonal, so for a rigid object, you only have three degrees of freedom in that 3x3 matrix.  But keeping it as a matrix with 9 floats rather than 3 is more natural for doing computations--and describing a point in the SO(3) manifold with three floats is awkward to do.  You could cram all of that data into a single vector, but it's a goofy thing to do.

    Casting a double to a float should be fast, as it basically consists of ignoring some bits, at least apart from exponents outside of the range that floats can represent.  I don't know for certain that modern x86 CPUs have an instruction to do that efficiently, but I'd be surprised if they don't.

    But more importantly, converting an object's position from world coordinates to camera coordinates is something that you only have to do once per object per frame.  The computational effort there is a rounding error as compared to the things that you have to do once per vertex, triangle, or pixel per frame.  That's why it can be done on the CPU without performance choking.

    Furthermore, modern 3D graphics forces you to change your coordinate bases many times.  If you want to use a modern graphics API at all, you're forced at various places in the pipeline to have a 4-dimensional vector of floats for homogeneous coordinates in RP^3 and later a 2-dimensional vector for coordinates in window space.  A change of bases is something to be embraced when it cleans things up for you (and when it lets you use efficient 32-bit floats on a GPU rather than slow 64-bit doubles), not something that should lead you to run away screaming.

    I haven't seen anything in this thread that would lead me to believe that the Star Citizen developers are doing something insanely stupid in their internal coding practices, though it does seem like some forum posters want them to.
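    As a rough sketch of the representation described above (the names and layout are illustrative, not CIG's or CryEngine's code): an object's pose held as a 64-bit world position plus a 3x3 rotation matrix of floats, folded once per object per frame into the 4x4 homogeneous matrix the GPU actually consumes, with the translation already made camera-relative.

```cpp
#include <cstdio>

struct Double3 { double x, y, z; };               // 64-bit world position (CPU only)
struct Mat3    { float m[3][3]; };                // orientation: 9 floats, 3 degrees of freedom
struct Mat4    { float m[4][4]; };                // homogeneous matrix handed to the GPU

// Fold the pose into a 4x4: rotation in the upper-left 3x3, camera-relative
// translation (double subtraction, then cast to float) in the last column.
Mat4 modelMatrixRelativeToCamera(const Double3& pos, const Mat3& rot, const Double3& cam) {
    Mat4 out {};                                  // zero-initialized
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            out.m[r][c] = rot.m[r][c];
    out.m[0][3] = static_cast<float>(pos.x - cam.x);
    out.m[1][3] = static_cast<float>(pos.y - cam.y);
    out.m[2][3] = static_cast<float>(pos.z - cam.z);
    out.m[3][3] = 1.0f;                           // bottom row becomes (0, 0, 0, 1)
    return out;
}

int main() {
    Mat3 identity {{ {1.f, 0.f, 0.f}, {0.f, 1.f, 0.f}, {0.f, 0.f, 1.f} }};
    Double3 ship   { 1.0e11 + 120.0, 50.0, 0.0 };
    Double3 camera { 1.0e11,          0.0, 0.0 };
    Mat4 mv = modelMatrixRelativeToCamera(ship, identity, camera);  // once per object per frame
    std::printf("translation: %.1f %.1f %.1f\n", mv.m[0][3], mv.m[1][3], mv.m[2][3]);
}
```
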
  • rodarinrodarin Member EpicPosts: 2,611
    Like I said a year ago when they made these claims, it's all semantics. I am sure SOME of it is 64-bit. WHAT parts, and how they will affect a universe (once they actually build it), will be the big question.

    They did what they always do: they said something most people don't understand, or think they understand, and go with it. But they also made a mistake: they assumed that 64-bit was automatically better than 32-bit (across the board), and now they have seen it isn't. That's why they are now trying to back-pedal and make distinctions.

    If you want to see it, go play Ascent. That's a game a single guy made in his basement. He also wrote quite a write-up about it, and I am sure Roberts and company have reached out to him.
  • 13lake13lake Member UncommonPosts: 719
    edited January 2017
    From what I understand, at least for their test builds, they are trying to use 64-bit precision with float numbers on the GPU / passed to the GPU for as many things as possible.

    The trick they are using is a completely novel idea and a solution not used previously. I can't remember exactly how another space sim developer explained it, and how exactly it differs from the way other games have done it in the past, so I won't even bother to piece it together from memory.

    What I do remember is that the Ascent guy is using a simple but effective solution, and that the CIG solution is extremely tedious and time-consuming to implement.

    This wasn't the discussion, but there's some more info here:

    http://massivelyop.com/2015/10/21/ascents-lead-dev-offers-insight-on-the-star-citizen-controversy/

    http://www.markjawad.com/thoughts-on-video-games--the-industry-at-large/star-citizens-switch-to-double-precision
  • SpottyGekkoSpottyGekko Member EpicPosts: 6,916
    It's a widely known fact that all game developers are incompetent idiots.

    Just check any games forum and it will be crystal clear. All the REAL experts are on the forums... :lol:
  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    13lake said:
    From what I understand, at least for their test builds, they are trying to use 64-bit precision with float numbers on the GPU / passed to the GPU for as many things as possible.

    The trick they are using is a completely novel idea and a solution not used previously. I can't remember exactly how another space sim developer explained it, and how exactly it differs from the way other games have done it in the past, so I won't even bother to piece it together from memory.

    What I do remember is that the Ascent guy is using a simple but effective solution, and that the CIG solution is extremely tedious and time-consuming to implement.

    This wasn't the discussion, but there's some more info here:

    http://massivelyop.com/2015/10/21/ascents-lead-dev-offers-insight-on-the-star-citizen-controversy/

    http://www.markjawad.com/thoughts-on-video-games--the-industry-at-large/star-citizens-switch-to-double-precision
    Your first link there is interesting, but it's notable that he's talking about the depth buffer using 32-bit floats.  While that is troublesome, and probably the portion of graphics that could most benefit from 64-bit precision, it's also a fixed-function part of the graphics pipeline, so you're stuck with it being 32-bit even if you make all of your shader code 64-bit wherever possible.
  • Turrican187Turrican187 Member UncommonPosts: 787
    edited January 2017
    Quizzical said:
    Quizzical said:
    Quizzical said:
    Since the original post pretty much derailed the entire discussion, I might as well join in.

    There are actually some 64-bit computations that GPUs are quite good at.  These include the simple bitwise logical operations (xor, or, and), as those can be done as two 32-bit operations.  64-bit addition is commonly fast, too, as is 64-bit shift and rotate, at least on GPUs that are good at the 32-bit versions of them.
    [...]
    This is correct, but not feasible for a game engine where you need the exact vector and the other vectors that interact with it.
    For example, physics engine operations (IK, ragdoll, etc.) need to be calculated in real time, where every bone in a simple character is a vector.
    Doing extensive 64-bit floating point computations on a consumer GPU is what isn't feasible.  One of the first things you figure out in a frame is where the camera is going to be for that frame.  Then you figure out where everything else that might need to be drawn is relative to the camera.  It's all close to the camera, so 32-bit computations work fine here, with the possible exception of extremely faraway objects like stars that you want to draw, where rounding errors don't matter much, so 32-bit still works fine.

    You might in some cases need 64-bit positions in world coordinates for both the camera and various objects near it, but then once you subtract, the difference is small, so casting to 32-bit works fine.  The GPU never sees the 64-bit world coordinates, but only the 32-bit values for where things are relative to the camera.  Those numbers are all small enough that the GPU can do whatever it needs with 32-bit values.

    Also, you keep talking about "vectors", but I have no idea what you're talking about and suspect that you don't, either.  There are at least three different notions of vectors that are applicable here:

    1)  the C++ container type
    2)  the data type common to every significant GPU shader/kernel language
    3)  the mathematical objects from linear algebra

    The way you're using the word "vectors" doesn't fit any of those, and seems to be closer to, say, Java's BigInteger class.
    Regarding Vectors:
    A Vector in Cryengine is a C++ struct which contains the XYZ coordinate of the Asset.
    By default it stores 3 floats.
    It defines the XYZ position of the object
    https://en.wikipedia.org/wiki/Cartesian_coordinate_system#Representing_a_vector_in_the_standard_basis

    If they are using 64-bit datatypes for global/world positioning, they have to recalculate it down to something that the GPU (resp. the camera) understands on the fly, which is a very time-consuming process and would lead to lag, falling through geometry, and other strange behaviour ... but the more I think about it ...

    Edit: Maybe you were confused because I said that a bone is a vector, but basically that's what it is; a character's skeleton's bones are just vectors that define the length of the bones, the position of eyes, jaw, fingers, legs, etc. ... they are defined 3D vectors formatted for the physics engine.
    For someone to create a C++ struct and call it a "vector" would be the worst abuse of terminology I've seen in a while.  Fortunately, it looks like you're just confused and the Cryengine did no such thing:

    https://www.cryengine.com/sdk/5.1.0/cpp_api/classcry_1_1vector.html

    cry::vector is just a subclass of std::vector.  I'm not sure why they felt the need to subclass it, but it is a legitimate thing to do.  As a templated class, it doesn't have a default base type, whether floats or anything else.  But it's also not the most sensible data structure to use for storing the position of a single object.  The main point to C++ vectors is that you can easily resize them, which is something that you absolutely don't want to do for coordinates in a game world.  Resizing a vector doesn't mean as in going from floats to doubles (which they can't do, anyway), but as in going from a vector of 3 elements to one of 4.

    To define the location of an object, you customarily need not just x, y, and z coordinates, but also the orientation of the object.  Perhaps the most mathematically natural way to describe them for computational purposes is as a 3-dimensional vector (in the linear algebra sense) and a 3x3 matrix, respectively.  Rotation matrices are necessarily orthogonal, so for a rigid object, you only have three degrees of freedom in that 3x3 matrix.  But keeping it as a matrix with 9 floats rather than 3 is more natural for doing computations--and describing a point in the SO(3) manifold with three floats is awkward to do.  You could cram all of that data into a single vector, but it's a goofy thing to do.

    Casting a double to a float should be fast, as it basically consists of ignoring some bits, at least apart from exponents outside of the range that floats can represent.  I don't know for certain that modern x86 CPUs have an instruction to do that efficiently, but I'd be surprised if they don't.

    But more importantly, converting an object's position from world coordinates to camera coordinates is something that you only have to do once per object per frame.  The computational effort there is a rounding error as compared to the things that you have to do once per vertex, triangle, or pixel per frame.  That's why it can be done on the CPU without performance choking.

    Furthermore, modern 3D graphics forces you to change your coordinate bases many times.  If you want to use a modern graphics API at all, you're forced at various places in the pipeline to have a 4-dimensional vector of floats for homogeneous coordinates in RP^3 and later a 2-dimensional vector for coordinates in window space.  A change of bases is something to be embraced when it cleans things up for you (and when it lets you use efficient 32-bit floats on a GPU rather than slow 64-bit doubles), not something that should lead you to run away screaming.

    I haven't seen anything in this thread that would lead me to believe that the Star Citizen developers are doing something insanely stupid in their internal coding practices, though it does seem like some forum posters want them to.
    I am always open to other programming ideas, so if you have a mod of CryEngine where the engine does not use 3 Vector3s for position, quaternion and scale, with 3 floats in each, that is actually working, please share.

    https://hawkes.info/2014/06/21/matrices-vectors-quaternions-and-cryengine-3/

    Putting It All Together

    CryEngine, like many other 3D game engines represents everything inside the game world using just these basic building blocks; every entity within the game has a position, a rotation and a scale.

    • Position is represented with a Vec3 – a type definition for a vector
    • Rotation is represented with a Quat – a type definition for a quaternion
    • Scale is represented with a Vec3 – a type definition for a vector which allows independent scaling in all three dimensions
    Whereas position has absolutely nothing to do with rotation; using Euler angles instead of a quat makes it a Vector3, but also makes it slower.
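    For illustration, a minimal sketch of the per-entity layout that the quoted post describes (the type names only mirror the post; they are not actual CryEngine headers):

```cpp
#include <cstdio>

// Minimal stand-ins for the types the linked post describes; real CryEngine
// headers are more involved, this only mirrors the data layout being discussed.
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };           // rotation stored as a quaternion (4 floats)

// "Every entity within the game has a position, a rotation and a scale."
struct EntityTransform {
    Vec3 position;                           // where the entity sits in the world
    Quat rotation;                           // its orientation
    Vec3 scale;                              // independent scaling per axis
};

int main() {
    EntityTransform ship {
        { 1200.0f, -35.5f, 80.0f },          // position
        { 1.0f, 0.0f, 0.0f, 0.0f },          // identity rotation
        { 1.0f, 1.0f, 1.0f }                 // uniform scale
    };
    std::printf("pos: %.1f %.1f %.1f\n",
                ship.position.x, ship.position.y, ship.position.z);
}
```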

    When you have cake, it is not the cake that creates the most magnificent of experiences, but it is the emotions attached to it.
    The cake is a lie.

  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    edited January 2017
    Quizzical said:
    For someone to create a C++ struct and call it a "vector" would be the worst abuse of terminology I've seen in a while.  Fortunately, it looks like you're just confused and the Cryengine did no such thing:

    https://www.cryengine.com/sdk/5.1.0/cpp_api/classcry_1_1vector.html

    cry::vector is just a subclass of std::vector.  I'm not sure why they felt the need to subclass it, but it is a legitimate thing to do.  As a templated class, it doesn't have a default base type, whether floats or anything else.  But it's also not the most sensible data structure to use for storing the position of a single object.  The main point to C++ vectors is that you can easily resize them, which is something that you absolutely don't want to do for coordinates in a game world.  Resizing a vector doesn't mean as in going from floats to doubles (which they can't do, anyway), but as in going from a vector of 3 elements to one of 4.

    To define the location of an object, you customarily need not just x, y, and z coordinates, but also the orientation of the object.  Perhaps the most mathematically natural way to describe them for computational purposes is as a 3-dimensional vector (in the linear algebra sense) and a 3x3 matrix, respectively.  Rotation matrices are necessarily orthogonal, so for a rigid object, you only have three degrees of freedom in that 3x3 matrix.  But keeping it as a matrix with 9 floats rather than 3 is more natural for doing computations--and describing a point in the SO(3) manifold with three floats is awkward to do.  You could cram all of that data into a single vector, but it's a goofy thing to do.

    Casting a double to a float should be fast, as it basically consists of ignoring some bits, at least apart from exponents outside of the range that floats can represent.  I don't know for certain that modern x86 CPUs have an instruction to do that efficiently, but I'd be surprised if they don't.

    But more importantly, converting an object's position from world coordinates to camera coordinates is something that you only have to do once per object per frame.  The computational effort there is a rounding error as compared to the things that you have to do once per vertex, triangle, or pixel per frame.  That's why it can be done on the CPU without performance choking.

    Furthermore, modern 3D graphics forces you to change your coordinate bases many times.  If you want to use a modern graphics API at all, you're forced at various places in the pipeline to have a 4-dimensional vector of floats for homogeneous coordinates in RP^3 and later a 2-dimensional vector for coordinates in window space.  A change of bases is something to be embraced when it cleans things up for you (and when it lets you use efficient 32-bit floats on a GPU rather than slow 64-bit doubles), not something that should lead you to run away screaming.

    I haven't seen anything in this thread that would lead me to believe that the Star Citizen developers are doing something insanely stupid in their internal coding practices, though it does seem like some forum posters want them to.
    I am always open to other programming ideas, so if you have a mod of CryEngine where the engine does not use 3 Vector3s for position, quaternion and scale, with 3 floats in each, that is actually working, please share.

    https://hawkes.info/2014/06/21/matrices-vectors-quaternions-and-cryengine-3/

    Putting It All Together

    CryEngine, like many other 3D game engines represents everything inside the game world using just these basic building blocks; every entity within the game has a position, a rotation and a scale.

    • Position is represented with a Vec3 – a type definition for a vector
    • Rotation is represented with a Quat – a type definition for a quaternion
    • Scale is represented with a Vec3 – a type definition for a vector which allows independent scaling in all three dimensions
    Whereas position has absolutely nothing to do with rotation; using Euler angles instead of a quat makes it a Vector3, but also makes it slower.
    If you want to call it a Vec3, then fine.  I think that terminology rather than a float3 is an unfortunate syntax from the early days of GPU programming, now that we sometimes want an int3 or a long3 or a double3, but we're probably stuck with it at least in the graphics APIs.  Your link looks like they're basically trying to take the syntax of a shader language and use it in C++ via operator overloading.  But don't call it a "Vector" in C++, as that means something totally different.

    http://www.cplusplus.com/reference/vector/vector/
  • Turrican187Turrican187 Member UncommonPosts: 787
    Basically you get used to calling it a Vector if you work with it every day: one object, 3 Vector3s to define it in the world.
    When I am programming a shader, I use the Vector(4) as a container to save variable names.

    When you have cake, it is not the cake that creates the most magnificent of experiences, but it is the emotions attached to it.
    The cake is a lie.

  • KyleranKyleran Member LegendaryPosts: 43,507
    edited January 2017
    Really looking forward to playing this game. ;)

    "True friends stab you in the front." | Oscar Wilde 

    "I need to finish" - Christian Wolff: The Accountant

    Just trying to live long enough to play a new, released MMORPG, playing New Worlds atm

    Fools find no pleasure in understanding but delight in airing their own opinions. Pvbs 18:2, NIV

    Don't just play games, inhabit virtual worlds™

    "This is the most intelligent, well qualified and articulate response to a post I have ever seen on these forums. It's a shame most people here won't have the attention span to read past the second line." - Anon





