
Intel Kaby Lake - no improvement over Skylake


Comments

  • gervaise1gervaise1 Member EpicPosts: 6,919
    edited November 2016
    Malabooga said:

    You have absolutely no clue what you're talking about.

    As I said, instead of empty theorizing in your dreamland, come down to the real world and see how it actually functions rofl.

    Theory and practice are two completely different things, especially since your theory doesn't even have anything to do with actual theory, or even less with anything you want to apply it to rofl
    Never said it was a theory; it's reality.
  • TorvalTorval Member LegendaryPosts: 20,018
    edited November 2016
    barasawa said:
    Torval said:
    Moore's law is slowing down. Skylake to Kaby Lake will be 3 - 3.5 years instead of the traditional 2. Counts are still increasing but the pace at which they're increasing is slowing and what happens after 7nm is uncertain.

    10nm is likely more than a year away so I guess the attraction of the platform will be on price and features vs Skylake.
    Moore's Law isn't about how fast a processor is, it's about how many transistors it has.

    In the past there was an apparent correlation between the two, but it's not so strong these days with the new architectural features they are playing with.
    I never said it was how fast a processor is. I said it was about counts, although I can't think of a processor that doubled its transistor count that was less powerful than the previous generation.

    Moore's Law (since several people here have forgotten or modified its definition): "refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention." Much later he revised that to two years, because he realized that the original pace was unsustainable and that there was an end to the law.

    The number of transistors isn't doubling every year, or even every two years. I was wrong about the Skylake to Kaby Lake timeline, as Quizzical so respectfully pointed out. I was looking at process node shifts and mixed up some dates. Nonetheless the point still stands that the time to double is stretching out, as is the entire release and update cycle. Moore's Law has almost played itself out.
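    The doubling-period arithmetic above can be made concrete with a small sketch. All numbers here are illustrative assumptions, not Intel's actual roadmap figures:

    ```python
    def projected_transistors(start_count, years, doubling_period_years):
        """Project a transistor count forward under a fixed doubling period."""
        return start_count * 2 ** (years / doubling_period_years)

    # Hypothetical starting count for a consumer die (assumption):
    start = 1.75e9

    # Classic 2-year doubling vs. a slowed 3-year doubling, six years out:
    fast = projected_transistors(start, 6, 2)   # doubles 3 times -> 8x
    slow = projected_transistors(start, 6, 3)   # doubles 2 times -> 4x
    print(f"2-year cadence: {fast:.2e}, 3-year cadence: {slow:.2e}")
    ```

    Even a one-year stretch in the doubling period halves the projected count within a handful of generations, which is the core of the "Moore's Law is slowing" argument.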
    Fedora - A modern, free, and open source Operating System. https://getfedora.org/

    traveller, interloper, anomaly, iteration


  • MalaboogaMalabooga Member UncommonPosts: 2,977
    edited November 2016
    Malabooga said:
    Quizzical said:
    Ozmodan said:
    Malabooga said:
      ... 6700k was barely ANY faster than 4790k.
    Bad luck kid, I owned the two processors successively.

    At the same clock speed (4.4 GHz here, conservative overclocking with stock voltage), the 6700k is ~15% faster than the 4790k. And those aren't some website benchmarks but my own, done not only with 3DMark but with many other benchmark tools and real-life applications like video encoding and 3D rendering.

    And you accuse me of lying? Well, if you aren't lying, then you are just blatantly ignorant. But it's more likely your usual bashing of anything not AMD.

    filmoret said:
    So you trust the 3dmark when it comes to proving your points.  But when it disproves your ideas about AMD GPU's then you completely discard them.  Go get it bro...
    It just proves once again that the person is not reliable when it comes to unbiased performance assessments. He's just reading some bullshit on the net and re-posting it, especially when he can turn it so it sounds negative for either Intel or nVidia.
    15% sounds very inflated. I have two PCs with one of each of the processors and I can discern very little difference either overclocked or stock. Certainly NOT worth the increased price, as most games are GPU bound rather than CPU bound.
    What makes you think he's using gaming performance as the only or even primary comparison between the two CPUs?
    Indeed... even though I specified it in my post, the person you're answering chose to ignore it.

    Malabooga said:
    He is using Intel-rigged data; in the real world the difference is 5% TOPS lol, in fact go look at the reviews, which are ALL disappointed by the lack of performance improvement over Haswell.
    I'm using my own data, as said in my previous post, and for both CPUs at exactly the same clock speed too.


    May I suggest this nice site for you two ? ;)


    yeah and we'll believe you instead of thousands upon thousands of published tests ROFL Especially with your history of baselessly hyping Intel's CPUs (and that hype got curb stomped by published tests)
    Coming from the guy who regularly posts that AMD is better than Intel, this is amusing, at the least.
    Someone using facts (me), vs someone using wishful thinking (you). Lemme guess who is the most believable... ;)

    Sorry, but I trust real user experience (and especially my own in "real world" situations, of course) over biased gaming websites. Especially when my own use of the hardware goes way beyond running video games. It's my job, with 30 years of experience. What's yours?
    It's clear now that you have NO clue what I post.

    As I said before, show me an Intel CPU for $100 that is better than the FX83xx/FX9xxx.

    Real user experience, as WELL as those who actually dive much deeper into performance like extreme OCers, confirmed what is truth and what is crap... and well, as always, Intel's pamphlets (as well as you just parroting them) are crap rofl. If we actually add up all the performance improvements INTEL claims over the generations, Skylake is 70-100% faster than Sandy Bridge ROFL

    Quizzical said:
    How big of a performance advantage there is depends tremendously on what you're doing.  Skylake has certain caches larger than Haswell, and certain additional instructions.  If you're doing something that can make huge use of those, doubling your performance is plausible.  And if you're doing something that can't make any use of them, even a 5% gain at the same clock speeds is probably not happening.

    If you know exactly what programs you care about and know how various hardware performs in those particular programs, I say it's perfectly reasonable to buy the hardware that you know is best for you.  Where it would become unreasonable is insisting that everyone else who doesn't care about your programs and has different use cases should also buy your preferred hardware.  We've seen plenty of that from GPU fanboys ("I had this GPU 15 years ago and something bad happened, so never buy anything from them again"), but Jean-Luc's post doesn't do that.
    That's all nice and dandy, but with such selective reasoning AMD CPUs easily beat Intel's 2-3 times more expensive offerings. Just use integer operations and all 8 cores lol

    AND this is a gaming site, and it's primarily gaming performance in question; I'm sure if you search far and wide you'll find isolated cases where the i7 6700 beats the i7 4790 by 15%, but in the VAST MAJORITY of stuff (including the MOST USED stuff), AND in IPC, it's barely 5% faster, and in gaming specifically there's pretty much NO difference even to 2nd gen Sandy Bridge, let alone Devil's Canyon lol
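    The same-clock comparisons being argued back and forth here can be written down explicitly. The scores below are made-up placeholders, not measurements from either side:

    ```python
    def per_clock_ratio(score_a, clock_a_ghz, score_b, clock_b_ghz):
        """How much faster CPU A is than CPU B per clock cycle (1.0 = equal)."""
        return (score_a / clock_a_ghz) / (score_b / clock_b_ghz)

    # Hypothetical benchmark scores for two CPUs, both run at 4.4 GHz.
    # With equal clocks, the ratio reduces to a pure IPC comparison.
    ratio = per_clock_ratio(1150, 4.4, 1000, 4.4)
    print(f"A is {(ratio - 1) * 100:.0f}% faster per clock")
    ```

    Normalizing by clock like this is the only way a "15% faster" and a "5% faster" claim can even be compared, since raw scores mix IPC gains with clock-speed differences.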

    gervaise1 said:
    Malabooga said:

    You have absolutely no clue what you're talking about.

    As I said, instead of empty theorizing in your dreamland, come down to the real world and see how it actually functions rofl.

    Theory and practice are two completely different things, especially since your theory doesn't even have anything to do with actual theory, or even less with anything you want to apply it to rofl
    Never said it was a theory; it's reality.
    It's pure theory, and it isn't even applicable to what you want to apply it to. It's obvious that you have 0 experience in any of it.


    Post edited by Malabooga on
  • RidelynnRidelynn Member EpicPosts: 7,061
    Torval said:
    barasawa said:
    Torval said:
    Moore's law is slowing down. Skylake to Kaby Lake will be 3 - 3.5 years instead of the traditional 2. Counts are still increasing but the pace at which they're increasing is slowing and what happens after 7nm is uncertain.

    10nm is likely more than a year away so I guess the attraction of the platform will be on price and features vs Skylake.
    Moore's Law isn't about how fast a processor is, it's about how many transistors it has.

    In the past there was an apparent correlation between the two, but it's not so strong these days with the new architectural features they are playing with.
    I never said it was how fast a processor is. I said it was about counts, although I can't think of a processor that doubled its transistor count that was less powerful than the previous generation.

    Moore's Law (since several people here have forgotten or modified its definition): "refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention." Much later he revised that to two years, because he realized that the original pace was unsustainable and that there was an end to the law.

    The number of transistors isn't doubling every year, or even every two years. I was wrong about the Skylake to Kaby Lake timeline, as Quizzical so respectfully pointed out. I was looking at process node shifts and mixed up some dates. Nonetheless the point still stands that the time to double is stretching out, as is the entire release and update cycle. Moore's Law has almost played itself out.
    If you just look at a CPU core, you're right: transistor count hasn't moved much in recent years.

    But if you look per silicon die (which is, if we are being technical, what Moore's Law describes), then between improvements in the IGP on consumer chips and the additional cores and cache on server chips, Moore's Law is still holding up pretty well.

  • gervaise1gervaise1 Member EpicPosts: 6,919
    edited November 2016
    Malabooga said:

    It's pure theory, and it isn't even applicable to what you want to apply it to. It's obvious that you have 0 experience in any of it.
    Pure theory? Even though SAP has a market capitalization of c. $100 billion.

    What does that make AMD then, with a capitalisation of c. $6 billion? An illusion? You mean your post is just a fairy tale?

    There are providers other than SAP as well. I mentioned SAP because Intel in c. 2005 transitioned from in-house ERP software to SAP HANA. A move that took c. 5 years. And guess what the Business Intelligence (BI) modules do? Collect and analyse e.g. warehouse data.

    Maybe I need to mention the $100 billion number again in case you skipped over it. 

    And as for thinking I know nothing about this keep dreaming. 


  • QuizzicalQuizzical Member LegendaryPosts: 22,135
    Malabooga said:
    Quizzical said:
    How big of a performance advantage there is depends tremendously on what you're doing.  Skylake has certain caches larger than Haswell, and certain additional instructions.  If you're doing something that can make huge use of those, doubling your performance is plausible.  And if you're doing something that can't make any use of them, even a 5% gain at the same clock speeds is probably not happening.

    If you know exactly what programs you care about and know how various hardware performs in those particular programs, I say it's perfectly reasonable to buy the hardware that you know is best for you.  Where it would become unreasonable is insisting that everyone else who doesn't care about your programs and has different use cases should also buy your preferred hardware.  We've seen plenty of that from GPU fanboys ("I had this GPU 15 years ago and something bad happened, so never buy anything from them again"), but Jean-Luc's post doesn't do that.
    That's all nice and dandy, but with such selective reasoning AMD CPUs easily beat Intel's 2-3 times more expensive offerings. Just use integer operations and all 8 cores lol

    AND this is a gaming site, and it's primarily gaming performance in question; I'm sure if you search far and wide you'll find isolated cases where the i7 6700 beats the i7 4790 by 15%, but in the VAST MAJORITY of stuff (including the MOST USED stuff), AND in IPC, it's barely 5% faster, and in gaming specifically there's pretty much NO difference even to 2nd gen Sandy Bridge, let alone Devil's Canyon lol

    Yes, there are situations where AMD CPUs outperform Intel.  For example, something very heavy on random lookups into a 1 MB table would probably do it, as some AMD CPUs can fit that table in L2 cache and Intel can't.  Heavy use of FMA4 in situations where FMA3 doesn't work might do it, too, especially if your algorithm makes it impractical to use SSE or AVX.  And if you know that's your use case, you absolutely should buy AMD CPUs rather than Intel.

    Just because a given product is best for the "average" consumer doesn't mean it's best for everyone.
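    The cache-sensitivity point above can be sketched as a micro-benchmark: random lookups into a table that fits in L2 versus one that spills to slower levels. Absolute timings depend entirely on the machine, and Python's interpreter overhead masks much of the cache effect, so treat this purely as an illustration of the workload shape being described:

    ```python
    import array
    import random
    import time

    def random_lookup_time(table_bytes, lookups=1_000_000):
        """Time `lookups` random reads into a table of roughly `table_bytes` bytes."""
        n = table_bytes // 8                        # 8-byte signed integers
        table = array.array('q', range(n))
        indices = [random.randrange(n) for _ in range(lookups)]
        start = time.perf_counter()
        total = 0
        for i in indices:                           # random access defeats prefetching
            total += table[i]
        return time.perf_counter() - start

    small = random_lookup_time(1 << 20)    # 1 MB: can sit in a large L2 cache
    large = random_lookup_time(64 << 20)   # 64 MB: mostly misses to DRAM
    print(f"1 MB table: {small:.3f}s, 64 MB table: {large:.3f}s")
    ```

    A compiled language would show the L2-versus-DRAM gap far more starkly; the point is simply that whether a hot table fits in cache can dominate which CPU wins.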
  • MalaboogaMalabooga Member UncommonPosts: 2,977
    edited November 2016
    gervaise1 said:
    Malabooga said:

    It's pure theory, and it isn't even applicable to what you want to apply it to. It's obvious that you have 0 experience in any of it.
    Pure theory? Even though SAP has a market capitalization of c. $100 billion.

    What does that make AMD then, with a capitalisation of c. $6 billion? An illusion? You mean your post is just a fairy tale?

    There are providers other than SAP as well. I mentioned SAP because Intel in c. 2005 transitioned from in-house ERP software to SAP HANA. A move that took c. 5 years. And guess what the Business Intelligence (BI) modules do? Collect and analyse e.g. warehouse data.

    Maybe I need to mention the $100 billion number again in case you skipped over it. 

    And as for thinking I know nothing about this keep dreaming. 


    And this PROVES you have 0 experience with that in practice and in the real world.

    That's the problem with your "know about": it rarely EVER applies in the real world.

    Quizzical said:
    Malabooga said:
    Quizzical said:
    How big of a performance advantage there is depends tremendously on what you're doing.  Skylake has certain caches larger than Haswell, and certain additional instructions.  If you're doing something that can make huge use of those, doubling your performance is plausible.  And if you're doing something that can't make any use of them, even a 5% gain at the same clock speeds is probably not happening.

    If you know exactly what programs you care about and know how various hardware performs in those particular programs, I say it's perfectly reasonable to buy the hardware that you know is best for you.  Where it would become unreasonable is insisting that everyone else who doesn't care about your programs and has different use cases should also buy your preferred hardware.  We've seen plenty of that from GPU fanboys ("I had this GPU 15 years ago and something bad happened, so never buy anything from them again"), but Jean-Luc's post doesn't do that.
    That's all nice and dandy, but with such selective reasoning AMD CPUs easily beat Intel's 2-3 times more expensive offerings. Just use integer operations and all 8 cores lol

    AND this is a gaming site, and it's primarily gaming performance in question; I'm sure if you search far and wide you'll find isolated cases where the i7 6700 beats the i7 4790 by 15%, but in the VAST MAJORITY of stuff (including the MOST USED stuff), AND in IPC, it's barely 5% faster, and in gaming specifically there's pretty much NO difference even to 2nd gen Sandy Bridge, let alone Devil's Canyon lol

    Yes, there are situations where AMD CPUs outperform Intel.  For example, something very heavy on random lookups into a 1 MB table would probably do it, as some AMD CPUs can fit that table in L2 cache and Intel can't.  Heavy use of FMA4 in situations where FMA3 doesn't work might do it, too, especially if your algorithm makes it impractical to use SSE or AVX.  And if you know that's your use case, you absolutely should buy AMD CPUs rather than Intel.

    Just because a given product is best for the "average" consumer doesn't mean it's best for everyone.
    The scope of this forum is the average home user, AND mostly games. Delving into stuff that 99.9% of average users will never encounter is pretty much beyond this forum.

  • gervaise1gervaise1 Member EpicPosts: 6,919
    Malabooga said:
    gervaise1 said:



    And this PROVES you have 0 experience with that in practice and in the real world.

    That's the problem with your "know about": it rarely EVER applies in the real world.



    How does modern manufacturing work then? Eh?
  • MalaboogaMalabooga Member UncommonPosts: 2,977
    This is not about manufacturing, it's about selling lol. The whole selling channel is not a single company; in fact it's quite a few companies along the channel which differ greatly in... everything lol
  • gervaise1gervaise1 Member EpicPosts: 6,919
    Malabooga said:
    This is not about manufacturing, it's about selling lol. The whole selling channel is not a single company; in fact it's quite a few companies along the channel which differ greatly in... everything lol
    Ah, you don't have a clue.

    And did I say anything at any point about a single company? No. That was your position, remember: that Intel / AMD don't talk to mobo makers.

    As I said above: modern manufacturing companies work in collaboration as well as in competition.

    And no, it is not just about selling. Again, as I said above, sales data is fed back and influences future manufacturing.
  • filmoretfilmoret Member EpicPosts: 4,906
    Sometimes a company releases something that doesn't quite work the way they hoped.  It happens to them all.  It's the overall track record that matters.
    Are you onto something or just on something?
  • MalaboogaMalabooga Member UncommonPosts: 2,977
    edited November 2016
    gervaise1 said:
    Malabooga said:
    This is not about manufacturing, it's about selling lol. The whole selling channel is not a single company; in fact it's quite a few companies along the channel which differ greatly in... everything lol
    Ah, you don't have a clue.

    And did I say anything at any point about a single company? No. That was your position, remember: that Intel / AMD don't talk to mobo makers.

    As I said above: modern manufacturing companies work in collaboration as well as in competition.

    And no, it is not just about selling. Again, as I said above, sales data is fed back and influences future manufacturing.
    Uh huh, how did that "collaboration" turn out for Intel's pitiful mobile endeavours?

    You have 0 clue what you're talking about.
  • gervaise1gervaise1 Member EpicPosts: 6,919
    edited November 2016
    Malabooga said:
    gervaise1 said:
    Malabooga said:
    This is not about manufacturing, it's about selling lol. The whole selling channel is not a single company; in fact it's quite a few companies along the channel which differ greatly in... everything lol
    Ah, you don't have a clue.

    And did I say anything at any point about a single company? No. That was your position, remember: that Intel / AMD don't talk to mobo makers.

    As I said above: modern manufacturing companies work in collaboration as well as in competition.

    And no, it is not just about selling. Again, as I said above, sales data is fed back and influences future manufacturing.
    Uh huh, how did that "collaboration" turn out for Intel's pitiful mobile endeavours?

    You have 0 clue what you're talking about.
    Deflect away; still waiting for your pearls of wisdom about manufacturing. Going on about how Intel and AMD make CPUs without talking to mobo manufacturers and just rely on selling sounds stupid. Duh.

    As far as your deflection about Intel's mobile "endeavours" goes - which has nothing to do with the thread, but heh, what the heck - the strategy of pursuing more performance per watt, adopted over 10 years ago as a corporate business decision, has been widely credited with propelling Intel to what it is today: market cap c. $176 billion.

    And in a roundabout way that brings things back on topic, since what Kaby Lake does is - once again - deliver more CPU power per watt than Skylake.

    I'm sure you won't let it lie though, and will continue to insist that Intel and AMD don't talk to mobo manufacturers, they just make and sell. Yeah, sure.
  • MalaboogaMalabooga Member UncommonPosts: 2,977
    edited November 2016
    Ahhhhhh, so you have nothing to actually say and contribute that is based in reality, and not some theoretical ideal that's just that - theory lol

    Intel failed miserably in mobile, was even giving away their mobile chips, and even THAT didn't work out lol It surely did "propel" them to actually have to rent out their fabs (that were supposed to produce Intel chips) to produce ARM chips.

    I wonder why SAP and srps. mbdt, bsldfnsl, or w/e didn't "tell" them how they'd fare, that they'd lose a pile of money along with everyone who bought chips from them rofl
  • Jean-Luc_PicardJean-Luc_Picard Member LegendaryPosts: 8,347
    edited November 2016
    “Social media gives legions of idiots the right to speak when they once only spoke at a bar after a glass of wine, without harming the community. Then they were quickly silenced, but now they have the same right to speak as a Nobel Prize winner. It’s the invasion of the idiots.”

    Umberto Eco.

    Everyone is free to interpret this quote as he sees fit ;)
    "The ability to speak doesn't make you intelligent" - Qui-Gon Jinn in Star Wars.
    After many years of reading Internet forums, there's no doubt that nor does the ability to write.
    CPU: Intel Core I7 9700k (4.90ghz) - GPU: ASUS Dual GeForce RTX 2070 SUPER EVO 8GB DDR6 - RAM: 32GB Kingston HyperX Predator DDR4 3000 - Motherboard: Gigabyte Z390 Aorus Ultra - PSU: Antec TruePower New 750W - Storage: Kingston KC1000 NVMe 960gb SSD and 2x1TB WD Velociraptor HDDs (Raid 0) - Main display: Samsung U32J590 32" 4K monitor - Second display: Philips 273v 27" monitor - VR: Pimax 8K headset - Sound: Sony STR-DH550 AV Receiver HDMI linked with the GPU and the TV, with Jamo S 426 HS 3 5.0 speakers and Pioneer S-21W subwoofer - OS: Windows 10 Pro 64 bits.

