What is a CPU to Microsoft? What is a CPU to hardware and part makers?

drbaltazar Member UncommonPosts: 7,856
OK, I always assumed this question didn't need asking! I believe a huge part of the part-maker world assumed the same as me! I think I was wrong, so here I am asking all the overclockers out there. If I understand Microsoft's view correctly, a physical core is a CPU, so my i5 2500K would have 4 CPUs. Why this question?
I was reading Microsoft's suggestion for optimal results with message-signaled interrupts (MSI) and the extended variant (MSI-X), and Microsoft clearly suggests one interrupt per CPU. So I checked my system: mine wasn't set at all, but MSI is activated, so I assume that since it isn't set, the default value is one. But I've got 4 CPUs on my socket! I checked every device using MSI; none had the value set. Why would hardware makers not tweak this for optimal performance when they go to insane extremes to gain 2 or 3%? Is it a dinosaur setting, forgotten since it came into existence when Vista launched? (Yeah, we can forgive everyone who didn't know what was born in Vista.) Bottom line: can anyone look into this, adjust MSI and/or MSI-X to the proper value, and revisit the PCPER tests about FPS and frame pacing, or the need for them?
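For reference, the per-device MSI settings being discussed live in the registry under the device's instance key; this is where Microsoft documents the MSISupported and MessageNumberLimit values. A minimal Python sketch of building that key path (the device instance path shown is a made-up placeholder, not a real device):

```python
def msi_settings_key(device_instance_path):
    """Build the registry subkey (under HKEY_LOCAL_MACHINE) where Windows
    keeps a PCI device's MSI settings: MSISupported and MessageNumberLimit."""
    return ("SYSTEM\\CurrentControlSet\\Enum\\" + device_instance_path +
            "\\Device Parameters\\Interrupt Management"
            "\\MessageSignaledInterruptProperties")

# Hypothetical device instance path, for illustration only:
key = msi_settings_key("PCI\\VEN_10DE&DEV_1234&SUBSYS_00000000&REV_A1\\0")
print(key)
```

On a real system you would enumerate the actual instance paths under `HKLM\SYSTEM\CurrentControlSet\Enum\PCI` rather than hard-coding one.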

Comments

  • Ridelynn Member EpicPosts: 7,383

    Microsoft recently (well, 2000ish) had to change their definition of this because of multicore CPUs and Hyperthreading.

    Before, when a CPU was only single threaded - each CPU was discrete. It was pretty easy - each CPU processed one thread at a time and life was good.

    Microsoft does have several licenses that specify a number of CPUs, but multicore processors essentially work like SMP systems (systems that physically have 2+ CPUs). Early on, if the software was limited to one CPU, you could only use one thread on a dual-core CPU. At the software level, it doesn't matter whether the threads go through 2 cores on a single die or 2 discrete CPUs, each in its own package, on an SMP motherboard.

    It kinda sucked. Fortunately they fixed it years ago. Now they distinguish between a physical CPU (each individual die installed in a socket) and a logical CPU (a single core in a die, a hyperthreaded virtual core, etc).

    Their current definition is that your software can run on an essentially unlimited number of threads (so multicore CPUs and technology like Hyperthreading can be used to the maximum extent of the CPU). But each discrete CPU is counted as an individual CPU.

    This mainly applies to server-based products, like SQL Server, where they could very well be running on a quad-CPU system, each CPU being 8-core, for a total of 32 cores available for processing. If you license SQL Server for 1 CPU, you get 8 threads on one CPU. If you license it for SMP, you get all 32 threads from all 4 CPUs.

    Consumer software has pretty strict restrictions. Windows XP/Vista/7/8 all have CPU restrictions: The Basic/Home editions will only run on 1 physical CPU (but as many threads as you have). Pro/Ultimate will run on 2 (except Vista - Pro only supports 1, Ultimate/Enterprise support 2). If you want to run more than 2 physical CPUs, you need to jump to Server.
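    The physical-vs-logical counting described above is simple arithmetic; a quick sketch (the function name is mine, not any Windows API):

```python
def logical_cpus(sockets, cores_per_socket, threads_per_core=1):
    """Logical CPUs = sockets x cores x hardware threads per core
    (threads_per_core is 2 with Hyperthreading, 1 without)."""
    return sockets * cores_per_socket * threads_per_core

# The quad-socket, 8-core SQL Server example from the post:
print(logical_cpus(sockets=4, cores_per_socket=8))   # 32
# An i5 2500K: 1 socket, 4 cores, no Hyperthreading:
print(logical_cpus(1, 4))                            # 4
# An i7 2600K: 1 socket, 4 cores, 2 threads per core:
print(logical_cpus(1, 4, 2))                         # 8
```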

  • drbaltazar Member UncommonPosts: 7,856
    So basically I am right? I read the Wikipedia definition, and each physical core is a CPU. Bottom line: message-signaled interrupts are left at the IRQ default of one per socket instead of one per processor. Hmm, not a big issue if you do light computing, but if you game, stream, and chat on various media? 1 IRQ per socket instead of the recommended 1 MSI per CPU (meaning 4 MSI on an i5 2500K, or 8 on an FX from AMD, though I'm not sure it counts as 8 because of the way it's made). Anyhow, thanks for the info; can't wait to get home and fix this. It would be nice if Microsoft had a Fix It for this: scan your system and optimize MSI according to Microsoft's recommendation.
  • Ridelynn Member EpicPosts: 7,383

    Yes, interrupts are per socket, not per core.

  • drbaltazar Member UncommonPosts: 7,856
    That doesn't make sense? So basically Microsoft is willfully throttling? I know it was per socket until Microsoft changed the definition of a CPU, but they stuck with it even though an i5 2500K has 4 CPUs (cores).
  • drbaltazar Member UncommonPosts: 7,856
    I'm beginning to hate backward compatibility!
  • Ridelynn Member EpicPosts: 7,383

    Well, the Socket only has one set of pins to communicate with the PCI bus. Especially now that the PCI controller is on the CPU die. You can add more PCI controllers and get more lanes, but you are still ultimately limited to the number of pins in the socket for communication to the outside world.

    You could make a socket that has multiple sets of pins - but current sockets don't have that. It would be a huge socket, and pretty expensive, and most of the time the cores are sitting around waiting on whatever is on the other end of the PCI bus, not the PCI bus itself, so adding more channels wouldn't necessarily speed things up that much.

    So it isn't Microsoft's fault.
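To put numbers behind the "pins are the bottleneck" point: per-lane PCIe bandwidth is fixed by the transfer rate and the line encoding, regardless of how interrupts are delivered. A sketch using the standard published PCIe 2.0/3.0 figures:

```python
def pcie_lane_GBps(gigatransfers_per_s, encoding_efficiency):
    """Usable one-direction bandwidth of a single PCIe lane in GB/s:
    raw transfer rate (1 bit per transfer) x encoding efficiency / 8 bits per byte."""
    return gigatransfers_per_s * encoding_efficiency / 8

# PCIe 2.0: 5 GT/s with 8b/10b encoding (80% efficient) -> 0.5 GB/s per lane
print(pcie_lane_GBps(5.0, 0.8))
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
print(pcie_lane_GBps(8.0, 128 / 130))
```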

  • jesteralways Member RarePosts: 2,560
    If motherboards were preconfigured for optimal performance, it might wind up frying your entire system. Say, for some reason, your power supply is not up to par with your new CPU and motherboard; you thought you'd just check out how the system performs and then buy a new power supply. But if the motherboard were in an optimal-CPU-usage mode, the amount of power the CPU requires would be more than the power supply can provide, and in my experience that can cause the ICs in your power supply to blow. That failure has a high chance of ruining your CPU and motherboard too. This is just an example; the main reason motherboards come preconfigured is safety. As for why Microsoft does it? I guess to avoid overexerting your system. Both Intel and AMD have software to manage multi-core functions; you could try downloading it from their product driver pages.


  • drbaltazar Member UncommonPosts: 7,856
    OK! What you're talking about, the PCI IRQ pin, is disabled when MSI is enabled for, say, a GPU. MSI isn't using that pin; that's the whole reason MSI exists. Interrupts are handled in the CPU and RAM (and, if I understood correctly, each core has its own interrupt controller). Regedit still uses the name PCI, but it refers to PCIe. Thanks for the info, though; I had forgotten about the IRQ/PCI pin, so you helped me a lot. Cheers! I was right: this silly MessageNumberLimit should be set to one per core. They don't, just in case the OS is running on a Pentium 4, I suspect (ROFL). Again, thanks. With your info and mine, everything makes sense.
    http://msdn.microsoft.com/en-us/library/windows/hardware/ff544246(v=vs.85).aspx
  • drbaltazar Member UncommonPosts: 7,856
    Drivers can register a single InterruptMessageService routine that handles all possible messages, or individual InterruptService routines for each message.

    Now I know why only one MSI/MSI-X message is enabled. But they suggest one per CPU, so if you add some, don't go past one per core per device. At least this subject is considered fixed for me! Cheers!
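The takeaway above can be sketched as a clamp: the usable message count is bounded by what the device requests, what the registry limit allows, and the one-message-per-core advice. This is a simplified model of my own, not Windows's actual arbitration logic:

```python
def effective_msi_messages(device_requested, message_number_limit, logical_cpus):
    """Simplified model: a device ends up with the smallest of what it asked
    for, what the registry's MessageNumberLimit allows, and one message per
    logical CPU (the recommendation discussed in this thread)."""
    return min(device_requested, message_number_limit, logical_cpus)

# Default-ish case discussed in the thread: the limit effectively pins it at 1
print(effective_msi_messages(device_requested=8, message_number_limit=1, logical_cpus=4))
# After raising the limit to one per core on a 4-core i5 2500K:
print(effective_msi_messages(8, 4, 4))
```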
  • barasawa Member UncommonPosts: 618
    Originally posted by drbaltazar
    That doesn't make sense? So basically Microsoft is willfully throttling? I know it was per socket until Microsoft changed the definition of a CPU, but they stuck with it even though an i5 2500K has 4 CPUs (cores).

    CPUs changed. It used to be that a CPU was one CPU: one core, one thread, etc. Now, with multicore, it's a bit different. Each core has most of the components of a full CPU, but the cores also share some parts, and have some extra parts that allow that sharing to work without crashing code all over the place.

    Technology advanced and the definitions are struggling to keep up. After all, your car has no horses, despite it having horsepower. (Look up the definition of horsepower sometime, it's not exactly what I expected.)

     

    Also remember, everything in the field of computers, other than the humans, is moving and changing much faster than the rest of the world.

     


  • drbaltazar Member UncommonPosts: 7,856
    Yeah, Microsoft defines a CPU in a bipolar way! Let me say what they say: you can define up to 2048 interrupt messages in Windows 8 (910 in Vista). From what I read, most devices support 64 on 64-bit, but this can be edited via C++ (never mind that). If a device maker is too lazy to support the max, there are always other makers. When the definition changed, and this is what I had an issue with: let's say you are CCP (EVE Online) and you need more interrupts than the 2048; Microsoft says to use one interrupt per socket in that case. Since that's a rare, hard case, I understand Microsoft not bothering with it for long. After all, who will ever need 2048 interrupts per device, ROFL! I set mine to one per core, just because I know one interrupt is never enough. I hear people who stream every day wondering what on earth is going on. It isn't exactly their system; it's because drivers aren't allowed to allocate more than one, and if Plug and Play doesn't detect it, it also only supplies one interrupt. (I think that's either a typo or someone was smoking good stuff back then.) I wrote to them today about it and asked them to create a Fix It that would ask the user how many cores their CPU has and change the value correspondingly: an AMD 8-core would get 8 interrupts and an i5 2500K would get 4. A GPU with 1 interrupt? A network card with 1 interrupt? Come on, everybody knows this creates havoc; we all experienced it, we just didn't know what it was. Oh, I doubt we'd all gain more than 10% in raw performance, but on smoothness and the quality of sound and image? People might be surprised!
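The Fix It described here could be sketched as a script that emits a .reg file setting MessageNumberLimit to the machine's core count. The .reg header and dword format are the standard Windows conventions; the device key is a placeholder, and whether a given device actually benefits is exactly the open question in this thread:

```python
import os

def fixit_reg(device_key, core_count):
    """Emit .reg file text that sets MessageNumberLimit = core_count
    under one device's MSI properties key. device_key is a placeholder
    to be replaced with a real instance path from the Enum tree."""
    path = ("HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Enum\\" + device_key +
            "\\Device Parameters\\Interrupt Management"
            "\\MessageSignaledInterruptProperties")
    return ("Windows Registry Editor Version 5.00\n\n"
            "[" + path + "]\n"
            '"MessageNumberLimit"=dword:%08x\n' % core_count)

cores = os.cpu_count() or 1
print(fixit_reg("PCI\\VEN_0000&DEV_0000\\placeholder", cores))
```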
  • drbaltazar Member UncommonPosts: 7,856
    OK! I tested it! This is stupid on a galactic level! AMD launches Mantle, Nvidia launches Battlebox, but everybody ignores the blatant throttling via MSI/MSI-X (message-signaled interrupts). I did set it manually; sadly, hardware can't make use of the added channels, since Microsoft locks drivers to 1 MSI activation per device. Can anyone help (suggestions)? And no, I don't want to stay at 1. I understand this is intended for huge setups (like a corporation with 2048 CPU sockets or more), but I am a normal gamer and just need 4 MSI per device. Does anyone know if hardware makers will stop copying and pasting server settings (being ultra lazy) and start setting gamer hardware to proper values, or at least give users a way to do it themselves? Yeah, at the OS end the user can set it if the hardware is set to 1. Aaarrrg. Damn, these things are like Linus Torvalds said about HPET: a janitor shouldn't design timing. I'd say the same janitor also cooked MSI/MSI-X.
  • drbaltazar Member UncommonPosts: 7,856
    Finally, the native interrupt mechanism for PCI Express is MSI (message-signaled interrupts). This is also true for PCI-X. You cannot use MSI without the APIC. And the invariant TSC isn't used by Windows if deep sleep is enabled? So how can I use the APIC, MSI-X, and the invariant TSC together? I guess I'll force it enabled via bcdedit. I don't like this, though; I wish I knew how to set the APIC in the BIOS so I get both MSI and the invariant TSC (I've got APIC, APIC C2, and APIC C3).
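On the timer side: Python's time.perf_counter() sits on QueryPerformanceCounter on Windows, which uses the invariant TSC when the OS trusts it and falls back to HPET or the ACPI PM timer otherwise (for example when forced via bcdedit's useplatformclock). A small sketch that sanity-checks whatever source is backing the clock:

```python
import time

def check_monotonic(samples=1000):
    """Read the high-resolution clock repeatedly, verify it never runs
    backwards, and return the smallest positive step observed (or None
    if the clock never advanced during the loop)."""
    last = time.perf_counter()
    smallest = None
    for _ in range(samples):
        now = time.perf_counter()
        assert now >= last, "clock went backwards"
        delta = now - last
        if delta > 0 and (smallest is None or delta < smallest):
            smallest = delta
        last = now
    return smallest

print("smallest observed positive step:", check_monotonic(), "seconds")
```

This only shows the clock is well-behaved; it cannot tell you which hardware source Windows picked, so it is a sanity check rather than a diagnostic.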
  • Zezda Member UncommonPosts: 686

    This was basically me after reading the last post there.

     

  • drbaltazar Member UncommonPosts: 7,856
    As far as I know there is no way to do this, so the only route available is the debug one, via bcdedit.
  • Zezda Member UncommonPosts: 686
    And what benefit do you believe you will have from configuring your computer in this manner?
  • Ridelynn Member EpicPosts: 7,383

    World domination, obviously.

  • drbaltazar Member UncommonPosts: 7,856
    In short? It fixes most if not all computer issues. The invariant TSC fixes a timing problem that has always plagued computing! Changing MSI/MSI-X from 1 MSI per CPU socket to 1 MSI per CPU core will be felt a lot by gamers, especially gamers with a lot going on at once (Sodapoppin, Swifty, Towelliee). Like I say, everything in Windows is controlled by interrupts. But this is limited; if you've got 4000 CPU cores, you can't use this!
  • drbaltazar Member UncommonPosts: 7,856
    Previous best? It was the TSC, but it had a small flaw!
  • Zezda Member UncommonPosts: 686
    So why not go make your changes and benchmark the difference to prove your point that all of this rubbish actually makes a difference?
  • drbaltazar Member UncommonPosts: 7,856
    I haven't found how to enable the invariant TSC and set 1 MSI per CPU core without forcing it. I don't want to force it, since Microsoft always has throttling measures, or disables acceleration, etc., when you force things. I'll find out how Microsoft makes this happen. I was hoping people used to benchmarking would test it, since they already have the methodology. As for me being sure? LOL, I'm not the one responsible for the timer project; that's people who are very knowledgeable in the field. As for MSI? It's on the Microsoft site, ROFL. It's been there a long time.


    http://www.windowstimestamp.com



    http://msdn.microsoft.com/en-us/library/windows/hardware/ff544246(v=vs.85).aspx

  • Zezda Member UncommonPosts: 686

    So, correct me if I'm wrong.

     

    You're looking to do things to your computer that are not supposed to be done in order to obtain some imaginary performance gains that have never been properly documented by anyone? You can't do this because you think when you force these changes some sort of throttling will apply to you even though you have no evidence to suggest that is the case?

    So really then by your own admission, assuming you are correct in your assumptions, it was a complete waste of time to look into this and you still have no idea even that if you did manage to do all of this if it would actually be of any perceivable benefit?

  • jdnewell Member UncommonPosts: 2,237
    Originally posted by Zezda

    So, correct me if I'm wrong.

     

    You're looking to do things to your computer that are not supposed to be done in order to obtain some imaginary performance gains that have never been properly documented by anyone? You can't do this because you think when you force these changes some sort of throttling will apply to you even though you have no evidence to suggest that is the case?

    So really then by your own admission, assuming you are correct in your assumptions, it was a complete waste of time to look into this and you still have no idea even that if you did manage to do all of this if it would actually be of any perceivable benefit?

    I think you pretty much nailed it.

  • drbaltazar Member UncommonPosts: 7,856
    Anyone got a bug with the RAID software from Intel, for caching etc.? I had finally enabled my invariant TSC and MSI, but this RAID thing from Intel enabled the older timer again. Is there a power-saving measure this thing uses? If so, where is it? I had to go the bcdedit way because of this! Grr!
  • Phry Member LegendaryPosts: 11,004
    Originally posted by jdnewell
    Originally posted by Zezda

    So, correct me if I'm wrong.

     

    You're looking to do things to your computer that are not supposed to be done in order to obtain some imaginary performance gains that have never been properly documented by anyone? You can't do this because you think when you force these changes some sort of throttling will apply to you even though you have no evidence to suggest that is the case?

    So really then by your own admission, assuming you are correct in your assumptions, it was a complete waste of time to look into this and you still have no idea even that if you did manage to do all of this if it would actually be of any perceivable benefit?

    I think you pretty much nailed it.

    Sometimes a little knowledge can be a dangerous thing.
