
AMD Ryzen Game Patch Optimizations Show Significant Gains On Zen Architecture (hothardware.com) 121

MojoKid writes: AMD got the attention of PC performance enthusiasts everywhere with the recent launch of its Ryzen 7 series processors. The trio of 8-core chips competitively takes on Intel's Core i7 series at the high end of its product stack. However, with the extra attention AMD garnered came significant scrutiny as well. With any entirely new platform architecture, there are bound to be a few performance anomalies -- as was the case with the now-infamous lower-performance "1080p gaming" situation with Ryzen. In a recent status update, AMD noted it was already working with developers to help implement "simple changes" that can improve a game engine's understanding of the AMD Zen core topology and would likely provide an additional performance uplift on Ryzen. Today, we have some early proof positive of that, as Oxide Games, in concert with AMD, released a patch for its game title Ashes Of The Singularity. Ashes has been a "poster child" game engine of sorts for AMD Radeon graphics over the years (especially with respect to DX12), and it was one that ironically showed some of the worst variations in Ryzen CPU performance versus Intel. With this new patch, which is now public, however, AMD claims to have regained significant ground in benchmark results at all resolutions. In the 1080p benchmarks with powerful GPUs, a Ryzen 7 1800X shows an approximately 20% performance improvement with the latest version of Ashes, closing the gap significantly versus Intel. This appears to be at least an early sign that AMD can indeed work with game and other app developers to tune for the Ryzen architecture and wring out additional performance.
This discussion has been archived. No new comments can be posted.

  • by TimothyHollins ( 4720957 ) on Friday March 31, 2017 @05:29AM (#54149543)

    That's it for me. I was holding out on AMD specifically because I was worried about the gaming performance. I know it's a small leap of faith at this point, but everything is starting to look great with AMD's latest series.

    The earlier benchmarks showed AMD pretty much taking the crown in everything *except* gaming (and I do a fair bit of scientific computing on my home machine), and if these results are possible (1800X performing on par with a 7700K in gaming) then I have no reason to go with Intel.
    My next purchase will be a Ryzen 7 cpu (all of which performed similarly in gaming tests), something I hope will help me, AMD, and every consumer out there due to the competition finally revving up again.

    Now to see if AMD's Vega architecture can compete with nVidia's price-dropped GTX 1080.

    • I hope AMD has figured this out. The last few years AMD chips just haven't been all that competitive. I've continued to build the occasional AMD based machine in the hope they wouldn't go under and would turn it around. We really need at least two chip makers making really viable CPUs. The consumer always wins when companies are forced to really compete with each other.

    • Re: (Score:1, Troll)

      Let's look at the hype?

      Intel CPUs still have a significant per-core IPC advantage. Try 30% per game, with massive fps differences at stock settings, while the Intel parts all overclock and can take faster RAM frequencies, giving a +25 fps advantage.

      Worse, Windows 10 cripples them! Windows 7 benchmarks on YouTube showed massive performance increases on Nvidia-optimized games like Tomb Raider, as the CPU scheduling bug is real. AMD is now downplaying it, as MS won't fix it due to lack of market share.

      Neowin.net repor

      • by higuita ( 129722 )

        All this "news" is just FUD, probably paid for by Intel!
        AMD does have lower market share, but not one you can ignore; if you look at Steam, 20% CPU share is still a huge market.

      Funny, modded -1 for disagreeing with AMD hype.

        To prove I am not paid by Intel: my facts stand by themselves. Intel chips are much faster in games, as IPC still has not caught up. Blender and Cinebench perform better simply because of more cores

    • We already bought two 1700 systems here.

    • by jwhyche ( 6192 )

      I don't think I would let the gaming performance make me snub AMD at this moment. There are a number of videos on YouTube that show some real-world benchmarks of AMD vs Intel for these chips.

      In most cases the AMD does lag behind the Intel versions. But it only lags behind by a max of say 5%. To me this is insignificant when you take into account the cost of the chips involved. $599 vs $1000.

      One thing that did make me think, though, was that some of those benchmarks included an i7-7700K, a $349 chip

    • by jon3k ( 691256 )
      Same here. I'm not even much of a gamer but I'll be buying Ryzen 7, motherboard and some ECC ram this weekend.
  • by leathered ( 780018 ) on Friday March 31, 2017 @05:57AM (#54149609)

    The problem for many gamers is that they will have a vast library of games that are not optimized for Ryzen, and never will be.

    It's the same story as the old 3DNow! instructions, which vastly improved the gaming performance of the K6-2: a small number of developers released patches to support them, but the majority did not.

    • There really isn't a problem with gaming performance in the first place. Looking at the benchmarks I see no game tested with Ryzen that doesn't have an acceptable frame rate. Half the games tested were GPU bound anyway. And a few were actually faster on Ryzen.

      • The half-life of a game library is three months, if that, before the games go on the shelf to gather dust forever, or a faster machine comes out that makes performance quibbles silly. I've done my research, I'm sold on Ryzen. I'll be picking up a box as soon as they start moving through the channel, with a modest lag in case of motherboard kinks. This is not primarily for gaming, by the way, this is supposed to be a cost effective work horse. Being a great game machine is just a bonus. Oh, and Vulkan.

    Certain improvements could come from simple compiler updates. Apparently, Ryzen could be better than Intel's Bridgelakes at decoding complex instructions [youtube.com], which contemporary compilers are unlikely to emit due to Intel's dominance.
    Some outfits drop support alarmingly quickly, or take an 'if it isn't crashing to desktop more than once an hour, it's totally fine' approach to quality; but it helps that the games most likely to never get Ryzen support are the older ones, which are also the games targeted at the specs of older hardware.

      If it turns out that, even for recent and future releases, only a couple of AMD's best-buddies publishers ever bother then there is a problem. If it's just older games, contemporary PCs are comfortably ov
    • by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Friday March 31, 2017 @07:28AM (#54149977) Homepage

      The problem for many gamers is that they will have a vast library of games that are not optimized for Ryzen, and never will be.

      Vast libraries take time to accumulate. A game will be designed to perform well on whatever hardware is available at the time that it is released. New hardware is faster than old, so that the game was not optimised for a processor that did not exist when it was released does not matter as long as Ryzen is faster than processors of yesteryear.

      • It's a little different this time. This is NUMA-aware design. Something that is actually pretty difficult, requiring broad architectural changes that can't simply be bolted onto an app. Most parallel apps don't bother. And similarly, the change isn't "free" like a new instruction: older apps that don't get with the program will run slower and might hit pathological cases. Realistically, NUMA is going to be needed to efficiently scale CPUs beyond 4 cores. Without it, die size increases really fast and it o
    • by AvitarX ( 172628 )

      Existing games run well enough.

      What's important is that future games that are more demanding do too. As long as optimization happens going forward, it will be fine.

    • by wbr1 ( 2538558 )
      With older games this really won't matter, and even with newer games that go unpatched, they still perform well enough for clean 60 fps gaming with a good GPU. Even if it performs less than an Intel CPU with the same GPU, if it is good enough at about half the price, what are you bitching about?
    • by dave562 ( 969951 )

      I was going to make a similar comment, so I will add it in here.

      Just because AMD can work with game developers to optimize code for the CPU does not necessarily mean that the game developers want to, or can even afford to, optimize their code for two different platforms.

      Where do things go from here?

      Do we see a fragmentation in the market, where the chip manufacturers try to woo AAA studios to provide "exclusive optimizations" targeted at a specific platform?

      Do the developers of the game engines themselves (

      • Do we see a fragmentation in the market, where the chip manufacturers try to woo AAA studios to provide "exclusive optimizations" targeted at a specific platform?

        Its not hard to see, it is already happening on the GPU side.

        • by dave562 ( 969951 )

          Agreed. On the GPU side, it is slightly more cut and dried. The hardware vendors implement specific effects and then work with the studios who want to leverage those effects. The CPUs are decidedly more complex because at some point the high level object code needs to be broken down into machine code. Without compiler optimizations at the low level, or structural changes at the higher level in terms of functions or what have you, it is difficult to make the most of any CPU architectural enhancements.

      • by jwhyche ( 6192 )

        I have been building my own computers since the 1990s, and I have given a couple of AMD chips a chance over the years. My anecdotal experience (sample size of one) has been that the AMD chips never "feel" as fast. The OS (Windows) is not as responsive. Applications are not as snappy

        I'm glad to see that I'm not the only one who noticed this. I just recently built an i7-6700K system to replace my AMD 8150.

        Sure, a lot of that can be explained by my moving from a 6-year-old design to a modern design. But that doesn't explain why I have the same feeling on my i7-2600K, a chip designed around the same time as the 8150.

        I don't feel that difference on Linux, though. My bitch box is an 8350 running CentOS 6. We installed some Xeon blades at work that run the same loa

        • by dave562 ( 969951 )

          I believe that it does not matter with Linux. I am as certain as I can be, based on nothing but observation and a modicum of understanding of operating systems, that it has to do with MS DLLs being optimized. It probably has something to do with how they are compiled, and the compiler being designed to leverage optimizations for the Intel architecture. If the optimizations are not there, the code branches to the 'other' execution path for the 'generic x86 instruction set' or whatever.

      • Just because AMD can work with game developers to optimize code for the CPU does not necessarily mean that the game developers want to, or can even afford to, optimize their code for two different platforms.

        That won't be necessary. For every game house now, Vulkan is a top priority. Born from AMD's Mantle, it works well on everything [youtube.com], but especially AMD.

        • by dave562 ( 969951 )

          That's interesting. Do you have anything to backup your assertion that "Vulkan is a top priority [for EVERY game house]." ?

    • If you need a game optimized for your CPU, why not instead do this thing called "wait two years" and just play it with a faster CPU?

      Slashdot in a nutshell: "Optimization is the root of all evil! BUT I'm PISSED WHEN PEOPLE DON'T DO IT FOR MY CPU."

      I really don't get everyone's strange fascination with needing to play stuff the second it comes out. I've got an AMD FX-8370 and it runs games in 4K just fine. Why the hell would I care whether I get 85 or 110 FPS? Likewise, even if a Ryzen "isn't as fast as an i7"

  • by GeekWithAKnife ( 2717871 ) on Friday March 31, 2017 @06:49AM (#54149801)

    I thought to myself, "Can AMD deliver a 40% IPC improvement?! This is going to be a failed Phenom launch, isn't it?"

    I have been waiting for an Intel-beating CPU from AMD since the Athlon Thunderbird C. I never bought Intel because of the underhanded tactics Intel used to keep market share and bribe OEMs.

    Not only has AMD delivered with Ryzen, it has far exceeded all expectations, from IPC to TDP to (optimized) gaming performance, and it just amazes at multithreaded anything.

    To say that Ryzen and no doubt AMD's upcoming GPU will be a worthy upgrade for my FX-8350/R9 290X is an understatement.

    I was never happier to pay top dollar for a CPU.

    Congratulations and well done AMD! (And it's about fucking time!)
    • Far Side Cartoon; "Not too close Higgins... This geek's got a knife." https://ifunny.co/tags/Higgins... [ifunny.co]
    • Sorry, still behind Intel... even with the patches, if you read the article. Ryzen is good for video editing and compiling large amounts of code. However, it needs a lot of work for Ryzen 2.0, as the Infinity Fabric and NUMA architecture slow down Windows 10 thread scheduling, which spins threads around cores, causing latency spikes in games

      • by Anonymous Coward

        Christ, are you morons still parroting this? AMD has already put the kibosh on this, yet it still goes around.

        https://community.amd.com/community/gaming/blog/2017/03/13/amd-ryzen-community-update

      • by jwhyche ( 6192 )

        Oh BS. The benchmarks I have seen show the performance of the Ryzen chips to be close enough to their Intel counterparts. Close enough that the difference won't matter.

        Unless you are so picky that half a dozen frames actually matter out of 130 fps.

        • Oh BS. The benchmarks I have seen show the performance of the Ryzen chips to be close enough to their Intel counterparts. Close enough that the difference won't matter.

          Unless you are so picky that half a dozen frames actually matter out of 130 fps.

          Then explain this [youtube.com]? A 40 fps difference between Windows 7 and Windows 10 is NOT SMALL.

          Need more proof? [youtube.com] In Battlefield 1, the same Ryzen 1700 CPU has a 25 FPS drop when going to Windows 10. Put down the AMD fan hat for a minute and look at the evidence? I want AMD to win, but right now it has some design problems, as the CPU was made during the Windows 7 era, so I can't blame this entirely on AMD.

          • by jwhyche ( 6192 )

            First of all, calling me a fanboy is pretty silly. I mean, I did just drop over 2000 bones on an Intel rig.

            Have you ever really looked at the videos? They are around 400 fps. I did some math, and the differences at times are between 2% and 5%, with a high of 10%. When you scale these down to real-world numbers they are pretty insignificant.

    • I was never happier to pay top dollar for a CPU.

      I'm doing a Ryzen build next month and even though the 1700 is the best bang for the buck, I'm actually contemplating buying the 1800X just because I want to support AMD. (Note: not a fanboy, current system runs on an i7 860)

      This is the same feeling I had when I bought the Blackberry Priv as soon as I could get my hands on it. Saving physical keyboards on phones was the driving force there.

  • To begin with, the Ryzen 7 1800X doesn't end up giving the same performance as the i7 5960X even with this update, as shown in the article.
    It's completely fair to point out that they have different specs, 3.0-3.5 GHz vs 3.6-4.1 GHz on the Ryzen 7 1800X, but the 5960X is also one generation old; the 6900K would be the current top-of-the-line 8-core and the 6950X the best one. So if one is going to do best 8-core vs best 8-core, or best consumer-line processor vs consumer/enthusiast one, this test

    • Will the faster memories for the 6950X erase the $1200 difference in price?
      • by aliquis ( 678370 )

        Will the faster memories for the 6950X erase the $1200 difference in price?

        I'm just saying it's ~dishonest.
        I'm not saying the 6900K is a better buy than the 1700X.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      To begin with the Ryzen 7 1800X doesn't end up giving the same performance as the i7 5960X even with this update as shown in the article.

      More than double the price for a 3.6% performance increase? Face it, Intel loses that comparison.

      The benchmark shows that the 1800X is neck-and-neck with a much higher price bracket, and Intel doesn't have anything cheaper that can even compete.

      • What's worse, even the 12C/24T chip will most likely be cheaper than the Intel one.
        • Are you talking about the server processor code-named Naples? From what I've read it's a 32-core; I don't know if it has SMT, but I would assume it does, meaning on a dual-socket motherboard a theoretical 128 threads. And yes, I'm sure it will be cheaper than the 16-core Xeons available.

          • More like Snowy Owl. Naples is four dies with up to 32C, Snowy Owl is two dies with up to 16C. Apparently Snowy Owl is coming out sooner than Naples.
      • by aliquis ( 678370 )

        More than double the price for a 3.6% performance increase? Face it, Intel loses that comparison.

        It was never about who wins.
        It was about them using slow 2133 MHz memory on a generation-old Intel platform.

        For games Intel has the i7 7700K, which is cheaper and can compete.

        I'd recommend the Ryzen 5 1600X and Ryzen 7 1700X over both the i5 7600K and i7 7700K, but that doesn't change that they used slow RAM for the Intel CPU.

    • I didn't even think of that, since I got the first review from http://www.tomshardware.com/ne... [tomshardware.com] where they use a 6900K. However, since an 1800X costs about half of the 6900K I don't think it's fair to consider them competing in the same segment.

    • why wasn't the 5960X also given the same speed RAM? It can't run it?

      You answered your own question in the form of another question.

      Both systems were given the best RAM they could handle.

      • by aliquis ( 678370 )

        I'd go with an expensive one from a brand which has fast memory support for Ryzen:
        https://www.msi.com/Motherboar... [msi.com]
        "DDR4 Steel Armor with Best signal stability , Quad Channel DDR4-3466+(OC)"

        As for the 5960X, I don't know.
        I totally doubt the 5960X couldn't use faster RAM than 2133. 2133 MHz is stock, though; there's a difference.
        For Broadwell-E the stock is 2400 MHz, but as you can see above you can use at least 3466 MHz DDR4 with it.
        Stock for Ryzen 7 is 2400 MHz, but multiple boards support 3200 MHz now and like

        • There's a difference.

          The key difference being a potentially unstable system. The processor and architecture are only rated for the slower speed. Anything else is tantamount to overclocking and kind of defeats the purpose of testing two systems against each other.

          • I personally feel that when they had the New Horizon event they should have also overclocked each system by at least 10% and given us real-world benchmarks there too. But I can see that most people don't overclock, so they weren't worried about that. I guess it was just my personal greed.

          • by aliquis ( 678370 )

            The key difference being potentially unstable system. The processor and architecture is only capable of the slower speed. Anything else is tantamount to overclocking and kind of defeats the purpose of testing two systems against each other.

            The Ryzen processor uses overclocked memory.
            They could have used 2133 MHz CL15 on both systems.

            • They could have, but then they would be artificially handicapping the Ryzen platform, which officially supports 3400 out of the box with several modes on the 1800X.

              Overclocking is the process of running something faster than spec, not running something on spec.

              • by aliquis ( 678370 )

                They could have but then they would be artificially handicapping the Ryzen platform which supports out of the box 3400 officially with several modes on the 1800x.

                Feel free to go with either the maximum or equal (or even standard); everything else is just weird.
                I don't know if the i7 5960X can do 3400 MHz. Ryzen only really wants to do the higher speeds with max 2x8 Samsung B-die.

    • Re: (Score:2, Informative)

      The reason is that the Infinity Fabric, with its NUMA architecture, sucks with Windows 10 and is server-oriented.

      What's going on is that Windows 10 CPU scheduling and power management LOVES randomly throwing processes and threads around cores during workloads. So imagine you're a game thread busy working on something under Ryzen. You get interrupted and asked to move. Cache is now lost thanks to NUMA and needs to be reloaded from RAM, and the RAM OC is bottlenecked at 2933 MHz. Now you continue. Think that would cause a stutter

      • by aliquis ( 678370 )

        I follow all this stuff very actively, and I know about the PCPer speculation and possible indication that that's not the case.

        AMD has even gone out and said themselves that they don't think there is an issue with the Windows scheduler. But feel free to keep on believing that. If you want to test against it, get the Insider preview and run in Game Mode, which locks the game down to 6 cores instead.

        Of course Microsoft would fix it if they could. There has been speculation a recent update had some quiet / un

        • Here is my proof:

          https://www.youtube.com/watch?... [youtube.com]

          So my guess is AMD is denying it because MS won't change it, as AMD owns .2% of the market right now, and a major scheduling change would drastically alter power savings on the phone and laptop devices which use the same kernel.

          • So you're saying Windows sucks... got it. I will remember to run my Ryzen rig purely on Linux when I build it, which was already my plan. But thanks for reassuring me.

          • by aliquis ( 678370 )

            All that video shows is that Windows 7 runs the game faster, for whatever reason.

            If AMD doesn't think there will be a scheduler fix that improves things, and that ends up being the case, then that's how it is, and there won't be any fix with improved performance.

          • If Microsoft so chose, they could implement a CPU check and provide an alternate scheduler scheme for systems running a Ryzen CPU. They already do this sort of thing with other AMD vs Intel CPU instruction sets, in fact.

    • "However what really disturb me from a comparison stand-point is that they gave the i7 5960X 2133 MHz DDR4 vs 2933 MHz DDR4 for the Ryzen 7 1800X, that give the 1800X another opportunity to shine since infinity fabric run at the same clock as RAM but why wasn't the 5960X also given the same speed RAM? It can't run it?"

      Per Intel's ARK, the i7 5960X supports up to 2133 MHz DDR4 [intel.com]. It has four memory channels, though, giving it more memory bandwidth on paper. The 1800X still makes do with only two memory chann

      • by aliquis ( 678370 )

        Per Intel's ARK, the i7 5960X supports up to 2133 MHz DDR4. It has four memory channels, though, giving it more memory bandwidth on paper. The 1800X still makes do with only two memory channels, so the fact that it matches the i7 in speed is quite impressive.

        I'm very aware of both, but that doesn't mean the 5960X can't run faster RAM.
        https://www.msi.com/Motherboar... [msi.com]
        "DDR4 Steel Armor with Best signal stability , Quad Channel DDR4-3466+(OC)" ... and the memory channels are a feature of the Intel HEDT platform.

        You can get stable 3200 MHz out of Ryzen 7 at least, and since that motherboard mentions 3466+, I assume you can get stable 3466 out of the latest Intel HEDT platform at least; why the 5960X is somewhat less capable I don't know.

  • It would have been nice if the summary hinted at the nature of those optimizations, unless they all have to do with the SMT and cache topology.

    • Supposedly it was:
      https://twitter.com/FioraAeter... [twitter.com]
      http://x86.renejeschke.de/html... [renejeschke.de]

      Writes to memory which bypass the cache hierarchy entirely and are very much processor-implementation-specific in their speeds.

    • Re: (Score:2, Informative)

      by Anonymous Coward
      CFLAGS=-g0 -DTT_CONFIG_OPTION_BYTECODE_INTERPRETER -pipe -O3 -march=native -fweb -funswitch-loops -funroll-all-loops -funit-at-a-time -fsched2-use-traces -fsched2-use-superblocks -fsched-stalled-insns=12 -frename-registers -fprefetch-loop-arrays -fpeel-loops -fomit-frame-pointer -fmerge-all-constants -finline-limit=32768 -finline-functions -ffunction-sections -ffast-math -fdata-sections -fbranch-target-load-optimize2
      CXXFLAGS=$CFLAGS
  • Intel has just come out with a new version of their compiler to build games that will run even faster on Intel chips. AMD chips, well, it's not like we're sabotaging them by making it as slow as possible. I mean, that would be unethical!

    Use our compiler and chips or you're a communist! ;)

    • by jwhyche ( 6192 )

      Nothing stopping AMD from doing the same thing.

        Nothing stopping AMD from doing the same thing.

        Acting like an unethical amoral sociopath is not something to aspire to, it's something to avoid.

        • by jwhyche ( 6192 )

          You're talking about a serial killer. There is nothing amoral, unethical, or sociopathic about designing a compiler to streamline code for your processor. That is just business.

          • You're talking about a serial killer.

            No, I'm not. There are lots of sociopaths, specifically people with Antisocial Personality Disorder, in business. Wall Street is especially fond of them for their sheer ruthlessness and disregard for the wellbeing of the people they are screwing over daily. Sociopaths are just people with a significantly reduced capacity for empathy. You specifically could even be one and not know it.

            There is nothing amoral, unethical, or sociopathic about designing a compiler to streamline code for your processor.

            While it's not wrong to optimize a compiler for your processor, it is quite amoral, unethical and antisocial to intentionall

  • Power consumption aside, I wonder how long my mildly overclocked Sandy Bridge machine will remain competitive...
    • by slaker ( 53818 )

      I repurposed six 6C/12T LGA1366 Xeon workstations into mid-range gaming rigs last year. I paired them with GTX1060s and 240GB SSDs. For 1080p gaming, there's really no subjective difference between those machines and a latter-day Kaby Lake i5 PC with the same GPU, at least among the games I tried on them. Even lacking amenities like USB 3 and updated PCI-e slots, those ~6 year old machines could keep up just fine with Mechwarrior Online and X-Com 2. The contemporary i5 is assuredly faster, but I got those c

      • And that is because, for years now, the Xeons and Core i CPUs have had virtually no difference whatsoever other than their bin level and brand name (you can even find this info in Intel's tech sheets for the Xeons and i series). To the point that I actually prefer the Xeons, because they've run stably, for a consistently longer period of time, without attaching some monster water-cooling system. Some subsets of the Core i5 series are in their entirety down-binned Xeons, instead of what

  • I am mulling over the prospect of zen architecture. How is this done? I'm picturing a flat and roofless square of sand which, through the art of zen consciousness and being in the moment, gives the enlightened zen master all the support needed.
  • I am building a new data management system that takes advantage of every core/thread available. Speed is everything! I upgraded my development machine from an i7-3770K to an i7-6800K last fall, but now I wish I had waited and gotten a Ryzen 1800X instead. The 6-core 6800K gave me about a 50% boost in performance, but I had hoped for more. I am looking to get my hands on an AMD machine to test it out with my program. By taking advantage of all the threads, my system can break up a single database query and run it
    • by dave562 ( 969951 )

      Sounds like an interesting project and challenge to solve. How do you define "big relational tables" in this context? Hundreds of gigabytes? Terabytes?

      Do they fit into RAM or are you having to pull data from the disk subsystem?

      • The tables that were taking a couple minutes to query on Postgres had about 20 columns and 5 million rows. Not huge, but respectable. They are still small enough to fit in RAM (32 GB), but I was testing how long it took to query after a cold boot and all data had to be brought in from disk.
      • Another nice thing about my system is that each relational table is basically a collection of columnar (key-value) stores. If you have a 100-column table with 10 million rows in it and you perform a query like "Select , , FROM WHERE ilike '%Apple%' OR > 1000;" then I only have to read in 5 of the 100 columns from disk no matter how many rows the query matches. With multi-threading, I can have separate threads checking Column32 and Column48 at the same time.
        • Great! The formatting screwed up my select statement. Second attempt without brackets: "Select Column1, Column5, Column19 FROM table WHERE Column32 ilike '%Apple%' OR Column48 > 1000;"
        • by jwhyche ( 6192 )

          I'm not a database person and I really have no clue what you are talking about so I will just say this. Sweet :)

  • Of all the games to pick, this is not a good example.

    It has been positively plagued with random stability issues and glitches. 'Tuning' doesn't account for it, as many many people have invested significant time trying to get it to work well.

    If the game itself offered more engaging gameplay or actual technological advances, it'd be one thing. As it stands there are many superior alternatives. It's just a poorly written, bloated game.

  • Not 100% sure, but I think this particular speedup was due to an issue with non-temporal writes to memory. Such instructions are used in heavily optimized game code but not generally in critical paths elsewhere. They are also known to be highly temperamental even across Intel CPUs. The Ryzen box was synchronizing the memory writes to all cores, which imploded some of the heavily optimized algorithms.

    So far my tests with a 1700X show Ryzen to be an excellent performance cpu, it goes up wel
