Intel Hardware

Intel's Single Thread Acceleration 182

SlinkySausage writes "Even though Intel is probably the industry's biggest proponent of multi-core computing and threaded programming, it today announced a single-thread acceleration technology at IDF Beijing. Mobility chief Mooly Eden revealed a type of single-core overclocking built into its upcoming Santa Rosa platform. It seems like a tacit admission from Intel that multi-threaded apps haven't caught up with the availability of multi-core CPUs. Intel also foreshadowed a major announcement tomorrow around the Unified Extensible Firmware Interface (UEFI) — the replacement for BIOS that has so far only been used in Intel Macs. "We have been working with Microsoft," Intel hinted."
This discussion has been archived. No new comments can be posted.

  • Overclocking? (Score:5, Insightful)

    by Nuffsaid ( 855987 ) on Monday April 16, 2007 @08:18AM (#18749409)
    For a moment, I hoped Intel had come out with something like AMD's rumored reverse-Hyperthreading. That would be a real revolution!
    • Re:Overclocking? (Score:5, Informative)

      by Aadain2001 ( 684036 ) on Monday April 16, 2007 @09:12AM (#18749973) Journal
      I did my MS thesis on a topic very similar to this. Trust me, it's not worth it. While some applications with inherent parallelism (image manipulation, movie encoding/decoding, etc.) can see 2x to 4x improvements when dynamically threaded, the majority of your basic applications are too linear and have too many dependencies between instructions for dynamic threading to be worth the investment in hardware.
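      To put a rough number on that intuition: Amdahl's law caps the speedup at 1/(serial fraction) no matter how many cores you add. A minimal sketch in Python; the parallel fractions below are illustrative guesses, not figures from the parent's thesis:

          # Amdahl's law: speedup is limited by the fraction of the work
          # that stays serial. The fractions here are made-up illustrations.
          def amdahl_speedup(parallel_fraction, cores):
              serial = 1.0 - parallel_fraction
              return 1.0 / (serial + parallel_fraction / cores)

          for p in (0.95, 0.75, 0.25):   # codecs vs. "basic" linear apps
              for n in (2, 4, 32):
                  print(f"parallel={p:.0%} cores={n:2d} -> {amdahl_speedup(p, n):.2f}x")
          # A 95%-parallel codec gets ~3.5x on 4 cores; a 25%-parallel app
          # gets only ~1.3x even on 32 cores.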
      • ACK!!! (Score:2, Funny)

        by Gr8Apes ( 679165 )
        Good lord, let me sell all my web, application, and DB servers then!!!! I've overpaid for 32 CPU systems!!!! ACK!!!
        • Re: (Score:2, Insightful)

          by Chapium ( 550445 )
          "the majority of your basic applications are too linear and have too many dependencies between instructions for dynamic threading to really be worth the investment in hardware"

          Good lord, let me sell all my web, application, and DB servers then!!!! I've overpaid for 32 CPU systems!!!! ACK!!!

          I wouldn't call server applications or dbms "basic applications."
          • Re: (Score:3, Funny)

            by Zombywuf ( 1064778 )
            In which case the bottleneck is not the CPU, it's the idiot writing the software.
          • by treeves ( 963993 )
            NO, no, no! BASIC applications!

          • Nor would server or DBMS applications require dynamic threading at the hardware level. I'm quite certain threading is already built into the software for such applications.

            The original poster is talking about reintroducing threads into single-threaded code, which is an incredibly difficult task, even more complicated than out-of-order execution on scalar code.

            I'm hardly surprised that Intel took the easy way out on this one - the "hard" fixes for this problem are on the ve
    • by Spokehedz ( 599285 ) on Monday April 16, 2007 @09:24AM (#18750121)
      See... I thought it was from that Red Dwarf episode where Kryten put all the CPU time through one processor, exponentially increasing its computing power but shortening its overall lifespan.

      Holly only had 3 minutes before she would be gone forever... And that bloody toaster had to ask if she wanted toast.

      Let's hope that Intel has solved this issue with their new CPUs.

      I for one welcome, in Soviet Russia we compute you, and PROFIT!
      • Re: (Score:3, Funny)

        by baadger ( 764884 )

        Lister: No, I don't want any toast. In fact, no one around here wants any toast. Not now, not ever. NO TOAST. OR muffins! We don't LIKE muffins around here! We want no muffins, no toast, no teacakes, no buns, baps, baguettes or bagels, no croissants, no crumpets, no pancakes, no potato cakes and no hot-cross buns and DEFINITELY no smegging flapjacks!

        Talkie Toaster: Ahh so you're a waffle man.

        ..off topic... so shoot me.

  • It makes perfect sense that you'd still try to speed up single-threaded applications. After all, if you have 4 cores, then any speedup to one core is a speedup to all of them. I realize that's not what this article is about. In this case, they are speeding up one at the expense of the other, but the article's blurb makes it sound like Intel shouldn't be interested in per-core speedups when that is clearly false.
    • by mwvdlee ( 775178 )
      I thought so too, until I actually read TFA.
      This optimization essentially shuts down the other cores in order to let the remaining core run faster.
      So it is counterproductive when you have applications that actually use multiple cores.
  • by EMB Numbers ( 934125 ) on Monday April 16, 2007 @08:26AM (#18749471)
    EFI is used by more than just Apple. For example, HP Itanium systems use EFI. By virtue of being "extensible", EFI is vastly better than the BIOS, which has frankly failed to evolve since Compaq reverse-engineered it in the early 1980s.

    It is well past time that BIOS went to the grave.
    • by pla ( 258480 ) on Monday April 16, 2007 @09:16AM (#18750019) Journal
      By virtue of being "extensible", EFI is vastly better than the BIOS

      Yeah... Why, that nasty ol' standard BIOS makes hardware-level DRM just so pesky. And vendor lock-in for replacement hardware? Almost impossible! Why, how will Dell ever survive if it can't force you to use Dell-branded video cards as your only upgrade option? And of course, WGA worked so well, why not include it at the firmware level? Bought an "OS-less" PC, did we? No soup for you!


      Sorry, EFI has some great potential, but it has far too much potential for vendor abuse. The (somewhat) standardized PC BIOS has made the modern era of ubiquitous computers possible. Don't take a "step forward" too quickly without first looking to see if it will send you over a cliff.
      • Re: (Score:3, Informative)

        by ThisNukes4u ( 752508 ) *
        And besides, most modern OSes basically relegate the BIOS to the back burner. It's not like we're still calling BIOS interrupts from DOS anymore.
        • by Bozdune ( 68800 )
          It's not like we're still calling BIOS interrupts from DOS anymore.

          Speak for yourself! I, for one... oh, never mind.
        • And besides, most modern OSes basically relegate the BIOS to the back burner. It's not like we're still calling BIOS interrupts from DOS anymore.

          It's not as good as you hope. I have three new machines, all with BIOS bugs that are a real problem: a SiS mobo that doesn't set up my MTRR registers correctly and so causes the machine to run murderously slow unless I tell the kernel to map out the last bit of RAM or set up my own MTRR registers by hand, an Asus mobo that causes all kinds of problems and kernel pani
      • by 99BottlesOfBeerInMyF ( 813746 ) on Monday April 16, 2007 @10:43AM (#18751171)

        Yeah... Why, that nasty ol' standard BIOS makes hardware-level DRM just so pesky.

        Not really. It just turns improvements, DRM included, into hacks. Add a TPM module to a BIOS-based system and include support in the OS, and it will be just as effective for MS's purposes as an EFI one. BIOS makes modern hardware a pain in the butt; the fact that DRM modules are modern hardware is sort of orthogonal to the issue.

        And vendor lock-in for replacement hardware? Almost impossible! Why, how will Dell ever survive if it can't force you to use Dell-branded video cards as your only upgrade option?

        Umm, Dell is not even the biggest player in a market that is not monopolized. If Dell requires Dell-branded video cards and people care (most probably won't), then people will switch to a vendor that does not do this, and Dell will change or die. I don't think Dell or any other PC vendor has enough influence to force such a scheme upon the existing graphics card makers. Only MS really has that much influence, and I don't think they have the motivation.

        Bought a "OS-less" PC, did we? No soup for you!

        I don't think you have to worry about this problem unless you're running Windows on it.

        Sorry, EFI has some great potential, but it has far too much potential for vendor abuse.

        I disagree. I don't see that vendors will abuse this any more than they already abuse BIOS. In any case, the change is coming. You just need to decide which side of the curve you want to be on. (Typed from an EFI laptop.)

    • Open Firmware [openfirmware.org] has been around a lot longer than Intel's EFI, and is used by Sun, IBM and Apple for their RISC boxes.

      There is a free-as-in-speech implementation for the PeeCee called OpenBIOS [openbios.org].

      It's implemented in FORTH.

  • EFI (Score:2, Informative)

    by Anonymous Coward

    Unified Extensible Firmware Interface (UEFI) -- the replacement for BIOS that has so far only been used in Intel Macs

    Really. I know Google is hard to use, but even Wikipedia [wikipedia.org] would have given some detail on EFI history. (Hint: Itanium only ever used EFI). And it turns out that Macs are not even the first x86 machines to use it, either:

    In November 2003, Gateway introduced the Gateway 610 Media Center, the first x86 Windows-based computer system to use EFI. The 610 used Insyde Software's InsydeH2O EFI firmw

  • A Marketing Triumph (Score:5, Informative)

    by sibtrag ( 747020 ) * on Monday April 16, 2007 @08:28AM (#18749491)
    Intel's "Enhanced Dynamic Acceleration Technology" is a triumph of marketing. Notice how the focus is on the transition where one core becomes inactive and the other one speeds up. This is the good transition. The other transition, where the chip workload increases & voltage/frequency are limited to keep within a power envelope, is called "throttling" and is much disliked in the user community.

    Don't get me wrong, this is valuable technology. It is important that microprocessors efficiently use the power available to them. Having a choice on a single chip between a high-performance, high-power single-thread engine & a set of lower-performance, lower-power engines has great promise. But, the way this is presented is a big victory for marketing.
    • The other transition, where the chip workload increases & voltage/frequency are limited to keep within a power envelope, is called "throttling" and is much disliked in the user community.

      Who cares, when you won't notice the throttling since the throttled core was sitting idle anyway? They're not slowing down the core you're using.
    • Re: (Score:3, Informative)

      by Kjella ( 173770 )
      If you look at the benchmarks, it's quite clear that you can either get the maximum clock speed *or* the full number of cores. How likely is it, really, that you'll have four cores all at 100% load? Not unless you're doing something embarrassingly parallel that's better left to a cluster.

      Basically, you have a thermal envelope, and consumption rises with clock speed squared. So you can either have four cores at 1GHz (4GHz of aggregate processing power) or one core at 2GHz (2GHz of processing power) with the same power consump
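      Spelling out that envelope arithmetic as a toy model (this keeps the parent's power-goes-as-frequency-squared simplification; real dynamic power is closer to f*V^2 with voltage tracking frequency):

          # Toy power-envelope math under the assumption that power ~ f^2.
          BUDGET = 4.0  # arbitrary power units for the whole package

          def package_power(freqs_ghz):
              return sum(f ** 2 for f in freqs_ghz)

          configs = {
              "4 cores @ 1.0 GHz": [1.0] * 4,  # 4 GHz aggregate throughput
              "1 core  @ 2.0 GHz": [2.0],      # 2 GHz aggregate, same power
          }
          for name, freqs in configs.items():
              p = package_power(freqs)
              print(f"{name}: {sum(freqs):.1f} GHz total, power {p:.1f} of {BUDGET}")
          # Both configurations burn the same 4.0 units: many slow cores buy
          # aggregate throughput, one fast core buys single-thread speed.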
    • Some people didn't get it. Here:

      This chip has to throttle itself when you use all the cores. (probably a power/heat issue)

      People hate throttling. Throttling is not marketable.

      Intel marketing turned things around, saying that the chip speeds up (a.k.a. "stops throttling") when running single-threaded apps. Speeding up is good! It's like the old turbo buttons.

      It's a sane idea. I'd been expecting to see chips that can't run at full speed continuously because of heat issues; this is pretty much the same thing.
  • Twice the speed? (Score:3, Insightful)

    by Aladrin ( 926209 ) on Monday April 16, 2007 @08:29AM (#18749495)
    The article suggests that this technology makes 1 core run twice as fast by basically disabling the second core for a while. They go on to 'prove' how effective it is by running a photo processing thing that they don't explain. It runs twice as fast this way.

    So... If they can have 2 cores at full speed, or 1 core at double speed... WHY THE FUCK do they have 2 cores in the first place?
    • Re: (Score:2, Informative)

      because it's better to have separate cores with separate pipelines for multiple threads than to share a single core.

      because of pipelining, if you have to swap between tasks you actually lose a large number of instructions, which means switching tasks often on a single core is significantly worse for performance than using multiple cores.
    • by 0100010001010011 ( 652467 ) on Monday April 16, 2007 @08:49AM (#18749699)
      Because when I'm encoding a movie I want my UI to be responsive.
    • Think of quad-cores or more rather than dual cores. Having four cores at a moderate clock speed where one can get ramped up to a high clock speed will give you the large speed boost of many slower cores for multithreaded applications and a high-clock-speed single core for single-threaded applications. The four or more slower cores will beat the one higher-clocked one in multithreaded applications.
    • When one core is idle, the other one will only speed up by 200 or 400 MHz. So you have a choice between, say 2.4+2.4 GHz or 2.6+0 GHz.
    • Because, despite what you claim, you didn't read the article (properly)? Fair enough, it was almost a whole page without the adverts.

      The doubling in speed is for a different technology - "turbo memory" under a particular (memory-bound) application.

      The speed-up for "overclocking" the core is unlikely to be as much as 2x. When you have a multi-threaded app (or several apps) then you want both cores because you'll get more performance that way. When one core is not being utilised the other core can increase its
  • UEFI? (Score:2, Interesting)

    by Noryungi ( 70322 )
    While I am all for having something a bit more intelligent than the BIOS to init a computer, I can't help but wonder... Does this UEFI integrate DRM functions? Is this the Trojan Horse that will make all computers DRM-enabled?

    Inquiring minds want to know!
    • Re:UEFI? (Score:5, Informative)

      by KonoWatakushi ( 910213 ) on Monday April 16, 2007 @09:17AM (#18750029)
      Rather than answer that question, I will ask another. Why would hardware manufacturers such as Intel and AMD want to limit their market by crippling the hardware to only run certain software? It is unlikely in the extreme that open source operating systems will be locked out, and that is what really matters.

      As I understand it, UEFI will enable some thoroughly nasty DRM, but only so far as the OS vendor chooses to take it. Apple and Microsoft will almost certainly make it a miserable experience for all involved, but will probably tire of shooting themselves in the feet at some point. There are alternatives, after all, and they are looking better every day.
      • I noted the anti-Apple remark. Kinda pointless when Apple has already proved you wrong: they already use this technology, they don't limit which OSes run on their machines, and they actively promote alternative OSes cohabiting with OS X.
        • You go right ahead and load a DRM-protected song from iTunes onto your SanDisk Sansa MP3 player using approved OS X / iTunes functionality. Once you've done that, you can make the claim that Apple doesn't screw their customers. Apple isn't bad overall, but they're just as much a villain as Microsoft in this DRM thing.

      • Re: (Score:3, Interesting)

        by Kjella ( 173770 )
        Locked out, no. Let in, also no. Linux is going to suffer the death of a thousand needles when "secure" movies, "secure" music, "secure" webpages, "secure" e-mail, "secure" documents, "secure" networks, "secure" IM and whatnot get propagated by the 98% running DRM operating systems. I mean, look how many people are frustrated that Linux doesn't play MP3s or DVDs out of the box, no matter how little it's Linux's fault, and even though there is an easy fix.

        What if the problem is ten times worse, and there is no easy fix? Are y
    • UEFI makes it easier to do nasty things with a TPM, but it is not a guaranteed problem. The Intel Macs have EFI and TPMs, but all they use the TPM for is to let OS X confirm that it is running on an Apple computer. The presence of TPMs in Intel Macs is probably just a sign that Apple didn't bother making their own motherboard/chipset design from scratch, and instead just made the Intel designs fit their form factors.
  • "Caught up"? (Score:5, Insightful)

    by Z0mb1eman ( 629653 ) on Monday April 16, 2007 @08:31AM (#18749519) Homepage
    It seems like a tacit admission from Intel that multi-threaded apps haven't caught up with the availability of multi-core CPUs.

    Or maybe Intel, unlike the story submitter, knows that many apps simply do not lend themselves to multithreading and parallelism. It's not about "catching up".

    Multi-core for multithreaded apps? Check.
    Trying to get each core as fast as possible for when it's only used by one single-threaded app? Check.

    Makes sense to me.
    • Also, and I feel dumb for saying this because it's so obvious (I'm not even an expert on these things), but you don't really need all of your applications to be multithreaded in order to benefit from multiple cores. I guess I'm assuming that I'm not the only person who runs multiple applications at the same time.

      Of course, it's more likely that you'll be taking good advantage of 8 cores if your apps are multithreaded, but if you're running two single-threaded applications on a dual core system, shouldn't

      • You're right of course, but I think it's more of the case that you don't need lots of cycles often, but when you do you really want as much as you can get.

        Most software people use doesn't (or shouldn't) use 5% of the processor power available to it. Of course, when you fire up the latest 3D game, ray-tracer, or other truly CPU-intensive app, you need all the cycles you can ring from every core. Most of these are multitaskable or parallelable, but it's not always obvious or easy how to do it.

        Besides, how e
        • Augh! I'm such a spelling Nazi, I have to correct myself. That's "wring" of course. I usually try to limit my Nazi-ism to setting a good example for others, but sometimes ya just gotta speak up. /me breaks out the sackcloth and ashes.
    • knows that many apps simply do not lend themselves to multithreading and parallelism.

      This is a myth, propagated by lazy developers and cheap end users.

      There are some classes of computing problems that can't be parallelized, but very few of those problems are the applications that we want to run faster on modern computers.

      The only application that shows up on benchmark sites that might not be easily parallelizable is file compression (i.e. "WinRAR"), and if that ever needs to be parallelized a small algor

  • by gEvil (beta) ( 945888 ) on Monday April 16, 2007 @08:32AM (#18749545)
    Ahhh, journalism at its finest: "The new chips will be able to overclock one of the cores if the other core is not being used." Then two paragraphs later: "This is not overclocking. Overclocking is when you take a chip and increase its clock speed and run it out of spec. This is not out of spec."

    That said, this seems to make perfect sense to me. If they're able to pump all that power into a single core while the other one is asleep/idle, all while keeping it within its operating parameters, then I'm all for it.
  • In the past, chips were limited to a maximum voltage because of the risk of long-term damage at higher voltages. As a result, the voltage could be cranked up close to the maximum, providing high-frequency performance. Around 2004, however, OEMs started becoming concerned about cooling extremely high-power chips like Tejas, and the chip manufacturers had to start pushing power consumption back down. Now we have chips that could operate at higher frequencies if the power budget were higher. When you
    • by mwvdlee ( 775178 )
      So basically the limiting factor in CPU design nowadays is power consumption? It can only use up to a set quantity of power, regardless of the number of cores?
  • The link to "single core overclocking" states:

    "This is not overclocking. Overclocking is when you take a chip and increase its clock speed and run it out of spec."

    This is just a technique to stay under the specified power envelope. Nowadays the real problem is not speed but power usage. Note that in single-thread mode the CPU will run fewer instructions per watt... and I guess for every 25% more CPU frequency you use 75% more power, or something like that.
    • It might also be a fancy keyword for shared cache, where one core can use all the cache if the other one is sleeping or not very active. Intel has previously jumped a few fences and not implemented fully shared cache unlike AMD.

      Btw, Robson? Rubs on, rubs off...
  • "We have been working with Microsoft," Intel hinted.

    Now I know to avoid it.
  • by pzs ( 857406 ) on Monday April 16, 2007 @08:43AM (#18749643)
    As many slashdotters are in software development or something related, we should all be grateful that multi-core processors are becoming so prevalent, because it will mean more jobs for hard-core code-cutters.

    The paradigm for using many types of software is pretty well established now, and many new software projects can be put together by bolting together existing tools. As a result, there has been a lot of hype about high-level application development frameworks like Ruby on Rails, where you don't need a lot of programming expertise to chuck together a web-facing database application.

    However, all the layers of software beneath Ruby on Rails are based on single-threaded languages and libraries. To benefit from the advances of multi-core technology, all that stuff will have to be brought up to date and of course making a piece of code make good use of a number of processors is often a non-trivial exercise. In theory, it should mean many more jobs for us old-schoolers, who were building web/database apps when it took much more than 10 lines of code to do it...

    Peter
    • by mr_mischief ( 456295 ) on Monday April 16, 2007 @08:54AM (#18749739) Journal
      Taking advantage of multiple cores with a single-threaded per-client application just requires having more than one simultaneous user on your server. It doesn't at all require having a multi-threaded application per client. Most HTTP connections don't do anything very fancy, and really won't be helped much internally by multiple cores. The web server software itself, the database server, the fact that popular sites (or shared servers) get more than one visitor at a time, and similar concerns will make a much bigger difference with multiple cores than making a CRUD application or a blog multi-threaded.
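      The parent's model, one single-threaded worker per client, is essentially the classic pre-fork server. A minimal sketch using only the Python standard library (Unix-only because of fork; the handler is a stand-in for illustration, not any real server's code):

          # Pre-fork sketch: N single-threaded processes share one listening
          # socket, so the kernel spreads clients across cores with no
          # multi-threaded application code at all.
          import os
          import socket

          def serve(listener):
              while True:
                  conn, _ = listener.accept()   # kernel picks a waiting worker
                  with conn:
                      conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello from pid %d\r\n"
                                   % os.getpid())

          listener = socket.socket()
          listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          listener.bind(("127.0.0.1", 8080))
          listener.listen(64)

          for _ in range(os.cpu_count() or 2):  # one worker per core
              if os.fork() == 0:
                  serve(listener)               # children never return
          os.wait()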
    • I know that at least one higher-level programming environment can make use of multiple processors relatively easily for web applications. .NET has a relatively easy way to make web-garden-based applications.
    • making a piece of code make good use of a number of processors is often a non-trivial exercise.

      True, but many of the benefits of a multicore system don't necessarily come from running one process with shared data on multiple cores (i.e. threading), but from running multiple fairly isolated processes in parallel. One could be the Ruby interpreter, another your database, another a monitoring or security application, another a backup daemon, and so on.

      Concurrent programming in shared data environm

      • And besides, some tasks are inherently sequential like a finite state machine for example.

        A finite state machine is an algorithm not a task. You can achieve parallelism by using a different algorithm, or by using a splitting heuristic that divides the work among multiple threads running the non-parallel algorithm.
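        One concrete version of that splitting heuristic, as a hedged sketch: run each input chunk through the FSM from every possible start state in parallel, then chain the resulting per-chunk transition tables. The toy FSM below (parity of 'a' characters) is made up for illustration:

            # Parallel FSM via speculation over start states (Python).
            from concurrent.futures import ProcessPoolExecutor

            STATES = (0, 1)  # toy FSM: parity of 'a' characters

            def step(state, ch):
                return state ^ 1 if ch == "a" else state

            def chunk_table(chunk):
                # For each possible entry state, compute the exit state.
                out = {}
                for s in STATES:
                    cur = s
                    for ch in chunk:
                        cur = step(cur, ch)
                    out[s] = cur
                return out

            def run_parallel(text, workers=4):
                size = max(1, len(text) // workers)
                chunks = [text[i:i + size] for i in range(0, len(text), size)]
                with ProcessPoolExecutor(workers) as pool:
                    tables = pool.map(chunk_table, chunks)
                state = 0
                for t in tables:        # cheap sequential stitch at the end
                    state = t[state]
                return state

            if __name__ == "__main__":
                text = "abcaaxya" * 1000
                # The stitched parallel run matches a plain sequential run.
                assert run_parallel(text) == chunk_table(text)[0]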

        • Yes, you are right, Mr. CS terminology nazi ;-) An FSM is an algorithm. I was just assuming that an FSM is the only and best algorithm for that one specific hypothetical task. It was just one of the algorithms that came to mind that I thought was inherently not concurrent/parallelizable, so one could have a thousand cores but that one finite state machine would still have to go from state to state in a sequential manner on one single core.
    • by tji ( 74570 )
      Not really. Multi-core doesn't mean you need multi-threaded apps to benefit from it. Take a look at the processes running on your Linux/Mac/Windows box sometime... there are a lot of them. While process A is running on CPU 0, it doesn't need to be switched out to let process B run on CPU 1.

      Web apps, like Ruby on Rails, are a good example of why multi-threading is not needed. Web servers handle many simultaneous requests, so the workload is easily divisible based on individual requests. The web server
  • Multi-core CPUs (Score:5, Informative)

    by nevali ( 942731 ) on Monday April 16, 2007 @08:51AM (#18749713) Homepage
    With all this talk of multi-threading on multi-core CPUs, Slashdotters appear to have forgotten that we all run multi-tasking operating systems. An OS isn't forced to schedule all of the threads of a single application between cores: it's perfectly capable of spreading several different single-threaded applications between cores, too.

    And no, EFI didn't appear first on Intel Macs. Intel Macs weren't even the first x86-based machines to employ it.
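    A quick way to see the scheduling point in action, sketched in Python (timings will vary by machine; multiprocessing here stands in for "two unrelated single-threaded apps"):

        # Two independent single-threaded workloads fill two cores when
        # launched as separate processes; neither "app" uses threads.
        import multiprocessing as mp
        import time

        def busy(n):
            total = 0
            for i in range(n):
                total += i * i
            return total

        if __name__ == "__main__":
            N = 20_000_000
            t0 = time.perf_counter()
            busy(N); busy(N)               # one "app" after the other
            serial = time.perf_counter() - t0

            t0 = time.perf_counter()
            with mp.Pool(2) as pool:       # two "apps" side by side
                pool.map(busy, [N, N])
            parallel = time.perf_counter() - t0
            print(f"serial {serial:.2f}s vs. two processes {parallel:.2f}s")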
  • You need cards with EFI ROMs to boot in an EFI system. With a Mac Pro, when you put in a non-EFI video card you get no video until you boot Windows.
    And a non-EFI RAID card may not be able to boot in an EFI system.
    • In theory, couldn't you have an extension that provided a driver that would work until the OS boots? They do like to talk about how extensible EFI is...
      • But you need to have that extension in the card's ROM
          But you need to have that extension in the card's ROM

          Why?

          And on top of that, most add-in cards today are firmware upgradable.

          But can you explain precisely why it would have to be in the ROM? It would have to be there to not have to be somehow installed, I'll grant you that...

            • Cards today are firmware-upgradable, but they may need a bigger ROM to hold both the BIOS and the EFI images.
              Also, a RAID / SATA / IDE / SCSI card needs a ROM to be able to boot from it.
        • by ivan256 ( 17499 )
          Only if you want the new video card to be "Plug and Play"...

          Otherwise the driver can live on the system. Even on the system disk.
  • I suspect this isn't so much about power as it is about temperature. With a dual-core chip, you expect both cores to contribute 50% to the heat load. If one core's throttled back, you can overdrive the other core without the chip overheating.
    • With a dual-core chip, you expect both cores to contribute 50% to the heat load. If one core's throttled back, you can overdrive the other core without the chip overheating.

      This is not the case, because the heat does not spread out on the chip that much. So the peak temperature of a core doesn't depend on the behavior of the other cores. The fact that Intel is using EDAT indicates that Merom is power-limited but not temperature-limited.
  • We want it now! Run It On The Silicon!

    Give us an FPGA coprocessor on chip.

     
    • Coprocessors are certainly coming back to computers; just look at AMD's Torrenza and Intel's version of that (Geneseo?). These won't be on-chip, but they will certainly hook up to the CPU/chipset over high-speed links.

      Putting the FPGA on-chip would be a bit faster, but make for more expensive chips due to not only putting the FPGA on-chip but making different chips with different kinds of FPGAs or no FPGA. I'd settle for an off-chip, upgradeable FPGA rather than have to upgrade the CPU to upgrade the FPGA co
  • What this amounts to is taking a part that is qualified to run at, say, 2.8GHz, and selling it with a default clock of 2.2GHz in order to meet TDP. Then, when one core is disabled, you crank up the other core's clock to 2.8GHz and stay within TDP. This sounds like a good idea for mobile computing, since power (i.e. battery life) is by far the most important thing. But for servers, I think you'd want to sell as many chips as you can with the highest rated clock freq, since those are higher margin.
  • 'Intel also foreshadowed a major announcement tomorrow around the Unified Extensible Firmware Interface (UEFI) -- the replacement for BIOS that has so far only been used in Intel Macs. "We have been working with Microsoft," Intel hinted.'

    Ten bucks says this heralds a new age of DRM.

  • Sum the Cores! (Score:2, Interesting)

    by onebadmutha ( 785592 )
    I understand that it doesn't work at this point, sorta like "don't cross the streams" from Ghostbusters. But really, we're talking about a long series of math problems at this point, so why not interleave? I understand the math is hard; that's why Intel has all of those PhDs. Getterdun. I wants me some Quake 9 at 4.2 billion frames per second. Plus, programming multithreaded is all superhardish!
    • Re: (Score:3, Informative)

      by aXis100 ( 690904 )
      why not interleave

      Because many of the CPU math results depend on other results in the chain. Spreading those dependent operands across multiple CPUs may not be efficient.
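      A tiny illustration of such a dependency chain, in Python (the constants are arbitrary):

          # Each iteration needs the previous result, so there is nothing
          # independent to hand to a second core.
          def dependent_chain(x, n):
              for _ in range(n):
                  x = x * 1.000001 + 1.0   # step i waits on step i-1
              return x

          # Contrast: a plain sum has the same one-running-value shape, but
          # addition reassociates, so per-core partial sums can be combined.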
