Overclocking the AMD Spider

An anonymous reader writes "AMD has released two videos that show an overview of the new AMD Spider platform and how easy it is to overclock it with a single tool. The AMD Spider is based on AMD Phenom processors, the newly released ATI Radeon HD 3800 series discrete graphics and AMD 7-Series chipsets."

  • by l_bratch ( 865693 ) <luke@bratch.co.uk> on Saturday November 17, 2007 @09:59AM (#21389397) Homepage
    Does he even show any overclocking?

    Maybe it was a problem at my end, but he just talked about this mythical tool for a while, and then, just as you really start to get into it, the video ends.
  • by florescent_beige ( 608235 ) on Saturday November 17, 2007 @10:12AM (#21389467) Journal
    Here. [youtube.com]
  • by florescent_beige ( 608235 ) on Saturday November 17, 2007 @10:18AM (#21389501) Journal
    In the demo, the presenter overclocks a Phenom 9500 (2.2 GHz) to 3 GHz.
  • DISCRETE (Score:4, Informative)

    by Sockatume ( 732728 ) on Saturday November 17, 2007 @10:39AM (#21389633)
    Discrete = distinct, separate. Discreet = subtle, low-key. That is all.
  • by Anonymous Coward on Saturday November 17, 2007 @01:07PM (#21390627)
    Since when is forward the wrong direction?
    What's wrong with having 4 graphics cards? Especially in this case ones that _aren't_ heavy on the noise or wattage side. 4 cards could be used for graphics, or some combination of graphics and physics, or just heavy "general purpose" compute power (where I use the term "general purpose" as loosely as can be applied to a graphics card...make no mistake that the kinds of apps that a GPU can accelerate are rather specialized).
  • Re:What's new? (Score:5, Informative)

    by moosesocks ( 264553 ) on Saturday November 17, 2007 @02:00PM (#21390981) Homepage

    I mean, the only "innovation" here is that one company is making the CPU, chipset and graphics card. You know, like Intel have been for years. But AMD make one where the graphics card is targeted at gamers. Whoop-de-fucking-do.


    Not quite. The GPU's role is shaping up to be much more important than "just games".

    Newer operating systems rely extensively on the GPU to render the desktop, apply various effects to it, and so on. These tasks can be as simple as alpha blending, or as complex as providing a hardware-accelerated version of Photoshop.
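
    (To make that concrete: below is a minimal CPU-side sketch of straight alpha blending, the simplest of those compositing tasks. The 8-bit RGBA layout, the names, and the toy values are just assumptions for illustration, not any particular OS's code; a compositor does this for every pixel of every translucent window on every frame, which is why handing it to the GPU pays off.)

        #include <stdint.h>
        #include <stdio.h>

        /* One 8-bit RGBA pixel; the layout is an assumption for this sketch. */
        typedef struct { uint8_t r, g, b, a; } rgba8;

        /* Straight alpha blend of src over dst, per channel:
           out = src * alpha + dst * (1 - alpha)                          */
        static rgba8 blend_over(rgba8 src, rgba8 dst) {
            rgba8 out;
            uint32_t a = src.a;                       /* 0..255 */
            out.r = (uint8_t)((src.r * a + dst.r * (255u - a)) / 255u);
            out.g = (uint8_t)((src.g * a + dst.g * (255u - a)) / 255u);
            out.b = (uint8_t)((src.b * a + dst.b * (255u - a)) / 255u);
            out.a = 255;                              /* assume an opaque backdrop */
            return out;
        }

        int main(void) {
            rgba8 window  = { 255, 0, 0, 128 };       /* half-transparent red      */
            rgba8 desktop = { 0, 0, 255, 255 };       /* opaque blue sitting below */
            rgba8 seen = blend_over(window, desktop);
            printf("blended pixel: %d %d %d\n", seen.r, seen.g, seen.b);
            return 0;
        }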

    It's not quite there yet on Windows (Vista implements it rather poorly), but Linux and OS X have been using OpenGL acceleration on the desktop for quite some time now. In what might be a first for a 'desktop' feature, support for it on Linux is actually quite good, and provides a rather nice UI experience (once you turn all of Compiz's superfluous effects off, that is).

    I'm going to jump in here as a part-time Apple fanboy, and also point out that Apple is very heavily pushing its set of accelerated 2D graphics libraries [arstechnica.com], encouraging developers to integrate them into their applications to provide a more natural and fluid experience. In 10.5, OpenGL rendering is pervasive in almost every part of the user interface. Once you've got that framework in place, it becomes very easy to do all sorts of fun stuff without worrying about bogging down the CPU.

    Even fast modern CPUs perform miserably when it comes to graphics operations, as they're not designed to cope with vector and matrix operations. With high-resolution displays becoming prevalent these days, it makes a good deal of sense to offload as much of the processing as possible to the GPU. If you implement this properly in the operating system, it's even transparent to the users AND developers. It's very much a no-brainer.
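
    (Likewise, "vector and matrix operations" mostly means things like the 4x4 matrix times 4-vector transform below - a minimal sketch with made-up types, not taken from any real graphics API. A CPU grinds through these four dot products one vertex at a time; a GPU runs thousands of them in parallel, which is the whole point of offloading.)

        #include <stdio.h>

        /* Types are assumptions for the sketch, not any particular API. */
        typedef struct { float m[4][4]; } mat4;
        typedef struct { float v[4]; }    vec4;

        /* The per-vertex workhorse: out = M * p, i.e. four dot products. */
        static vec4 transform(const mat4 *m, vec4 p) {
            vec4 out;
            for (int row = 0; row < 4; row++) {
                out.v[row] = m->m[row][0] * p.v[0]
                           + m->m[row][1] * p.v[1]
                           + m->m[row][2] * p.v[2]
                           + m->m[row][3] * p.v[3];
            }
            return out;
        }

        int main(void) {
            /* Scale x and y by 2, leave z and w alone. */
            mat4 scale = {{ { 2, 0, 0, 0 },
                            { 0, 2, 0, 0 },
                            { 0, 0, 1, 0 },
                            { 0, 0, 0, 1 } }};
            vec4 p = {{ 1.0f, 2.0f, 3.0f, 1.0f }};
            vec4 q = transform(&scale, p);
            printf("%.1f %.1f %.1f %.1f\n", q.v[0], q.v[1], q.v[2], q.v[3]);
            return 0;
        }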

    Many GPUs these days also provide accelerated support for video encoding/decoding, which is also a rather strenuous task for a normal desktop CPU to handle efficiently. Video editing applications can also take advantage by providing realtime previews of HD video rendered with effects applied to it.

    Anyone who's done a substantial amount of video editing knows just how welcome this would be. Ironically, it's a shift back to an older paradigm: the Amiga Video Toaster included an array of specialized graphics hardware to do all of the dirty work, and did it in real time.

    This might also translate into some sort of energy savings, given that modern CPUs consume very little power when idle, although this is pure speculation on my part.

    There are all sorts of fun applications for this sort of technology once the frameworks are in place. Read up on Apple's 'Core' set of libraries for a fascinating peek into the future of UI and software design. Pixelmator [pixelmator.com] is one of the first applications to take extensive advantage of these features, and is an absolute joy to work with. Although its feature set isn't as extensive as Photoshop's, it's damn impressive for a 1.0 product, and I daresay it's a hell of a lot more useful to mainstream audiences than the GIMP is, and has a sexy UI to boot. Dragging the sliders when tweaking a filter and watching the ENTIRE image smoothly change as you drag seems like nirvana to photographers and graphic artists (even on somewhat old hardware).
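
    (And to put a number on the slider example: a photo-sized image means tens of millions of channel values per adjustment. Below is a minimal sketch of a brightness filter over a plain float-per-channel buffer - the layout and names are assumptions, not Pixelmator's code. Every element is independent of every other, which is exactly the shape of work a GPU can redo fast enough to track a slider, while a CPU loop like this falls behind on large images.)

        #include <stddef.h>
        #include <stdio.h>

        /* Brightness filter: scale every channel by `gain`, clamped to [0, 1].
           Each element is independent, so on a GPU this becomes one tiny
           program run once per pixel instead of one long serial loop.        */
        static void apply_brightness(float *channels, size_t count, float gain) {
            for (size_t i = 0; i < count; i++) {
                float v = channels[i] * gain;
                channels[i] = (v > 1.0f) ? 1.0f : v;
            }
        }

        int main(void) {
            float image[6] = { 0.2f, 0.4f, 0.6f, 0.8f, 0.5f, 0.9f };  /* toy "image" */
            apply_brightness(image, 6, 1.5f);
            for (int i = 0; i < 6; i++) printf("%.2f ", image[i]);
            printf("\n");
            return 0;
        }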

    So yes. This is a big deal. Everyday desktop software is transitioning toward relying upon the GPU for basic tasks, and AMD has stepped up to the plate to provide a decent set of entry-level graphics hardware to fill the gap. Remember the state of video hardware before nVidia came along and introduced the TNT2 and later the GeForce2 MX? Before them, decent 3D graphics hardware was an extravagant luxury. Afterward, it was easily affordable and nearly ubiquitous.

    I should also point out that Intel's graphics hardware is absolute shit. That comparison's just not fair.
  • by tlhIngan ( 30335 ) <slashdot.worf@net> on Sunday November 18, 2007 @12:44AM (#21395147)

    Has AMD always been OC friendly? I remember when Intel was actively discouraging the practice so as not to have sales of more expensive CPUs undercut.


    Well, traditionally AMD always had supply issues, so their chips tended not to have much overclocking headroom (they had problems with yields on higher-end chips, so there were no high-end parts to remark as lower-end ones). They were easy to overclock in the mechanical sense, though: usually all it took was conductive ink to restore the bridges that set the clock frequencies and multipliers of the clock generator. You could get some nice overclocks, but they tended to be quite rare.

    Intel, which usually doesn't have production or supply issues, often had trouble supplying low-end chips because their chips could almost always perform much faster, which is why they always discouraged overclocking. Oftentimes a part was marked slower just to meet market demand, but was perfectly capable of going faster (or much faster). Of course, from time to time they also had a part that was only ever going to perform as binned, so it didn't always work, but with Intel the chances of that were very, very small. (The one exception I can think of is the ill-fated 1.13 GHz Pentium III - basically a roughly 1 GHz part overclocked at the factory as a marketing effort; it ran hot, needed lots of power, and was still unstable. I believe Intel was just stretching a design that could never go much beyond 1 GHz...)

    Annoyingly for Intel, every time they introduced a new process the low-end chips would often be wildly overclockable: yields were good enough that almost every part could perform much faster than its rating, so genuinely slow parts were in short supply. Easily 10%, and 20+% overclocks were possible as well.
