Build Your Own Render Farm 114

Another installment of Tom's Hardware's how-to articles has a look at what it might take to build your own render farm. The article looks at everything from top-to-bottom roll-your-owns to buying things pre-built and the pricing insanity that goes along with it. "If you are working as a freelance artist in the above-mentioned media, toying with the idea, or doing so as a hobbyist, then building even a small farm will greatly increase your productivity compared to working on a single workstation. Studios can even use this piece as a reference for building new render farms, as we're going to address scaling, power, and cooling issues. If you're looking at buying a new machine and are thinking of spending big bucks to get a bleeding-edge system, you might want to step back and consider whether it would be more effective to buy the latest and greatest workstation or to spend less by investing in a few additional systems to be used as dedicated render nodes."
This discussion has been archived. No new comments can be posted.

Build Your Own Render Farm

Comments Filter:
  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Friday July 17, 2009 @02:53PM (#28733267) Journal
    Considering this article and the last one from Tom's Hardware [slashdot.org], I cannot wait for the next Tom's Hardware articles:

    Build Your Own Annoyingly Segmented 10 Page Article!

    How to Run Out of Practical DIY Ideas!

    Host Your Own Ads for Under $1000!

    Turn 50% of Your Site into Flash Ads in One Day!

    How to Fake Content!

    Embedding Popup Ads the Automated Way!

    Going from Pioneer to Slowly Losing Relevance in 10 Easy Steps!

    Earn Pennies a Day By Inconveniencing Your Users!

    R.I.P. Tom's Hardware [wikipedia.org].

    • I use AdBlock and mostly just look at the Articles section; it's still mostly the same old Tom's we remember.

      I think there just aren't that many articles about cards coming out lately, and they have to do something to fill the time. I still like it okay (with adblock).

      • Re: (Score:2, Informative)

        Firefox + AutoPager + Adblock Plus + NoScript + Stylish and problem's gone. Yeah, browsing the web is a lot more complicated than it used to be...
    • Re: (Score:3, Informative)

      by jedidiah ( 1196 )

      Both the last "bad idea" and this one really aren't that far removed from a lot of MythTV setups that I and some of the other users have. MythTV supports a nice little cluster/farm setup where work can be shoved out to other machines that are part of the Myth network. I have 3 frontend boxes, 2 backend boxes, and another desktop machine that can all share video processing duties.

      Large disk arrays are not terribly unusual either.

    • By the way: Does anyone know a replacement? Something with complete comparison charts on graphics cards, CPUs, etc. Something serious that is not bought by the hardware companies.

      • I'd be happy to field any questions about anything we do editorially. No, Navid, none of our writers are "bought out" by anyone.
        Chris
        Managing Editor, Tom's Hardware
        • Re: (Score:3, Insightful)

          by Hurricane78 ( 562437 )

          Well, if you of all people state that, then it must be true, mustn't it, managing editor *with a huge interest in the site not looking bad* "Chris". ;)

          But let's just say that, after all the problems with your tests, I cannot trust you anymore. If you want to regain that trust, try to make your testing methods really clear, and don't fall for so many beginner's errors and strange things that the first person in the comments can point out in about five minutes. ^^
          I recommend getting some feedback from external

          • Thanks for the feedback.
            There's actually a page of test information in every story. You could even reproduce the results if you so desired.
            There are also several pages of comments that go along with each story, in which the authors participate very regularly :)
            Hope that helps address your concerns!
            Chris
  • Unlatest (Score:4, Insightful)

    by fm6 ( 162816 ) on Friday July 17, 2009 @02:57PM (#28733315) Homepage Journal

    ... or to spend less by investing in a few additional systems to be used as dedicated render nodes.

    Especially if you buy used systems. Computer hardware depreciates fast.

    • Re: (Score:3, Informative)

      by i.r.id10t ( 595143 )

      Unless they are very old, in which case the power would be better spent running fewer nodes with more rendering oomph.

    • by omeomi ( 675045 )

      ... or to spend less by investing in a few additional systems to be used as dedicated render nodes.

      Especially if you buy used systems. Computer hardware depreciates fast.

      Wouldn't it be possible to use Amazon EC2 to set up a scalable render farm?

      • Wouldn't it be possible to use Amazon EC2 to set up a scalable render farm?

        That is certainly possible, but it depends on the job. If you need to do a rendering job over the weekend a couple of times a year, then EC2 would definitely be the cheapest option. If you need a rendering farm on a regular basis, then it would likely be cheaper to build your own. There is no guarantee of that, though. If EC2 gets a better price on electricity than you do, and if they have better power utilization than you, they might win by that alone. You basically have to compute the t
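The break-even computation the parent alludes to can be sketched in a few lines. This is a back-of-the-envelope model only; the hourly rate, node price, and power figures below are illustrative placeholders, not actual EC2 or hardware quotes.

```python
# Back-of-the-envelope estimate: renting cloud render nodes vs.
# buying your own. All prices are illustrative placeholders.

def rental_cost(node_hours, price_per_node_hour):
    """Total cost of renting capacity for the given node-hours."""
    return node_hours * price_per_node_hour

def ownership_cost(node_hours, nodes, price_per_node,
                   watts_per_node, price_per_kwh):
    """Up-front hardware cost plus electricity for the hours used."""
    hours_per_node = node_hours / nodes
    energy_kwh = nodes * watts_per_node * hours_per_node / 1000.0
    return nodes * price_per_node + energy_kwh * price_per_kwh

# A couple of weekend jobs per year: renting wins easily.
light_rental = rental_cost(node_hours=10 * 48 * 2,
                           price_per_node_hour=0.40)

# Rendering most weeks of the year: owning starts to pay off.
heavy_hours = 10 * 40 * 50   # 10 nodes, 40 h/week, 50 weeks
rent_heavy = rental_cost(heavy_hours, 0.40)
own_heavy = ownership_cost(heavy_hours, nodes=10, price_per_node=500,
                           watts_per_node=250, price_per_kwh=0.12)

print(f"light rental: ${light_rental:,.0f}")
print(f"heavy rental: ${rent_heavy:,.0f}")
print(f"heavy owned:  ${own_heavy:,.0f}")
```

With these made-up numbers the occasional job costs a few hundred dollars to rent, while steady year-round use makes the owned farm cheaper; the crossover point moves with every input, which is the parent's point.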

      • Re: (Score:3, Insightful)

        by mr_exit ( 216086 )

        The bandwidth to Amazon would kill you. It's not uncommon for one frame to pull in gigabytes of textures and geometry needed for the render. Rendering CG is very disk-, memory-, and CPU-intensive.

        • Keep your textures synced with the cloud and let the render program pull them locally. It's actually not a new idea. Look up EnFuzion and some threads on BlenderArtists.
          • You are still talking about a lot of data, and personally I don't want anything my livelihood relies on to be floating around outside my (closed) network, where potentially anyone could get ahold of it.

            --
            I wouldn't care to rely on any government to [fail to] do something I can do [rather well] myself.
  • OR - if you get a real job, at a real company, they'll give you their unwanted outdated computers for FREE.

    Seriously! Build a massive render farm out of thousands of 286s!

    • by mcgrew ( 92797 ) on Friday July 17, 2009 @03:42PM (#28733891) Homepage Journal

      My grandparents rendered on their small farm, but unfortunately I hate lye soap.

      • I used to work at a switchgear plant in the Chicago Stockyards, next to a rendering plant, where animal parts, road kill, and carcasses from the vet were turned into the ingredients for cosmetics, toothpaste, shampoo, glue, crayons, etc. Some days the place smelled like pork rinds; on others, vomit. When I saw the title of this article I was revolted.

    • Build a massive render farm out of thousands of 286s!

      Except they'll run out of memory and crash on every scene.

    • I tried running SLS Linux on a 286 sometime between '92 and '96; it didn't work so well :(

      Yes, I was young and ignorant then about why you REALLY DID have to have a 386 or better. I just figured it would be slower :/

  • by Anonymous Coward

    The only sustainable approach is to allow the geometry to roam freely outside your coordinate system. And shading should be confined to what can be achieved with natural sun light no matter how low the framerate.

  • by basementman ( 1475159 ) on Friday July 17, 2009 @03:04PM (#28733389) Homepage
    "everything from top to bottom roll-your-own to buying things pre-built" Is there some way to get high off computers now? I tried smoking all that junk that fell in my keyboard but it just smelt like burnt hair.
    • by nschubach ( 922175 ) on Friday July 17, 2009 @03:09PM (#28733451) Journal

      Keyboard dust is made of people!

      • At the office, sure.

        At home, there are more ingredients: people dander, cheeto dust, pet dander, etc.

        And that's nothing compared to mouse sludge, which often includes dried-up moisturizer (!), bacteria, yeast, among other things (of both human and non-human origins).

        Never volunteer to fix a "slow" or "stuttering" mouse. Ever. Even for your in-laws. Especially for your in-laws (one stray thought about your mother-in-law and how the mouse got gunked, and you're ready for some shock therapy). Buy them a
      • by geekoid ( 135745 )

        tasty, tasty people.

    • Re: (Score:2, Funny)

      by Em Emalb ( 452530 )

      I tried smoking all that junk that fell in my keyboard but it just smelt like burnt hair.

      Dude, if my co-workers' keyboards are any indication of the "typical" keyboard out there, you're damned lucky you didn't kill yourself.

  • EIE I/O (Score:2, Funny)

    by Anonymous Coward

    If you run your render farm on PowerPC's you can put their eieio instruction [ibm.com] to good use!!!!!

    • Re: (Score:1, Troll)

      by mcgrew ( 92797 )
      Farmer McDonald?

      "EEEE! I... EEEE!!! I OWE!!!!!!!!"

      "Filter error: Don't use so many caps. It's like YELLING". Well duh...

  • Thanks for this (Score:4, Interesting)

    by TheModelEskimo ( 968202 ) on Friday July 17, 2009 @03:23PM (#28733661)
    The article touches on general bits of info that might have been time-consuming to find. I live in a small town where commercials for clients like the local chamber of commerce are often put together in iMovie and delivered in a rush. Recently I was approached by a local art director and asked about moving from 3D stills (which I do occasionally) to 3D animation to be composited into commercial work (probably for bigger clients than the chamber...). I've determined that I can afford about 2-3 minutes of render time per frame before deadlines really start to get pushed out. So rendering infrastructure is very important.

    My studio is unique in that I work with open source software, Blender, Lux, etc. And my clients dig it because many of them are into sustainability and see my philosophy as being similar to theirs. I've looked at outsourcing the animation projects to commercial renderfarms, but when you start to "Better Know a Linux Network," you move beyond "get it done" and start to take interest in your own little LAN. Next to my video compositing and 3D graphics books I have a big ol' fat Pro Linux System Administration book, and it's handy, and I like it that way.

    The article points out that I can save $140 per node by not needing to buy Windows XP Pro 64 bit edition. This is actually great for me since I typically use the money I save on software to buy more hardware.

    BTW, what's up with Slashdot javascript? I'm going to have to build a freaking /. renderfarm pretty soon, and I'll be sending my receipts to CmdrTaco.
    • by copponex ( 13876 )

      BTW, what's up with Slashdot javascript? I'm going to have to build a freaking /. renderfarm pretty soon, and I'll be sending my receipts to CmdrTaco.

      All of the old timers know how to use adblock or we have those freebie accounts. We're a dead marketing segment, and this is part of his evil plan to push us out. The WoW add-on was cruel. But javascript...

      The horror. The horror.

    • Re: (Score:3, Informative)

      by dr00g911 ( 531736 )

      The article neatly sums up how to build a render box from about 5 years ago, or for a hobbyist who doesn't really push the hardware.

      In the last few years, with the prevalence of displacement mapping and linear workflow, file sizes and memory usage to get renders at the quality folks expect of CG work have skyrocketed.

      As someone working as a freelance CG/VFX artist, I can tell you a few practical truths:

      1. You may not need XP 64 but you need 64-bit if you hope to do high-resolution, or detailed renders in a

      • Well written, although here are my notes on your notes:

        Many non-hobbyists don't need to push the hardware. You can get murdered for saying this on /., but the majority of people buy what they need and pocket the rest of the cash for a more interesting purchase. Many 3D professionals do not even own a renderfarm of any sort.

        Displacement is expensive, period. Animators know lots of tricks to get around that, not the least of which would be popular alternatives like normal mapping or conversion of displa
        • Re: (Score:3, Interesting)

          by dr00g911 ( 531736 )

          All fair points, but I must say that the Mental Ray workflow that's so prevalent among prosumer/small-studio CG (now that Autodesk owns most everything and bundles MR) is terribly hard on memory usage, displacement or no, 32-bit float or no, physically accurate shading/lighting or no. Renderman is far, far more efficient; however, due to the licensing costs, not many of the little shops are using it.

          The article suggests buying a crapload of boxes with 4GB RAM mainboards, and my argument is that if you find y

          • Autodesk product (or XSI) with Mental Ray bundled.

            BTW, XSI was bought by Autodesk and renamed Autodesk Softimage.

      • An addendum to this is: don't even consider a motherboard that supports less than 8 gigs of ram, and max the thing out.

        Yes.

        XP 64 (and even my tests with Win7 64 are good). Avoid Vista 64 like the plague.

        ... No. We transitioned our entire studio and renderfarm to Vista 64 without incident. XPx64 had too many software incompatibilities.

        Depending on your primary rendering usage, a Core i7 may actually be working against you with hyperthreading. Quite a few of the big boys (Renderman, Mental Ray) are still licensed per thread.

        Licenses are by the CPU socket not by the number of cores. The i7 is worth every penny.

        A few CG apps don't have command-line rendering available, and it'll suck to learn after the fact that the app you're trying to launch on your pile of new 1U servers won't launch because you don't have a decent video card. Linux & Mac OS (even Hackintoshes) are far superior to Windows in this regard.

        ??? Uhhh... I can't think of a single 3D app that requires a video card, period. Every notable 3D app was written before 3D acceleration was common, and they all have software viewport drivers.

        Lots of apps require shaders to be recompiled per platform, and small studios generally use share/freeware stuff that might not be available on all platforms -- it's much better to work around this issue when you're creating your assets, versus when you've got a delivery deadline looming and you realize that your fancy layered shader looked great on your Win64 previews, but the code isn't available for Linux 64 to render within your lifetime.

        Very true. I'd say even ta

        • Actually, Mental Ray satellite (as craptastically buggy as it is) still had an 8-thread limit under Maya 2009 sp1a (patch notes say they removed the restriction, but watch your CPU usage with a dual Nehalem and tell me it's not locked to 8 cores still)....

          But it's not so much that... I mean if you've got the budget for Renderman Pro or Mental Ray standalone, you've got the budget to build a farm properly, and yeah an i7 is most definitely worth every penny, Nehalem Xeons are great too if someone else is payi

    • You might have the time, but if you pay 300e for Indigo you'll get (for the moment at least) roughly 24x the horsepower you'd get from the same 3 nodes, and 95e for each one you add thereafter. If you price it against actual hardware, you'll see that it is actually less expensive to buy Indigo than to use the open-source Lux, with the added benefit of a lifetime update (meaning you'll be granted access to all Indigo versions made).
      • Sure, proprietary software has its advantages. I'm aware of my needs, and I balance them against what I see as the reduced privilege level you pay for in a proprietary component.

        Good plug for Indigo though; I wish you well in your endeavor...
  • Cloud computing (Score:1, Insightful)

    by Anonymous Coward

    If your time has value, then buying CPU time from Sun, Amazon, or even Microsoft might be cheaper.

    • And your chances of getting a licensed copy of Brazil, Final Render, Mental Ray, VRay or Renderman installed as a cloud application are what fraction of 0 above 0%?

  • render nodes (Score:4, Insightful)

    by Tom ( 822 ) on Friday July 17, 2009 @03:39PM (#28733857) Homepage Journal

    Even a single render node dramatically increases productivity for me.

    I'm doing TG2 skybox renders, something that easily takes 12 hours each, and often two, three, four times that. Having a few render nodes (two at the moment) means I can continue working while a few frames are already rendering. That means more of my time is spent productive and less is spent waiting.

    My render nodes aren't even dedicated machines, just other machines I have around that are mostly idle.

  • A classic quote (Score:4, Insightful)

    by somenickname ( 1270442 ) on Friday July 17, 2009 @03:48PM (#28733965)

    A total of 10 copies of XP (for 10 nodes) may sound like a big expense, but it actually adds $140 per unit, pushing the cost of these machines to about $485 per unit for a dual-core node or $610 per unit for a quad-core configuration.

    I think Tom should have rephrased that to put it into perspective: "Don't worry only 20% of the node cost is from Windows". I find it amazing that the most expensive component on the cheaper node is Windows XP and on the beefier node, it's nearly the same price as the CPU. It's even more baffling that this statement appears on the same page in reference to CPU selection:

    It's really all about how much you want to spend here, because this is the single most expensive component required for each node.

    Maybe Tom is a secret Linux fan and is hinting that Windows isn't a component but a tax. Or maybe he's just really bad at math.
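For what it's worth, plugging the article's own figures into the arithmetic shows the Windows share is even larger than the sarcastic "20%": roughly 29% of the dual-core node and 23% of the quad-core one.

```python
# Share of per-node cost taken by the $140 Windows XP license,
# using the per-unit figures quoted from the article.
xp_license = 140
node_costs = {"dual-core": 485, "quad-core": 610}

for name, total in node_costs.items():
    share = xp_license / total * 100
    print(f"{name}: Windows is {share:.0f}% of the ${total} node cost")
```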

    • Shame that Linux won't run all your software, so that puts it out of the equation altogether.
      • Re: (Score:1, Interesting)

        by Anonymous Coward

        You really haven't looked into the 3D animation industry yet have you?

        Here are the main competitors out there for 3D suites:

        Softimage XSI - Windows, UNIX
        Maya - Windows, UNIX
        3DS Max - Windows only, but who cares?
        Lightwave - Windows, UNIX

        Even with 3DS Max being Windows-only, all of the renderers you want to use with it have native UNIX versions too. Do you want to know why the 3D industry seems to like UNIX so much? Sheer speed:

        http://linux.slashdot.org/article.pl?sid=05/07/27/1551250&tid=126
        http://www.lin

      • Shame that Linux won't run all your software, so that puts it out of the equation altogether.

        Shame you can't use the standard meme here. Man did you pick the wrong thread to crow about Windows in..

    • Re: (Score:3, Insightful)

      by BitZtream ( 692029 )

      Windows is certainly overpriced, no argument there.

      I would argue, however, that the OS is probably the single largest and most important component of the PC. While it's not a piece of hardware, and it is just one of many required components, it's the one that matters the most, I think.

      I mean, change your RAM manufacturer and you probably won't notice. Mobo, processor, case, power supply: all these things can change a fair amount and in most cases won't provide an immediately noticeable difference. The software r

      • I think you've just defined why a Microsoft majority market share is monopolistic... Change anything in your computer to another company and it will work great. Change the OS and you won't be able to run your stuff... and this isn't Apple/Linux' fault. I dare you to find a licensing cost for win32 and DirectX so that other software vendors can utilize them in their OS.

        • by drsmithy ( 35869 )

          Change the OS and you won't be able to run your stuff... and this isn't Apple/Linux' fault.

          But this is equally true of OS X and - albeit to a lesser degree - even Linux.

        • I think you've just defined why a Microsoft majority market share is monopolistic... Change anything in your computer to another company and it will work great. Change the OS and you won't be able to run your stuff... and this isn't Apple/Linux' fault. I dare you to find a licensing cost for win32 and DirectX so that other software vendors can utilize them in their OS.

          The word you're looking for is de facto.

      • by shish ( 588640 )

        FreeBSD emulating Linux or Wine letting you run Windows binaries, which you're probably not going to want if you're trying to render frames as fast as possible.

        Generally when stuff runs under wine at all, it runs faster :-P Though in this case the load is almost entirely CPU-bound, with very little interaction with the OS, so I can't see it making much difference either way.

    • Maybe Tom is a secret Linux fan and is hinting that Windows isn't a component but a tax. Or maybe he's just really bad at math.

      If he really is trying to say that Windows is a tax and not a component on a render farm then he shouldn't be giving advice on how to build them.

      Render nodes are not like a webserver in the sense that your bases are mostly covered with open-source alternatives. Many apps are either limited in their platform support, or at least components of them are. Lightwave, for example, has a Linux-based render node, but it won't work with some of the plugins that get sent to it, because they're Windows-only. MotionBuild

    • Re: (Score:3, Insightful)

      Windows is just the start. If you really want to use your renderfarm you're going to want some render management software to keep it all running.

      Cost per node of Deadline (which I highly recommend) is $140 per computer. Then of course you've already bought a copy of Maya or Max, etc.: $3k. You might want to use an alternate renderer to Mental Ray: $1k per workstation. And you're going to want Ghost or equivalent to keep all your computers up to date and get them back to work in the event of a crash

    • by Nakarti ( 572310 )

      He's really bad at the greater-than operator.
      Have you ever read the graphics reviews?
      Several times I've read this sequence of ideas:
      "
      Card A routinely matches card B, and often blows it away with quality AND framerates...

      For about the same price, card B is definitely a better deal than card A.
      "
      Wait........What??

  • by mdarksbane ( 587589 ) on Friday July 17, 2009 @03:51PM (#28734003)

    I really loved the system they have set up at ACCAD at Ohio State. They had some clustering software running on all of the workstations that could take it over when it wasn't in use. So you had a very nice computer lab and a render farm all rolled into one. And as a user you could set how much you wanted to share while you were working - so if you were just web browsing, the second core could be churning away on someone's render, but if you were using Maya yourself you could have it all to yourself.

    I really wish I remembered what the software was, and I'm sure this is a common arrangement at these sorts of facilities, but I remember being impressed by the execution of it.

  • by BitZtream ( 692029 ) on Friday July 17, 2009 @03:52PM (#28734027)

    For memory, 4 GB is a good start. With the availability of inexpensive 4 GB kits (reviewed here), there's no reason not to. If you are using a dual-core processor and your renderer is a 32-bit application, then 4 GB means you'd have just short of the maximum RAM for each core (which is a good idea if your renderer doesn't multi-thread properly).

    This is where I got off. I wasn't aware that dual-core processors treated RAM separately. That's news to me, and to the guys at AMD, Intel, and MS, and to Linus as well. Every OS I'm aware of bases the memory available on the app, not the core, with most 32-bit OSes allowing about 3 GB of memory usable to the app (roughly a gig is part of the kernel space for various things in most cases), and allowing for more with some kernel tuning depending on the OS. I think Linux allows for that; I know Windows and FreeBSD do.

    I also guess he's never heard of PAE? Last I checked, pretty much every modern processor and OS was capable of supporting 36-bit addressing, meaning a process is more than capable of addressing vastly larger amounts of RAM if it's designed to do so. Even without support directly in the application, you can run multiple processes to get the ~3 GB per process, which with 2 processes puts you at 6. So if your shitty rendering app is 32-bit, not PAE-aware, and single-threaded, and you have more than 1 core, then you can just pile on more processes with any modern OS and exceed 4 GB of usage. With a real rendering app, i.e. multithreaded, PAE-aware, and still 32-bit, it's a no-brainer. Of course, if you're going through the effort to do all this, what are the chances your renderer is going to be 32-bit instead of 64? That's a question I really can't answer, as I'm not a render monkey, but I just can't see anything that matters still being a 32-bit app unless RAM really doesn't matter in rendering, which, let's face it, for a complex scene, it does.

    It's good to know Tom's has some real techs working for them who understand how computers work.

    • So if your shitty rendering app is 32-bit, not PAE-aware, and single-threaded, and you have more than 1 core, then you can just pile on more processes with any modern OS and exceed 4 GB of usage. With a real rendering app, i.e. multithreaded, PAE-aware, and still 32-bit, it's a no-brainer.

      I understand where you are going and agree with you, but applications cannot be PAE-aware. It's only the kernel that deals with the 36-bit addressing; it still doles out memory as 32-bit to userland. Also, a multithreaded application wouldn't take advantage of more than 4 GB of memory unless the OS treats threads as separate processes, because each thread still lives within a single process, and that process is still bound by 32 bits of addressable memory.

      • by PRMan ( 959735 )

        applications cannot be PAE aware

        Tell that to Gavrotte's RAMDisk!

      • Re: (Score:3, Interesting)

        by BitZtream ( 692029 )

        Really? Tell that to all the apps that are PAE-aware, MS SQL Server for instance.

        It's the same as using the old segmented memory model from a practical perspective, although the OSes today use a completely different API for accessing the other memory.

        http://en.wikipedia.org/wiki/Physical_Address_Extension [wikipedia.org]

        • You are still incorrect. Applications can do some "windowing magic" to make it appear as though they are addressing more than 32-bits seamlessly. They do not however have the ability to use 36-bit pointers. So, they aren't using PAE, they are using tricks to make it possible to use more memory than you can address while the 36-bit kernel is still handing the process 32-bit addresses.

          • Look up that "segmented memory" GP refers to.
            • I did. Now, can you tell me how to make a C program use 36-bit pointers so that I can use PAE? They are two very different things...

                • Windows has a special allocation function that returns 36-bit pointers, but you still have to map them into the 32-bit address space to use them: an app on 32-bit Windows can't address more than about 2.5 GB simultaneously (much less, once fragmentation is taken into account).

                  In this day and age it's a silly argument anyway... desktop processors are generally 64-bit capable, and 64-bit versions of all your favourite OSes are available, so why stick to 32-bit if your memory requirements are that large?

    • My memory is a little foggy lately, since I've been hanging around in userland a bit, but I'm fairly certain that using long mode (64-bit) on modern Intel CPUs for your OS and application would yield plenty of virtual address space, using PAE. Additionally, PAE supports a lot more than 36 bits of addressing on the most recent processors, up to 51 I think. The bigger question: is it practical for one CPU to use all that memory?

      • My memory is a little foggy lately, since I've been hanging around in userland a bit, but I'm fairly certain that using long mode (64-bit) on modern Intel CPUs for your OS and application would yield plenty of virtual address space, using PAE. Additionally, PAE supports a lot more than 36 bits of addressing on the most recent processors, up to 51 I think. The bigger question: is it practical for one CPU to use all that memory?

        I think you may be confusing 32-bit-with-PAE and 64-bit. 32-bit with PAE is userland 32-bit, with the kernel able to address 36 bits (64 GB). 64-bit addresses 64 bits (16 EB, if I remember right) in the kernel and userland, but the CPU itself probably can't address all 64 bits, so you are confined to some insanely huge address space that you can't fill, but it's less than 64 bits.

    • by josath ( 460165 )

      Last I checked pretty much every modern processor and OS was capable of supporting 36 bit addressing

      Unfortunately, neither Windows XP nor Windows Vista supports 36-bit addressing in their 32-bit flavors: 32-bit XP & Vista are limited to a little less than 4 GB of RAM, no matter what. I think there's an uber-expensive 32-bit Server 2008 that supports it, but nobody's gonna be buying that for desktop use or even for render farm use. Linux, however, supports it fine; I've happily used 8 GB of total RAM while running a completely 32-bit kernel/OS/applications.

    • Every 32-bit renderer I've used hasn't been able to use more than 3 GB. But you're right: if you're still trying to use a 32-bit renderer, with most scenes you're just going to run out of memory and crash.

      64-bit + 4+ GB of RAM is pretty much mandatory for production rendering.

      What this article really ignores, though, is software. Managing a renderfarm means you want to invest in some great render management software like Frantic Films' Deadline.

    • by dbIII ( 701233 )

      I also guess he's never heard of PAE?

      The problem is that in the Microsoft world, only the server versions of 32-bit Windows have heard of PAE. Of course, everything else has had it since not long after the Pentium Pro came out.

  • Playstation 2? (Score:2, Interesting)

    by Uncle Ira ( 586682 )
    PS2s are cheap now, and I know they've had linux running on them for some time. Has anyone managed to get something like ClusterKnoppix running on PS2 hardware? A renderfarm of slim PS2s sitting on a bookshelf would be kind of neat looking.
    • Re: (Score:3, Informative)

      I don't think that there is anything stopping you (though the official PS2 Linux kit is unsupported on the slim); but performance would probably be pretty underwhelming. 32 megs of RAM is an unpleasant limitation to labor under for a fair few computational problems, and (unless you are serious about doing optimizations to suit the PS2's particular hardware) you'll find that the stock general-purpose processing power of a PS2 is pretty unimpressive.
      • Seconded. A PS3 gives much more performance per dollar, and Linux is straightforwardly installed without extra hardware.
    • PS2s are cheap now, and I know they've had linux running on them for some time. Has anyone managed to get something like ClusterKnoppix running on PS2 hardware? A renderfarm of slim PS2s sitting on a bookshelf would be kind of neat looking.

      The lack of RAM on-board a PS2 (or even a PS3) would make that exercise little more than academic.

      • by dbIII ( 701233 )
        Also, the problem at the moment is that if you want nodes that use the Cell processor and are the equivalent of a PS3 with extra memory, you need a military-sized budget and an accountant who is in mortal fear of you. Last time I looked, it was close to ten quad-core Xeon nodes for the price of a fairly equivalent Cell node. I really have no idea why they have priced themselves completely out of the market.
    • by kwark ( 512736 )

      My guess is that a PS3 is more than 4 times as fast.

      For example, I could find a distributed.net benchmark for a PS2 running the RC5-64 challenge at 0.3 Mkeys/s. One (of the six available) SPEs in a PS3 will do RC5-72 at 24 Mkeys/s. No idea what the difference in GPU performance would be.

  • by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Friday July 17, 2009 @04:06PM (#28734227)

    It's called a botnet.

    TYVM.

  • by Anonymous Coward on Friday July 17, 2009 @04:11PM (#28734293)

    I did this two years ago with four cheap Dell Inspirons ($299 each, with free shipping). They're thin, easy to stack, and consume less power combined than my desktop. No discrete graphics, smallest possible HDD; all they need is processors (dual-core) and RAM. I run a stripped-down Ubuntu on them, and use some Python scripts to distribute Blender render jobs to them over the network, assembling the final frames on a file server.

    Separate machines make an enormous difference. Even though rendering is relatively amenable to parallelization, a quad core machine isn't nearly as fast as two dual-core machines with the same specs. Even today, you would have to spend an awful lot of money to get a single machine that renders animations as fast as my two-year-old cluster of four.

    I could even have built my own machines, and saved a few tens of dollars per machine, but the price was already pretty reasonable.
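
    The "Python scripts to distribute Blender render jobs" approach is straightforward to sketch. Assuming the .blend file lives on shared storage and each node is reachable over SSH (the node names, paths, and frame range below are made-up placeholders), a round-robin scheduler could look like this; `blender -b file.blend -f N` is Blender's standard background single-frame render invocation:

    ```python
    # Round-robin distribution of Blender frames across render nodes.
    # Hypothetical setup: nodes share the .blend file via NFS, and each
    # frame is rendered with "blender -b scene.blend -f <frame>".

    from itertools import cycle

    def assign_frames(nodes, frames):
        """Map each frame to a node, round-robin.
        Returns a list of (node, frame) pairs in submission order."""
        node_cycle = cycle(nodes)
        return [(next(node_cycle), f) for f in frames]

    def render_commands(blend_file, assignments):
        """Build the ssh command line for each (node, frame) pair."""
        return [
            ["ssh", node, "blender", "-b", blend_file, "-f", str(frame)]
            for node, frame in assignments
        ]

    if __name__ == "__main__":
        jobs = assign_frames(["node1", "node2", "node3", "node4"], range(1, 9))
        for cmd in render_commands("/mnt/shared/scene.blend", jobs):
            print(" ".join(cmd))  # in real use: subprocess.Popen(cmd)
    ```

    A real version would also want per-node job queues and retry-on-failure, which is where dedicated render managers start to earn their keep.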

  • Just download the Rocks Cluster distribution and you will have an operational cluster in about an hour. Doesn't get much more efficient than that.
    • He's not talking about a cluster. A render farm is just a bunch of machines that can be given work to do. There is no fancy network topology so the machines can talk to each other, and they aren't even expected to know of each other's existence. A render farm is more akin to something like SETI@home, whereas a cluster is trying to emulate a Big Iron box. Big difference.

      • A cluster is a collection of, usually, homogeneous compute nodes. They are usually split into MPI and SSI: Message Passing Interface or Single System Image. The latter is a bunch of machines trying to emulate a single system and is not commonly found in the HPC world. You are more likely to find MPI setups, where each bit of processing can be broken into smaller pieces and distributed to each node.

        For a render farm you can have machines with no knowledge of each other as they can each work on a separate s

  • How about Helmer? http://helmer.sfe.se/ [helmer.sfe.se]
  • Why not create an image with your render software and deploy as many as you need on EC2? No hardware cost, no setup time, you only pay for the CPU time you use.
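
    The EC2 idea boils down to baking your renderer into a machine image and launching as many copies as the job needs. A hedged sketch of the launch step, using the boto3 SDK (the AMI ID, instance type, and key name below are all placeholders, and the actual API call is left commented out so the sketch runs without AWS credentials):

    ```python
    # Sketch: spin up N render nodes from a prepared AMI on EC2.
    # AMI ID, instance type, and key name are hypothetical placeholders.

    def launch_params(ami_id, count, instance_type="c5.xlarge", key_name="render-key"):
        """Build the keyword arguments for boto3's EC2 run_instances call."""
        return {
            "ImageId": ami_id,
            "InstanceType": instance_type,
            "MinCount": count,
            "MaxCount": count,
            "KeyName": key_name,
        }

    if __name__ == "__main__":
        params = launch_params("ami-0123456789abcdef0", count=8)
        print(params)
        # With credentials configured, the real call would be:
        # import boto3
        # boto3.client("ec2").run_instances(**params)
    ```

    The trade-off versus owned hardware is exactly as the parent says: no capital cost or setup time, but you pay for every CPU-hour, so it favors bursty workloads over a farm that renders around the clock.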
  • DrDgaf (Score:2, Interesting)

    by Theodore ( 13524 )

    Didn't read, don't give a fuck.
    Building your own cluster can be done by any retard.

    I've been looking into building one for myself, mainly for Blender and LuxRender.
    Now, if there were CUDA/OpenCL versions of the above programs, the Zotac Atom/Nvidia Ion boards might be nice: expensive, but nice and low-powered (or add PCI GeForce 9500s, which would also work with my following idea (why the fuck won't they put a PCI-E x16 on these boards?))...
    I've been looking into mini-itx mobos (off of Newegg, that mainly
