Hardware

Building The Broadcast Box

Mortin writes "The folks at Icrontic have a neat article up titled "The Broadcast Box." They unleashed a room full of designers to build an affordable system, and this 24-page article shows what they came up with, along with benchmarks, design specs, and cost analysis."
  • by Anonymous Coward on Tuesday September 03, 2002 @01:55PM (#4190139)
    Is the BB going to have DRM built in? Because if the RIAA and MPAA decide to use the DMCA, these guys might be SOL. YMMV, but IMHO these guys are basically FUBARed.
  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Tuesday September 03, 2002 @01:56PM (#4190148)
    Comment removed based on user account deletion
    • really? I find them very affordable. As a matter of fact, I keep a room full of them on hand, just in case I need them to whip something up...
  • Bad link (Score:1, Informative)

    by Anonymous Coward
    Database error of some sort.
  • nice that they error out with their dbase connection info. U:icontic, P:icrontic
  • Fatal error
    Type of error Database error: connect(localhost,icrontic,PASSWORD) failed.
    Error message MySQL Error: ()

    Please report this bug to the webmaster.
    Thank you.
    • It's not clear that it's better to handle it the way cars.com [cars.com] does:
      • Cars.com is temporarily unavailable. Please check back in a few minutes.

        While you're waiting, you can kill some time and still find useful car information at our partner site, CarTalk.

        Please accept our apologies for any inconvenience.

      This is voicemail hell for web sites.

  • by Bonker ( 243350 )
    Slashdot reamed this poor site like a drunken virgin on Prom night...

    What's worse is this error message I got when I visited:


    Fatal error
    Type of error Database error: connect(localhost,icrontic,PASSWORD) failed.
    Error message MySQL Error: ()

    Please report this bug to the webmaster.
    Thank you.


    Is that a database password 'PASSWORD' I see before me? I think it is! This site's in for a long, long day.
  • Article text (Score:4, Informative)

    by SoCalChris ( 573049 ) on Tuesday September 03, 2002 @02:02PM (#4190205) Journal
    It worked for me; here is the text of the article.

    Introduction

    The power to create. Creative souls toil away inside the walls of the design department or in the dark confines of an edit suite in a television station. As the production manager I often see the graphic designers leaning back in their chairs staring at their monitors. When questioned I usually get the response... "rendering". I'm often told there's a need for a second or third computer so they can do other work while one system is busy rendering. In the broadcast environment rendering usually means 1-4 hour waits for finished elements. Waiting for one system to finish a piece for use in a commercial or promotion can be hell when there are deadlines to meet. Time is money. Waiting is frustration. Hardware should not dictate creativity.

    People often assume that I work with immensely powerful computers in the television production world. Sometimes I do, and those computers can come with price tags the computers themselves couldn't work out. Professional 2D/3D workstations are thought of as expensive, and in today's market of shrinking profit margins the saying that you have to spend money to make money takes a back seat come capital request time.

    So we here at Icrontic set out to build a bigger, better, badder workstation on a home PC budget.

    The question of "what is the best" is not easily answered. Determining what is the best for your needs and expectations is a matter of knowing what your demands are and learning how to fulfill them. What is expected from the PC workstation? Do you want fast renders? Do you want to easily manipulate complex 3D scenes or drawings? Do you need the fastest processor, biggest video card, the most RAM or the fastest hard drives?

    Can you do more for less?

    That's what every manager wants to hear, especially when assembling the yearly departmental budget. I'm in a unique position in my professional life that allows a look at this problem from many sides: financial, user and builder. I wear one hat as the department manager. I wear a second hat as an active writer/producer/director who works daily with the department on television commercial projects. I wear a third hat as PR manager and hardware reviewer for Icrontic. It means I have no one to complain to but myself when the equipment isn't fast enough. It also leaves me frustrated that the IT people dictate that I buy overpriced workstations when I know I can build two or even three systems for the same price.

    So I unleashed a room full of designers on an affordable system we put together. (The image reminded me of a commercial, now a decade or three old, that features a gorilla doing his best to destroy a piece of luggage.) The designers are rooted in the Mac world, and if a PC is required it has to be one of the hugely expensive, well-known order-off-the-web workstations. (I'm not going to point fingers.) Even the art director's personal home system is a three-to-four-thousand USD dual 1.7 GHz Xeon workstation with an nVidia Quadro card.

    Did we do it?

    Simple answer? There isn't one. What looks good on paper may not perform well in reality. Benchmarks give some information but not the complete experience. More isn't necessarily better.

    The best is a matter of debate but the smart consumer knows a lot about what they expect, a little about how it may work together and enough to choose the right combination of hardware. The following pages are just that; a guide to determine your expectations, answers to how it all works and a little bit of knowledge to make the right choices. Armed with this information you can more easily navigate the world of what's best for you through the ever-changing landscape of computer technology.
    • This is so boring... everyone on Slashdot knows how to build a souped-up box.

      I thought the article was about a "black box" hack of a multi-functional wireless AP.

      Wake me up when someone garage hacks one of those, I wanna know what features are cool.
    • Umm - this is only the first page - There is a lot more to the article than this... Now, if only I could //get// to the darned thing...!

      Damned /. Effect!!! Grr!
    • Comment removed based on user account deletion
    • The big picture

      The majority of PC consumers buy pre-built systems based on assumptions and budget. Today's PC consumer has more information at hand to select or build a PC that is better suited to their needs. Choosing or configuring the particular components is often based on how much, how fast and how big it can get while staying within a budget. The MHz rating of the processor is often the first consideration in this equation. The PC consumer looks for simple answers. Compromises are often made in order to divert budget to obtain a faster processor. The trap is that the assumption that more megahertz is better can short-change other components in a system, leaving the consumer frustrated by a lack of the performance they actually need.

      The goal of this article was to build a PC, on an acceptable home buyer's budget, to function as a workstation capable of taking on 2D and 3D jobs in a broadcast television station. The hoped-for conclusion is that what you expect from the PC is the first question that must be answered before choosing the parts.

      Defining "broadcast industry standard" in a PC has some grey area depending on where it sits in the production chain. Broadcast video has to meet a set of parameters that can only be measured by a video waveform monitor. Expect to shell out approximately $4000 USD to add this option for home use.

      This PC will be used to output work that will eventually find its way to a non-linear editing system that assembles and outputs it to tape. While the display image is extremely important for the graphic designer it is the finished file itself that is eventually transferred to a format for playback to air. The video card will not be used to output a signal that will be recorded or used straight to air.

      The work produced is either a completed piece or a collection of elements to be used in a completed piece. These elements may be produced in a single application or through a combination of 2D and 3D software applications such as Adobe Photoshop, Adobe Illustrator, Adobe After Effects and Softimage. For example, Photoshop files may be used as name supers or background elements. On a larger scale, After Effects may be employed to composite Photoshop, Illustrator and Softimage elements, plus internally generated elements and effects, into a complex timeline that is rendered to produce a finished piece or pieces. A waveform monitor is referenced at certain stages to ensure the output does not exceed acceptable levels.

      The PC workstation needed to be the right balance of components with the power to manipulate complicated 2D and 3D applications in real time, then render at an acceptable rate. Image quality was important to display sharp, true images. This combination would be easy to obtain if money were no object...

      But money is the object.

      Is it possible to build a workstation on a budget then have it stand up against a room full of graphic designers? Broadcast production requires expensive specialty hardware but technology is making some leaps and bounds at the consumer level. Is it possible for the home PC buyer to have an affordable system that stands up to professional expectations?

      As the old saying goes, time is money, and in the time it took to click a mouse a few times the price tag of a popular retail pre-configured workstation rocketed up to just over $11,000 Canadian or nearly $7,000 USD! That's not a typical home PC buyer's budget. Dude...we didn't have to get one!

      So this is what we got.

      -- Still trying to get the rest of the article --



    • The System

      The broadcast box:

      AMD 2100+ Thoroughbred Processor

      ABIT AT7 motherboard

      Matrox Parhelia 512 triple head video card

      2 x 512 MB Micron PC2100 RAM

      Sony 52x CD

      LG 32x10x40x CDRW

      40 GB Maxtor ATA133 Hard Drive

      60 GB Maxtor ATA133 Hard Drive

      2 x Samsung 950p 19" Monitors

      USB Keyboard and Logitech USB wireless Optical Mouse

      Globalwin CAK4-76T HSF

      AMK SX1000 modded PC case (window, fans, cables, loom)

      Enermax 465 Watt FC PSU

      Windows XP Professional

      Digital Doc5

      The price tag came in just over $3500 Canadian or approximately $2200 USD.* That's 70% less than the well known pre-configured workstations priced out initially. It may still be expensive for family use but it had to do a little more. The crucial step in choosing a system is determining what is expected of it. If it is there to surf the Internet, write the occasional school essay and send/receive e-mail then a very economically priced computer can be built.

      It's just that e-mail was the last of the concerns in a workstation.

      *Prices including monitors and OS as of September 1, 2002, converted from CAD to USD. Source: www.atic.ca

      Choosing Chips Pt. 1

      At AMD we deliver the kind of smart, essential semiconductor-based products and platforms that work best to meet your needs.

      Today, Intel is behind everything from the fastest processor in the world to the cables that power high-speed Internet.

      Choosing chips.

      Choosing a system does begin with the processor, as it determines the choice of RAM and motherboard. This may lead to price differences that greatly affect end product performance, especially where a budget is concerned.

      Choosing a processor used to be as simple as getting the most MHz for your money, then adding the other components to fit the budget. Intel has exploited public perception by raising the MHz bar ever higher. The question remains: is more...better?

      AMD or INTEL: which to choose? These two companies play a rival game akin to David and Goliath, where Intel's market share and marketing capital seemingly overwhelm AMD. Meanwhile AMD is the enthusiast's choice, and many of these enthusiasts vehemently defend AMD as the performance pick and where the smart money is. It's a lively discussion whether the tables have turned, with INTEL on the defense and AMD on the offense. One cannot ignore the fact that the balance of power is shifting, with AMD clawing away at INTEL market share. The reason AMD is gaining is that it chips away at the very foundation of the INTEL claim that faster is better.

      "The introduction of the highest-performing PC processor in the world is a victory for application performance and a resounding defeat for the 'megahertz myth,'" said Ed Ellett, vice president of marketing for AMD's Computation Products Group. "As the performance leader, the AMD Athlon XP processor 2600+ reigns as the superior choice and delivers outstanding application performance for richer, high-powered digital computing."

      The chip wars float around catch phrases to attract consumer attention. The most common is the megahertz or gigahertz rating. The buying public believes more is better. INTEL proudly trumpets this and AMD challenges it squarely. In side by side comparisons between INTEL and AMD processors the difference in performance between the two can be very thin. To some the choice is quite simple, but if it's not, then you need to know a little bit about what is coming to market and why, to at least help in deciding between models of processors.

      The latest advancement is the recent move from 0.18-micron technology to 0.13-micron technology by both INTEL and AMD.

      What's a micron and how big is it?

      A micron is pretty darn small. There are twenty-five thousand four hundred microns to one inch. A human hair can be anywhere from about 40 to 300 microns wide. A powerful microscope is needed to see an object that is one micron wide. An object that is one micron wide is smaller than most bacteria. That's how small a micron is.

      AMD and INTEL have reduced processor manufacturing to the 0.13-micron scale. That means the smallest circuits in the processor are only 0.13 microns wide. It's not like you could use your soldering iron to fix a broken connection. This is pretty close to the nanotechnology scale that is so often bandied around in the science fiction shows we watch.

      Why is smaller better? Processor chips are etched onto wafers of silicon. If the overall size of the chip is reduced then more chips can be etched onto a single wafer of silicon. This increase in the number of chips per wafer reduces the cost of manufacture which, we hope, will be passed on to the consumer.
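
      To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The die area and wafer diameter are hypothetical round figures of my own, not numbers from the article; only the 0.18 and 0.13 micron process sizes come from the text above.

        import math

        old_node, new_node = 0.18, 0.13   # process feature sizes in microns
        die_area_old = 120.0              # mm^2, a hypothetical 0.18-micron die

        # Linear dimensions scale with the feature size, so area scales
        # with the square of the ratio.
        scale = (new_node / old_node) ** 2            # ~0.52
        die_area_new = die_area_old * scale           # ~63 mm^2

        wafer_diameter = 200.0                        # mm, a typical wafer of the era
        wafer_area = math.pi * (wafer_diameter / 2) ** 2

        # Ignoring edge loss and yield, dies per wafer is simple division.
        print(f"{wafer_area / die_area_old:.0f} dies -> {wafer_area / die_area_new:.0f} dies per wafer")

      Roughly half the area per die means nearly twice the chips per wafer, which is where the manufacturing savings come from.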

      Processor manufacturers aim for a balance between reducing size and increasing processor capability. If a 0.18-micron processor is made using 0.13-micron technology then the overall space taken up by the circuitry is reduced. Let's put this on a scale that is easier to visualize. If a home theater system is shrunk in size by 50% and the bulky 33" TV is replaced by a flat panel TV then there would be a lot of room left over in that wall unit of yours for more stuff. You may choose to buy a smaller wall unit or cram more "stuff" into it. Perhaps a compromise could be reached between adding more "stuff" and reducing the size of the wall unit.

      Processor manufacturers do the same, striving to reduce the overall size while still packing in more "stuff".

      Choosing Chips Pt. 2

      Good things in small packages.

      Smaller is better, and the additional "stuff" is notably an increase in L2 cache. This may be a term that is familiar but not quite understood. Cache is small, fast memory located on the CPU. CPU cache holds the most recently accessed code or data. This SRAM is accessed much faster than your main system memory because it's located right on the processor core. Processor manufacturers started to increase the amount of L2 cache due to the demands software was making on the CPU. Manufacturers are also looking to increase the speed of this cache. The more data or code the L2 cache can hold, and the faster it can be accessed, the greater the increase in system performance.
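
      The "most recently accessed" behavior is easy to picture with a toy model. The sketch below simulates an LRU cache at two sizes over the same access pattern; the line counts are made up for illustration and have nothing to do with real Athlon or Pentium cache organization.

        from collections import OrderedDict

        def hit_rate(accesses, cache_lines):
            cache = OrderedDict()              # OrderedDict doubles as an LRU cache
            hits = 0
            for addr in accesses:
                if addr in cache:
                    hits += 1
                    cache.move_to_end(addr)    # mark as most recently used
                else:
                    cache[addr] = True
                    if len(cache) > cache_lines:
                        cache.popitem(last=False)   # evict least recently used
            return hits / len(accesses)

        # A working set of 384 "lines", swept repeatedly in order.
        accesses = list(range(384)) * 10
        print(f"256-line cache: {hit_rate(accesses, 256):.0%} hits")   # set doesn't fit: ~0%
        print(f"512-line cache: {hit_rate(accesses, 512):.0%} hits")   # set fits: ~90%

      When the working set fits, nearly every access is a hit; when it doesn't, a sequential sweep defeats LRU entirely. That cliff is why extra cache can matter more than a few hundred extra MHz.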

      Voltage x resistance = bad.

      The more that is packed onto a processor, and the more it can do, the more electrical power it takes. This simply translates into an increase in thermal heat as MHz and technology increase. Reducing the scale or die size of the processor reduces the voltage required for the processor to function properly. An electrical signal traveling through a circuit meets resistance along the pathways. This resistance becomes heat, similar to the friction heat when you vigorously rub your hands together. If the distance the signal needs to traverse is reduced then the signal requires less energy to get around and encounters less resistance. Less voltage and less resistance equal less heat.

      If the die size has been reduced then why the increase in heat as MHz increases? Quite simply, no matter how small an engine is made, it will get hotter as it runs faster. An important point to note is that faster processors do require more voltage at certain steppings, and they always generate more heat as the MHz climbs. By building processors on a smaller scale the heat curve has effectively been "bumped down" from previous, larger processor dies. AMD has also engineered other design and manufacturing tweaks to assist in the challenge of reducing thermal output while increasing speed. We all know heat is the enemy of any processor. Heat is a hot subject of discussion. Consider the following equation.

      (Faster + voltage) = temperature - (fans x dBA)

      This equation was just made up for this article, but it says that going faster requires more voltage, and where temperature is the variable, the number of fans or the dBA of those fans must increase to balance the equation in an air-cooled system. The faster you want to go, the more cooling you need, which could mean more fans and thus more noise. There are solutions later on in this article.

      Get on the bus.

      The front side bus speed is the MHz rating at which data is transferred between the processor and the rest of the system. Theoretically, a higher FSB results in a faster system. The goal is to maximize the processor speed to perform tasks quickly and efficiently. Currently the front side bus on AMD processors runs at 266 MHz, with speculation that AMD has a 333 MHz FSB processor in the works.
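
      A quick check of those numbers: a 64-bit front side bus at 266 MHz effective moves about 2.1 GB/s, which is exactly where the "PC2100" rating on this system's DDR comes from.

        bus_width_bytes = 64 // 8                # 64-bit FSB, 8 bytes per transfer
        print(f"{266 * bus_width_bytes} MB/s")   # 2128 MB/s -> the "PC2100" DDR rating
        print(f"{333 * bus_width_bytes} MB/s")   # 2664 MB/s -> the "PC2700" a 333 MHz FSB would pair with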

      Lastly there are the inner workings of the processor circuitry. These cannot be easily explained, but it is safe to say that each of the rivals in the chip wars is constantly developing, refining and perfecting its processors to crunch numbers faster and in greater gulps.

      Now you know everything....not a chance.

      This little bit of knowledge can be a dangerous thing when it comes to determining which processor is better. A consumer may come to the conclusion that INTEL processors are faster than AMD processors based on the details just explained:

      The higher the MHz the better

      The higher Front Side Bus Speed the better

      The more L2 Cache the better

      The lower the voltage the better

                            INTEL          AMD
      Processor Frequency   2.8 GHz        2600+ (2.133 GHz)
      Thermal Design Power  68.4 W         62 W
      Bus Speed             533 MHz        133 MHz (266 MHz DDR)
      Core Voltage          1.50 V         1.65 V
      L1 Cache Size         8K             128K
      L2 Cache Size         512K           256K
      L2 Cache Speed        2.53 GHz       2.13 GHz
      Die Size              0.13 micron    0.13 micron

      It's easy to see that assumptions may lead a consumer to believe that the INTEL product is a better processor. These basics may have some validity on paper but not so in the real world. Why the lesson on MHz, die size, bus speeds and cache? The lesson is not which processor is better. The lesson is to not make performance assumptions based in the belief that bigger numbers are better.

      Choosing Chips Pt. 3


      AMD has challenged the idea that more MHz means better. As mentioned previously, side by side processor comparisons between INTEL and AMD chips prove this. The 64-dollar question is: why?

      To use a layman's analogy once again, an INTEL CPU "engine" may run at a higher RPM (MHz) but it doesn't have the torque to match that high RPM (MHz). An AMD processor may run at a lower megahertz but it does have better torque. This is an incredibly simplified explanation but it gives the needed broad brush strokes. How the AMD processor is "geared" allows it to rival and, in some cases, surpass INTEL processors that are clocked at a much higher frequency.

      So how does a consumer decide on a processor? It's safe to say that the majority of PC buyers only care that it works, and works fast enough for their needs. The average consumer either doesn't understand or couldn't care less about front side bus speed, how many transistors there are, or how small a die is. A lot of PC buyers also do not realize there is another choice beyond what is widely and visibly available on store shelves. AMD vs. INTEL marketing and product awareness is another topic altogether and best left alone lest we travel down another long road.

      To belabor the point, AMD has shown that in today's marketplace GHz is not the defining mark of a processor. The important piece of the education puzzle is how each of these processors compares in benchmark tests, especially when the performance-to-cost side of the equation is introduced. There are many comparisons that pit the AMD processor against rival INTEL in the never-ending battle of who's the best. Read a couple of these reviews and they will show, across the multitude of benchmark tests, that these processors trade off pole position. In one test AMD may edge out INTEL and in another INTEL may come out ahead. In most, the difference between the two is a matter of seconds, frames, or a handful of points. In real world "everyday" performance there would be an almost unnoticeable difference in most applications when comparing similar processors.

      Bar graphs may show who's ahead but it's important to look at the actual numbers before making a decision. Ask yourself who's ahead, by how much, and in what particular application. A 2.8 GHz INTEL processor may achieve more frames per second than an AMD 2600+ in Quake but, no insult intended, the difference is small and most likely unnoticed by the user actually playing the game unless their goal is bragging rights.

      That being said what would be another deciding factor? The AMD processor is priced far more competitively than the INTEL processor which means there's more money left over to pocket or spend on more RAM, a better video card or another hard drive.

      Processor Prices*

      AMD                          Price    INTEL                Price
      Athlon XP 2600+ (2.13 GHz)   $300     Pentium 4 2.8 GHz    $537
      Athlon XP 2400+ (2 GHz)      $200     Pentium 4 2.53 GHz   $240
      Athlon XP 2200+ (1.8 GHz)    $146     Pentium 4 2.4 GHz    $206
      Athlon XP 2100+ (1.73 GHz)   $112     Pentium 4 2.2 GHz    $202
      Athlon XP 2000+ (1.67 GHz)   $59      Pentium 4 2.0 GHz    $161
      Athlon XP 1900+ (1.6 GHz)    $78      Pentium 4 1.9 GHz    $154
      Athlon XP 1800+ (1.53 GHz)   $64      Pentium 4 1.8 GHz    $139
      Athlon XP 1700+ (1.47 GHz)   $59      Pentium 4 1.7 GHz    $125
      Athlon XP 1600+ (1.43 GHz)   $52      Pentium 4 1.6 GHz    $117
      Athlon XP 1500+ (1.4 GHz)    $53      Pentium 4 1.5 GHz    $102

      *Prices in USD from www.pricewatch.com, August 31, 2002; Socket A/478 processors.

      But you may think GHz to GHz again and wonder why you are paying $200 for an AMD 2400+ (2 GHz) when for another $6 the 2.4 GHz Intel processor is available. A buyer may think that $6 for another point-four GHz has to be better. But that's just not the case. Read a review or three and a performance picture will form. Combine the performance/price analysis with your expectations and the answer should be clearer.

      The final scoff any naysayer of AMD product may volunteer is stability. Many consumers state the reason for choosing INTEL is the perception that INTEL systems are more stable and require fewer driver updates and less tweaking. This may have been the case years ago but it is simply not true at present. Any system can be properly set up and, IF LEFT ALONE, will or should continue to operate as intended. AMD systems are stable. If a consumer purchases a pre-configured AMD system from a reputable source they are going to have the same "stability" experience as if they purchased a pre-configured INTEL system. Large pre-configured PC suppliers go to great lengths to ensure that all of the components as sold work reliably with each other right out of the box. Intel is also the dominant force, with far more of its processors in PCs than AMD. Software and hardware developers choose to align and optimize their products with the processor that is in more homes and businesses. It's a marketing move. If a consumer chooses to build a computer from individually purchased components then they run the same risk of hardware conflicts and problems regardless of processor choice.

      Which processor is better? Which truck is better, Chevy or Ford? I don't think an overall clear-cut winner can be crowned but when trying to build a powerful system within a budget we think of ourselves as smart shoppers by getting the most with AMD.

      The Mother of all Boards

      The mother of all boards

      Selecting an AMD based system has other advantages. AMD based motherboards offer a wider range of configuration options than rival INTEL based motherboards. Which AMD-driven motherboard to choose is a matter of requirements mixed with a dash of personal experience, a pinch of recommendations from friends, a paragraph or twenty from the forums and a page or four or sixty of research.

      I admit I've had a preference for ABIT product. I've grown to rely on ABIT for their stability and flexibility. They offer a wide range of choices to suit almost any need. The ABIT AT7 was supplied to us for this system which proved to be really good...and really bad.

      CPU

      Supports AMD-K7 Athlon/Athlon XP Socket A 200/266 MHz FSB Processors

      Supports AMD-K7 Duron Socket A 200 MHz FSB Processors

      Chipset

      VIA KT333 / VIA VT8233A

      Supports Ultra DMA 33/66/100/133 IDE protocol

      Supports Advanced Configuration and Power Management Interface (ACPI)

      Accelerated Graphics Port connector supports AGP 2X (3.3V) and 4X (1.5V) mode (Sideband) devices

      Supports 200/266/333 MHz (100/133/166MHz Double Data Rate) Memory Bus Setting

      Ultra DMA 133/ RAID

      High Point HPT374 IDE Controller

      Ultra DMA 133 MB/sec data transfer rate

      RAID 0 (striping mode for boosting performance)

      RAID 1 (mirroring mode for data security)

      RAID 0+1 (striping and mirroring)

      Memory

      Four 184-pin DIMM sockets support PC1600/PC2100/PC2700 DDR DRAM modules

      Supports DDR333 unbuffered DRAMs up to 2GB and registered DRAMs up to 3GB

      Supports 6 banks up to 3GB DRAMs for unbuffered DDR200/266 modules

      Supports 8 banks up to 3.5GB DRAMs for registered DDR200/266 modules

      Audio

      Realtek ALC650 (AC-Link)

      Supports 6CH DAC for AC3 5.1 CH purpose

      Professional digital audio interface supporting 24-bit SPDIF OUT

      Card Reader (Optional)

      Supports Memory card (MS or SD) Interface

      Supports SONY Memory Stick Interface/ SD Memory Card Interface

      Supports Compact Flash ROM Interface

      System BIOS

      SoftMenu III Technology to set CPU parameters

      Supports Plug-and-Play (PNP)

      Supports Advanced Configuration Power Interface (ACPI)

      Supports Desktop Management Interface (DMI)

      Write-Protect Anti-Virus function by AWARD BIOS

      LAN

      On board Realtek 8100B single chip Ethernet controller interface

      10/100Mb Operation

      User friendly driver included

      Multi I/O Functions

      2 Channels of Bus Master IDE Ports supporting up to 4 Ultra DMA 33/66/100/133 devices

      4 Channels of Bus Master IDE Ports supporting up to 8 Ultra DMA 33/66/100/133 (RAID 0/1/1+0) devices

      4 USB 1.1 Connectors

      On board VIA VT6202 USB 2.0 header for four extra USB channels

      Three 1394a fully compliant cable ports at 100/200/400 megabits per second

      Audio connectors (Line-in, Center/Sub, Surround Spk, Front Spk, Mic-in)

      Miscellaneous

      ATX form factor

      1 AGP 1.5v slot, 3 PCI slots

      Hardware monitoring - Including Fan speeds, Voltages, System environment temperature

      Motherboard Pros & Cons

      Pros and Cons

      It boils down to a few obvious reasons why this board made the top of the list. The AT7 has the capacity to support an obscene amount of hard drive space. When working in broadcast design with uncompressed video, that space is going to be needed. External storage solutions of any substance are extremely expensive. The AT7 could feasibly run eight 160 gigabyte drives off the HighPoint controller. That's over a terabyte of hard drive space, which is almost 1000 hours of video at DVD quality (a rough check of that math follows below). As I said before...it's an obscene amount of hard drive space. Data integrity is a concern, but a mirrored array can be easily set up. As a rule, in a professional work environment, projects should be, and are, backed up to external media as they are completed.
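
      The rough check, with the caveat that the bitrate is my own assumption (about a 3 Mbps MPEG-2 stream, the low end of DVD bitrates), not a figure from the article:

        drives, gb_per_drive = 8, 160
        total_gb = drives * gb_per_drive                 # 1280 GB: over a terabyte
        gb_per_hour = 3e6 * 3600 / 8 / 1e9               # 3 Mbps -> ~1.35 GB/hour
        print(f"{total_gb} GB, ~{total_gb / gb_per_hour:.0f} hours of video")   # ~950 hours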

      The AT7 has 4 USB headers, which is becoming commonplace but is always a benefit. The AT7 also features USB 2.0 support, and it's good to have technology that looks forward, anticipating options rather than falling quickly into obsolescence.

      Two built-in 1394a (FireWire) ports were of great value. Shoving large files (400-800 MB) around a network can be excruciatingly slow. A quick solution was to transfer data to an external FireWire drive and then walk the drive from system to system as required, which isn't too often. It's a reusable and fast conduit for large file transfers between the graphic workstations and the edit suite, Mac or PC. It's true: not every business is perfect, and the IT folks just haven't got around to connecting the graphic design workstations with the non-linear suites on their own large bandwidth network.

      The AT7 came with other onboard features that presented a cost effective solution compared to purchasing 3rd party PCI cards, including surprisingly good 6-channel sound and a NIC.

      There is only one caution with the AT7 and one issue.

      The AT7 does not have parallel or serial ports on the back plane. It is a legacy-free motherboard. If there is a need to attach these types of peripherals then the AT7 will disappoint.

      The issue with the AT7 was questionable support for the new AMD Thoroughbred processors. The AT7 wasn't totally compatible with this new series of processors. It was extremely unstable with any amount or combination of DIMMs of Registered ECC RAM. Unbuffered RAM in any amount or combination would eventually generate a HARDWARE MALFUNCTION blue screen. This occurred every 3-5 hours for no apparent reason. It is hoped that a future BIOS will fix this, or that future AT7 boards will be tweaked at the assembly plant.

      Please note that a 1900+ Palomino processor functioned beautifully with 4 DIMMs of 256 MB PC2100 memory, in either Registered ECC or unbuffered form. The AT7 test system chugged magnificently through render after render without a problem. I hope ABIT is focused on the concerns pointed out here and will have a solution soon.

      A Clear Choice

      If you have the power to do more then you have the power to create more. The final product is then not limited in look and feel by the hardware.

      This comment comes from the art director and makes me cringe, as powerful hardware costs powerful bucks. Complex 2D and 3D work has a tendency to eat video cards for breakfast. A fast gaming card usually does not have the supporting features and will quickly expose its shortcomings under such a task, especially in 3D design. Enter the powerful Matrox Parhelia, at a significantly less than powerful cost.

      A clear choice

      The background on the Matrox Parhelia 512 comes from Icrontic's initial review.

      The Parhelia-512 is the world's first 512-bit graphics processing unit, packed with 256 MB of DDR on board. A 256-bit memory interface running at 275 MHz (DDR) shovels out a hefty 17.6 GB/s of memory bandwidth.
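
      The 17.6 GB/s figure falls straight out of the interface width and clock: a 256-bit bus is 32 bytes per transfer, and DDR memory transfers twice per clock.

        bus_bytes = 256 // 8                   # 32 bytes per transfer
        clock_hz = 275e6                       # 275 MHz memory clock
        bandwidth = bus_bytes * clock_hz * 2   # x2 for double data rate
        print(f"{bandwidth / 1e9:.1f} GB/s")   # 17.6 GB/s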

      Matrox is well known for their world class DualHead dual monitor support, and now they have taken it one step further by adding a third monitor. The third monitor opens up a new era of gaming that Matrox has dubbed Surround Gaming. How are they going to do this, maintain frame rates AND take gaming environments to the next level? Matrox created a Quad Vertex Shader Array made up of four 128-bit vertex shader engines. Add the highest quality trilinear and anisotropic filtering through their 64 Super Sample texture filtering. Matrox also boasts that their 36-Stage Shader Array is the most complex rendering engine ever built. Smooth it all out with 16x Fragment Antialiasing (FAA-16x).

      SURROUND GAMING obviously wasn't a priority in a video card for broadcast design. It wouldn't be productive for the designers to be fragging away in Quake instead of doing their work. But they still did anyway. The term used was "research", but I didn't believe that for a minute. Below, Softimage XSI occupies two monitors and the third is available for Illustrator. This is very handy for changing textures inside Softimage.

      Another phrase floating about the Matrox offices these days is SURROUND DESIGN. In the past PC monitors got larger as graphic designers needed more elbow room to work. Then came two monitors, providing space to spread out, and there isn't a system in the author's design department that isn't dual monitor. If it's single monitor then it's for e-mail, and that's only because the hardware hasn't been replaced yet.

      Then in came the Parhelia sporting triple monitors and the designers looked at me as if I was nuts. Sebastian MacDougal of Matrox explains:

      Matrox Parhelia and Surround Design are enjoying a lot of support from design focused Independent Software Vendors (ISV's) who agree that the more you can see, the more productive you will become. The ability to either spread a project across three displays or having the ability to place various windows strategically across your desktop for better organization is something that workstation users have been asking for, for years. However, in the past it required using multiple cards which drastically reduced performance, and unless you are using Parhelia, this is still the case with competing graphics solutions today. But perhaps the most substantial benefit for the ISV's that we work so closely with is that Surround Design, in most cases, requires no direct intervention at the SW level in order to get it to work, meaning it is very easy for most ISV's to support and the advantages are enormous. To give you an idea, with the current 1.01 driver, Parhelia and Surround Design is optimized for: Softimage|XSI , 3ds max, AutoCAD and Microstation, with many other applications to follow shortly. At Siggraph 2002 in San Antonio Texas, the reception on the part of attendees to Parhelia and Surround Design was tremendous and it is completely understandable. An interesting analogy is how designing on one monitor is similar to a horse with blinders, having three displays just opens things up and allows you to be more productive.

      Initially the designers didn't know what to do with the third monitor, but in time they began using the extra display, each in their own way. Because the system had sufficient power and resources they could work in two or three programs simultaneously. For example, After Effects is much easier to work in over two monitors, and the third monitor allowed Photoshop or Illustrator to remain open and easily accessible to adjust or create elements for use within the After Effects project. The Parhelia has the memory size and graphics processing power to allow smooth interaction with these programs. Combine this with the strength of the CPU and available system RAM and many a designer was kept happy.

      How a user works with three monitors is up to them, but a third display enables a user to work within a program that is better suited to two monitors AND keep access to other tools without having to minimize or hide the main program. For example, Adobe After Effects stays open across two monitors and Photoshop remains accessible on the third. The pictures above speak louder than words.

      One of the Parhelia's strong selling features is, what Matrox has termed, GigaColor. This feature and its benefits were expanded upon in Icrontic's first review.

      Dig around and there's a feature that most may not pay attention to but for the 2D/3D graphics professional and even the home user it will mean stunning images right to the desktop. Matrox hung the term 10-bit GigaColor on it. To you and me it is 10-bit video technology and it runs through a very speedy dual integrated 400 MHz 10-bit RAMDAC. That leaves the competition many MHz back. 10-bit technology is the same technology that allows for precise picture control in home theatre DVD players. 10-bit technology can partially be found in high-end video cards that cost thousands of dollars.

      The difference is that Parhelia-512 delivers 10-bit technology through the entire card.

      It must be told that 10-bit GigaColor still remains a bit of a mystery, though it has been beaten into my ears by the kind folks over at Matrox. 10-bit GigaColor increases the shades of any given color from the standard 256 to 1024. The color palette leaps from 16.7 million to 1 billion colors. This is a benefit when acquiring images, such as through a scanner, where image control is more precise at time of capture. A greater range of shades of a color is available, thus greater control over what is kept or discarded is possible. This would primarily benefit print and magazine pre-press artists.
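
      The arithmetic behind those palette numbers (three color channels; any alpha bits ignored):

        for bits in (8, 10):
            shades = 2 ** bits
            print(f"{bits}-bit: {shades} shades/channel, {shades ** 3:,} colors")
        # 8-bit:   256 shades/channel,    16,777,216 colors (~16.7 million)
        # 10-bit: 1024 shades/channel, 1,073,741,824 colors (~1 billion)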

      But sadly we people in television deal in comparatively grainy and low-res images, and the benefits of GigaColor didn't jump out and bite us on the nose. For the record, the designers did notice the desktop appeared "more saturated and colorful" when it was pointed out to them. You have to understand that designers work with what they have. Technology is not such a big deal to them. They care about what they can do with it rather than what it has "under the hood". We would be much more satisfied if the rest of the computer system moved to a 10-bit color base, but that would mean new technology for...well...everything.

      There is good news on the horizon about GigaColor according to Matrox.

      Upcoming OS's from Microsoft (i.e. Longhorn) will include support for greater than 8-bit per color channel precision at the desktop level, which is why you are seeing more and more companies include support for higher precision color depths. But of course, we were the first and are the first shipping product to offer that functionality, and as we make our own boards you know you'll get the right components for sustained image quality

      The designers were quick to adapt to the flexibility the Parhelia offered and enjoyed working in an environment that produced clear, crisp images to the desktop. The only drawback is each of them would like a Parhelia of their own and 3 digital flat panels. That means a few more dollars added to this year's capital purchase forms. More paperwork....just what I enjoy.

      Keeping Cool

      Keeping your cool.

      There has been criticism of AMD processors for running "too hot" compared to INTEL processors. (Please note that the 2.8 GHz INTEL processor runs hotter than the AMD 2600+.) Many enthusiasts have clamped, bolted, hung or clipped every type of copper or aluminum heatsink to the processor in order to combat the excess thermal heat. Sitting on these metal monsters can be screaming fans generating massive airflow to keep the processor "on ice" as game play heats up.

      Keeping processors cool during normal operation is a matter of a few good choices, a bit of computer know-how and the right cooling configuration. Overclocking is a different story as the processor is subjected to increases in voltage and MHz resulting in above spec stress.

      Two background articles that may assist in the theory and configuration of an efficiently cooled computer are Case Cooling Tweaks Part 1 and Case Cooling Tweaks Part 2.

      A dozen or so heatsinks have been planted onto my processors over the years, and there is one that still remains my favorite.

      GlobalWin's WBK38 has been a heatsink of personal choice. The WBK38 allowed for the mounting of a large diameter, low dBA fan. It blew a whopping 55.1 CFM at a very tolerable 36 dBA. In a properly vented case this remained a quiet and highly efficient heatsink/fan combination, and still does.

      Why the trip down memory lane? If it works then don't knock it, and there hasn't been a heatsink that has enticed me to finally retire the "good ol' WBK38".

      GlobalWin offered up their new CAK4-76T for the test system to see if they could sway me to the higher efficiency of copper. The CAK4-76T has a built-in temperature sensor to speed regulate the 70 x 70 x 15 mm fan. At 30 degrees Celsius the minimum airflow is 23.1 CFM at 24.7 dBA and at 38 degrees Celsius the maximum airflow is 36 CFM at 35 dBA. Compared to the WBK38/92 mm. fan combination it was similar in noise level but came up 20 CFM short.

      Fan (per fan)

                                   30C sensor temp     38C sensor temp
      Operating Voltage            DC 10.2 ~ 13.8 V
      Rated Voltage                DC 12 V
      Input Current                0.2 A max.          0.28 A max.
      Input Power                  2.4 W max.          3.36 W max.
      Bearing System               one ball, one sleeve bearing
      Fan Speed (RPM)              3000 ±15%           4500 ±15%
      Max. Air Delivery (CFM)      23.1 ±15%           36 ±15%
      Noise Level                  24.7 ±2 dBA         35 ±2 dBA
      Fan Safety                   UL approved
      Fan with RPM signal output   Yes

      Heatsink

      Dimensions                   70 x 66 x 40 mm
      Material                     COPPER 1100
      Weight                       595 g (1 lb. 5 oz.)

      Mounting Kit

      Retention Clip               Steel SK7 (quality material clip)
      Thermal Interface            High thermal conductive interface
      Material                     GW101/GW103
      Connector                    Molex 2510/2695 3-pin

      The CAK4-76T comes packaged with the necessary mounting hardware for either INTEL or AMD processors. It is important to note that the cooling requirement of this PC system was to control temperature and keep noise to a tolerable level. 30 dBA is similar in level to whispering, and more often than not the average speed of the CAK4-76T fan was 3300 RPM, which puts it below 30 dBA. That is extremely quiet.

      GlobalWin has improved their clip mechanisms and the CAK4 attached with little effort; a small flat-blade screwdriver comes in handy. A further improvement would be a three-socket-ear clip instead of the single-socket-ear design. I have to say that, for everyday use, the CAK4-76T may find a permanent home. It's quiet and efficient for a workstation.

      The heatsink is also just another player in the heat game. As the Case Cooling Tweaks articles point out the correct choice of a PC case and additional fan modifications can help win the battle against heat and noise.

      On the case

      Breaking out of the beige box...the right way.

      AMK Computers came to the table with the SX1000 and set up a workstation case that delivers looks, cooling efficiency and a few other treats. The base SX1000 case comes standard with:

      Space for 4 drives in a removable bay

      Space for a Zip drive and floppy in a removable bay

      Fan mounts (two front, two rear)

      Space for 4 external 5.25-inch drives

      Locking access panel

      Locking front drive cover

      To this AMK added:

      A side window with 2 more fans

      A top blowhole

      VBLOCK sound dampening material

      Cable Loom

      Rounded cables

      Digital Doc 5

      Enermax 465 PSU (FC)

      Fan filters

      The neon lights were thrown in for this article just to make the case look better. I think they add a few MHz here and there due to the fact the case "looks" faster.

      Seven case fans plus the two Enermax PSU fans and the heatsink fan may seem like a lot, and loud. Quite the opposite: all the case fans were kept to ADDA 25 CFM/25 dBA specifications and regulated by the Digital Doc 5 fan controller. When the fans were not needed they were shut off. Only two fans, the top exhaust blowhole fan and one of the rear exhaust fans, were kept constantly running (in addition to the PSU and heatsink fans). The two always-on fans provided continual airflow yet emitted a minimum of noise. Again, the computer in non-stress applications, or when not rendering, ran at below 30 dBA...less than a normal whisper.

      The heatsink warms with the processor as the system is stressed. The fin design of the CAK4-76T allowed the tips of the Digital Doc 5 thermistors to be inserted between the fins. This did not block airflow, and the configuration allowed the Digital Doc 5 to directly read the temperature of the heatsink. Fans were turned on or off in a preset order to compensate for increases or decreases in temperature (a sketch of that staging behavior follows below). At "full roar" my cat was louder.
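
      For the curious, the staging amounts to something like the sketch below. The thresholds, fan names and hysteresis value are hypothetical; the real Digital Doc 5 is configured from its front panel, not programmed.

        STAGES = [                 # (turn-on temperature in C, fan)
            (38, "front intake 1"),
            (41, "front intake 2"),
            (44, "side window 1"),
            (46, "side window 2"),
        ]
        HYSTERESIS = 2             # degrees C of cooling before a fan stages out

        def fans_running(temp_c, running):
            """Return the set of staged fans that should run at temp_c."""
            new = set(running)
            for threshold, fan in STAGES:
                if temp_c >= threshold:
                    new.add(fan)                       # stage in as the heatsink warms
                elif temp_c < threshold - HYSTERESIS:
                    new.discard(fan)                   # stage out once it has cooled
            return new

        running = set()
        for t in (36, 39, 45, 43, 37):                 # a render warming up, then cooling
            running = fans_running(t, running)
            print(t, "C ->", sorted(running) or "exhaust fans only")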

      The last cooling tweak was to apply the WPCRSET tweak to enable the CPU halt command. This idles the processor when it has nothing to do and drops temperatures 5-10 degrees Celsius below pre-tweak levels. Besides updating the drivers, the WPCRSET tweak was the only software OS "hack", if it could be called that.

      In order to test this configuration a Softimage project followed by an After Effects project was rendered out. The Softimage render took approximately 50 minutes (the first flat peak) and the After Effects render (the second peak) took 10 minutes. The following graph shows the temperatures never exceeded 46 degrees Celsius (23.5 C room temperature), which is only a 10-12 degree Celsius increase over baseline temperature. That's a very satisfactory result, especially for a system that operates through a range of 25-35 dBA.

      The neon lights are available as an option and it was rather humorous watching designers and other employees wander by, stop, and back up to take a second look. Most came in and peered into the side window of the PC and said the word "cool" a lot. It is true that these people know of nothing other than the "beige box". They asked "why the window?" The answer was "why not?"

      Computers can become very dusty, even in apparently clean offices. Filters greatly cut down on the amount of dust that collects and clogs a PC after months of use. Filters do reduce airflow but they are worth it. A picture is worth a thousand words and this was the result of only 3 weeks of operation. The fans these filters covered were not even spinning at all times; this dust was the result of what was sucked into the case (or tried to be) by the airflow generated by the back plate and PSU fans. The filter on the left is clean and the one on the right...ugh.

      The plethora of benchmark programs can be important when determining what does which task faster or better. These are specific assessments of individual functions. For this article it was decided to add a few of our own real-world tests. It was also thought important to show how a change in one particular component could affect end results. It is hoped that the results of these tests will help you assess priorities in system configuration to match the priorities in your expectations.

      • Here's the rest:

        Test System & Programs

        The test system.

        AMD 2100+ Thoroughbred Core Processor

        AMD 1900+ Palomino Core Processor

        ABIT AT7 motherboard

        Matrox Parhelia 512 triple head video card in single-head mode*, 1.01.69 beta driver

        2 x 512 MB Micron PC2100 RAM

        Sony 52x CD

        LG 32x10x40x CDRW

        16 x DVD (not included in pricing)

        40 GB Maxtor ATA133 Hard Drive

        60 GB Maxtor ATA133 Hard Drive

        2 x Samsung 950p 19" Monitors

        USB Keyboard and Logitech USB wireless Optical Mouse

        Globalwin CAK4-76T HSF

        AMK SX1000 modded PC case (window, fans, cables, loom)

        Enermax 465 Watt FC PSU

        Windows XP Professional build 2600 updated

        Digital Doc5

        *dual and triple monitors enabled for Adobe After Effects and Softimage benchmarks only.

        Programs used:

        Sisoft Sandra 2002, ZD Media Business Winstone 2001, ZD Media Content Creation Winstone 2001, MadOnion 3DMark 2001 SE, Quake III Arena, Passmark Performance.

        Comanche 4, Serious Sam: The Second Encounter, GL Excess, Drone Z, SpecviewPerf 7.0, PS Bench,

        Adobe Photoshop 7.0, Adobe After Effects 5.5, SoftimageXSI 2.0.1, MediaCleaner Pro 5

        The above benchmark programs are publicly available. For more about Ziff Davis and the etesting labs program go here.

        After Effects Pt. 1

        Adobe® After Effects® 5.5 software delivers a set of tools to produce motion graphics and visual effects for film, video, multimedia, and the Web, whether working in a 2D or 3D compositing environment. After Effects is a main creative program here and works in concert with Adobe Photoshop, Illustrator, Softimage and a Media100 non-linear edit suite.

        A user interacts with Adobe After Effects through the GUI and produces finished work by rendering a project to the hard drive. The range of effects and elements After Effects can handle is far too lengthy to summarize accurately, but its palette of tools is guaranteed extensive. After Effects can therefore make a myriad of simultaneous, different demands on the CPU/GPU/RAM subsystems. To demonstrate the benefits of different hardware components, a real world After Effects project consisting of compressed and uncompressed video, EPS, internally generated and PICT text elements, transitions, size scaling, shadows, and treatments was chosen as the test project. Benchmark programs may examine individual demands on a system, but in the real world this may not be the case, and it is important to measure the results of simultaneous varied demands as well as one specific measurement task. It may not be a standardized test but it shows what to expect from a project that encompasses a lot of different tasks simultaneously.

        After Effects primarily uses the processor, video card and ram while a user is working within a composition window and timeline. Adjustments to a project are displayed in real time in the composition window. The faster each of the individual hardware subsystems are the smoother the interaction and the faster the composition window will be redrawn.

        When After Effects renders or builds the finished timeline it is the processor, RAM and hard drive that determine speed, as the video card is more or less bypassed. After Effects will call to the disk for information, have the processor calculate a finished frame, and then return that frame to the disk for storage. This process repeats for as many frames as are within the timeline. Remember that any video is a series of still frames; After Effects builds each single frame and "glues" it to the next to finally end up with a playable movie or, alternatively, a sequence of files.

        RAM is an important consideration with Adobe After Effects. It will function effectively with only 512 MB of RAM, but more RAM is better. Adobe recommends the following formula to calculate the amount of RAM needed to preview a composition.

        [(height x width x (bit depth / 8) x frame rate x resolution²) / 1024] / 1024 = MB/sec.

        The variables for height, width, frame rate, and resolution depend on the composition settings. Always use the maximum expectations to determine a base RAM requirement. For example, a preview of 10 seconds in NTSC broadcast format would plug into the equation thusly:

        [(640 x 480 x (32 / 8) x 30 x 1²) / 1024] / 1024 = 35 MB/sec.

        One then should come to the conclusion that 350 MB of available RAM would be needed to preview 10 seconds worth of an After Effects timeline.
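
        Adobe's formula is trivial to wrap in a function if you want to run your own frame sizes through it; this is just the equation above restated, and, as the article goes on to explain, its answer should be read as a ceiling rather than a real-world requirement.

          def preview_mb_per_sec(width, height, bit_depth=32, fps=30, resolution=1.0):
              # Adobe's preview-RAM formula, as quoted above.
              return (width * height * (bit_depth / 8) * fps * resolution ** 2) / 1024 / 1024

          rate = preview_mb_per_sec(640, 480)   # the NTSC example
          print(f"{rate:.0f} MB/sec, ~{rate * 10:.0f} MB for a 10-second preview")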

        That couldn't be more wrong.

        After Effects Pt. 2

        How much RAM is needed is dictated by the information in the picture and the compression codec used. For example, 30 seconds of a 640x480 white page will take up much less RAM than a video of a stock car race. That's because more information must be stored about the color changes in moving video. A white page is just that...white...and the program will figure out quite quickly that it can save time by repeating the same information about pixel color instead of storing unique information about each one. How much RAM is required depends on the variety of color, how often each pixel changes and the particular compression codec used. This will be the only time a strong recommendation is made: get at least 1 GB of RAM to make the After Effects experience more enjoyable. Get more RAM if longer previews or work in D1, HDTV or widescreen formats are expected.

        Two benchmark measurements identify the benefits of different processor components. The bars on the left feature a small jump in AMD processor speed to demonstrate how more CPU horsepower speeds up the CPU/GPU/RAM dependent RAM PREVIEW. The bars on the right demonstrate a small increase of CPU horsepower and its effect on rendering speed. This should help determine if the increase you are considering is worth it.

        The results show where the greatest impact lies in each of the two major functions of After Effects. A small increase in CPU has a big effect on rendering speed but not on real time RAM preview. When designing a system on a budget it is important to identify what is expected and, if budget restrictions require a choice, strike the desired balance between user interactivity and speed of rendering. Also anticipate that longer RAM previews or larger format previews require more RAM. You can literally watch the RAM fill: just leave Task Manager open and watch the page file usage creep up. The goal in making purchasing choices is to work backwards from what you expect in the end result.

        Softimage XSI

        SOFTIMAGE®|XSI v.2.0 is an incredibly powerful 3D tool that has the capacity to bring virtually any system to its knees especially if raytracing, radiosity or photon-mapping is used to a large extent. If this is the case then there definitely will be a loud scream of anguish coming from a solitary PC system. Softimage projects can become so system intensive that 100 finished frames can take an insane amount of time to render. In order to increase rendering speed many computers are equipped with specialty hardware and are tied into render farms in the single-minded task of rendering a single scene.

That's enough of the fire and brimstone about complex 3D rendering. Softimage works on a somewhat similar principle to After Effects. A faster and more powerful video card translates to a smoother interface where complex scenes can be manipulated in real time. Note that, unlike After Effects, Softimage does not have an interface for real-time previews of a finished frame. Users can manipulate objects in a choice of views, from wireframe mode to simulated real-time shading mode. To look at a finished frame a user must render it to disk, which bypasses the GPU. A faster processor will result in a faster render. The amount of RAM is not as great an issue, as the user is working frame by frame and the graphics card is doing the bulk of the work within the GUI.

This is a most basic overview, and there are specialty hardware components that can enhance the speed and interactivity of complex 3D scenes and programs. The designers working on the test system use Softimage on a less complex level to provide enhancements and elements for commercials, promos and station ID elements. Though their work is quite complex to some, it is a far cry from the special effects of major film productions.

Speeding up Softimage requires thinking on the same two levels as After Effects. The Softimage GUI can display very complex and varied effects but it does so in simulated mode. Displaying the finished rendered product in real time is beyond the capacity of most video cards. But the video card needs the hardware features to accommodate smooth manipulation of 3D objects and the proper display of simulated effects. Softimage, AutoCAD and various other 3D programs need to access those hardware features in order to function and display the image properly. Don't assume that a fast gaming card comes with these physical hardware features, or that those that are onboard are...unlocked. Remember that last word, as it comes to make sense later.

It is quite true that a fast gaming card will be a poor performer in Softimage, if it works at all. Conversely, a workstation-class video card may make for an enjoyable user experience in a complex 3D application but will deliver lower frame rates in games. It is safe to say that different applications require different hardware tools. Matrox provides some insight into the locking and unlocking of features on video cards.

The Parhelia workstation solution differs in no way, shape or form from the retail Parhelia, which does go against the grain. Competitors tend to artificially inflate prices for their workstation products by unlocking features, even though the chip may be identical to or based upon the same technology as their retail offerings. This so-called feature locking doesn't occur with Parhelia when compared to our retail solution. This is the key point here, so whether you are a prospective Parhelia client, purchased a retail board or have one integrated in your system, all Parhelia boards have access to the same workstation functionality and Surround Design support.

Softimage, by default, is designed for a single monitor interface, yet the layout can be customized for dual and even triple monitors. It was most interesting to hear the comments made when the designers started to spread their workspace out to the second monitor and then to the third. Since Softimage bypasses the video card in the render process, there was no performance loss.

A simple animation of 100 frames in length was rendered out with two different processors. As rendering in Softimage relies mostly upon the processor, the faster processor should result in a faster render. The animation data is as follows:

You may ask: is this any good? Just for laughs I let the art director take the project to his dual Xeon 1.8 GHz, nVidia Quadro-driven power box and he did beat the time by a full 10 minutes. He also beat the price by a full $3000 (cost of purchase of the art director's system vs. the article's test system). Somehow I'll wait the 10 minutes and keep the 3 grand in my pocket. A single Xeon 450 with a Quadro card takes over 3 hours. Those numbers are completely unofficial but they let you see the range of performance.

        Benchmarks Pt. 1

        Before the benchmark

Benchmarks are a yardstick we use to measure performance. No one benchmark stands above the rest as the de facto tool. Benchmarks are useful to identify major performance problems in a system. They can also be used to identify the impact of hardware changes on overall system performance. This is very useful, especially when combined with the software expectations. A faster processor may deliver faster renders but not help with a smooth GUI. A better video card may deliver a smoother interface but won't help if long RAM previews are required. The performance enthusiast and overclocking crowds edge each other out by a handful of points or frames. Remember this as you look at graphs and charts. Don't look just at "who's in front" but also at by how much, both in points/frames and in cost.

        3D Mark 2001 SE

The granddaddy of benchmarking tools, measuring how effectively a system runs 3D graphic applications. Moving from the 1900+ to the 2100+ showed only a small increase in performance. This isn't critical for workstation applications but may be the goal of gamers looking to squeeze every frame-per-second gain from their systems.

        Sisoft Sandra

        Small increases in processor speed appear to have the greatest impact in Sandra's multimedia benchmark.

        Benchmarks Pt. 2

        GL Excess

        Quake III Arena

        Serious Sam the Second Encounter

        Business Winstone and Content Creation

        Benchmarks Pt. 3

        Code Creatures

        Commanche 4

        DroneZ high quality.

        SpecviewPerf 7.0

This benchmark really tests OPENGL performance, and it is important to note that there is a large discrepancy between our results and the results from Matrox on their test system. We are investigating this. (Our system scored much lower.)

        PS Bench

We added a new benchmark to our tests. PS Bench looks at 21 individual tests in Photoshop 7.0, and the results can be looked at individually or as a cumulative score. There are three levels to PS Bench: basic, intermediate and advanced. This test shows the results of the intermediate tests.

        Media Cleaner Pro

Three tests were conducted to compress a 651 MB 640x480 NTSC Quicktime file. The larger the file, the more good a faster processor is going to do you.

        In the driver's seat

        Who's in the driver's seat?

If you were put into the driver's seat of a race car, would you be able to win a race against a professional driver in the exact same car? Probably not, given that you don't know how to properly drive a race car.

Computer hardware is just that...hardware...and it can't do anything without being told how to do it. While hardware itself goes through advancement cycles as new technology emerges into the mainstream, it isn't worth much if it doesn't work, or doesn't work well. Driving the consumer PC market forward are games. Gaming video cards have fallen into 3-month product cycles, with new versions being announced before the prior one has even hit store shelves.

A comment from Mark Randall of Serious Magic in a TechReport article piqued our interest in looking beyond the hardware for performance.

        The problem isn't the hardware, it's the software drivers. In fact, the speed could be dramatically increased with revised software drivers. However, no manufacturer has presently made this aspect of driver performance a priority. The first card manufacturer to address this issue would deliver the following benefits to their users:

Mr. Randall goes on to state that software drivers, properly addressed, could cut render times, record game play in real time, capture motion images off the desktop or even stream video out to the internet directly from the video card.

        Drivers can indeed be a problem. Ask anyone who's experienced a Blue Screen of Death (BSOD). The $64 question is about the driver itself. Are we, the persistent purchaser of PC parts, being cheated out of performance that could be ours without a hardware upgrade?

A graphics card is built on the power of the Graphics Processing Unit (GPU). This is a processor chip, and it doesn't make financial sense to reinvent the chip each time a new video card is released. The same is true for CPUs. The AMD Thunderbird chip scaled all the way up to 1.4 GHz before the Palomino core took over up to 1.73 GHz, and now the Thoroughbred core extends the range past 2 GHz. The same can be said for INTEL PII, PIII and PIV architecture.

        The point is that features are either locked or unlocked on some graphic cards and the differences between adjacent levels of product may be very subtle; as subtle as a fresh set of tires and a tweak to a spoiler setting may make the difference between winning and losing the race.

If all the hardware is available, then how visually enjoyable or complex a game may be, how fast a render is, or whether the card can support the software itself may come down to what features are hidden. Case in point: in the early stages of this article Matrox was developing and refining Parhelia drivers for software applications such as Softimage. Use the official 2.31 drivers with the Parhelia and Softimage won't recognize the card as an OPENGL card, won't access those OPENGL features, and won't perform as expected. One or two driver revisions later and Softimage is happy.

But it isn't as easy as that. Between the software with its associated drivers and the hardware sits the Application Programming Interface (API) layer. Hardware and software speak two different languages and they need some way to properly communicate with each other. Explaining the API is a fairly complex matter, but think of computer hardware as your body. The software resides in your head as a desire to do something like walk, talk, run, jump, or eat. Between that "software" thought of wanting to walk across a room to pick up an apple and take a bite out of it and the mechanical act of actually doing it is a series of "hidden" instructions that "just happen". You don't really think about activating individual muscles to tighten and loosen on that incredibly precarious journey of balance as you stride across a room. You don't actively plan and coordinate in 3D space the relation of the apple to you or to your hand and then calculate the placement and pressure required to take a bite. These things you just...do.

The API acts in a similar fashion, taking what the software wants to do, translating it for the hardware to do, and then returning the result back to the software to display. The most recognizable examples of API layers are Microsoft's DirectX and OPENGL, but other software can have its own proprietary API layer, as ADOBE does with its programs. DirectX and OPENGL take interesting approaches to 3D graphics, and each has its inherent advantages and disadvantages; these can be more than just coding issues...they can be political. For a far superior explanation I suggest a visit to www.jakeworld.org to read guru game programmer Jake Simpson's article on Graphical API History.
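As a toy model of that layering (our own sketch, not how DirectX or OPENGL are actually implemented), the application asks the API for an operation and the API routes it to whatever driver is installed:

# Toy model of an API layer: the application never talks to the driver directly.
class ParheliaDriver:
    def draw_triangle(self):
        return "rasterized by Parhelia hardware"

class SoftwareDriver:
    def draw_triangle(self):
        return "rasterized in software (slow)"

class GraphicsAPI:
    def __init__(self, driver):
        self.driver = driver  # bind to whatever driver the system provides

    def draw(self, shape):
        if shape == "triangle":  # translate the request into a driver call
            return self.driver.draw_triangle()
        raise NotImplementedError(shape)

api = GraphicsAPI(ParheliaDriver())
print(api.draw("triangle"))  # the application only ever sees the API

Swap the driver and the same application code keeps working; improve the driver and the same hardware gets faster. That, in miniature, is the argument of this whole section.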

Drivers are much more complicated than one might think. They can be a proverbial house of cards. Each game, application, tool, player and so on interacts with the video card in a subtly different way. Drivers are initially designed to work with everything but may not work to their fullest potential. That's where optimization begins, and the people who build drivers take up the task of figuring out what enhancements or tweaks can be made to gain performance and stability. This must be done one program at a time, and there is an extensive list of programs. Just think of how many games there are, then begin the task of trial and error to get the best performance out of each individual one.

It isn't as simple as taking those individual driver enhancements and putting them into one set of drivers. A "tweak" in one enhancement can turn another tweak into a problem, and fixing that problem can create four others. Drivers are a balancing act between performance, stability and cost. It's almost an unobtainable triangle. Achieving both performance and stability takes an unlimited pot of R&D money. Achieving great performance may cause instability. Achieving stability may cost performance.

        And around it goes.

So hardware manufacturers strive to achieve balance by designing their product to fit a niche purpose. It would take too much time, effort and money to build the fastest, most stable gaming/workstation/single monitor/dual monitor/triple monitor/multimedia/digitize/output video card. It could be done, but the product would cost 10 times what anyone would accept.

Manufacturers in the video card market choose their priorities based on what market they want to capture. Gaming cards and their drivers are optimized for games, with less emphasis on workstation applications. Workstation video cards are optimized for the reverse. Let's face it: there's more money to be made in gaming cards than in workstation cards.

If, for the most part, the hardware can support significant performance improvements, then is it the fault of the API, the software or the drivers, and are we being cheated? This brings us back around to Stephan Schaem, Chief Technology Officer of Serious Magic.

In some cases, card manufacturers have chosen to differentiate their 'consumer' vs. 'professional' cards by introducing essentially identical cards with different firmware and software drivers. The manufacturers state that the additional cost of the pro product goes to fund development of advanced driver features that are particularly useful in production environments. The issue that Serious Magic has focused on is a different one. It's a significant issue in PC graphics card performance but we don't believe it was an intentional omission.

In a nutshell, here's the issue. While today's graphics cards can render images very quickly, the software drivers are painfully slow at getting rendered output back over the AGP bus and into the PC where it could be saved and put to work by users. Current generation software drivers achieve only a fraction of the theoretical download transfer speed that the hardware you've already paid for is capable of. It's remarkable that a graphics card with a video input and some video recorder software can record TV-quality images to the PC hard disk in real-time, yet the same card can't record its own renderings at even 1/10th this speed. Serious Magic has made a benchmark which demonstrates this problem freely available on our website:

        www.seriousmagic.com/3D-Dloadbenchmark.zip

        The problem isn't the hardware, it appears to be the software drivers. This is supported by the fact that the external video input to a VIVO-enabled graphics card can be moved over the AGP bus very quickly. Also, some software drivers under Windows 98 are able to move the rendered output very quickly. However, in all cases under Windows 2000 and XP the speed of transferring the 3D rendered results of the same card is very, very slow. It seems that the speed could be dramatically increased simply with revised software drivers. While this is a significant issue for many business, educational, production and scientific tasks, it is not a feature that gamers are clamoring for (although it would make capturing movies of game output faster, this is not as coveted as a higher frame rate). We believe that this is why no manufacturer has yet made this aspect of driver performance a priority. Even the more expensive cards with drivers targeted at the "professional" market are equally poor at this task. Hopefully, with the game market rapidly reaching saturation, manufacturers will realize that the growing business, educational, production and scientific markets can be substantial. Although each of these markets may be small when compared against the game market, when combined they can add up to meaningful numbers.
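Some rough arithmetic shows why this stings. The transfer rates below are illustrative assumptions for the sake of the calculation, not measurements from the Serious Magic benchmark:

# How many rendered frames per second can each path move? (illustrative rates)
frame_bytes = 640 * 480 * 4  # one 32-bit NTSC-sized frame
paths = [("video-capture input path", 30),   # enough for real-time TV recording
         ("3D readback at 1/10th of that", 3),  # the crippled driver path
         ("AGP 4x theoretical", 1066)]       # what the bus itself could carry

for label, mb_per_sec in paths:
    fps = mb_per_sec * 1024 * 1024 / frame_bytes
    print(f"{label}: {fps:.0f} frames/sec")

At capture-path rates a card sustains real-time NTSC with room to spare; at a tenth of that, recording its own renderings drops to a slideshow, which is exactly the gap described above.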

And don't tell me drivers don't make a difference. Here is an example of the same system benchmarked the same way, except for a change in video card drivers.

Speed! I need more speed Scotty!

What does the future hold? Processors, graphics cards and RAM are edging upwards in speed and bandwidth. The 3 GHz mark is within reach for both AMD and Intel. Matrox opens up a huge 17.6 GB/s pipe with the Parhelia, and DDR RAM is bumping up the performance ladder, as seen in the table below.

Memory name   Type name   Clock speed   Voltage   DDR clock speed   Data bus & bandwidth
PC100         -           100 MHz       3.3v      -                 64-bit, 0.8 GB/s
PC133         -           133 MHz       3.3v      -                 64-bit, 1.05 GB/s
PC1600        DDR200      100 MHz       2.5v      200 MHz           64-bit, 1.6 GB/s
PC2100        DDR266      133 MHz       2.5v      266 MHz           64-bit, 2.1 GB/s
PC2700        DDR333      166 MHz       2.5v      333 MHz           64-bit, 2.7 GB/s
PC3200        DDR400      200 MHz       2.5v      400 MHz           64-bit, 3.2 GB/s
PC4200        DDR533      266 MHz       2.5v      533 MHz           64-bit, 4.2 GB/s
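The bandwidth column is just the bus width times the effective clock. A quick sketch to verify (GB here meaning 10^9 bytes, as in the table):

def bandwidth_gb(bus_bits, effective_mhz):
    # bytes per transfer times transfers per second
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gb(64, 400))  # PC3200 / DDR400 -> 3.2
print(bandwidth_gb(64, 266))  # PC2100 / DDR266 -> ~2.1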

        Today's ultra-powerful CPUs, GPUs and RAM are tied to a proverbial boat anchor. It's the motherboard with its inherent latency and bottleneck problems. Further to that is the I/O rate of the hard disk or how fast data can be lifted from or stored to the platters.

A way to increase After Effects render speed is to increase disk speed, and this is accomplished by moving to a SCSI disk array. Unfortunately, within the restrictions of a home buyer's budget, that would push the cost above an acceptable level. SCSI disks have greater data throughput than IDE disks. ULTRA160 SCSI disks deliver a maximum of 160 MB/s and the newer ULTRA320 SCSI delivers 320 MB/s. The less expensive IDE drives can move data at a maximum of 100 MB/s (ATA100) or 133 MB/s (ATA133). We all know that actual performance with either SCSI or IDE is significantly less than the theoretical boasts. Any of these disks in an array can further enhance performance, with SCSI arrays reaching upwards of a theoretical 500 MB/s. Processors can handle a greater amount of data in After Effects but must "wait around" for the data to exchange with the hard drive.
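To put those interface figures in frame terms, here is roughly how many uncompressed 640x480 32-bit frames each bus could move per second at its theoretical maximum; real drives deliver much less, as noted above:

frame_mb = 640 * 480 * 4 / (1024 * 1024)  # ~1.2 MB per uncompressed frame
for bus, mb_per_sec in [("ATA100", 100), ("ATA133", 133),
                        ("ULTRA160 SCSI", 160), ("ULTRA320 SCSI", 320)]:
    print(f"{bus}: {mb_per_sec / frame_mb:.0f} frames/sec, best case")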

CPU, GPU, RAM and hard drives work through the motherboard, and therein lies the bottleneck. CPU, GPU and RAM may be able to accept and shovel out information with great speed and in huge gulps, but the pathway between components is relatively small and not nearly as fast. It's like trying to drain or fill a swimming pool with a garden hose. A solution is to get a heck of a lot more garden hoses, or a bigger hose.

Both AMD and INTEL are backing solutions, each in its own way. AMD brings HyperTransport with a bigger hose, and INTEL counters with the many-hoses analogy for PCI-Express, formerly known as 3GIO. INTEL is also deep into Infiniband. Infiniband is more of an "outside of the box" solution, providing reliability, availability, scalability and performance gains between data centers, such as server disk arrays. It isn't central to this article but is worth mentioning, as it will have an impact on how fast two systems can talk to each other. Both AMD and INTEL have the same goal: to increase the amount and speed of data moving through a system or device.

        HyperTransport

        Chipset? Who's got the chipset?

AMD HyperTransport Technology-Based System Architecture should be thought of on two levels: within the specific component and between components. In other words, HyperTransport technology, when applied to a component such as a processor, can raise the bar on how fast it can complete an operation or how much it can process at any given time. HyperTransport, when applied to the pathway between components, increases the amount of data (bandwidth) and reduces the time it takes to get around (latency). HyperTransport allows the pool to drain or fill faster thanks to a very much larger hose.

        HyperTransport promises some pretty hefty improvements to loosen the noose on bottleneck I/O problems. HyperTransport technology is used to provide high-performance interconnects between integrated circuits that comprise the system's core. Peripheral device interconnect is provided by existing industry standard busses such as USB, IEEE-1394, IDE, SCSI, Serial ATA, etc. In other words AMD is aiming to provide a large bandwidth, high speed platform. AMD makes the HyperTransport technology available and leaves the rest up to the other manufacturers. This may mean a bigger, better, badder motherboard.

        HyperTransport Technology

        HyperTransport technology is an advanced high-speed, high-performance, point-to-point link for integrated circuits. HyperTransport provides a universal connection that is designed to reduce the number of buses within the system, provide a high-performance link for embedded applications, and enable highly scalable multiprocessing systems. It was developed to enable the chips inside of PCs, networking and communications devices to communicate with each other up to 48 times faster than with existing technologies.

        Compared with existing system interconnects that provide bandwidth up to 266MB/sec, HyperTransport technology's peak bandwidth of 12.8GB/sec represents better than a 40-fold increase in potential data throughput. HyperTransport technology provides an extremely fast connection that complements externally visible bus standards like the Peripheral Component Interconnect (PCI), as well as emerging technologies like InfiniBand. HyperTransport technology is the connection that is designed to provide the bandwidth that the new InfiniBand standard requires to communicate with memory and system components inside of next-generation servers and devices that may power the backbone infrastructure of the telecom industry. HyperTransport technology is targeted at the networking, telecommunications, computer and high performance embedded applications and any application in which high speed, low latency and scalability is necessary.

        The AMD-8000 (HyperTransport) series of chipset components stack up to some large numbers promising a peak throughput of 12.8 GB/s.
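That 12.8 GB/s peak falls straight out of the link parameters. As a back-of-the-envelope check, assuming the widest HyperTransport configuration of a 32-bit link clocked at 800 MHz, double-pumped, and counted in both directions (our reading of the published figures):

link_bits = 32      # link width in each direction
clock_mhz = 800
ddr = 2             # data moves on both clock edges
directions = 2      # the link is bidirectional

gb_per_sec = link_bits / 8 * clock_mhz * 1e6 * ddr * directions / 1e9
print(gb_per_sec)   # -> 12.8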

AGP 8X doubles the bandwidth, moving the peak transfer rate up to the 2.1 GB/s notch.

        PCI-X (not to be confused with PCI-Express) significantly improves data transfer rates from 100 and 133 MB/s all the way up to nearly 1 GB/s peak data transfer.

USB 2.0 allows exterior USB peripherals to access the system via a 480 Mb/s (60 MB/s) pipeline.

It's a very simplified explanation, but it means that PC systems have the potential to make rather large performance jumps in the relatively near future. HyperTransport technology is a reality, as evidenced by nVidia's nForce chip, but don't expect full-featured HyperTransport motherboards to find their way onto store shelves for some time to come.

More on HyperTransport technology can be found at the website and in an AMD white paper.

        PCI-Express

        All aboard the Express!

INTEL stands behind PCI-Express and Infiniband. The performance stakes have been set even higher than HyperTransport's, with an initial signaling rate of 2.5 Gb/s per lane, per direction, and a projected advance to 10 Gb/s and beyond. It appears that PCI-Express is initially designed to "fit into the existing box" while Infiniband is designed for improved connectivity "out of the box", such as connecting server data centers.

PCI Express architecture is described as a high-speed, general purpose serial I/O interconnect that provides the bandwidth for current and future applications. After reading about PCI-Express it is almost impossible to sum up this technology in a single sentence, but the PR team managed to do so with a collection of words that commits to nothing yet sounds exciting. Nonetheless, PCI-Express has the same goal as AMD, with one major difference: PCI-Express has been designed to fit with present technology. It also partners well with Infiniband.
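The serial, per-lane design is why PCI-Express figures are quoted per direction. Assuming the announced 2.5 Gb/s line rate and 8b/10b encoding (ten line bits carry eight data bits), each lane moves about 250 MB/s each way, and lanes gang together for more:

signal_gbps = 2.5   # raw line rate per lane, per direction
encoding = 8 / 10   # 8b/10b: ten line bits carry eight data bits

lane_mb = signal_gbps * encoding * 1e9 / 8 / 1e6  # usable data rate in MB/s
for lanes in (1, 4, 16):
    print(f"x{lanes}: {lane_mb * lanes:.0f} MB/s per direction")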

HyperTransport is an entirely new chipset; thus, for example, a brand new motherboard would be required. It is up to motherboard manufacturers, but in order to satisfy consumer demand there may come a time when motherboards feature a PCI-Express port as an option for adding PCI-Express components. This may happen at roughly the same time that HyperTransport motherboards enter the marketplace. It's debatable which is the better approach. Is bolting new technology onto current platforms the better route, or is it best to start from an entirely next-generation platform?

A further question arises about data transfer to and from the hard drive platters. To get faster data transfer the disk needs to spin faster, or the data has to be packed more compactly, or a combination of both. There comes a limit to how small the data can be made. Seagate explains:

        Today, as the magnetic particles that make up recorded data on a hard disk drive become ever smaller, we are approaching a point where the data bearing particles are so small that random atomic level vibrations present in all materials at room temperature can cause the bits to spontaneously flip their magnetic orientation, effectively erasing the recorded data. Magnetic recording scientists and engineers have calculated that this so called "superparamagnetic effect" may become a serious technology issue for new products in only two or three years.

But as soon as it is said that it can't be done...

        Seagate has decided to use a HAMR to cram more and more bits of information per square inch into hard disc drives, pushing the limits of magnetic recording even further beyond what was ever thought possible. The Company today demonstrated its revolutionary Heat Assisted Magnetic Recording (HAMR) technology, which records data magnetically on high-stability media using laser thermal assistance.

        HAMR, combined with self-ordered magnetic arrays of iron-platinum particles, is expected to break through the so-called superparamagnetic limit of magnetic recording by more than a factor of 100 to ultimately deliver storage densities as great as 50 terabits per square inch. This will provide the capability for people to store the entire printed contents of the Library of Congress on a single disc drive in their notebook computers.
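To give that density some scale, a rough back-of-the-envelope calculation (our own arithmetic, assuming one usable side of a standard 3.5-inch platter with a typical hub):

import math

outer_r, inner_r = 1.75, 0.5  # inches; assumed usable recording band
area_sq_in = math.pi * (outer_r ** 2 - inner_r ** 2)
terabits = 50 * area_sq_in    # at 50 terabits per square inch
print(f"~{terabits / 8:.0f} terabytes per platter side")

That works out to roughly 55 TB on a single platter side, hundreds of times the biggest drives of today.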

        Hard drive space has increased at a phenomenal rate over the last 5 years. It used to be that 270 MB was considered a big disk and now 80, 100, and 120 GB drives are commonplace. (270 MB is less than one percent the size of a 120 GB hard drive.) Space increases and falling prices keep the consumer happy but what happens when the consumer turns their attention away from processor speed and disk space?

PCI Express and HyperTransport bring the promise of faster productivity on the computers we work with today. This will buy time until hard drives become something more than they are, and perhaps less integral to the real-time operation of a system. Fitting the multitude of software and hardware architecture together into a coherent working solution may take time, but it is on the horizon and we'll witness some form of its arrival sooner rather than later.

And where will it stop? Will we expect real-time renders, or projects rendered faster than real time? In whatever form it finally evolves, next-generation technology could make today's super-fast PC the 486 of tomorrow.

More on PCISIG can be found at their website and this FAQ. Also look to the other white paper on 3GIO. Infiniband information can be found at the website and in the FAQ.

Conclusion

Workstation-class PCs were always thought of as very expensive and powerful beasts, affordable only to those with deep pockets. Every day a new piece of hardware arrives on store shelves, and if properly picked it can make for some formidable computing power at very affordable prices. You don't need the best of the best hardware to do the work. Perhaps that diamond-tipped, gold-plated shovel isn't needed in the garden when a plain old spade will do the job just as well.

I commend those who waded through this. PC configuration is like a jigsaw puzzle; you need a few pieces of information to begin to see the big picture. After this you may be left with the question of what we would recommend. Our test system tackled the workload of a professional broadcast design department and performed well, even better than some existing systems. We thoroughly enjoyed the extra display that the Parhelia brought to the work environment. Remember that a workstation is not designed to be a competitive gaming computer, even though the designers had to be told on several occasions to do work instead of playing Quake. The AMD processors made a few INTEL loyalists reconsider. All of them were like curious children when we broke from the "beige box" syndrome. Those that knew the price of professional 2D/3D workstations said..."it cost what?" If you are building from the ground up or just adding on, determine what you want first. If it is workstation graphic power, then balance the GPU-CPU equation, as a little more money invested in one or the other may deliver better results in the end.

Begin with the end. Getting more from a workstation, gaming or home multimedia PC is a matter of answering the question of what is expected from the computer. Define your goals and get your hands dirty with a little research, and you'll end up with a PC that is better suited to your tasks and, perhaps, your pocketbook. We built a system that made many users very happy. It made my budget very happy as well. It is amazing the creative power that's available in computer hardware today.

        In closing I'm reminded of an old saying. Give a man a fish and he'll eat for a day. Teach a man to fish and he'll eat for life. In other words; if I tell you what's best now you'll have the best for a day but if I teach you how to choose what's best for you then you'll have the best for life.

        Icrontic extends their appreciation to the good people at ABIT, AMD, Matrox, GlobalWin and an ever-faithful AMK Computers for their assistance and involvement with this article.

        Personal Opinion

The use of benchmarks, charts, graphs and a lot of technical talk is valuable in the price vs. performance equation, but it all comes down to how a computer system "feels". Marketing surveys may show results such as 9 out of 10 users thought it was fast, but what happens if you are the 1 out of 10?

Our test system surprised us. Perhaps we were rooted in a MAC design world for too long, or caught with our pants down in keeping up with technology. The home PC enthusiast most likely upgrades more times in a year than an office does in 5 years. In unofficial comparisons our test system beat our single and dual processor G4s and nipped at the heels of a dual XEON Quadro system.

We didn't set out to build a gaming machine, but we were able to play games and not worry about being blown up when our computer couldn't keep up. Softimage and After Effects are what interested us the most. Fast renders and an easy interface made our heads spin. The Matrox Parhelia brought great amounts of real estate and great image quality, but also a few problems. Softimage is not the most well-behaved program at the best of times. It was cranky to begin with, and within a few driver tweaks Matrox engineers had it under control. There are still a couple of bugs, but they are getting harder to find and most users wouldn't stumble across them. Softimage did have some very minor display problems with the second and third displays, but these should be gone with the release of the 1.01 drivers. The other problem wasn't the fault of Matrox but ours: our cabinetry was configured for dual monitors, not for three. Nevertheless the Parhelia functions extremely well in single, dual or triple head mode. A lot of people hadn't heard of AMD, ABIT or GlobalWin and didn't know there were so many choices and options. They definitely marvelled at the AMK case.

We thought it couldn't be done on a budget. It's simply amazing the sheer computing power available at our fingertips. Immediately, half the computers that were twice the price...were made obsolete.

        Sure there were the doubtful who mocked and stood firmly by their convictions...as the familiar sound of the MACs crashing echoed down the hallway.
  • Considering how fast they've been /.ed I'm betting they tried using one of these as their webserver.
  • I can't see much in Netscape 4.7 (I'm at work. Management takes a poor view of hacking the servers and PCs here, so I can't upgrade.) BUT, I can see something in IE 5.0. ... Well, I could, before I refreshed the page. The page I saw also had database problems.
    Warning: Can't create a new thread (errno 11). If you are not out of available memory, you can consult the manual for a possible OS-dependent bug in /var/www/icrontic/last10.php on line 14

    Warning: MySQL Connection Failed: Can't create a new thread (errno 11). If you are not out of available memory, you can consult the manual for a possible OS-dependent bug in /var/www/icrontic/last10.php on line 14 Can't open connection to MySQL

    And now, IE is frozen trying to get a new copy of the homepage. Slashdot effect in the house.
  • PASSWORD? (Score:1, Insightful)

    by Anonymous Coward
Who's the genius who thought up displaying the connection string to end users, including the password??? Sure it makes it easier for the developer to debug, but displaying your db password to the entire Slashdot community isn't the smartest thing to do.
    • At my university (I work in the IS department) they still have it so anytime there is _any_ content exception (null pointer, stringoutofbounds, no records in database lookup, etc) throw this "blue screen of death" (no one gets it in the department except me, sadly enough) that has all these error messages.. since i'm a new hire my opinion of, "you should handle your errors more gracefully" is brushed off. i'm glad to see someone else has my view on this sort of thing :)
Who's the genius who thought up displaying the connection string to end users, including the password??? Sure it makes it easier for the developer to debug, but displaying your db password to the entire Slashdot community isn't the smartest thing to do.

      Happily, the actual password is not passed by the error message. The string 'PASSWORD' is merely a placeholder.

      Sadly, this can be turned off by prepending a '@' to the function call, like so:

@ $db = mysql_pconnect("localhost", $username, $password); // the leading '@' suppresses the call's error output
      I suspect that poor web server will have more than a couple nmap scans and connection attempts on port 3306. I hope that the admin blocked traffic from anywhere but localhost from within mysql's grant tables, although if he has the mysql port accessible to the world, then there's probably (hopefully) a reason for it.

      -B

  • to broadcast their mysql database connection details upon database error. I hope they remembered to put --skip-networking in the my.cnf :/
  • Maybe it's good. (Score:3, Interesting)

    by Oculus Habent ( 562837 ) <oculus DOT habent AT gmail DOT com> on Tuesday September 03, 2002 @02:12PM (#4190283) Journal
Well, in spite of being unable to read the site, I hope the box is a nice one for a reasonable price.

    We have been at the point where a desktop computer could create broadcast-quality video for some time, but a box capable of streaming live broadcast quality would be nice.

    Apple has some bits about CNN dot com doing on-location work [apple.com] on a PowerBook G3, and a recent story about a guy proposing [thegirlinthepicture.com] using a movie made in iMovie in a theatre. I imagine the Broadcast Box is probably not a Macintosh, but a dual 1 GHz G4 would probably do quite well also.

    --
    There is no reason to have links that read "here [slashdot.org]" or "this [slashdot.org]".
  • Now, if only they could figure out how to build a server that can instantly upgrade itself to prevent itself from being /.'ed.

    "Watch it, buddy. I have moderation points and I am not afraid to use them!"
  • I tried to access the page, and got:

    Fatal error
    Type of error Database error: connect(localhost,icrontic,PASSWORD) failed.
    Error message MySQL Error: ()

    Please report this bug to the webmaster.
    Thank you.

    I guess sometimes you just gotta dish out the dough for some serious systems.
  • by tenzig_112 ( 213387 ) on Tuesday September 03, 2002 @02:39PM (#4190484) Homepage
    As a professional editor, I've seen the industry's focus shift radically over the past ten years. When I got out of college, broadcast and post houses all wanted the same thing: single-use boxes with live output for on-air or linear editing. However, as uncompressed non-linear editing made disk-based editing an online option, the needs of broadcast and post diverged. Or did they?

    I train compositors on occasion at TV stations and I'm constantly surprised how many render-heavy tools they use in spite of the time constraints. It seems they want [need] the same capabilities and tool sets a post house might need, but with the ability to make quick changes and themed templates.

Today, people ask for [demand actually] collaborative tools. Even one-man-band outfits are becoming frustrated with turnkey systems with proprietary file formats and incompatible toolsets.

    Manifesto: Editors demand open systems with portable project files, open media formats.

I work on an sgi Octane right now, but once we go HD, we're looking at something as simple and cheap as a beefy Final Cut Pro system. Right now we have an Avid offline and a Jaleo online which talk to each other with 1970's-era EDLs. Even all-Avid facilities don't yet have the kind of transparency and portability that we really need.

    With a low powered FCP offline and a more powerful setup in the other room, you can swap whole projects back and forth [theoretically] with no information loss.

    Of course, we'll just hack the splash screen in ResEdit so our high-fallutin' clients don't notice the drop in prestige...
  • are they talking about?

If it's just standard NTSC video, like what comes out of a Mini DV camera, then a $2000 PowerBook G4 will be able to work with it easily. Apple (and many FinalCutPro user sites) will be happy to tell you stories of directors doing rough cuts of projects on the 'flight home' from the shoot.

Now, if it's HDTV we're talking about, that's a whole different ballgame. The really exciting HDTV format, 1080p/30, has a data rate of something like 121.5MB/sec, compared to 3.7MB/sec for MiniDV/NTSC. Working with this kind of video requires massive, fast SCSI hard drives configured in a RAID array, a huge monitor to see the output in full-rez (at least 1920x1080) and lots of horsepower if you want to work with video that has been compressed down for broadcast.

    Then again, while such workstations can be had for around $10,000 (check out Boxx Technology [boxxtech.com]), HD1080 has a look that rivals 35mm film at a fraction of the cost. Think about it, 1 minute of recan (i.e. the stuff big studios don't use and sell to independent film dealers) 35mm stock costs around $36, and processing can run up to 3x the cost of the original film stock. So basically, you can own an HD editing system for the same cost as about an hour of 35mm film. Not really that bad a deal when you consider that both the new Star Wars films were shot in HD...

    • Hey,

      A dual 1.25GHz G4 with 2 gigs of RAM, 288 gigs of internal Ultra 160 SCSI, with the Apple 23" Cinema HD, and Final Cut Pro, should do the job quite nicely.

      Granted, this config is more money than the box they built, but it sounds perfect for the situation you describe.

Maybe Apple's been planning this. Drive access speeds are high enough. Screen resolution [apple.com] is high enough (it's even called the HD Cinema Display), plus all G4 Towers can support dual monitors - one for the output, one for the palettes? Processing power is substantial with the dual G4s, and the System Controller [apple.com] reduces system bottlenecks. Final Cut Pro [apple.com] is gaining acceptance for its capabilities. FireWire [apple.com] offers the ability to quickly add more storage with little hassle. Apple even notes using FCP OfflineRT [apple.com] with an iPod [apple.com]. The availability of the 20 gig model would allow you to edit an entire 30-minute show in DV format, and the whole Lord of the Rings trilogy in OfflineRT.

      --
And now back to your regularly scheduled commercials.

      • Re:HDTV Power (Score:2, Informative)

        by falzbro ( 468756 )
        121.5MB/sec vs DV's 3.7? This simply isn't a fair comparison.

        DV is a compressed NTSC, natively around 22MB/sec.

So, if the gods that created DV compression can do similar with HDTV, we're talking about a 6x decrease in size. 120/6 = 20MB/sec.

        Interestingly, that number puts us right back to having the same hardware that is currently used to edit uncompressed NTSC.

        Perhaps the Video Toaster NT and its competitors will come up with a scheme to use the yet-to-be-named HDTV compression schemes via a standard firewire port? That sure would be spiffy.

HDCam and Panasonic's DVCPro HD formats are compressed using compression very similar to DV compression. Apple and Panasonic have announced that they are collaborating to produce a standard to move DVCPro HD signals through Firewire, which means that soon (probably some time next year -- I'd guess early summer) you'll be able to edit HD (of the Panasonic variety, at least) the same way you do your miniDV -- get the data directly off tape through firewire, no decompress/recompress required. For cuts-only editing this is really ideal. For anything where a lot of rerendering is required (like color correction and compositing) it's still better to go uncompressed. Of course you'll need to spend an extra 20 to 30 G's to do it...

It amazes me that people are willing to spend almost twice the price for a CD burner to save a total of 1 minute and 5 seconds of burn time per disk. What a waste of money for your average computer luser. And to think, they made them BURN-PROOF to allow multitasking. sheesh
  • I want a blotto box!
That's a great little site you have there Mortin. I mean, this is the third time you've been so-called "slashdotted" again. We know you have the hardware to handle the load because you gloat about it. Yet you refuse to correctly configure your damn server. psst. you're not getting slashdotted if you can ssh in without missing a beat or packet.
  • I think someone should have told Mortin the result of being /.ed. Poor Icrontic...
  • I thought that this guy was describing a PC configured for actual broadcast operations, where it's in the video chain and applying effects in real-time. Most broadcasters use overpriced AVID gear [avid.com] for this, and I thought this guy was proposing a PC replacement.

    Not. He's doing offline content creation. That requires power, but not the hard real-time of broadcast.

    Current AVID gear tends to be rackmount specialized hardware front-ended by Windows 2K boxes. The real-time video data isn't flowing through the Win2K boxes. But in theory, current PCs have the bandwidth to do the job. It's more of a CPU scheduling and data pipeline management problem. The problem is designing systems that never drop a frame.

    Apple does well at this because they design the whole device chain to work together.

Ok, to all the slightly dense people, it doesn't show the password, why the hell would it? Do you really think anyone in their right mind (bar tech-illiterate office workers) would use the word "Password" as a password??...

"The only way I can lose this election is if I'm caught in bed with a dead girl or a live boy." -- Louisiana governor Edwin Edwards

Working...