THX Caught With Pants Down Over Lexicon Blu-ray Player

SchlimpyChicken writes "Lexicon and THX apparently attempted to pull a fast one on the consumer electronics industry, but got caught this week when a couple of websites exposed the fact that the high-end electronics company put a nearly-unmodified $500 Oppo Blu-ray player into a new Lexicon chassis and was selling it for $3500. AV Rant broke the story first on its home theater podcast with some pics of the two players' internals, then posted a full suite of pics and tested the players with an Audio Precision analyzer. Both showed identical analogue audio performance and both failed a couple of basic THX specifications. Audioholics also posted commentary from THX on the matter and noted that both companies appear to be in a mad scramble to hide the fact that the player was ever deemed THX certified."

Comments Filter:
  • No shock (Score:5, Informative)

    by darkitecture ( 627408 ) on Saturday January 16, 2010 @05:51AM (#30788886)
    The audio industry being less than honest?

    Say it ain't so!
  • by TheGratefulNet ( 143330 ) on Saturday January 16, 2010 @05:54AM (#30788894)

actually, there IS something to that cable. very very minor, but it's there.

I believe that cable is NOT for ethernet even though it uses rj45. I THINK it's used for i2s in audio, and that is VERY timing dependent (clock and data on diff wires).

    now here's where most people don't know something and think they do: ethernet cable these days is NOT equal length wires! yet i2s for spdif break-out NEEDS each wire exactly the same length (timing matters, again). and so you cannot really use ethernet cable. look it up, it has unequal twisted pairs inside for noise reasons (ethernet spec) but this does NOT meet i2s audio specs (they do NOT want unequal length wires).

    I think that's the reason.

other than that, yes, most 'fancy wire' is stupid snake oil for rich morons. but there IS something (albeit small) about this that makes *some* sense.

(check ethernet spec on wiki. I bet you didn't know that the wires were not all the same length. I didn't know that until it was pointed out to me, in this very context)

  • by TheGratefulNet ( 143330 ) on Saturday January 16, 2010 @06:04AM (#30788936)

    let me explain it, this time with wiki goodness: []


    Individual twist lengths

    By altering the length of each twist, crosstalk is reduced, without affecting the impedance.[12]

Pair color    cm per turn    Turns per m
Green         1.53           65.2
Blue          1.54           64.8
Orange        1.78           56.2
Brown         1.94           51.7

this is what it's about. ethernet cable (modern spec) has UNEQUAL LENGTH WIRES.

    this will 'mess' with digital audio clock and data (i2s). hence you do NOT want to use ethernet wire for things that have rj45 connectors on the back of AUDIO GEAR.

  • by TheGratefulNet ( 143330 ) on Saturday January 16, 2010 @06:07AM (#30788950)

    You have no idea why twisted pairs are twisted do you?

ahem. as both a designer and builder of digital audio equipment, I have to say you are DEAD WRONG. I fully know about differential encoding using twisted self-shielding. it's the same thing that POTS uses and the same that pro audio uses with XLRs. same idea.

    but running pairs next to each other interferes. THIS is why they use unequal length PAIRS. PAIRS. that's the key, each pair 'beats' at a slightly diff frequency (swr, really) and there is some natural attenuation due to this.

    yes, I know of what I speak, thank you very much mr AC...

  • by imsabbel ( 611519 ) on Saturday January 16, 2010 @06:30AM (#30789030)

    I tell you, audiophiles have NO IDEA OF SCALE.

I am working with HF stuff. I run on cables that cost as much as that one, but in bulk supply from industry vendors (Huber+Suhner, for example). Because they are linear to 18 GHz.

    I also did an experiment where i had to synchronize two signals to some picoseconds, and that is damn hard. Damn hard in the sense of "a day of quality time with a network analyser and a few delaylines".


Speaking again on HDMI: yeah, it matters for it, as it's fucking running at several hundred times the data rate of an audio connection.
    HDMI is specified to transport up to 10Gbits/s, multiplexed on only 19 conductors.
    Compare again with audio datarates...

  • by phoenix321 ( 734987 ) * on Saturday January 16, 2010 @06:33AM (#30789040)

The question is: do the Denon units use the Ethernet protocol?

    The answer to this question will determine if you're smart or if you bought into their marketing chant.

  • by Anonymous Coward on Saturday January 16, 2010 @06:35AM (#30789052)

i could be wrong, but this sounds like cat5 to me (rather than 5e, where the twists are all the same)

  • Re:Credibility. (Score:5, Informative)

    by Sycraft-fu ( 314770 ) on Saturday January 16, 2010 @06:38AM (#30789072)

Yeah. While overall I like the idea of certification grades, THX did a bad job of it. Part of it was that they don't do enough to differentiate the certification types. They all feature THX in big letters and then something small that tells you what the actual certification is. Ok, well that matters a lot. A high end Ultra 2 certification on speakers pretty much means they can handle theatre reference levels of sound. They can truly give you a home theater. Their lower end stuff? Not so much.

    Also when it came to computer speakers they started compromising too much. It wasn't a matter of backing off on some specs that really didn't matter too much, they changed it so much to accommodate the lower end nature of computer speakers as to make it more or less meaningless.

    Personally, I don't buy THX gear. It is a waste of money in my book. All the gear I seem to like the best doesn't bother getting THX certified. They don't need a label saying "This is good for home theater." You take a listen to it and you say "This is good for home theater," no badge needed.

    In some cases, they impose restrictions that aren't acceptable to manufacturers either. Speakers are a good example. The high end THX spec (don't know about the lower ones) requires speakers to be sealed with a natural rolloff at 80Hz. Ok, well maybe I don't want that. In fact, I for sure don't want that for music. I want more full range speakers, and I'd like them ported as that increases low end efficiency. Ok, well they can't be THX then, no matter how good they are.

    Really, if you are looking for good home theatre, you'll do much better buying high quality gear you like, and making sure to get a receiver that has a good calibration solution like Audyssey MultEQ. Having your setup properly dialed in to correct levels and delays and such is way more important than if the speaker is precisely what THX likes.

  • Re:Dear Lexicon (Score:3, Informative)

    by Dahamma ( 304068 ) on Saturday January 16, 2010 @06:41AM (#30789094)

    Seriously? Yeah, Lexicon's ridiculously overpriced equipment used to be worth the ridiculous prices, but *now* they are ruining the company and overcharging.

    Or maybe they have ALWAYS been charging a 500%+ markup on their products just because they could. I'm not saying Lexicon doesn't have some of the best products in the business - just that the best products in the business do NOT need to cost 5-10x the average products in the business...

  • by anss123 ( 985305 ) on Saturday January 16, 2010 @06:54AM (#30789154)

    Actually, interconnect and speaker cables do (audibly) benefit from good quality, to a reasonable extent.

    Interconnect, yes. Speaker cables, no.

Plenty of blind tests have shown that there's no audible difference between the most expensive speaker cables and cheap telephone wires. If you look at the math you'd see that the wire noise is something like a hundred times less than the distortion introduced by the speakers themselves, so spend the $500 on better speakers and use whatever wire you've got for cables.

Interconnect cables transmit much weaker signals, so noise has a greater effect there.

  • by TheGratefulNet ( 143330 ) on Saturday January 16, 2010 @07:03AM (#30789206)

want a worse example? let's continue with panasonic, but let's enter LEICA into it!

rebadging was never quite the same as when 'red dot' leica did it. they took semi-crappy pany digicams, slapped a leica logo on them, LIED TO THE PUBLIC about the lineage of the camera (saying it was qa'd in germany, which is an out and out LIE) and then sold the cams at several times the pany price.

LEICA used to be a real high end camera company. they lost face when they pulled this stunt. there are leica lenses in the $3k range that are 'real leicas', but a $500 digicam that is rebadged is not a real leica, even though the brand lies thru their teeth about it. when pressed, they dodged the issue, probably due to lost advertising income if they fessed up that the fz50 and vlux1 are the same friggin cameras. touch that 'third rail' and you lose advertising revenue and review samples. yup, we know the game, guys...

  • by jgardia ( 985157 ) on Saturday January 16, 2010 @07:27AM (#30789302)
sorry, but the maximum speed of i2c is 3.4 Mbps. you would need about 9m of difference in length to have 10% of phase difference between your clock and your data, using the maximum speed (the usual one is 100-400 kbps). I agree that cat5(e) was not designed for i2c signals, but it is more than enough for this application.
  • by Opportunist ( 166417 ) on Saturday January 16, 2010 @08:03AM (#30789444)

We're not talking about data loss here, but data degeneration. And it used to be a problem with analog cables. A signal might have been distorted by a badly shielded cable because the signal was sent into the wire and then reproduced the way it was received. If it was altered along the way, that alteration was often audibly noticeable.

    That doesn't apply to digital data. If a 1 is sent and is received as a "0.8" or a "1.2", it will still be interpreted as a 1. Simply because there is no 0.8 or 1.2, as there used to be in analog times. Yes, the signal can still degenerate, but since we use discrete values of signals in digital media instead of a "sliding scale" analog signal, that degeneration is easily compensated. It can now be identified correctly and it is adjusted accordingly. So that signal degeneration plays a lesser role now. Of course, if you have REALLY crappy cables it will show. But the average cable that wasn't tied in a knot first will do just fine.
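The regeneration argument above can be sketched in a few lines (a toy illustration with made-up voltage values, not any real receiver's logic):

```python
# Toy model of a digital receiver: it only decides "above or below the
# threshold", so amplitude noise that doesn't cross the threshold is
# discarded and the original bits are recovered exactly.

def decode_bits(samples, threshold=0.5):
    """Decode received voltage samples into bits by simple thresholding."""
    return [1 if v > threshold else 0 for v in samples]

sent = [1, 0, 1, 1, 0]
# The same bits after attenuation and additive noise on the wire:
received = [0.8, 0.15, 1.2, 0.9, -0.1]

print(decode_bits(received) == sent)  # prints True: degraded, decoded perfectly
```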

  • by commodore64_love ( 1445365 ) on Saturday January 16, 2010 @08:05AM (#30789448) Journal

    Well it turns out you're right. S/PDIF doesn't use error correction. It's as error-prone as analog. What idiot would design a digital transmission protocol without built-in error correction?

  • by marcansoft ( 727665 ) <{moc.tfosnacram} {ta} {rotceh}> on Saturday January 16, 2010 @08:22AM (#30789514) Homepage

    The difference in wire length for i2s is either very audible or not audible. It does affect the DAC and matching clock and data lengths is important, but it's a data corruption issue - if the lengths differ enough that the signal is out of spec with regard to the setup and hold times of the DAC, you get glitchy audio. This isn't an "analog" difference.

    Clock jitter may be audible, and mismatched clock skew between outputs can be too, but skewed clock and data to a single DAC will not cause any audible changes until you exceed the specifications and then all hell breaks loose.

  • by msauve ( 701917 ) on Saturday January 16, 2010 @08:27AM (#30789554)
    Mod parent up.

    His point is correct, although the details are a bit misleading.

Just to give an impression of the magnitudes involved, I2C high speed (here's the spec []) signals have rise/fall times in the 10-80 ns range. The setup time, which depends on the synchronization between the data and clock lines, has a minimum spec of 10 ns. If the implementation puts things in the center of the window, there's about an 80 ns setup time, so there might be 70 ns of slop available on either side.

    Twisted pair cables will have a velocity factor in the 70% range. i.e. electricity travels through them at about 210 000 000 m/s. In 10 ns, electricity would travel about 2.1 meters.

How much margin any particular I2C implementation has depends on many things, but it should be clear that any decent implementation won't be affected by a 10 ns delay to either signal, which equates to a couple of meters of additional wire.
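The distance figure above is easy to check numerically (assuming, as the post does, a ~70% velocity factor):

```python
# How far a signal travels down twisted pair in 10 ns, assuming a 70%
# velocity factor as stated in the post above.

c = 3.0e8                     # speed of light in vacuum, m/s
velocity_factor = 0.70
speed = velocity_factor * c   # ~2.1e8 m/s in the cable
distance_m = speed * 10e-9    # distance covered in 10 ns

print(round(distance_m, 2))   # prints 2.1 (meters)
```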
  • by msgmonkey ( 599753 ) on Saturday January 16, 2010 @08:48AM (#30789682)

As someone who has actually interfaced I2S sigma-delta DACs to DSPs, I can tell you that you are either confused or have your facts wrong.

The clocking setup is typically a master clock running at 256X, 384X or 512X the audio frequency running into the DAC; it is the stability of this clock that determines the accuracy of the analogue output.

The I2S bus has three lines: CLK (data clock), which runs at 32X the sample frequency (for 16bit audio); DATA (the actual bits); and LR, which indicates whether the data is for the left or right channel. Jitter on the data line has no bearing on the quality of the output as long as the data is present on the clock transition, as it is latched and presented synchronously to the analogue section of the DAC.

Although I2S was not designed for cable communications, you could easily get away with using it for short distances, since even at 24 bits and 96kHz the clock rate is only 4.608MHz, with a cycle time of 217ns. Assuming a latch window of 25% of the cycle time gives us ~54ns; any device producing that much jitter would have to be pretty badly designed.
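The clock-rate arithmetic above can be reproduced directly (stereo is assumed, since I2S carries left and right channels on one data line):

```python
# I2S bit clock for 24-bit, 96 kHz stereo audio, and the per-bit timing.

channels, bits_per_sample, sample_rate = 2, 24, 96_000
bit_clock_hz = channels * bits_per_sample * sample_rate  # 4,608,000 Hz
cycle_ns = 1e9 / bit_clock_hz                            # ~217 ns per bit cell
latch_window_ns = 0.25 * cycle_ns                        # ~54 ns for a 25% window

print(f"{bit_clock_hz / 1e6} MHz, {cycle_ns:.0f} ns")    # prints 4.608 MHz, 217 ns
```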

So to cut a long story short: yes, for I2S, using ethernet cable is more than adequate.

  • by Antique Geekmeister ( 740220 ) on Saturday January 16, 2010 @08:51AM (#30789694)

    He was right. You don't know.

    Pairs of wires are twisted together to couple them to each other, rather than to their environment. This is useful for both digital and analog signals, and is key to differential signals which may accumulate quite a large "common" signal over their length from environmental factors, especially differences in "ground". Similar effects can also be achieved by wrapping one wire inside the other, also known as coaxial cable, but that's far more expensive to make and more awkward to terminate.

    The details of what happens if the size of the twist happens to match a harmonic of the basic transmission frequencies is left to the reader. The consequences are easy to imagine, but difficult to calculate.

    Different pairs are twisted at different rates to keep them from coupling to _other pairs_. A nearly inevitable result of this is that over a long cable, the tightly twisted pair winds up being slightly longer than the less twisted pair. But the key is that the twisted pairs do _not_ synchronize with each other due to the differences in twisting, not the minor differences in length.

  • by MattskEE ( 925706 ) on Saturday January 16, 2010 @08:52AM (#30789702)

    At 20kHz two chunks of aluminum makes for a pretty nice cable. With 50GHz coax you need tiny precision machined connectors (2.4mm), and a very narrow cable with a low permittivity dielectric. Such a cable costs about $2,000.

    The reason for the precision, size, and expense at those frequencies (as you know but others probably don't) is that if you have a large cable, there are multiple different wave equation solutions (modes) which allow power of a particular frequency to travel down a cable, and they will propagate at different speeds in the cable (and different attenuations), so what you get out of the cable is a distorted version of the input. So you must make the cable with size on the order of the smallest wavelength you intend to transmit. And it has to be precisely made because imperfections, scratches, and so on need to be even smaller or they will cause an impedance shift which reflects some signal back at the source.

At 50GHz a wavelength is 6mm. At 20kHz it is 15km. This is why it is easy to make very nice audio cables, and hard to make nice HF cables.
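The wavelength comparison above follows from lambda = c/f; a quick check (free-space propagation assumed):

```python
# Free-space wavelength at the two frequencies compared above.

c = 3.0e8  # speed of light, m/s

def wavelength_m(freq_hz):
    """Wavelength in meters for a given frequency in Hz."""
    return c / freq_hz

print(wavelength_m(50e9))  # prints 0.006   -> 6 mm at 50 GHz
print(wavelength_m(20e3))  # prints 15000.0 -> 15 km at 20 kHz
```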

  • by commodore64_love ( 1445365 ) on Saturday January 16, 2010 @08:57AM (#30789716) Journal

    >>>That doesn't apply to digital data. If a 1 is sent and is received as a "0.8" or a "1.2", it will still be interpreted as a 1. ..... So that signal degeneration plays a lesser role now. Of course, if you have REALLY crappy cables it will show. But the average cable that wasn't tied in a knot first will do just fine.

    Well said. If a digital signal "1" degrades below 0.6 there's a possibility the computer inside the receiving unit will misinterpret that "1" as a "0" but as you said that's unlikely, even with a low-priced bargain cable. You don't need to spend $500.

  • by Anonymous Coward on Saturday January 16, 2010 @08:58AM (#30789718)

While I will agree with you on the stratus (which if I remember correctly is basically an eclipse chassis ripoff, although it may've been a shortened galant or something), both Honda and Toyota have been using significantly different chassis for most of their coupe and sedan model cars for years. The SUVs however are often the exact chassis/engine/sheetmetal, with a few cosmetic changes to differentiate. Even then, if you go and compare the electronics in them, the Lexus models are usually MUCH more advanced than the Toyota counterparts (haven't gotten to poke around at the newer honda stuff as much; our school was a T-Ten academy until toyota discontinued it the year before last).

  • by Anonymous Coward on Saturday January 16, 2010 @09:05AM (#30789756)

    i2s != i2c. i2s is more like spi than i2c. the dac I'm using for a project right now runs i2s at ~ 24 mhz.

  • by ThreeGigs ( 239452 ) on Saturday January 16, 2010 @09:23AM (#30789828)

    No, please let ME explain this.

    The wires are in pairs. Color coded pairs.
    Depending on your point of view, the green pair is TX (transmit) and the orange pair is RX (receive).
    Since the green pair (solid/striped pair) is twisted together, both green wires are the same length. Both orange wires are the same length (although green and orange may have slightly different lengths).

    ALL of the signal data is transmitted in ONE DIRECTION on ONE PAIR (man I love caps emphasis) whose wires are the SAME LENGTH.

Understand that yet? Data from component A to component B travels over two wires which are the _same length_. Data back from component B to A travels over another pair of same-length wires. There is no "messing" with clock data, as it's all serial. And even if it used the 1000-T/TX standard requiring 2 pairs per direction, the difference in length of conductors in a 10 meter long cable (10 meters would be a very big audio rack) is 4.44 centimeters (assuming 24 ga and standard insulation thickness, and miswiring to get the longest/shortest paired). Rounding that up to 5 centimeters, and using 300 million meters/second for the speed of light and 0.64c propagation speed in the wire, I get about one four-billionth of a second difference. Meaning your sample rate would need to be in the GHz range, which means if you could tell the difference, you would be able to "hear" VHF and UHF radio waves.

    Keep rationalizing though... it keeps my debunking skills fresh.
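The back-of-envelope numbers in the post above can be re-run directly (same assumptions: ~5 cm worst-case extra conductor length in a 10 m cable, 0.64c propagation):

```python
# Timing skew caused by a 5 cm length mismatch at 0.64c propagation speed.

c = 3.0e8               # speed of light, m/s
velocity_factor = 0.64
extra_length_m = 0.05   # the rounded-up 5 cm figure from the post

skew_s = extra_length_m / (velocity_factor * c)
print(f"{skew_s:.2e}")  # prints 2.60e-10 -> about a quarter of a nanosecond
```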

  • by TheRaven64 ( 641858 ) on Saturday January 16, 2010 @09:32AM (#30789868) Journal

Thanks, I was too lazy to do that bit of the calculation. Now let's put that into some more relevant terms. The highest sound that a typical human can hear is around 20kHz; let's be generous and say an audiophile can hear 25kHz (the upper range of my hearing is well above average, but is still a way below 25kHz). At 25kHz, the wavelength of the sound is about 1.36cm (assuming one atmosphere pressure), so that gives the absolute shortest wavelength you're likely to hear. In 7 nanoseconds, this sound will travel 0.000175 wavelengths. That means that this skew, over a 100m cable, will potentially make the sound out of phase by 0.0175%. Now, the human ear is very good at detecting differences in phase. It's something that you're likely to find subconsciously irritating, even if you aren't able to tell what it's caused by. Even a magical audiophile ear (the one we're assuming can hear ultrasonic audio), however, is going to have difficulty spotting a phase variance that small.

    I always assumed those ethernet cables were a joke. I'm astonished that people take them seriously. If I had a few million dollars, I'd buy one to reward the manufacturers for making me laugh, but I never thought people would buy them expecting a real difference in quality.

  • by ThreeGigs ( 239452 ) on Saturday January 16, 2010 @09:54AM (#30789966)

    Note that when you twist two wires together, they reach a shorter distance, because the wire now has to follow a longer spiral path. More twists means more distance. So to make up for the loss in length you need to use more wire. Thus a differing number of twists means a different amount of wire is needed to reach the same distance. Not a lot, but there _is_ a difference.

    In ancient times, clamps were made by putting a wooden rod between two ropes and using the rod to twist the ropes, pulling the two ends closer together. Same principle.

  • Re:No shock (Score:1, Informative)

    by Anonymous Coward on Saturday January 16, 2010 @10:04AM (#30790004)

I actually have that Oppo player, and the reason I chose to spend nearly 500 bucks was NOT exclusively video. The Oppo happens to be the only sub-$1000 universal player I've found that is capable of playing DVD-Audio and SACD in addition to Blu-ray and up-converted DVDs, and that's worth a lot for some people like me with large collections of DVD-A and SACD material, which DOES sound substantially better than anything else.

  • by Lonewolf666 ( 259450 ) on Saturday January 16, 2010 @10:14AM (#30790062)

While THX has no convenient spec for download on their homepage, I have gleaned the following from various forums (errors by the posters possible ;-) ):
-80 Hz is the crossover frequency between subwoofer and full range speakers
-The subwoofer is fed the signal over a low pass filter with 24dB/oct at 80 Hz
-The full range speakers are fed the signal over a high pass filter with 12dB/oct at 80 Hz. Together with the natural roll-off, that amounts to a high pass filter with 24dB/oct.

    My semi-educated opinion (electrical engineer but not specializing in audio) is that
    1) This setup actually makes sense for a subwoofer system.
    2) If you don't want to use a subwoofer, ignore it and get some non-THX setup without the high pass filter for the full range speakers. Good full range speakers will cover significantly lower frequencies than 80 Hz, and with the high pass filter you would throw those away.

  • by mako1138 ( 837520 ) on Saturday January 16, 2010 @10:20AM (#30790094)

    It doesn't matter for i2s, it's a clocked interface. As long as setup and hold times are met, the data will be valid. Picoseconds aren't going to mess things up when the setup and hold time specs are measured in nanoseconds or more.

  • by isomer1 ( 749303 ) on Saturday January 16, 2010 @10:59AM (#30790298)
    (please mod parent up)

    Just to run the numbers available -
    In cat5 you have 4 twisted pairs with the following arrangement [1]:
    color cm/turn turn/m
    green 1.53 65.2
    blue 1.54 64.8
    orange 1.78 56.2
    brown 1.94 51.7

    That makes:
    green = 1.53 * 65.2 = 99.756
    blue = 1.54 * 64.8 = 99.792
    orange = 1.78 * 56.2 = 100.036
    brown = 1.94 * 51.7 = 100.298

So a roughly 0.5% difference in wire length. Using a low-end estimate for the speed of an EM wave [2] of 66% the speed of light we still have: 1/(3.0e8 * 0.66) = 5e-9 s/m. So at the end of a 100m length of cat5 (the spec limit), the ~0.5m difference in length would mean the signals would be separated by roughly 2.5 nanoseconds.

    I have trouble seeing how that could possibly be enough to impact any audio signal perceptible by the human ear. If anyone else has more numbers to run please chime in.
    [1] []
    [2] []
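A quick script to check the arithmetic above (note the result comes out in the low-nanosecond range):

```python
# Relative conductor length per pair from the cat5 twist-rate table,
# and the worst-case skew over a 100 m run at 66% of light speed.

pairs = {  # color: (cm per turn, turns per meter)
    "green":  (1.53, 65.2),
    "blue":   (1.54, 64.8),
    "orange": (1.78, 56.2),
    "brown":  (1.94, 51.7),
}

# Arbitrary-unit conductor length laid down per meter of cable
rel_length = {color: cm * turns for color, (cm, turns) in pairs.items()}

shortest, longest = min(rel_length.values()), max(rel_length.values())
extra_m = (longest - shortest) / shortest * 100   # extra wire over a 100 m run
speed = 0.66 * 3.0e8                              # m/s in the cable
skew_s = extra_m / speed

print(f"extra wire: {extra_m:.2f} m, skew: {skew_s * 1e9:.1f} ns")
```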
  • by timeOday ( 582209 ) on Saturday January 16, 2010 @11:14AM (#30790400)
    No, there is nothing wrong with $5 SPDIF cables. Maybe, MAYBE, if you spent a long time and tried really hard, you could get an SPDIF connection to mostly work but with a significant bit error rate, by wrapping it around sharp corners, or putting dirt on the ends or something. In practice, I've never seen it not work.
  • by emt377 ( 610337 ) on Saturday January 16, 2010 @12:38PM (#30790978)

    That doesn't apply to digital data. If a 1 is sent and is received as a "0.8" or a "1.2", it will still be interpreted as a 1.

    This is not the issue with SPDIF. SPDIF has a variable data rate and the clock is embedded in the modulation. The source can (and will!) vary its transmission rate. The model is that you have a mechanical CD transport that reads bits off a disk, sending them down a wire as they're decoded. Because the transport is physical its actual rotational speed will vary. The disks also vary (within tolerances) in density on the disk itself; the drive mechanism is PLL locked to the disk contents, not a master clock. If the pits on the disk get denser it slows down a bit, when they get sparser it speeds up.

This model is called streaming. When A/V engineers talk about streamed data, like an Elementary Stream (ES), this is what they mean. MPEG-2 is also a streamed model, for instance; if you read the specs you'll find bounds and tolerances on data rates, intermediate buffers, jitter, etc. It's how the standards are created. They're not anything resembling IP datagrams; the closest you'll get to that is transport streams, which basically packetize and mux multiple elementary streams into a transport stream. The reason the models are created like this is that audio and video are generally sent down completely independent paths and need to retain their sync. Or at least that was the reasoning when the standards were created; these days it's mostly just a pain in the rear.

If you run two self-clocked signals at variable rates down two wires of different lengths, the receiver will see the bitstreams arrive at a slight skew. Since they're both variable rate to begin with, it's no trivial problem to try to heuristically determine when the bits were actually sent, or read off the disk, or were meant to be read off the disk when it was mastered. The issue isn't whether the clocks will be off, but whether it's audible.

  • by Anonymous Coward on Saturday January 16, 2010 @01:05PM (#30791216)

Umm... First, POTS often doesn't use twisted pair (look at most any phone cable in your house) and it's not differential but a DC current loop. Second, differential is not an encoding but a signaling method. The "twisted" part isn't for self shielding, but to reject common mode noise. Essentially, the two legs of the differential pair will receive equal noise from an adjacent noise source, and the differential receiver can filter out this common mode noise (known as common mode noise rejection).

Also, based on another one of your posts, "twist length" has nothing to do with the relative lengths of each wire in a pair. It's basically how tightly the cables are twisted. If you have 10 twists per meter (10cm twist length) or 100 twists per meter (1cm twist length), both legs of the pair have the same length (although they are BOTH longer, but still equal, in the 100-twist case).

    I really hope you're not an engineer.

  • by chriso11 ( 254041 ) on Saturday January 16, 2010 @03:18PM (#30792302) Journal

The ONLY place where I expect the digital bit stream to have any problem is in the DAC (or in the sampling for the ADC). The BER (bit error rate) in a standard digital link at 192kHz audio is going to be on the order of 10^-10, which means 1 error in every 18 minutes of audio. I doubt even trained ears can notice a 1-bit error over that interval. And heck, the interface is probably even better - 10^-12 BER isn't out of reach.

As for jitter: if you have a 30kHz signal, which only your audiophile dog can hear, a 10ps RMS jitter (which can be considered around 70ps pk-pk jitter) would be translated by the DAC into >115dB SNR; and it would not introduce distortion, only noise (a good chunk of it would then be filtered out by the system lowpass filters in the audio stream). If you are dealing with a 3kHz tone, then the SNR would be >135dB.

I've had to fight jitter problems on high speed ADCs (100+ MSPS) in the past; you have to use BPF and clock divider techniques there; audio systems don't need that. To me, 192kHz is DC. The wavelength of the ~10Mbit/s interface clock is around 100ft - and for a digital system you would only close the eye by 1% with a 1-foot mismatch.
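The jitter figures above line up with the usual rule of thumb for jitter-limited SNR of a full-scale sine, SNR = -20*log10(2*pi*f*t_jitter); the formula is a standard approximation, not something taken from the post itself:

```python
import math

# Jitter-limited SNR for a full-scale sine at frequency f sampled with
# RMS clock jitter t_j: SNR = -20*log10(2*pi*f*t_j).

def jitter_snr_db(freq_hz, jitter_rms_s):
    """Jitter-limited SNR (dB) for a full-scale sine and RMS jitter."""
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_rms_s)

print(round(jitter_snr_db(30e3, 10e-12), 1))  # prints 114.5 -> ~115 dB at 30 kHz
print(round(jitter_snr_db(3e3, 10e-12), 1))   # prints 134.5 -> ~135 dB at 3 kHz
```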


  • by Schaffner ( 183973 ) on Saturday January 16, 2010 @03:43PM (#30792518)

    Well, it was Grace Hopper and she's the mother of COBOL, not FORTRAN. She used to give out "nanoseconds" at her lectures. They were 11.9 inch lengths of wire, which represents how far electricity can go in a nanosecond. A friend of mine still has one of these "nanoseconds" he got from her.

  • by Neil Hodges ( 960909 ) on Saturday January 16, 2010 @08:49PM (#30794892)

I like your post but there is a minor error. OS X is not open source. It's derived from NeXTSTEP, which is a closed-source OS from the 1980s that was ported to the PowerPC platform, and it is still closed source today.

    You may want to read this []:

    Darwin is an open source POSIX-compliant computer operating system released by Apple Inc. in 2000. It is composed of code developed by Apple, as well as code derived from NeXTSTEP, BSD, and other free software projects.

    Darwin forms the core set of components upon which Mac OS X, Apple TV, and iPhone OS are based. It is compatible with the Single UNIX Specification version 3 (SUSv3) and POSIX UNIX applications and utilities.

    Darwin's heritage began with NeXT's NeXTSTEP operating system (later known as OPENSTEP), first released in 1989. After Apple bought NeXT in 1997, it announced it would base its next operating system on OPENSTEP. This was developed into Rhapsody in 1997 and the Rhapsody-based Mac OS X Server 1.0 in 1999. In 2000, Rhapsody was forked into Darwin and released as open-source software under the Apple Public Source License (APSL), and components from Darwin are present in Mac OS X today.

    Darwin version 10.2 corresponds to Mac OS X 10.6.2.
