SHA-1 Cracking On A Budget 92

Posted by Zonk
from the putting-the-pieces-together dept.
cloude-pottier writes "An enterprising individual went on eBay and found boards with more than half a dozen Virtex II Pro FPGAs, nursed them back to life and built a SHA-1 cracker with two of the boards. This is an excellent example of recycling, as these were originally part of a Thompson Grass Valley HDTV broadcast system. As part of the project, the creator wrote tools designed to graph the relationships between components. He also used JTAG to make the organization of the FPGAs on the board more apparent during reverse engineering. More details can be seen on the actual project page."
  • by MollyB (162595) * on Saturday September 01, 2007 @06:40AM (#20432487) Journal
    If this story is hard to understand (it was for me), then a comment following TFA might be useful, in case you didn't read that far:

    5. FPGA - field programmable gate arrays are sort of like reconfigurable circuitry - they can be programmed to perform complex computations in one giant "step", rather than as a sequence of instructions (how a general purpose cpu like the pentium operates).

    This makes them fairly pointless for general computing, but when you need to crunch a bunch of numbers in the same way over and over, they can REALLY outperform a general cpu. Usually these are used to manipulate audio / video data streams in real time (the original purpose for the FPGAs used in this project) - but recently people have started using them to brute-force try to crack an encryption scheme. Where a general purpose cpu might take upwards of 40 clock cycles to check one possible answer, each of the FPGAs in this system can check at least one answer PER clock cycle.

    This guy pulled a bunch of FPGA systems out of some (defective?) HDTV video processing systems - reverse engineered exactly how everything was wired together, reprogrammed the FPGAs to do SHA-1 hash cracking rather than HDTV video processing, and added some usb control circuitry so the system could take commands from / return results to a pc.

    One could use this same board setup to do any sort of massively parallel data processing, but right now the system isn't wired up to really feed large amounts of data into / out of the system in real time. He can get away with that as hash cracking results are fairly small and infrequent, so the limited means he has for getting "answers" out of the system isn't too much of a problem.

    Posted at 4:39AM on Sep 1st 2007 by smilr
    HTH.

    • Re: (Score:1, Interesting)

      by Anonymous Coward
      Yeah, except that for "general computing" they are not at all "fairly pointless". Also, the explanation seems to suggest that an FPGA can more or less only be programmed to perform number crunching, which is about as far off from the truth as can possibly be.

      A more proper description of an FPGA would be "electronics simulator", as with an FPGA you can "program" resistors, caps, transistors, logic gates etc., and thus "program" an IC of your choice, or, for example, a Z80 or M68K core with accompa
      • by Poromenos1 (830658) on Saturday September 01, 2007 @10:58AM (#20433667) Homepage
        How about if you write an FPGA simulator program and simulate an FPGA simulating a CPU running your program? Will the universe implode?
      • by Space cowboy (13680) * on Saturday September 01, 2007 @11:34AM (#20433879) Journal
        There are some FPGA's that can control their output impedance on pins, but an FPGA is really for digital electronics - you're using 4-way look-up tables to emulate arbitrary 4-input logic-gates for the most (99.99%) part. I've seen genetic-algorithms produce capacitance-based designs where unconnected circuits affect each other due to analogue effects, but not humans. We tend to stick to the straight and narrow...

        An FPGA really is conceptually very simple, and they're not hard to "program" either... Contrived example:

        Verilog design to add/subtract 2 numbers (you'd never do this, but...)

        module addsub (a, b, addnsub, result);
          input [7:0] a;
          input [7:0] b;
          input addnsub;
          output [8:0] result;
          reg [8:0] result;

          always @(a or b or addnsub)
          begin
            if (addnsub)
              result = a + b;
            else
              result = a - b;
          end
        endmodule

        Compare that to a K&R "C" routine to do the same thing...

        void addsub(a, b, addnsub, result)
        short a;
        short b;
        unsigned char addnsub;
        short *result;
        {
            if (addnsub)
                *result = a + b;
            else
                *result = a - b;
        }

        In both cases, of course, you'd just use the 'if...else...' part, but I wanted to show more language structure...

        The key thing to remember is that in C, all things happen serially, unless you arrange otherwise with threading libraries. In Verilog, any block beginning with 'always @' happens in parallel with every other 'always @' block. Once you've mentally-mapped the concept of vast numbers of threads to the way hardware works, any competent multi-threaded programmer can become a competent hardware engineer.

        Of course, there's "guru stuff" in there as well (as much as you want, trust me :-). You don't get world-beating performance overnight, but it's relatively easy to get the 80% solution, and that might be just fine. Eking out the last 20% is where it gets hard, as you have to understand the internal structure of the LUTs, how they interact with the carry-chain, what the LUT->LUT delay can be useful for, etc. None of this is at all relevant unless you're missing your timing on a critical circuit (e.g. you need 133MHz so your DDR2 SDRAM can work, but the synthesis tools (equivalent to a compiler) only deliver 125MHz for your design).

        The 'always@' part is the hint of just where the power lies. *Everything* can happen in parallel, so you can build pipelines (like CPUs are pipelined today) into your logic, thus reducing the time taken per step (while taking multiple steps), thus increasing your clock rate. The benefit is that although the *first* result takes just as long to appear, you get a new result on every clock thereafter.

        I wrote a JPEG decoder a couple of years or so ago, running at ~130MHz. That doesn't sound much, but that comes to ~65 MPixels/second because of the pipelining. Looking at the SSE-optimised intel libraries, a CMYK422->CMYK baseline decode (which is what the FPGA was doing) takes 371 clocks/pixel. The intel chip I was comparing to was a 3.6GHz P4, meaning it could do ~9.7 Mpixels/second. For motion-jpeg that's the difference between decoding SD frames (for the P4) and decoding HD frames (for the FPGA)...

        So, FPGAs tend to run slowly (relative to today's CPUs) but can exploit parallelism in ways CPUs just can't; of course, for serial processing, you can't beat a tradition

        • FPGA question... (Score:3, Interesting)

          by Endymion (12816)
          hmm... you seem to know a lot about FPGAs, so I'll ask you something I've been wondering for a while...

          Coming from a traditional software end of things, I'm used to seeing "accelerating co-processors" available to do useful tasks much faster than the main CPU. I'm thinking not only the FPU (when it was a separate chip), but things like a modern GPU and such. Many of these have been slowly integrated back into the CPU as time has gone on, the FPU being the best example, so now it's something you can just cal
          • Re: (Score:3, Interesting)

            by Radicode (898701)
            There are many libraries you can put on your FPGA. Some are open source, some cost A LOT. It's similar to a dll or a jar: you have an interface you bind to and you program your stuff around it. You can get modules to process FFTs, encryption, ethernet, VGA, sound, video, pretty much anything you can imagine. You can even use a CPU library to have a general CPU like your x86 and execute assembler instructions. You can even turn an FPGA into an old defunct CPU to repair old electronic hardware. Amazing stu
            • by Endymion (12816)
              That all sounds wonderful... (and does make me want to try some FPGA programming - it sounds really cool) ...but that sounds like it's still implemented in the main programmable logic gates of the FPGA (that is, in "software"), like how a .so/.dll is great on a normal CPU, but is still just part of your program running.

              I'm more thinking of a specific hardware piece, like an FPU co-processor. Something not re-programmable, but theoretically much faster for that specific task. It wouldn't make sense for a lot
              • by Radicode (898701)
                I understand what you mean. I think they have some processors like that. I'm thinking about that PCI card addon to process physics in games. I still think an FPGA is best for this job because once programmed, everything is actually "hard-wired"... it's not "software", so it's still almost as fast as a real circuit.

                Radicode
          • Re:FPGA question... (Score:4, Interesting)

            by Space cowboy (13680) * on Saturday September 01, 2007 @02:02PM (#20434781) Journal
            Well, common FPGA's are basically look-up tables surrounded by a sea of interconnect logic. The designer specifies the function of each LUT, and the connections between them, using a language such as Verilog or VHDL. They're not "generic logic", they're definable logic. Example: On a CPU, you have the 'add x,y' instruction - that's a chunk of logic on-chip. On an FPGA, that chunk of logic doesn't exist until you write a design that needs it.

            You can buy (though I think they're very expensive) "IP cores", which are pre-packaged modules ready to plug-in-and-go. There are some free ones available as well. You may have to do more work to get the free ones to work [grin].

            There are also built-in hard cores on modern FPGA's. You never used to be able to synthesize the statement "a = b * c;" in a verilog design, for example. Now that FPGA's have hardware multiplier blocks in them, it synthesises to a bunch of wires connecting up the LUTs to the built-in hardware. For the more-complex examples you suggest, it's best to implement them in logic, because an FFT (of a particular radix, input format (complex or real), and output requirements) is a very specific piece of hardware, and not generally useful to most customers.

            You get multipliers, blocks of fast dual-port RAM, even entire processors (PPC) embedded into the FPGA fabric these days. Of course, you pay more for things like embedded CPUs... Funnily enough, a CPU is one of the easier things to write for an FPGA IMHO. You'll never get the speed of the FPGA fabric to match the hard-CPU core though...

            To do what you're talking about though, you'd need a way to interface the FPGA to the PC - there's a freely available PCI core, so you'd then just need a card which has a PCI interface (there's one from Enterpoint [enterpoint.co.uk] for ~$150). Then you just need to link the PCI core to your own cores (FFT, whatever) and write software to offload any FFTs to your co-processor. Xilinx offer the "Embedded Development Kit" to make this easier (you have to pay for this; the other tools are free to download). I don't know if anyone has made the freely-available PCI core into an EDK module though...

            Simon.


            • by Endymion (12816)
              ok, thanks for the explanation...

              It sounds like my idea is happening, but at a much lower level than I was thinking (like the multiply example you gave). I guess I'm still thinking of things at the wrong level (software, high level functions), when it's much more basic things that need to be accelerated.

              The dual-port RAM interface makes a lot of sense - it'd be a lot nicer than trying to do it yourself with general purpose pins, I'd think.
            • by SrJsignal (753163)
              Simon, While it's obvious you somewhat know what you're talking about, a LOT of your information is pretty dated. I use both super expensive top-of-the-line fpgas and middle of the road fpgas on a daily basis, so I'll just throw up a few "modern" corrections. With Xilinx (which is *strictly* the brand I use) you get MASSIVE amounts of IP cores that are configurable / synthesizable / simulatable. Granted this comes with their tools license, but you have to have one of those for any of the decently large
              • No, I'm aware of what you say, I just can't afford *any* of the commercial IP cores. I enquired about the cost of a JPEG core once, and was basically laughed at.

                I'm coming from a different perspective, that's all. FPGA's are a hobby for me, nothing more. I can afford to spend a few hundred dollars on a kit board, but I'd never drop a few grand on a core... I'd either do it myself or make do without. I'll use webpack exclusively for development (since they dropped the in-between option, Foundation is far too
            • There are also built-in hard cores on modern FPGA's. You never used to be able to synthesize the statement "a = b * c;" in a verilog design, for example. Now that FPGA's have hardware multiplier blocks in them, it synthesises to a bunch of wires connecting up the LUTs to the built-in hardware.
              Quartus 3, at least, can synthesize a = b * c on a chip without a hardware multiplier; it takes a lot of logic cells, though.
        • "He no longer has to worry about trying to be the baddest motherfucker in the world. The position is taken. The crowning touch, the one thing that really puts true world-class badmotherfuckerdom totally out of reach, of course, is the hydrogen bomb."

          For some reason this passage comes to mind. I can now just learn to blow glass better; computers are never going to be my bag.
        • 4-input LUTs are on their way out, the migration towards LUT6 and beyond has begun in the current high-end FPGA families (Virtex5, Stratix II/III) and will most likely enter the volume-oriented ones (Spartan, Cyclone, etc.) soon. Single-LUT 4:1 muxes alone can enable drastic improvements in many designs.

          As for FPGAs being cheap, all things are relative. There are ASICs out there for nearly any common application imaginable and these are often well under $50 and are usually designed by people who have extens
          • As I mentioned above to a different poster, I think we're playing in different ballparks. The cheapest (and pretty-much useless for anything other than playing around on) V5 dev-kit I know of is ~$1000. That's an order of magnitude more than the cheapest S3A or S3E kits. 4-luts may be on their way out at the very high end, but they're definitely still around in the sort of things us mere mortals can buy/use.

            Simon.
              All things are relative. IMO, the only thing that exists in FPGAs is a cheapest/smallest/slowest device that will fit a design with the required safety margins and futureproofing headroom. Most of the places I worked at where we used FPGAs were sold on Virtex: XC2V6000, XC2VP70, XC4FX100, XC4LX200 - since they were mostly doing ASIC prototyping, they preferred to spend $2000 extra up-front than have to re-do their $10000 prototype boards because they underestimated the final design's gate count or had to inc
            I know this may not be accurate information, but anyway...
            I saw how one of the modern Ericsson phones had a very small/cheap FPGA soldered in, used to overcome a design error in some ASIC. So, yes, in fact we are using CPLDs/FPGAs on a daily basis ;)
            • The argument was not about using FPGAs in consumer-level devices or not because we, as you said, actually do.

              It is just that in consumer electronics, CPLDs and FPGAs are usually there to replace glue-logic and complement the microcontroller/embedded processor's capabilities... like working around bugs. One example of an FPGA in consumer equipment is early Radeon CrossFire: the 'master' board used a Xilinx FPGA to receive pixel data from the slave board and combine it with data from the master to produce the final im
        • by kramulous (977841)
          Wow! I know it's a hobby, which means that sometimes the documentation side of things is generally the first thing to go, but I'd really like to read up on some of the stuff you've been up to.
          • Documentation ? You want documentation ? ? ?

            Well, I have circuit diagrams for old projects, (I think - only because I don't tend to 'trash' stuff), and I tend to comment my source-code a lot.... That's about as far as it goes... Perhaps I'll try and put some stuff on my blog in future...

            Cheers,

            Simon
        • by fmadero (1150849)
          When you say "hardware engineer", do you mean a computer engineer, an electrical engineer, or both? Is a formal education necessary to learn a lot of what's going on in this project? In computer science we really don't get into the hardware too much as undergrads, so this project is fascinating.

          -frank
          • I don't think there's any *need* for a formal education (I'm a physicist by degrees, but have spent my whole career as a coder). If you were planning on making a career in this area though, then just like any other area, having a provable education would help a lot, otherwise you show experience (I've done this, and this, and this, and...). You should probably ask real hardware engineers for their opinion too - I'm just an amateur. Personally, it's just a hobby - I'm not planning to change my career any tim
  • How fast is that? (Score:3, Informative)

    by tcdk (173945) on Saturday September 01, 2007 @07:06AM (#20432555) Homepage Journal

    NSA@home is a fast FPGA-based SHA-1 and MD5 bruteforce cracker. It is capable of searching the full 8-character keyspace (from a 64-character set) in about a day in the current configuration for 800 hashes concurrently.


    Anybody have an idea how fast that is compared to a modern CPU?

    IIRC, the last time I did anything like this it took my 2200+ AMD about 24 hours to do a 6-character keyspace (from 64-character set) - with MD5.
    • by owlstead (636356)
      "NSA@home is a fast FPGA-based SHA-1 and MD5 bruteforce cracker. It is capable of searching the full 8-character keyspace (from a 64-character set) in about a day in the current configuration for 800 hashes concurrently."

      So your 2200+ AMD is beaten to little pieces by this monster.

      Source, well, you had to click a single link to their homepage [unaligned.org]. That'll learn you to post early. Or not, since I supplied you the answer anyway.
      • by tcdk (173945)
        I actually quoted that myself...

        I was wondering how it compared with the latest and greatest like x64 with SSE3/4 or a Cell processor...

        (that'll learn you to actually read the post you are replying to. Or not)
        • Re: (Score:3, Informative)

          by owlstead (636356)
          Arg! Whoops, sorry about that. Read the post, but thought you were quoting the summary.

          I've wondered about Cell performance myself for a while, but I haven't had the time to go out of my way to do some measurements. For SSE3/4: I would call it highly unlikely that we would see anything like the performance they are posting: a 2 ^ 12 performance difference for MD5 alone is quite a lot. Maybe SSE5 might speed up SHA-2 as well. Anyway, you might want to add the T2 (Rock processor) from Sun to that list, it has 8
        • by kayditty (641006)
          It is very fast compared to modern (conventional -- I am unsure about Cell) processors. You might get some 100 million MD5 h/s with raw, unsalted MD5 on the latest and greatest quad-core Xeon, using MDCrack. Check their performance page here: http://c3rb3r.openwall.net/mdcrack/ln.html#performance [openwall.net]

          At that rate, it would take 32 days and 14 hours to brute force 8 chars a-zA-Z0-9._ for a single given hash. This setup is capable of doing this 800 times concurrently in a single day, if I read correctly.
          • by kayditty (641006)
            Yeah. That sounded a bit off. Evidently, it does 800 concurrent hashes at 4Mh/s, instead of doing 800 different hashes at the normal rate. It would be much more interesting if it were doing one hash at 3.258 billion h/s.
      Anybody have an idea how fast that is compared to a modern CPU?

      IIRC, the last time I did anything like this it took my 2200+ AMD about 24 hours to do a 6-character keyspace (from 64-character set) - with MD5.

      You should compare against VIA hardware. Their CPUs are crap for general usage, but the crypto acceleration is really good:
      http://www.logix.cz/michal/devel/padlock/bench.xp [logix.cz]

      Page doesn't seem to include MD5/SHA1 though, but you can compare that to AES on your box.

    • by jerkychew (80913)

      Anybody have an idea how fast that is compared to a modern CPU?

      IIRC, the last time I did anything like this it took my 2200+ AMD about 24 hours to do a 6-character keyspace (from 64-character set) - with MD5.

      From one of the comments (I assume c/s is Cracks per second?):

      8. I made some lazy calculations for SHA1 just for fun:

      The FPGA bruteforcer is capable of 3.257.812.230 c/s.
      My Athlon64 3400+ is capable of 1.915.000 c/s.

      Impressive Oo

      Posted at 9:47AM on Sep 1st 2007 by miknix

      • by gratemyl (1074573) *
        c/s is likely to mean "combinations per second" - at least that would have been my guess and it makes more sense as well.
  • SHA-cracker? (Score:5, Informative)

    by owlstead (636356) on Saturday September 01, 2007 @07:06AM (#20432559)
    That's nice, his own SHA-1 cracker. But, even with advanced cryptographic attacks, SHA-1 is still in the order of 2^63. Not something you would like to try with just a few FPGA's. What is meant here is a cracker to find out which plain text, with limited entropy, is used to create a certain hash value. A SHA-1-based password cracker would therefore be a better name, I suppose.

    It seems from here [unaligned.org] that it searches a 64 ^ 8 = (2 ^ 6) ^ 8 = 2 ^ 48 keyspace in 24 hours. No small feat: it should therefore be doing about 3,257,812,230 hashes per second. It does 800 concurrently, which makes for about 4 million a second per SHA-1 unit. Ouch, that's really fast.

    Note that this could be done with any hash or symmetric algorithm, as long as it can be implemented on an FPGA. So the moral of the story: use very long passwords (or even better, pass phrases), or make sure that they won't be able to acquire the hash.
    • That's nice, his own SHA-1 cracker.

      It's a bit like if you built your own cruise missile. Telling the whole world about it might not be the smartest thing to do.

      • Re: (Score:3, Funny)

        by LarsG (31008)
        Telling the whole world about it might not be the smartest thing to do.

        EFF [cryptome.org] seems to think it is the smartest thing to do.
      • Re: (Score:1, Interesting)

        by Anonymous Coward
        Not really; guys have been reprogramming FPGAs for cracking and crypto work for a long time now. I remember back in the day when they cracked the system first used for DirecTV; it was done with a system close to this. Now you can get FTA receivers that receive all of DirecTV's programming for free, and you have to update your firmware once every 2-3 months when they change the hash key. The guys releasing the firmwares crack the new key in a matter of 1 hour by using such a setup.

        Works great, and that is wh
    • Re: (Score:2, Informative)

      by Rythie (972757)
      By comparison, my Athlon 64 3200+ does about 883,000 16-byte hashes a second

      $ openssl speed sha1
      Doing sha1 for 3s on 16 size blocks: 2586683 sha1's in 2.93s
      Doing sha1 for 3s on 64 size blocks: 2063294 sha1's in 2.90s
      Doing sha1 for 3s on 256 size blocks: 1199179 sha1's in 2.75s
      Doing sha1 for 3s on 1024 size blocks: 479901 sha1's in 2.84s
      Doing sha1 for 3s on 8192 size blocks: 71496 sha1's in 2.87s
      OpenSSL 0.9.8c 05 Sep 2006
      built on: Tue Mar 6 08:16:57 UTC 2007
      options:bn(64,64) md2(int) rc4(ptr,char) des(idx,cis

    • Re: (Score:3, Insightful)

      by archen (447353)
      Perhaps this gets brought up each time, but what are we supposed to use for password encryption anyway? MD5 seems to be inadequate. SHA-1 is also waning. I switched to Blowfish on all my FreeBSD servers partially because of MD5 problems, but also because it's not a common format to come across for anyone figuring they'd just have MD5 hashes to try - I understand however that blowfish was not intended for this purpose.

      But it seems like MD5 and SHA are getting weaker by the day with computational power on t
      • Re: (Score:3, Informative)

        by owlstead (636356)
        Don't forget that PKCS#5 v2.0 uses an iteration count and a salt. This means that the algorithm is not applied just once, but 1000 times (or more; 1000 is the minimum). That would mean a slowdown of 1000 on these kinds of crackers *if* they implement the iteration count. A salt would also make it hard to use a default configuration like this one found on the internet.

        As said, the hash algorithm itself does not matter too much. The problem with all these schemes is that the amount of entropy in the passwords i
      • by GoRK (10018)
        I use SHA-512 with 8 bit salts. For the near future it seems like the best way to deal with password storage.
      • by jZnat (793348) *
        Just like the contest to create the AES encryption standard, there is an ongoing one (or happening soon) for cryptographic hashing algorithms. You can probably expect a good one with good vendor support within a couple years or so. Note that if you are using hashes in a typical cryptographic environment (signing a message by encrypting the hash of the message with your private key so that others can verify via your public key), I don't believe this would be a problem. Also, this is only effective in any
      • by m50d (797211)
        For an older, well-supported one that hasn't yet shown cracks the way MD5 and SHA1 have, try RIPEMD160. If your system supports more modern ones, Whirlpool is looking good, though it isn't so mature yet.
      • by kayditty (641006)
        We do not encrypt passwords (well, not usually). We hash them. There is nothing wrong with Blowfish crypt. It is very, very secure. Much more secure than either salted MD5 or SHA-1. However, there is really nothing wrong with using either MD5 or SHA-1 in the short term (so long as you are using a proper salt!). They will do fine. The eight character password, though, has been antiquated for about five years now. You should have switched to 10-15 characters some time ago. Really, even 15 characters is walkin
      • Re: (Score:3, Interesting)

        by cancerward (103910)
        No-one has cracked Ken Thompson's UNIX password [google.com] yet, and he is a co-inventor of the algorithm...
        • by tengwar (600847)
          No, no-one has reported cracking it. Bear in mind that Ken is capable of hiding [cmu.edu] stuff below the source code of the OS; Ken could have set it up so that when a program outputs this particular string, Unix takes some predetermined action, such as calling in the black helicopters.

          On a more serious level, but for the same reason, there is no reason to think that this entry in the password file corresponds to a valid Unix password, since if that system was based on his code, the login will bypass normal authenti

          • I know about the "Reflections on Trusting Trust" paper. I'm not sure he ever implemented it; he just implied he knew how to. It pays to be paranoid! In the paper UNIX Password Security - Ten Years Later [springerlink.com] the authors wrote "Over the past 10 years, improvements in hardware and software have increased the crypts/second/dollar ratio by five orders of magnitude." That was about 20 years ago, so if no-one has reported cracking ken's password in the meantime, I think the original UNIX password algorithm has sto
      • Password encryption is data storage. Like your hard disks, use something that's big enough for today and remember to upgrade it when the encryption method doesn't have sufficient space to protect you any more.
    • by Cerebus (10185)
      There are three ways to attack a hash: attack collision resistance, attack pre-image resistance, and attack the plaintext.

      Collision resistance means it should be difficult to find two texts that have the same hash value. The upper bound for these attacks is 2^(n/2), where n is the length of the hash. For SHA-1, that upper bound is 2^80. Because of some more sophisticated attacks, 2^63 is now the current best for a collision attack.

      Pre-image resistance means given a hash it should be impossible to find
    • by kayditty (641006)
      A better alternative is using a hash function with an adjustable cost [usenix.org] (and a good salting function with a large salt space), or you could just stop using passwords altogether.
  • But I have no idea what that summary or TFA are about.
    • by lindseyp (988332) on Saturday September 01, 2007 @08:07AM (#20432801)
      What? You mean TFA didn't have the right TLA's or FLA's or maybe FPGA's are just not your COT and SHA-1 is no BFD to you but to URG it's a BFD.

      what?
  • Cool diagram. [unaligned.org]

    I used to draw patterns like that while suffering through triple Maths - draw a circle, mark off every 10 degrees (or 5 if it looked like being really boring today), then join every point to every other point. Mindless, yet strangely satisfying.

    And what kind of nerd site doesn't let you use a degree symbol?
  • Princeton Engine (Score:2, Interesting)

    by SuseLover (996311)
    For the record, the company is Thomson and that is a piece of equipment known as the Princeton Engine, used by the IC developers to quickly verify their software/algorithms. It was lying around in our computer room (known as the Princeton Engine room) for years. Its replacement, from Cadence, is called Palladium and has the power of several hundred of those old FPGA boards.
  • He's not looking for collisions - he's looking for preimages of a given hash. Since he can't search a large enough space to find a preimage of an arbitrary hash, the most useful application of this sort of thing is password cracking - given the hash of someone's password, search the space of plausible passwords until you find one that matches the hash (taking salt into account as appropriate). Fun but not too advanced.

    Shame - what I was really hoping to read was that he'd implemented the latest collision
    • by jrwr00 (1035020)
      Video HW does seem like it would be able to handle the weirdness of SHA-1 cracking.
  • This doesn't find hash collisions (much less pre-images), so it's not an attack on SHA-1. It's an exhaustive search over a subset of ASCII (64 characters) for the purpose of cracking short (8-character) passwords.

    This is John the Ripper in hardware, just not as clever.

  • So is this being used to create huge rainbow tables? How many characters out can you go? I saw someone selling rainbow tables for SHA-1 out to 15 characters at DefCon on 2 DVDs for like $10...
  • The boards contain 15 Virtex-II Pro (XC2VP20) FPGAs in 3 identical sets of 5 (here called "channels"). Each channel also owns a Spartan-II (XC2S50) FPGA that was originally used as a control chip, and a DSP (ADSP21160M) which probably calculated transform parameters. There is also a shared XC2S50 chip, which is not used in this application, just like the DSPs. The clock distribution tree unfortunately contains 2 domains, which means the 39MHz channel clock had to be distributed from chip to chip, using inte
