Encryption

Blazing Fast Password Recovery With New ATI Cards

An anonymous reader writes "ElcomSoft has accelerated the recovery of Wi-Fi passwords and password-protected iPhone and iPod backups using ATI video cards. Support for ATI Radeon 5000-series video accelerators allows ElcomSoft to perform password recovery up to 20 times faster than Intel's top-of-the-line quad-core CPUs, and up to two times faster than enterprise-level NVIDIA Tesla solutions. Benchmarks performed by ElcomSoft demonstrate that ATI Radeon HD 5970-accelerated password recovery runs up to 20 times faster than the Core i7-960, Intel's current top-of-the-line CPU."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • My password is safe (Score:0, Informative)

    by Anonymous Coward on Tuesday March 16, 2010 @10:57AM (#31495760)

    Because it's in my pants!

  • by ShadowRangerRIT ( 1301549 ) on Tuesday March 16, 2010 @11:03AM (#31495916)
    And for the curious, TFA is no better. They're calling it a benchmark so they can advertise more effectively, that's all.
  • Re:My password. (Score:3, Informative)

    by FireofEvil ( 1637185 ) on Tuesday March 16, 2010 @11:05AM (#31495942)
    1, 2, 3, 4, 5? That's amazing! I've got the same combination on my luggage!
  • Huh? (Score:2, Informative)

    by blackjackshellac ( 849713 ) on Tuesday March 16, 2010 @11:09AM (#31496006)

    Is this supposed to be a good thing? Sounds like someone's password encryption algorithm needs some upgrading to me.

  • Re:GPUs (Score:3, Informative)

    by godrik ( 1287354 ) on Tuesday March 16, 2010 @11:14AM (#31496074)

    It is in progress, in fact. That was the point of Intel's 80-core prototype.

    I find it funny that over time we keep repeating the cycle: external processor -> co-processor -> integrated into the CPU die -> external processor.

  • boo (Score:5, Informative)

    by Anonymous Coward on Tuesday March 16, 2010 @11:16AM (#31496102)

    boo slashvertisement

  • Re:GPUs (Score:3, Informative)

    by Anonymous Coward on Tuesday March 16, 2010 @11:17AM (#31496114)

    GPUs are generally better at certain calculations, and are very good at parallel processing, since graphics work breaks down into parallel tasks very naturally. For this, GPUs have a ton of cores. So in a way CPUs are indeed starting to follow with multicore systems, but nowhere near the numbers GPUs use. High-end GPUs now have 480+ processor cores on a card; that's a lot more than Intel's 4 cores ;). But if you had a ton of cores on the CPU, each additional one wouldn't add much to actual CPU power, as most things must be done linearly, not in parallel. It mostly just helps with multitasking. Which is why a few cores are useful, but the overall power of each core beats having a ton of them. Graphics cards go with a ton of lower-speed cores.

  • Re:Slashvertisement (Score:3, Informative)

    by cOldhandle ( 1555485 ) on Tuesday March 16, 2010 @11:18AM (#31496132)
    In case anyone wants to play around with this tech without paying (or rolling your own): I tried out this free (as in beer) Windows software yesterday: http://golubev.com/rargpu.htm [golubev.com] It seemed to work very effectively: I was able to brute-force five-character, lowercase-only passwords on RAR files in a couple of minutes on a GTX 260. It also has some advanced options to specify mutations of strings to try, and to use word lists.
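    For a sense of why five lowercase letters fall so quickly: the keyspace is only 26^5, small enough that even a plain CPU can walk all of it. A minimal pure-Python sketch (the real tool performs an expensive RAR key check per guess; the check function here is a toy stand-in):

    ```python
    from itertools import product
    from string import ascii_lowercase

    # Keyspace for 5 lowercase letters: 26^5 = 11,881,376 candidates.
    KEYSPACE = 26 ** 5

    def brute_force(check, length=5):
        """Try every lowercase candidate, in order, until check() accepts one."""
        for letters in product(ascii_lowercase, repeat=length):
            guess = ''.join(letters)
            if check(guess):
                return guess
        return None

    # Toy stand-in for the real archive check: compare against a known answer.
    print(brute_force(lambda g: g == 'crack'))  # prints: crack
    ```

    On a GPU the same search is spread across hundreds of cores, which is where the minutes-instead-of-hours figure comes from.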
  • by cbope ( 130292 ) on Tuesday March 16, 2010 @11:20AM (#31496174)

    Normal. Running GPGPU or CUDA apps has no effect on output to the screen. We do it for medical image processing.

  • by roman_mir ( 125474 ) on Tuesday March 16, 2010 @11:22AM (#31496192) Homepage Journal

    On that one ATI board you get 103K passwords per second, and only 4K on the latest quad-core Intel (which, by the way, is almost 26 times faster, not just 20).

    So that's wonderful. How many passwords are there in 1024-bit SSL encryption? A 1024-bit asymmetric key is roughly equivalent to an 80-bit symmetric one, so that's like 2^80 passwords, right?

    Let's say 100,000 passwords per second, that's 10^5.

    Google says this: (2^80 / 10^5) / (3600 * 24 * 365 * 1000) = 383 347 863

    Note the extra factor of 1,000 in that denominator: the result is 383.3 million *thousands* of years, i.e. roughly 383 billion years to go through every password in 2^80 possibilities.

    In reality, of course, not every combination is used; many passwords can be eliminated heuristically, and it also helps to have a good dictionary file handy from which to generate the most likely password combinations. That probably cuts the 383 billion years down to something much more ATI-friendly. Of course, we'd then need to use a stronger cipher.

    As a final note: at last I understand why Hugh Jackman needed the 7 monitor setup, each one must have been used as an output device for the video card it was connected to. Obviously the video cards were the actual power behind all that hacking!
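    A quick check of the arithmetic (the quoted Google expression divides by an extra factor of 1,000, so its raw result is in thousands of years):

    ```python
    # Exhaustive search of an 80-bit keyspace at 10^5 passwords per second.
    keyspace = 2 ** 80
    rate = 10 ** 5                        # passwords per second
    seconds = keyspace / rate
    years = seconds / (3600 * 24 * 365)
    print(f"{years:.3e}")                 # ~3.833e+11: roughly 383 billion years
    ```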

  • Re:GPUs (Score:1, Informative)

    by Anonymous Coward on Tuesday March 16, 2010 @11:22AM (#31496210)

    GPUs are ridiculously parallel SIMD style processors. They are good at performing massive amounts of calculations in parallel, but for it to be effective these calculations have to be the same across all cores. GPUs don't have a huge amount of true CPU-style cores; rather, they can run one or a few algorithms over many instances of data in parallel. This works great for certain scientific and brute-force calculations such as these (and for 3D games), but it doesn't really work for regular programs. Also, GPU programs usually need to be written in a specific programming language (usually a derivative of C) and with this parallelism in mind.

    CPUs already have something like this (SIMD instructions), and they help for many workloads, but massive parallelism like this only really works for GPU-type tasks, not your average OS/apps.
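    The same-instructions-across-all-data pattern can be sketched in plain Python, with a batch comprehension standing in for the GPU's lanes (the hash target here is a made-up example, not anything from TFA):

    ```python
    import hashlib

    # One "kernel" applied uniformly to a whole batch of data: the SIMD/SPMD
    # shape GPUs are built around. In real GPGPU code this would be an
    # OpenCL/CUDA kernel; here the comprehension stands in for the lanes.
    TARGET = hashlib.md5(b"0042").hexdigest()   # toy target hash

    def kernel(candidate):
        # Every "lane" runs the same instructions; only the data differs.
        return hashlib.md5(candidate).hexdigest() == TARGET

    batch = [b"%04d" % i for i in range(10_000)]
    hits = [c for c in batch if kernel(c)]
    print(hits)  # [b'0042']
    ```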

  • Re:GPUs (Score:3, Informative)

    by John Napkintosh ( 140126 ) on Tuesday March 16, 2010 @11:23AM (#31496214) Homepage

    The last sentence nails it. They only do certain types of operations well, and the frequency with which I upgrade GPUs compared to CPUs - or more specifically, the fact that I very rarely replace both at the same time - leads me to believe I'm better off having them separate. Maybe there are parts of the GPU which could be incorporated into the CPU, and I think that might be what the Core i3/5/7 processors are doing with GMA integration.

  • Re:Portrayal (Score:4, Informative)

    by ElectricTurtle ( 1171201 ) on Tuesday March 16, 2010 @11:23AM (#31496218)
    Being found not guilty does not mean he didn't spend time in jail. Not everybody is released on their own recognizance pending trials.
  • by Anonymous Coward on Tuesday March 16, 2010 @11:31AM (#31496318)

    He probably assumes the screen looks different because he assumes video cards are nothing but raw memory-mapped video framebuffers -- which hasn't been the case since 1990 or so.

  • Re:GPUs (Score:5, Informative)

    by SuperMog2002 ( 702837 ) on Tuesday March 16, 2010 @11:36AM (#31496372)

    Is the coding/assembly so different that it doesn't translate? Do they only do certain kinds of processing really well (it is a GPU after all), so it couldn't handle other more 'mundane' OS needs?

    Yes, exactly. CPUs are built from the ground up to do scalar math really, really fast. That lends itself well to doing tasks that must be performed in sequence, such as running an individual thread. However, they've only recently gained the ability to do more than one thing at a time (dual core processors), and even now high end CPUs can only do six calculations at once (6 core processors).

    Meanwhile, GPUs are built to do vector math really, really fast. They can't do individual adds anywhere near as fast as a CPU can, but they can do dozens of them at the same time.

    Which type of processor is best for which job depends entirely on the nature of the math involved and how parallelizable the task is. In the case of 3D graphics, drawing a frame involves tons of vector arithmetic work, which is why your 1 GHz GPU will run circles around your 3 GHz CPU for that task (and is also where the GPU gets its name from). In the case mentioned in the article, password cracking is highly parallelizable: you've gotta run 100 million tests, and the outcome of any one test has zero influence on the other tests, so the more you can run at the same time, the better. By running it on the GPU, each individual test will take a bit longer than running it on the CPU would, but you'll be able to run dozens simultaneously instead of just a few, and will thus get your results much faster.

    CPUs certainly have their place, though. Some tasks simply must be done in sequence and cannot easily be divided up into separate parallel tasks. The CPU will get these done much faster, since running them on the GPU would incur the speed penalty without realizing any benefit.

    I've simplified it a bit for the sake of explanation, but that's the gist of it. Hope that helps!
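    The "zero influence between tests" property is exactly what makes the search splittable across hardware. A sketch using Python's multiprocessing as a stand-in for GPU lanes (the SHA-1 target and 4-digit keyspace are toy assumptions):

    ```python
    import hashlib
    from multiprocessing import Pool

    # Toy target for a 4-digit PIN search (assumed, not from TFA).
    TARGET = hashlib.sha1(b"1234").hexdigest()

    def test_candidate(candidate):
        # Zero communication between tests: ideal for parallel hardware.
        if hashlib.sha1(candidate.encode()).hexdigest() == TARGET:
            return candidate
        return None

    if __name__ == "__main__":
        candidates = ["%04d" % i for i in range(10_000)]
        # Pool(4) stands in for GPU lanes; real crackers run thousands at once.
        with Pool(4) as pool:
            found = [c for c in pool.map(test_candidate, candidates,
                                         chunksize=512) if c]
        print(found)  # ['1234']
    ```

    Doubling the workers roughly halves the wall-clock time, which is why hundreds of GPU cores beat a handful of fast CPU cores here.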

  • Re:Portrayal (Score:4, Informative)

    by russotto ( 537200 ) on Tuesday March 16, 2010 @11:45AM (#31496572) Journal

    No, the US jury found him not guilty.

    No, the charges against Sklyarov were dropped and he was released as part of a deal in which Elcomsoft agreed to accept US jurisdiction. The US jury then found Elcomsoft not guilty.

  • Re:Slashvertisement (Score:3, Informative)

    by elrous0 ( 869638 ) * on Tuesday March 16, 2010 @11:51AM (#31496676)
    Agreed, looks more like the kind of "story" we'd see posted by kdawson, not Taco.
  • by Anonymous Coward on Tuesday March 16, 2010 @11:55AM (#31496756)

    The display buffer for a 1920x1200 screen with 24-bit colour takes less than 7MB. Even a fairly low-end graphics card will have at least 128MB of memory. In other words, there's plenty of memory for a program running on a GPU without needing to piss on the display buffer.

    If your screen is just displaying a bunch of 2D windows, then the 100s of cores in your GPU will be sitting idle. Again, computations running on the GPU will have no impact on what you see.

  • by ShadowRangerRIT ( 1301549 ) on Tuesday March 16, 2010 @11:59AM (#31496830)
    I run the Folding@home [stanford.edu] GPU client [stanford.edu] on my GeForce 8800 GTX. On Vista and later OSes (pre-Vista, the driver model wasn't well adapted to GPGPU, which forces a polling-driven communication scheme that is really inefficient), the effect on resources is unnoticeable except during games (where I kill the client to reduce jerkiness); the GPGPU work runs at lower priority and gets shunted aside by rendering, though the latency involved is a problem for graphics-intensive games. For less demanding work and general usage, it's unnoticeable; the GPU is perfectly capable of drawing the screen and curing Alzheimer's at the same time. :-)
  • Re:Portrayal (Score:3, Informative)

    by ElectricTurtle ( 1171201 ) on Tuesday March 16, 2010 @12:36PM (#31497482)
    Foreign nationals such as Dmitry Sklyarov are usually classified as a high flight risk because they are expected to run back to their country if given half a chance, so, yeah, not out of the ordinary.
  • Re:Portrayal (Score:5, Informative)

    by Anonymous Coward on Tuesday March 16, 2010 @02:19PM (#31499092)

    Dude, I was there. Defcon 9.

    He didn't "enter a hostile country" unless you think the USA hates everybody and is hostile to all.

    Dmitriy broke no US laws and broke no Russian laws. No US entity had complained about his activities before his arrest. He had every right to think he'd not be bothered.

    But he angered a powerful and amoral US corporation named Adobe, so they had their government lackeys detain him. When Adobe took a horrible blog-beating and a nearly instantaneous sales hit, they asked the fedguv to drop the charges, and the USA said "no: you turned him in, but you don't prosecute the DMCA, we do. He stays in jail for a year until we eventually get around to trying him and finding him not guilty." The worm turned on its master, very funny for everyone but Dmitriy's wife and infant children.

    What did Dmitriy do that brought corporate wrath down on him? He revealed in a public forum that Adobe's e-book cipher, which they were shopping to authors as "hard encryption", was ROT-13. I was there when he did it. That's right, Adobe was telling authors that their technology would prevent duplication of their books, but their copy-protection was ROT-13. It's beyond parody.

    Dmitriy revealed to e-book authors that Adobe had ripped them off. For that, he was held in durance vile.

    Why did he do it? Not for the challenge, it was trivial! He did it so people could back up their legally purchased e-Books and so that blind people could read e-books. For that, he was held.

  • Re:What about....? (Score:3, Informative)

    by Ant P. ( 974313 ) on Tuesday March 16, 2010 @03:01PM (#31499754)

    Not really. GPUs are good at going really fast in a straight line. Throw so much as an "if" statement at them and they become about as fast as a P2. The closest you'd get to what you're describing is a Cell PCI-E card, or Intel's vapourware Larrabee.

    Though if all you want is to use your old stuff on a new PC, you can get ISA/PCI card motherboards that run off the host's power/peripherals.
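    The "if statement" penalty comes from SIMD lanes not being able to truly diverge: the hardware often runs both sides of a branch for every lane and masks out the wrong result (predication). A per-lane sketch of that transform, in plain Python for illustration only:

    ```python
    # A CPU-style branch: only one side executes.
    def branchy(x):
        if x > 0:
            return x * 2
        return -x

    # The GPU-style predicated equivalent: compute BOTH sides for every
    # lane, then select. Divergent lanes don't skip work; it's masked out.
    def predicated(x):
        mask = x > 0
        taken = x * 2          # both sides are evaluated regardless...
        not_taken = -x
        return taken if mask else not_taken   # ...and a select keeps one
    ```

    Both functions return the same values; the predicated form just pays for both paths, which is why branchy code wastes GPU throughput.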

  • by Anonymous Coward on Tuesday March 16, 2010 @06:10PM (#31502182)

    If you are worried about an individual: The password needs to be 10 characters. (Upper, lower, digits and symbols.)

    If you are worried about governments or large corporations: The password needs to be 12 characters. (Again - upper, lower, digits and symbols.)

    If you want to use a subset of the possible characters, the passwords need to be longer.

    That's based on actual results using GPUs to perform the calculations.
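    For a rough sense of the numbers behind that advice, assuming a ~94-character alphabet and a hypothetical GPU rig doing 10^9 guesses per second (the comment doesn't state its actual rates):

    ```python
    # ~94 printable characters: 26 upper + 26 lower + 10 digits + ~32 symbols.
    ALPHABET = 26 + 26 + 10 + 32
    RATE = 10 ** 9                 # guesses/second: an assumed GPU-rig figure
    YEAR = 3600 * 24 * 365

    for length in (8, 10, 12):
        years = ALPHABET ** length / RATE / YEAR
        print(f"{length} chars: ~{years:,.1f} years to exhaust")
    ```

    At that assumed rate, 8 characters fall in a couple of months, 10 characters take on the order of 1,700 years, and 12 characters around 15 million years; scale the rate up for a government-sized attacker and the 10-character margin shrinks fast.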

  • Re:GPUs (Score:3, Informative)

    by imgod2u ( 812837 ) on Tuesday March 16, 2010 @06:50PM (#31502606) Homepage

    My understanding is that even DX10+ compliant GPUs still suffer badly when conditional branching occurs. They can do it, but it basically causes them to throw away everything.

    That's entirely up to the implementation. Today's generations of GPUs don't pay much heed to conditional branching, but the upcoming Fermi from NVIDIA, for instance, does introduce branch prediction and tracking. The API supports conditionals and loops.

    As for Larrabee, while it was designed as a GPU in some ways, I got the impression it still hewed to CPU roots. It was integer based, not floating point based

    *boggle* No, it wasn't. The thing was a bunch of 486 CPUs, each with a gigantic 128-bit SIMD (read: vector floating point) unit attached. It obviously was not made to do anything but the most rudimentary CPU tasks. Hell, it doesn't even support branch prediction or OoOE.

    They wanted to make all those college raytracer programs practical for use, replacing the current model which is somewhat more fuzzy and less accurate, but *way* faster.

    Erm, no. While it's true that SSE supports 64-bit FP and may have been faster than the double-precision data on current graphics cards *per core*, in aggregate, it still wouldn't be any faster than a typical graphics card. And with Fermi, nVidia has vastly improved its double-precision processing anyway.

  • Re:Portrayal (Score:3, Informative)

    by Rene S. Hollan ( 1943 ) on Tuesday March 16, 2010 @08:30PM (#31503486)

    Bail bondsmen can't help you if you can't post collateral or pay the bond fee.

    The problem isn't not having the resources to post bail. (Well, that is a problem, but a different one.) The problem is not being able to execute the steps to do so.
