NOTE: When cracking WPA/WPA2 passwords, make sure you check gpuhash.me first in case it's already been processed.

New MDXfind posted on hashes.org - ARM support


12 Results - Page 1 of 1 -
1
Waffle
Status: Elite
Joined: Wed, 02 Jan 2013
Posts: 284
Team: CynoSure Prime
Reputation: 357
Sat, 24 Dec 2016 @ 21:27:50

The new version of MDXfind offers several new algorithms, and a few more speed improvements.

I also started ARM support, and have tested it on multiple platforms (various ARMv6 and ARMv7 devices). The ARM version runs only on Linux.
The performance is surprising. A Pi 3 running the best64 rules against 1,000,000 input hashes does about 4.6M MD5 hashes per second. Not bad at all. I have not done more than basic regression testing, so I would appreciate feedback if you run it on a board and run into any problems.

Of course, the Windows and Mac versions are there as well :-)


Chick3nman
Moderator
Status: Trusted
Joined: Wed, 28 Jan 2015
Posts: 546
Reputation: 582
Sat, 24 Dec 2016 @ 23:49:45

Maybe I'm just not understanding some of the settings, but is there a way to run hash:salt pairs without separating the salts into a new file and having them run against every hash? It makes testing for different algorithms pretty hard when every salt is tried against every hash in the list.

Either way, thanks for your work on the project! It's an incredibly powerful tool, and I use it quite often when searching for algorithms or running things like rotated or truncated hashes.


My PGP key is available for security and identity verification here: https://keybase.io/chick3nman

Hardware: 1x D-WAVE 2000Q

BTC: 1Chick3nMTco6sBEByKuvmAzYTBsGN5KzD

Waffle
Status: Elite
Joined: Wed, 02 Jan 2013
Posts: 284
Team: CynoSure Prime
Reputation: 357
Sun, 25 Dec 2016 @ 07:59:25

Chick3nman said:

Maybe I'm just not understanding some of the settings, but is there a way to run hash:salt pairs without separating the salts into a new file and having them run against every hash? It makes testing for different algorithms pretty hard when every salt is tried against every hash in the list.

Either way, thanks for your work on the project! It's an incredibly powerful tool, and I use it quite often when searching for algorithms or running things like rotated or truncated hashes.

You aren't the first to note that.

But it's important to remember that the salt is not part of the hash - it is part of the password. Every salt *must* be tried with every candidate word. The only thing that keeping the salts and the hash together does is (theoretically) reduce the number of hashes you compare against, for a given salt.

MDXfind is particularly good at that, however. It doesn't matter if you have one hash, or 1,000,000 - the comparison speed is almost identical, unlike other programs (which are greatly affected by the number of unsolved hashes present). If I were to re-code it to "pair" the hash and the salt, I would still need to separate or look up the salts and hashes, which (paradoxically) would make the program much slower, not faster.

For algorithms that use longer salts, MDXfind does mark the salt as "found", and does not use it. For shorter salts (like md5(md5($pass).$salt) with 3 character salts), the salts are not unique, and thus must be used with every candidate password.

And yes, you are correct - using salts with an unknown algorithm does in fact make it hard. That's really the point of salts - adding apparent complexity to the password to prevent trivial re-use of known password hashes to solve other users' passwords.

If you have an idea that would make this faster, I'm all for trying it out, however :-)
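For readers unfamiliar with the scheme mentioned above, here is a minimal Python sketch of md5(md5($pass).$salt) showing why each salt multiplies the hashing work (the candidate words and salts here are made up purely for illustration):

```python
import hashlib

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

# md5(md5($pass).$salt): hash the password, append the salt to the lowercase
# hex digest, then hash again. The salt changes the input of the outer MD5,
# so every (candidate, salt) pair costs its own full computation.
def md5_md5_pass_salt(password: str, salt: str) -> str:
    inner = md5_hex(password.encode())
    return md5_hex((inner + salt).encode())

# Hypothetical candidates and 3-character salts, for illustration only.
candidates = ["password", "letmein", "dragon"]
salts = ["q1z", "ab9"]

# len(candidates) * len(salts) hash computations are unavoidable, no matter
# how the hash:salt list is organized on disk.
work = [(p, s, md5_md5_pass_salt(p, s)) for p in candidates for s in salts]
print(len(work))  # 6 computations for 3 candidates x 2 salts
```

The point of the sketch is the loop shape: organizing the input file differently changes which digests you *compare* against, not how many outer MD5s you must compute.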


Chick3nman
Moderator
Status: Trusted
Joined: Wed, 28 Jan 2015
Posts: 546
Reputation: 582
Sun, 25 Dec 2016 @ 10:22:27

To me it simply seems like unnecessary work to compare a hash generated with a specific salt+candidate against a hash we are certain it won't match, since we already know it's the wrong salt. When you have 1,000,000 hash:salt pairs, that's 1,000,000 x 1,000,000 comparisons per candidate, for no clear reason. If the salts are paired, even if it takes a few operations to get to the comparison, that's still only 1,000,000 comparisons plus a few operations. I don't understand how doing less work ends up slower, but I also don't claim to understand how MDXfind is built at a low level. I might still be confused about why it wouldn't be worth it, since you said comparison speed wouldn't change much either way. Most applications seem bottlenecked by comparisons, so it's hard to imagine that's not the case here.



Waffle
Status: Elite
Joined: Wed, 02 Jan 2013
Posts: 284
Team: CynoSure Prime
Reputation: 357
Sun, 25 Dec 2016 @ 15:40:30

Chick3nman said:

To me it simply seems like unnecessary work to compare a hash generated with a specific salt+candidate against a hash we are certain it won't match, since we already know it's the wrong salt. When you have 1,000,000 hash:salt pairs, that's 1,000,000 x 1,000,000 comparisons per candidate, for no clear reason. If the salts are paired, even if it takes a few operations to get to the comparison, that's still only 1,000,000 comparisons plus a few operations. I don't understand how doing less work ends up slower, but I also don't claim to understand how MDXfind is built at a low level. I might still be confused about why it wouldn't be worth it, since you said comparison speed wouldn't change much either way. Most applications seem bottlenecked by comparisons, so it's hard to imagine that's not the case here.

Like I said, you aren't the first to make that assumption.

The way that MDXfind works is different, however. The comparison code can reject a match in (let me check...) 10 instructions, and one prefetched read. That's for _any number_ of hashes - 1 or 100,000,000 (which is how I normally run MDXfind). Those 10 instructions run well matched to the pipeline, so execute in about 6 clocks, or about 2-5 nanoseconds (because the memory fetch happens earlier, out of band).
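The claim that rejection cost does not grow with the number of loaded hashes matches how hash-table lookups behave. As a loose illustration only (this is not MDXfind's actual code, which works at the instruction level), membership testing against a prebuilt set costs roughly the same whether it holds one digest or a hundred thousand:

```python
import hashlib
import timeit

def build_index(digests):
    # One-time cost: index the target digests in a hash set.
    return set(digests)

small = build_index({hashlib.md5(str(i).encode()).digest() for i in range(1)})
large = build_index({hashlib.md5(str(i).encode()).digest() for i in range(100_000)})

# A candidate digest that matches nothing in either set.
probe = hashlib.md5(b"no-match").digest()

# Rejecting a non-matching digest is O(1) in either case.
t_small = timeit.timeit(lambda: probe in small, number=1_000_000)
t_large = timeit.timeit(lambda: probe in large, number=1_000_000)
print(t_small, t_large)  # roughly comparable, despite 100,000x more targets
```

The per-probe timings stay in the same ballpark because lookup cost depends on the bucket probed, not on the total number of entries.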

Now, let's just assume that I can take that to zero instructions. In fact, let's not assume that; let's try it... stand by.

On my crappy laptop, using my 29,012,259 test hashes, and 29,012,259 test passwords with best64.rule it takes:

39.73 seconds hashing, 2,198,699,724 total hash calculations
55.35M hashes per second (approx)

to run. (To be clear, I reversed each of the 29,012,259 hashes so that there would be no matches, and then ran mdxfind -f 29mr.txt -r rules/best64.rule 29m.pass.)

Now, I modified the code to remove the comparison altogether. So, even though I will still load the hashes, run the rules, and generate the md5's, I won't compare any of them to the loaded hashes:

32.50 seconds hashing, 2,198,699,724 total hash calculations
67.65M hashes per second (approx)

So, by not comparing *anything*, I save 3.28 nanoseconds per hash. Now, let's add a comparison against _just one_ md5 hash (128 bits), using a best-case (in-cache) comparison (using SSE instructions, so it really is a best-case test).

33.42 seconds hashing, 2,198,699,724 total hash calculations
65.79M hashes per second (approx)

So, if you have *EXACTLY ONE* salt per hash, and you can (somehow) arrange to look at just that one (I'm skipping over a lot of details), you can save 2.87 nanoseconds per hash.

But all of that blows up when a salt can be shared by more than one hash (as, for example, with vB 3.8 hashes, where many passwords can share one salt). Let's look at a typical vB hash list. One of my lists using 3-character salts has 821,925 distinct salts across 4,479,314 solved hashes. The re-use frequency ranges from 21 down to 1. Let's also assume that we can (somehow) do all of the salt-wrangling for free, and that it takes no time (impossible, but this is for argument's sake). That leaves us searching through 789,567 salts that are used more than once, and 32,358 that are used exactly once.

Without going through a lot more examples, I hope it is clear that 1 comparison is always going to be better than more-than-one :-)

Bottom line: if you can arrange to work on one salt at a time, then yes, you can save 2.87 nanoseconds per candidate - but only if there is exactly one salt per hash. As soon as you enter the real world, however, you will have cache misses (the comparisons won't be in cache) and overhead (sorting, or other lookup techniques) which nullify any advantage you gain. The salt is part of the password, not part of the hash.
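The per-hash figures in this post can be reproduced from the quoted timings with a little arithmetic. A sketch (the seconds and hash counts below are simply copied from the measurements above, not re-measured):

```python
# Timings copied from the measurements in the post above.
total_hashes = 2_198_699_724

full_compare_s = 39.73  # full comparison against ~29M loaded hashes
no_compare_s = 32.50    # comparison removed entirely
one_compare_s = 33.42   # best-case SSE compare against exactly one MD5

# Cost of the full comparison step, per hash, in nanoseconds:
full_cost_ns = (full_compare_s - no_compare_s) / total_hashes * 1e9

# Best-case saving if each hash could be checked against exactly one salt:
saving_ns = (full_compare_s - one_compare_s) / total_hashes * 1e9

print(f"{full_cost_ns:.2f} ns")  # ~3.29 ns (the post rounds this to 3.28)
print(f"{saving_ns:.2f} ns")     # ~2.87 ns
```

Even the best case saves under 3 ns per candidate, which is the margin the rest of the argument weighs against real-world lookup overhead.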


Waffle
Status: Elite
Joined: Wed, 02 Jan 2013
Posts: 284
Team: CynoSure Prime
Reputation: 357
Sun, 25 Dec 2016 @ 18:08:50

Chick3nman said:

To me it simply seems like unnecessary work to compare a hash generated with a specific salt+candidate against a hash we are certain it won't match, since we already know it's the wrong salt.

I want to touch on this some more, if you could indulge me.

I have solved literally millions of hashes with missing or corrupted salts by ignoring the hash/salt pairing. There are many lists with bad salts (because of bad extraction, bad handling, or just poor practices). That was one of the initial reasons I started writing MDXfind. John had an almost-working mode that would fix salts, and the CPU version of hashcat had a different almost-working version, but in both it seemed like an afterthought.

Looking just at the left list from Hashkiller, for example, I see that I have found 213,962 md5(md5($pass).$salt) passwords. None of them had salts, of course, in the original list supplied by Hashkiller.

Salts are part of the password, not part of the hash.


Chick3nman
Moderator
Status: Trusted
Joined: Wed, 28 Jan 2015
Posts: 546
Reputation: 582
Sun, 25 Dec 2016 @ 20:48:41

That clears it up a lot, thanks. And yes, I'm aware of the issue with bad or missing salts. I have about a million hashes from the list manager that I can't submit because they are labeled as md5, yet I found them with salts. I posted asking what to do about it a while back and got no replies.



Waffle
Status: Elite
Joined: Wed, 02 Jan 2013
Posts: 284
Team: CynoSure Prime
Reputation: 357
Tue, 03 Jan 2017 @ 17:59:10

And, thanks Chick3nman for defending my honour on IRC :-)

I'll let you know when the mdxfind with GPU support is done. Soon.


jimbas
Status: Trusted
Joined: Sat, 26 Mar 2016
Posts: 830
Reputation: 1357
Tue, 03 Jan 2017 @ 18:11:15

Waffle said:

And, thanks Chick3nman for defending my honour on IRC :-)

I'll let you know when the mdxfind with GPU support is done. Soon.

wow.. careful.. when you do so.. you could be the owner of the best hash cracker program :P hahah

awesome as always!


BTC: 3F78Wk7GhnWAzAsrUw6uUeXZ3PzyuAvkm7
BCH: 33tuLY5u8drRkgP4pVeFupPrV8bSV5xaqY

Chick3nman
Moderator
Status: Trusted
Joined: Wed, 28 Jan 2015
Posts: 546
Reputation: 582
Tue, 03 Jan 2017 @ 18:24:24

Waffle said:

And, thanks Chick3nman for defending my honour on IRC :-)

I'll let you know when the mdxfind with GPU support is done. Soon.

MDXFind is a great tool and I think it deserves credit where due. Just making sure others know that as well



Theatione
Status: Elite
Joined: Sun, 22 Feb 2015
Posts: 646
Reputation: 1593
Thu, 05 Jan 2017 @ 00:15:12

Chick3nman said:


MDXFind is a great tool and I think it deserves credit where due. Just making sure others know that as well


Totally agree! Thank you Waffle for sharing this tool with us.
Quick question: how well does performance scale on systems with several CPUs? And if it scales well, will it still be useful once MDXfind with GPU support is released?


If you have large lists, please upload them to the Hash List Manager; it will be more practical for everybody. Thank you!

Waffle
Status: Elite
Joined: Wed, 02 Jan 2013
Posts: 284
Team: CynoSure Prime
Reputation: 357
Thu, 05 Jan 2017 @ 06:47:14

Theatione said:


Totally agree! Thank you Waffle for sharing this tool with us.
Quick question: how well does performance scale on systems with several CPUs? And if it scales well, will it still be useful once MDXfind with GPU support is released?

It's working well, and I've added support for a number of systems (for example, the ARM line, and some of the more esoteric FPGA boards).

One person is running mdxfind on 128 cores, and getting very nice performance (>1GH/sec as I recall), and the performance will continue to get better over time.

GPUs are great, but they have significant limitations. Certain algorithms really do better on CPU than GPU. For a graphic example of this (easy to duplicate), take a file you know the hashes for (or generate a new set of MD5s from an existing wordlist).

Code:
mdxfind -f /dev/null -z wordlist | cut -c 8- | cut -c 1-32 > wordlist-hash.txt

Then, use your favorite GPU-based hash cracker to crack the passwords, using the original wordlist. You will be surprised at how slow it is...
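For anyone without MDXfind handy, a comparable target list of plain MD5 digests can be built from any wordlist with a short script. This is a sketch, not the author's pipeline above; it simply emits one hex digest per word:

```python
import hashlib

def md5_wordlist(words):
    """Return the lowercase MD5 hex digest of each word, one per word."""
    return [hashlib.md5(w.encode()).hexdigest() for w in words]

# Example with two well-known test passwords:
hashes = md5_wordlist(["password", "123456"])
print(hashes[0])  # 5f4dcc3b5aa765d61d8327deb882cf99
```

Writing the output to a file (one digest per line) gives a hash list you can feed to any cracker alongside the original wordlist for the comparison described above.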

There is a place for GPU, and a place for CPU, in these applications. Use the right hammer for the correct screw - that's the best advice :-)




