
List:       john-users
Subject:    Re: [john-users] Splitting workload on multiple hosts
From:       Solar Designer <solar () openwall ! com>
Date:       2018-04-09 20:44:22
Message-ID: 20180409204422.GB23244 () openwall ! com

On Mon, Apr 09, 2018 at 04:13:55PM -0400, Rich Rumble wrote:
> I used save-memory=2 it was going over and into swap for the 1G slices.

You'd likely achieve better speed by using --save-memory=1 and running
fewer forks.  The performance difference between 12 and 24 forks is
probably small (those are just second logical CPUs in the same cores).
The performance difference between --save-memory=1 and --save-memory=2
for large hash counts, when things do still fit in RAM with =1, can be
large (a few times, since it can mean a 16x difference in bitmap and
hash table size and thus up to as much difference in lookup speed).
You could very well prefer, say, 6 forks with lower memory saving over
24 forks with higher memory saving for each.  The same goes for larger
chunks and fewer forks.  These are unsalted hashes, and there's little
point in recomputing the same hashes (of the same candidate passwords)
for each chunk when you can avoid that (even if it means using fewer
CPU cores at a time).
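To make the trade-off concrete, here's a sketch of an invocation along
these lines (hashes.txt and wordlist.lst are placeholder filenames, and
the wordlist mode is just an example of an attack):

```shell
# Fewer forks with lower memory saving, run against one large chunk,
# so each candidate password is hashed only once for all target hashes.
./john --fork=6 --save-memory=1 --wordlist=wordlist.lst --rules hashes.txt
```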

On Mon, Apr 09, 2018 at 04:22:17PM -0400, Stephen John Smoogen wrote:
> Are you able to use taskset to push each one to a CPU? I found that
> sometimes the kernel would shove multiple processes to the same CPU.
> This was done more by the kernel and not the process itself so taskset
> or similar tools needed to be done to get the forks off to their own
> client.

This shouldn't be much of a problem with recent kernels, except for
latency-sensitive tasks, which password cracking isn't, and anyway it
would be the least of Rich's worries given what he's doing.
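That said, if someone did want to constrain placement anyway: CPU
affinity is inherited across fork(), so restricting the parent process
restricts all of the forked children too.  A sketch (the core range and
filename here are just examples; check your machine's core numbering
with lscpu):

```shell
# Restrict john and its forked children to logical CPUs 0 through 5;
# the kernel then spreads the 6 forks across those CPUs.
taskset -c 0-5 ./john --fork=6 --save-memory=1 hashes.txt
```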

Alexander
