List:       john-dev
Subject:    Re: [john-dev] CUDA multi-device support
From:       Muhammad Junaid Muzammil <mjunaidmuzammil@gmail.com>
Date:       2014-01-13 4:19:37
Message-ID: CAFNw1FL9CHC_xSnj4POM3=ZZrQjXKEPextb5Br7gF2Xu=58qtw@mail.gmail.com

Thanks for the info. Previously, I wasn't thinking in terms of
virtualization. With frameworks like DistCL, devices across a
cluster or cloud can be accessed as if they were native devices.
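
For illustration, here is a minimal OpenCL enumeration sketch (plain Khronos
API calls only, nothing john-specific; MAX_GPU_DEVICES is just the proposed
limit, not an existing symbol). The point is that a virtualization layer such
as VCL or DistCL exposes remote devices through the same clGetDeviceIDs()
call as local ones, so the compile-time cap is all that bounds the list:

/* Sketch only: enumerate GPU devices up to a compile-time cap.
 * With VCL/SnuCL/DistCL in place, devices exported by other cluster
 * nodes show up in this same list, indistinguishable from local GPUs. */
#include <stdio.h>
#include <CL/cl.h>

#define MAX_GPU_DEVICES 128   /* proposed merged limit (assumption) */

int main(void)
{
    cl_platform_id platform;
    cl_device_id devices[MAX_GPU_DEVICES];
    cl_uint num_devices = 0;
    cl_uint i;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
        return 1;

    /* Ask for up to MAX_GPU_DEVICES GPUs; a lower cap silently hides
     * any devices beyond it. */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU,
                   MAX_GPU_DEVICES, devices, &num_devices);

    for (i = 0; i < num_devices; i++) {
        char name[256];
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME,
                        sizeof(name), name, NULL);
        printf("Device %u: %s\n", i, name);
    }
    return 0;
}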


On Sun, Jan 12, 2014 at 8:08 PM, magnum <john.magnum@hushmail.com> wrote:

> On 2014-01-12 14:16, Jeremi Gosney wrote:
>
>> On 1/12/2014 1:25 AM, Muhammad Junaid Muzammil wrote:
>>
>>> Currently we have set the MAX_GPU limit to 8 in both the OpenCL and CUDA
>>> variants. What was the reason behind it? Currently, both AMD CrossFire
>>> and NVIDIA SLI support a maximum of 4 GPU devices.
>>>
>>
>> This is not very sound logic, as one does not use CrossFire or SLI for
>> GPGPU. In fact, this technology usually must be disabled for compute
>> work. fglrx supports a maximum of 8 devices, and AFAIK NVIDIA supports
>> 16 devices, if not more. So 16 would likely be a more sane value.
>>
>>
> Right. And those are just local devices. With VCL/SnuCL/DistCL you can
> have a lot more, which is why oclHashcat supports 128 devices.
>
> I intend to add a file common-gpu.[hc] for shared stuff between CUDA
> and OpenCL, e.g. temperature monitoring. When I do that, I will merge
> MAX_CUDA_DEVICES and MAX_OPENCL_DEVICES into a single MAX_GPU_DEVICES so
> they'll always be the same. And I'll probably set it to 128.
>
> magnum
>
>
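
For reference, here is a rough sketch of what the common-gpu.h magnum
describes could look like. The names and layout below are my assumption
based on his description, not actual john-dev code:

/* common-gpu.h -- hypothetical sketch of shared CUDA/OpenCL definitions.
 * Everything here is an assumption based on the thread above. */
#ifndef _COMMON_GPU_H
#define _COMMON_GPU_H

/* One device limit for both back-ends, sized for virtualized setups
 * (VCL/SnuCL/DistCL) rather than for local hardware alone. */
#define MAX_GPU_DEVICES 128

/* Keep the old names as aliases so existing code still compiles. */
#define MAX_CUDA_DEVICES   MAX_GPU_DEVICES
#define MAX_OPENCL_DEVICES MAX_GPU_DEVICES

/* Shared per-device state, e.g. for temperature monitoring. */
extern int gpu_temp[MAX_GPU_DEVICES];

/* Poll temperatures for all detected devices (would live in common-gpu.c). */
extern void gpu_check_temp(void);

#endif /* _COMMON_GPU_H */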
