
List:       qubes-devel
Subject:    Re: [qubes-devel] PCI passthrough appears to be regressing
From:       Eric Shelton <knockknock () gmail ! com>
Date:       2016-01-31 22:07:01
Message-ID: 92515724-519f-4cb8-a08f-27530517f80e () googlegroups ! com


On Sunday, January 31, 2016 at 11:27:40 AM UTC-5, joanna wrote:
> 
> -----BEGIN PGP SIGNED MESSAGE----- 
> Hash: SHA256 
> 
> On Sat, Jan 30, 2016 at 08:20:45AM -0800, Eric Shelton wrote: 
> > I'm not sure that I have anything concrete enough to open an issue yet 
> > (aside from https://github.com/QubesOS/qubes-issues/issues/1659), but I 
> > think it is worth initiating a discussion about PCI passthrough support 
> > having regressed from Qubes 2.0 (where pretty much everything seemed to 
> > work, including passthrough to HVM domains), and that it appears to only be 
> > getting worse over time.  For example, see these recent discussions: 
> > 
> > https://groups.google.com/forum/#!topic/qubes-users/VlbfFyNGNTs 
> > https://groups.google.com/d/msg/qubes-devel/YqdhlmYtMY0/vCO3QHLBBgAJ 
> > https://groups.google.com/d/msg/qubes-users/cmPRMOkxkdA/gIV68O0-CQAJ 
> > 
> > The state of things seems to be something along these lines: 
> > - everything worked pretty well under Qubes 2.0 
> > - starting with Qubes 3.0, PCI passthrough to HVM domains broke (I think it 
> > is a libvirt related issue, as passthrough would still work using 'xl' to 
> > start a domain) 
> > - it seems people are having less success doing passthrough under Xen 4.6 
> > than under Xen 4.4.  However, there is no hard evidence on this. 
> > - Linux kernel 4.4 appears to break PCI passthrough for network devices 
> > entirely (not so much what is going on with Qubes today, but evidence that 
> > Xen's PCI passthrough support is getting worse, not better).  Although the 
> > Qubes team has opted to move forward despite the above issues, this 
> > represents the point at which Qubes will no longer be able to do the one 
> > passthrough thing it relies on - isolating network adapters. 
> > 
> > The effects of this today are that PCI passthrough questions are popping up 
> > more frequently on qubes-users, and the fixes that used to reliably address 
> > passthrough problems (for example, setting passthrough to permissive) seem 
> > to be less helpful, as problems seem to be lurking deeper within Xen, or 
> > perhaps changes to Xen mean that other fixes should be used instead. 
> > 
> > The effects of this in the future are that it grows less and less likely 
> > that the vision of doing a separate GUI domain via GPU passthrough can be 
> > successfully executed.  It is hard enough to get a GPU passed through to 
> > begin with (thanks to weird abuses of PCI features by GPU makers to 
> > facilitate DRM, I'm uncertain it will ever work for anything other than 
> > Intel GPUs, which is only due to specific efforts on Intel's part to make 
> > it work in Xen).  The above issues make it much worse. 
> > 
> 
> The longer-term plan is for us to move to HVMs for everything: AppVMs, 
> ServiceVMs, and, of course, for the GUI domain also. This should not only 
> resolve the passthrough problems (assuming IOMMU works well), but also reduce 
> complexity in the hypervisor required to handle such VMs (see e.g. XSA 148). We 
> plan on starting this work once an early 4.0 is out. 
> 

I don't see how this resolves things, but it's a little hard to tell at this 
time since HVM PCI passthrough support is currently broken.  There is no 
way for me to compare the bugginess of PCI passthrough on HVM versus PV.  I 
have not experimented with Linux HVMs using PCI passthrough - do 
pcifront/pciback drop out of the picture, so that the device just looks and 
behaves like normal hardware (I would expect the hypervisor to still impose 
some restrictions on use/abuse of PCI devices)?  Is there some other 
significant difference?
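
For reference, here is roughly what the 'xl'-based passthrough I mentioned 
above looks like - a minimal sketch only, with a made-up domain name and BDF, 
and including the per-device permissive flag that used to be the standard fix:

# fragment of an illustrative /etc/xen/hvm-test.cfg (the BDF is made up)
builder = "hvm"
pci = [ '0000:03:00.0,permissive=1' ]

# or hot-attach from dom0 to an already running domain:
xl pci-assignable-add 0000:03:00.0
xl pci-attach hvm-test 0000:03:00.0

When this path worked and the libvirt-started domain did not, that is what 
pointed me at libvirt rather than Xen itself.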
 

> Are the DRM-related problems with passthrough for some GPUs you mentioned above 
> also encountered on HVMs, or are they limited to PVs only? 
> 

All of the GPU passthrough efforts I have seen have involved HVM, probably 
because most people pursuing it want to run 3D Windows applications (games 
or otherwise). 
http://wiki.xen.org/wiki/XenVGAPassthrough#Why_can.27t_I_do_VGA_passthrough_to_Xen_PV_.28paravirtual.29_guest.3F
discusses why passthrough cannot be done to a PV domain.

The bizarre things GPU makers have done with PCI registers and BARs 
generally defy how one expects well-mannered PCI devices to behave.  MS 
Vista not only introduced, but required, this nonsense 
(http://www.cypherpunks.to/~peter/vista.pdf).  As noted on page 22 of that 
PDF file, each GPU type stepping is/was required to have a different 
mechanism in place.  As a result, no two cards, even from the same vendor, 
have distorted PCI behavior in quite the same way.  The patch at 
http://old-list-archives.xenproject.org/archives/html/xen-devel/2010-10/txtfLpL6CdMGC.txt
describes one example:

* ATI VBIOS Working Mechanism 
*
* Generally there are three memory resources (two MMIO and one PIO) 
* associated with modern ATI gfx. VBIOS uses special tricks to figure out 
* BARs, instead of using regular PCI config space read.
*
*  (1) VBIOS relies on I/O port 0x3C3 to retrieve PIO BAR 
*  (2) VBIOS maintains a shadow copy of PCI configure space. It retrieves the 
*      MMIO BARs from this shadow copy via sending I/O requests to first two 
*      registers of PIO (MMINDEX and MMDATA). The workflow is like this: 
*      MMINDEX (register 0) is written with an index value, specifying the 
*      register VBIOS wanting to access. Then the shadowed data can be 
*      read/written from MMDATA (register 1). For two MMIO BARs, the index 
*      values are 0x4010 and 0x4014 respectively. 

Not how your typical PCI device behaves.
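
To make the contrast concrete, here is a rough C sketch - not code from the 
patch, just an illustration; the 0x3C3/MMINDEX/MMDATA details come from the 
comment above, while the register offsets and everything else are my 
assumptions - of a conventional BAR read versus the indexed dance the VBIOS 
performs:

#include <stdint.h>
#include <sys/io.h>   /* outl()/inl(); needs iopl(3) and root to actually run */

/* How a well-behaved consumer finds BAR0: read PCI config space at the
 * standard offset (0x10), here via legacy config mechanism #1. */
uint32_t read_bar0_normally(uint8_t bus, uint8_t dev, uint8_t fn)
{
    uint32_t addr = 0x80000000u | ((uint32_t)bus << 16) |
                    ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | 0x10;
    outl(addr, 0xCF8);
    return inl(0xCFC) & ~0xFu;          /* mask off the BAR flag bits */
}

/* What the ATI VBIOS does instead, per the patch comment: write an index
 * (0x4010 or 0x4014) to MMINDEX and read the shadowed BAR from MMDATA.
 * The +0/+4 offsets are my assumption about "register 0" and "register 1". */
uint32_t read_bar_the_ati_way(uint16_t pio_base, uint32_t index)
{
    outl(index, pio_base + 0);          /* MMINDEX */
    return inl(pio_base + 4);           /* MMDATA */
}

The point is simply that the second routine pokes device-specific I/O ports 
that no generic passthrough code knows anything about, which is why QEMU and 
the hypervisor end up needing per-vendor quirks.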

Often a 1:1 mapping of BARs has been required to get video drivers to work 
with passthrough GPUs.  Sometimes booting the card's BIOS through QEMU's 
emulated BIOS becomes an issue (with the solution being to copy the VBIOS 
using some hacked-together mechanism and roll it into the SeaBIOS image).
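
The hacked-together VBIOS copy usually amounts to something like the 
following - a sketch assuming the standard sysfs rom interface and, again, a 
made-up BDF; how the resulting file gets into SeaBIOS varies from recipe to 
recipe:

# in dom0 (or a bare-metal boot), with the GPU at the hypothetical 0000:03:00.0:
echo 1 > /sys/bus/pci/devices/0000:03:00.0/rom     # enable reads of the option ROM
cat /sys/bus/pci/devices/0000:03:00.0/rom > vbios.bin
echo 0 > /sys/bus/pci/devices/0000:03:00.0/rom     # disable again
# vbios.bin is then rolled into the SeaBIOS/ROM image the guest boots, or
# handed to the device model, depending on which recipe you follow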

Between NVIDIA and AMD, users have generally had a worse time getting 
NVIDIA devices working via passthrough (although AMD has been plenty 
difficult).  The most reliable technique, in my experience, is to use an 
NVIDIA Quadro that is compatible with NVIDIA GRID 
(http://www.nvidia.com/object/dedicated-gpus.html).  It's the only thing 
I've used for GPU passthrough that "just works" - the NVIDIA drivers are 
specifically written to accept that the GPU is running in a virtualized 
environment.

On top of all of this, NVIDIA is apparently _actively_ doing things to 
frustrate attempts at running their hardware within a VM: 
https://www.reddit.com/r/linux/comments/2twq7q/nvidia_apparently_pulls_some_shady_shit_to_do/
with one developer describing the situation as "VM developers have been 
having a little arms race with NVIDIA."
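
For what it's worth, one of the moves in that arms race has been hiding the 
hypervisor from the guest entirely by masking the CPUID "hypervisor present" 
bit (leaf 1, ECX bit 31).  I believe xl can express that in the domain config 
roughly as below - treat the exact syntax, and whether this is really what 
NVIDIA's driver keys on, as assumptions on my part:

# illustrative xl.cfg fragment: force the "running under a hypervisor" bit to 0
# (xend-style CPUID syntax; the leftmost character of the bit string is bit 31)
cpuid = [ "0x1:ecx=0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" ]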

About 1-2 years ago, the KVM + QEMU developers put a lot of effort into 
getting GPU passthrough working more smoothly.  However, I do not know what 
the state of things is today, and Xen has not engaged in similar efforts to 
anywhere near the same degree.  Looking at the last 6 months or so of 
xen-devel, most of the subject lines mentioning PCI passthrough are efforts 
towards getting Intel's IGD working via passthrough (related to this, Intel 
is also still actively developing XenGT).  Given the number of posts on the 
topic, I'm guessing it was nontrivial - and this is with an actively 
cooperating GPU vendor.

As noted above, most attempts have involved running GPUs in a virtualized 
Windows session.  It is possible that things will go more smoothly when 
running Linux drivers.  On the other hand, I have not heard particularly 
good things about running with nouveau, and using NVIDIA's own Linux 
drivers may pull in the same headaches seen on Windows.

So, expect getting GPU passthrough working across the wide range of 
notebook and desktop GPUs deployed out there to be a substantial challenge, 
and make sure you have lots of different GPUs on hand for testing purposes. 
You may discover, without having to dig too deeply into it, that it is not 
worth the headaches and the risk of rendering a lot of hardware 
Qubes-incompatible.  I understand (at least some of) the reasons for 
wanting to move the GUI out of dom0, but it may not be practical due to 
what the GPU vendors have done to support DRM.

Eric



