
List:       qubes-devel
Subject:    Re: [qubes-devel] PCI passthrough appears to be regressing
From:       Outback Dingo <outbackdingo () gmail ! com>
Date:       2016-02-01 10:20:48
Message-ID: CAKYr3zznPRAhk31HBYZYYHiKqaSpaB3RO2mLas8mn9qqWm0vVA () mail ! gmail ! com

On Mon, Feb 1, 2016 at 9:57 AM, Joanna Rutkowska <joanna@invisiblethingslab.com> wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
> 
> On Sun, Jan 31, 2016 at 02:07:01PM -0800, Eric Shelton wrote:
> > On Sunday, January 31, 2016 at 11:27:40 AM UTC-5, joanna wrote:
> > > 
> > > -----BEGIN PGP SIGNED MESSAGE-----
> > > Hash: SHA256
> > > 
> > > On Sat, Jan 30, 2016 at 08:20:45AM -0800, Eric Shelton wrote:
> > > > I'm not sure that I have anything concrete enough to open an
> > > > issue yet (aside from
> > > > https://github.com/QubesOS/qubes-issues/issues/1659), but I think
> > > > it is worth initiating a discussion about PCI passthrough support
> > > > having regressed from Qubes 2.0 (where pretty much everything
> > > > seemed to work, including passthrough to HVM domains), and that it
> > > > appears to only be getting worse over time.  For example, see
> > > > these recent discussions:
> > > > 
> > > > https://groups.google.com/forum/#!topic/qubes-users/VlbfFyNGNTs
> > > > https://groups.google.com/d/msg/qubes-devel/YqdhlmYtMY0/vCO3QHLBBgAJ
> > > > https://groups.google.com/d/msg/qubes-users/cmPRMOkxkdA/gIV68O0-CQAJ
> > > > 
> > > > The state of things seems to be something along these lines:
> > > > - everything worked pretty well under Qubes 2.0
> > > > - starting with Qubes 3.0, PCI passthrough to HVM domains broke (I
> > > >   think it is a libvirt related issue, as passthrough would still
> > > >   work using 'xl' to start a domain; see the sketch after this
> > > >   list)
> > > > - it seems people are having less success doing passthrough under
> > > >   Xen 4.6 than under Xen 4.4.  However, there is no hard evidence
> > > >   on this.
> > > > - Linux kernel 4.4 appears to break PCI passthrough for network
> > > >   devices entirely (not so much what is going on with Qubes today,
> > > >   but evidence that Xen's PCI passthrough support is getting
> > > >   worse, not better).  Although the Qubes team has opted to move
> > > >   forward despite the above issues, this represents the point at
> > > >   which Qubes will no longer be able to do the one passthrough
> > > >   thing it relies on - isolating network adapters.
> > > > 
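(For reference: attaching a device with plain xl, as mentioned in the list
above, can be done either in the domain's config file or at runtime. A
minimal sketch, assuming a dom0 where the device has already been handed to
pciback, reusing the Realtek NIC at 02:00.1 that comes up later in this
thread; the domain name is illustrative:

    # in the HVM guest's xl config file
    pci = [ '02:00.1,permissive=1' ]

    # or at runtime, from dom0
    xl pci-assignable-add 02:00.1
    xl pci-attach fedora-hvm 02:00.1

Either path goes straight through libxl, bypassing the libvirt layer that
the list item above suspects of being the problem.)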
> > > > The effects of this today are that PCI passthrough questions are
> > > > popping up more frequently on qubes-users, and the fixes that used
> > > > to reliably address passthrough problems (for example, setting
> > > > passthrough to permissive) seem to be less helpful, as problems
> > > > seem to be lurking deeper within Xen, or perhaps changes to Xen
> > > > mean that other fixes should be used instead.
> > > > 
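(The "permissive" workaround mentioned above is exposed through xen-pciback
in dom0. A minimal sketch, assuming the device is already bound to pciback
and reusing the 02:00.1 NIC address from later in this thread:

    # dom0: allow the guest to write config-space fields that pciback
    # would otherwise filter
    echo 0000:02:00.1 > /sys/bus/pci/drivers/pciback/permissive

The same effect can be requested per device in an xl config via the
'permissive=1' flag shown earlier.)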
> > > > The effect of this in the future is that it grows less and less
> > > > likely that the vision of doing a separate GUI domain via GPU
> > > > passthrough can be successfully executed.  It is hard enough to
> > > > get a GPU passed through to begin with (thanks to weird abuses of
> > > > PCI features by GPU makers to facilitate DRM, I'm uncertain it
> > > > will ever work for anything other than Intel GPUs, which is only
> > > > due to specific efforts on Intel's part to make it work in Xen).
> > > > The above issues make it much worse.
> > > > 
> > > 
> > > The longer-term plan is for us to move to HVMs for everything:
> > > AppVMs, ServiceVMs, and, of course, for the GUI domain also. This
> > > should not only resolve the passthrough problems (assuming IOMMU
> > > works well), but also reduce complexity in the hypervisor required
> > > to handle such VMs (see e.g. XSA 148). We plan on starting this work
> > > once an early 4.0 is out.
> > > 
> > 
> > I don't see how this resolves things, but it's a little hard to tell
> > at this time since HVM PCI passthrough support is currently broken.
> > There is no way for me to compare the bugginess of PCI passthrough on
> > HVM versus PV.  I have not experimented with Linux HVMs using PCI
> > passthrough - do pcifront/pciback drop out of the picture, and it just
> > looks and behaves like normal (I would expect the hypervisor to still
> > impose some restrictions on use/abuse of PCI devices)?  Is there some
> > other significant difference?
> > 
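(One quick way to see whether pcifront is in the picture for a given guest
is to check which driver the kernel bound to the passed-through device from
inside the VM; the device address below is just an example:

    # inside the guest
    lspci -k -s 00:05.0

A PV guest reaches the device through pcifront/pciback, while an HVM guest
normally sees it on QEMU's emulated PCI bus and binds the native driver
directly, with the hypervisor and IOMMU still mediating DMA and
interrupts.)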
> > 
> > > Are the DRM-related problems with passthrough for some GPUs you
> > > mentioned above also encountered on HVMs, or are they limited to PVs
> > > only?
> > > 
> > 
> > All of the efforts I have seen involving GPU passthrough have involved
> > HVM, maybe since most people pursuing it are seeking to run 3D Windows
> > applications (games or otherwise).
> > http://wiki.xen.org/wiki/XenVGAPassthrough#Why_can.27t_I_do_VGA_passthrough_to_Xen_PV_.28paravirtual.29_guest.3F
> > discusses why passthrough cannot be done to a PV domain.
> > 
> > The bizarre things GPU makers have done with PCI registers and BARs
> > generally defy how one expects well-mannered PCI devices to behave.
> > MS Vista not only introduced, but required this nonsense
> > (http://www.cypherpunks.to/~peter/vista.pdf).  As noted on page 22 of
> > that PDF file, each GPU type stepping is/was required to have a
> > different mechanism in place.  As a result, no two cards, even by the
> > same vendor, have distorted PCI behavior in quite the same way.  The
> > patch at
> > http://old-list-archives.xenproject.org/archives/html/xen-devel/2010-10/txtfLpL6CdMGC.txt
> > describes one example:
> > 
> > * ATI VBIOS Working Mechanism
> > *
> > * Generally there are three memory resources (two MMIO and one PIO)
> > * associated with modern ATI gfx. VBIOS uses special tricks to figure out
> > * BARs, instead of using regular PCI config space read.
> > *
> > *  (1) VBIOS relies on I/O port 0x3C3 to retrieve PIO BAR
> > *  (2) VBIOS maintains a shadow copy of PCI configure space. It retrieves
> > *      the MMIO BARs from this shadow copy via sending I/O requests to
> > *      the first two registers of PIO (MMINDEX and MMDATA). The workflow
> > *      is like this: MMINDEX (register 0) is written with an index value,
> > *      specifying the register VBIOS wanting to access. Then the shadowed
> > *      data can be read/written from MMDATA (register 1). For two MMIO
> > *      BARs, the index values are 0x4010 and 0x4014 respectively.
> > 
> > Not how your typical PCI device behaves.
> > 
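(To make the index/data trick above concrete, the access pattern described
in the quoted comment looks roughly like the C sketch below. The port and
index values come from the comment itself; the register width, the helper
name, and the assumption that MMDATA sits 4 bytes above MMINDEX are
illustrative:

    #include <stdint.h>
    #include <sys/io.h>   /* inl/outl; needs iopl(3) and root */

    #define MMINDEX_BAR0  0x4010   /* index of first shadowed MMIO BAR  */
    #define MMINDEX_BAR1  0x4014   /* index of second shadowed MMIO BAR */

    /* Read a shadowed register the way the VBIOS reportedly does:
     * write the index to MMINDEX (PIO BAR + 0), then read the value
     * back from MMDATA (assumed here to be PIO BAR + 4).  The PIO BAR
     * itself would first be discovered via I/O port 0x3C3, per the
     * quoted comment. */
    static uint32_t read_shadowed(uint16_t pio_base, uint32_t index)
    {
        outl(index, pio_base);       /* MMINDEX: select the register  */
        return inl(pio_base + 4);    /* MMDATA:  read the shadow copy */
    }

None of this goes through ordinary PCI config space reads, which is exactly
why passthrough logic that only watches config space accesses misses it.)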
> > Often a 1:1 mapping of BARs has been required to get video drivers to
> > work with passthrough GPUs.  Sometimes needing to boot the card's BIOS
> > through QEMU's emulated BIOS becomes an issue (with the solution being
> > copying the VBIOS using some hacked-together mechanism, and rolling it
> > into the SeaBIOS image).
> > 
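(The "copying the VBIOS" step above is commonly done through the PCI 'rom'
attribute in sysfs. A minimal sketch, with a purely illustrative GPU
address; whether the ROM is actually readable this way can depend on the
card and on whether it is the primary adapter:

    # dom0 or bare-metal Linux
    cd /sys/bus/pci/devices/0000:01:00.0
    echo 1 > rom              # enable reads of the option ROM
    cat rom > /tmp/vbios.rom
    echo 0 > rom              # disable again

The dumped image is what then gets bundled into the SeaBIOS build.)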
> > Between NVIDIA and AMD, generally users have had a worse time getting
> > NVIDIA devices working via passthrough (although AMD has been plenty
> > difficult).  The most reliable technique, in my experience, is to use an
> > NVIDIA QUADRO that is compatible with NVIDIA GRID
> > (http://www.nvidia.com/object/dedicated-gpus.html).  It's the only thing
> > I've used for GPU passthrough that "just works" - the NVIDIA drivers
> > are specifically written to accept that the GPU is running in a
> > virtualized environment.
> > 
> > On top of all of this, apparently NVIDIA is _actively_ doing things to
> > frustrate attempts at running their hardware within a VM:
> > https://www.reddit.com/r/linux/comments/2twq7q/nvidia_apparently_pulls_some_shady_shit_to_do/
> > with one developer describing the situation as "VM developers have
> > been having a little arms race with NVIDIA."
> > 
> > About 1-2 years ago, KVM + QEMU put in a lot of effort into getting
> > GPU passthrough working more smoothly.  However, I do not know what
> > the state of things is today, and Xen has not engaged in similar
> > efforts to anywhere near the same degree.  Looking at the last 6
> > months or so of xen-devel, most of the subject lines mentioning PCI
> > passthrough are efforts towards getting Intel's IGD working via
> > passthrough (related to this, Intel is also still actively developing
> > XenGT).  Given the number of posts on the topic, I'm guessing it was
> > nontrivial - and this is with an actively cooperating GPU vendor.
> > 
> > As noted above, most attempts have involved running GPUs in a
> > virtualized Windows session.  It may be possible that things will go
> > more smoothly when running Linux drivers.  On the other hand, I have
> > not heard particularly good things about running with nouveau, and
> > using NVIDIA's own Linux drivers may pull in the same headaches seen
> > on Windows.
> > 
> > So, expect getting GPU passthrough working across the wide range of
> > notebook and desktop GPUs deployed out there to be a substantial
> > challenge, and make sure you have lots of different GPUs on hand for
> > testing purposes.  You may discover, after not digging too deeply into
> > it, that it is not worth the headaches and risk of rendering a lot of
> > hardware Qubes-incompatible.  I understand (at least some of) the
> > reasons for wanting to move the GUI out of dom0, but it may not be
> > practical due to what the GPU vendors have done to support DRM.
> > 
> 
> We would like to primarily target laptops, and this means we're
> primarily interested in getting the Intel integrated GPUs to work.
> Fortunately, Intel seems to be putting lots of work into making GPUs
> not only VT-d-assignable, but also virtualizable (as in: multiplexable)
> -- as evidenced by their GVT work (formerly XenGT):
> 
> https://01.org/igvt-g
> 
> Thanks,
> joanna.
> 

Good to know you have a focus on GPUs; however, my PCI passthrough issue on
my laptop was specifically NIC-oriented. The NIC sits on a shared PCI
device (two functions on the same slot):

02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device
5287 (rev 01)
02:00.1 Ethernet controller: Realtek Semiconductor Co., Ltd.
RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 12)

I did find it quite odd that Qubes itself couldn't handle the PCI
passthrough, but Xen 4.5 with Fedora 23 running on top worked perfectly
fine. That being said, I can appreciate that people want the GPUs to work;
personally, though, if a simple NIC can't be passed through, Qubes itself
is pretty unusable. I guess it's about priorities to me.
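(For anyone wanting to reproduce the comparison above, the plain-Xen side
can be tested with stock xl tooling and the Qubes side with qvm-pci; the
guest name below is illustrative, the device addresses are from the lspci
output above, and the exact qvm-pci syntax may differ between Qubes
releases:

    # plain Xen (e.g. the Fedora 23 / Xen 4.5 box): hide the NIC from
    # dom0 and hand it to a guest
    xl pci-assignable-add 02:00.1
    xl pci-attach fedora23-guest 02:00.1

    # Qubes 3.x: attach the same NIC to the network VM
    qvm-pci -a sys-net 02:00.1

Since 02:00.0 and 02:00.1 are two functions of the same device, problems
with one function, e.g. reset behaviour, can spill over into passing
through the other.)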




> -----BEGIN PGP SIGNATURE-----
> 
> iQIcBAEBCAAGBQJWrx4VAAoJEDOT2L8N3GcYZOsP/RLTfDl2ws3HUfhfPGv3axCD
> WlcaOn2taH5T5w5pVWXtK7uk84Cvc4ZA4DfZUJMVTyCzdGE5jel64VMgV/IasVx6
> Lt05jYFyoi2T64UpqvFaoKDXPesVMhm22x/mGMuZIwsQe8hd5gLItWEbkKouvXJK
> QQVFzO+1kxFENgOql6T/v9JNHr1G5ue/8cX3izhuoHa0IQykJaoKuTXXwWbkMgk0
> aTVd6+Bp++WL4CUeyujQhNTmwIEl93QMiHMbhtnPpxcW30qHdrqj4IKn6AfL2hLf
> zG0h7DLnrn7B3KJrIEfT0MMHAymPvbLpEQosxDrCdbkkgCt6x3ihlmXKOzfTB5Bz
> tiieP5gHN9iUo/ToZVa4tRLNVE7VLSUdYxvBA24bN/KHm5ad43SfgcNdNTML3Lrn
> gj6OJ44tHyJo7PgjIH80bVF3f0mM2jS3HDriO0at+lWaWAY+00dN8XMb69EahrKb
> Q5vyG+ovjq8sd4oSLEPmIpdpTwq+2RRlENRbjb4VvQ9jqG18sagxs9wmGuXq1iO0
> KIWZ/UporVbWm9nP6rbchlzPnIwCCe4gHOfNzECRnYZjtmeMkSQSeTvmg2X+qx+w
> jXyFE97CpcRqpQ/qQL+IRm1CGT97siHx8p4NaJLbMKmG2uUiE9l0Cif0EdcVQx1c
> ii5TV5hIDJYQRMkqLRRk
> =1Llj
> -----END PGP SIGNATURE-----
> 

-- 
You received this message because you are subscribed to the Google Groups
"qubes-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to qubes-devel+unsubscribe@googlegroups.com.
To post to this group, send email to qubes-devel@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/qubes-devel/CAKYr3zznPRAhk31HBYZYYHiKqaSpaB3RO2mLas8mn9qqWm0vVA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

