List: qubes-devel
Subject: Re: [qubes-devel] nested Qubes (Qubes within Qubes) can work - proof of concept
From: Joanna Rutkowska <joanna () invisiblethingslab ! com>
Date: 2015-08-31 13:06:50
Message-ID: 20150831130650.GA3208 () work-mutt
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On Mon, Aug 31, 2015 at 03:02:16PM +0200, Marek Marczykowski wrote:
> On Mon, Aug 31, 2015 at 12:57:39PM +0000, Joanna Rutkowska wrote:
> > On Mon, Aug 31, 2015 at 05:54:53AM -0700, Eric Shelton wrote:
> > > On Monday, August 31, 2015 at 8:47:55 AM UTC-4, joanna wrote:
> > > >
> > > > -----BEGIN PGP SIGNED MESSAGE-----
> > > > Hash: SHA1
> > > >
> > > > On Fri, Aug 21, 2015 at 10:33:05PM -0700, Eric Shelton wrote:
> > > > > Basically, if libvirt can be updated to support nested HVM, the below will
> > > > > probably work by merely using a custom config file.
> > > > >
> > > > > Inspired by the recent success with a graphics card passthrough that
> > > > > employed directly calling 'xl create', I thought I would try out a few
> > > > > things. It turns out that by enabling nested HVM, it is possible to
> > > > > successfully run Qubes R3 rc2 within itself, including with networking.
> > > > >
> > > > > Step 1: Create a domain using Qubes VM Manager. In this example, I
> > > > > called it 'qtest'.
> > > > >
> > > > > Step 2: Tweak install-qubesr3.cfg to match your particular paths,
> > > > > including the path to the Qubes ISO.
> > > > >
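[Editor's note: the attached install-qubesr3.cfg is not reproduced in the archive. As a rough sketch only, an install-phase xl config along the lines described might look like the following; all names, paths, and sizes are placeholders, not the poster's actual attachment. The one essential option for this whole exercise is nestedhvm.]

```
# Sketch of an install-phase xl config (placeholder names/paths/sizes)
name      = "qtest"
builder   = "hvm"
memory    = 4096
vcpus     = 2
nestedhvm = 1            # expose HVM capabilities to the guest (the key option)
disk = [ '/var/lib/qubes/appvms/qtest/root.img,raw,xvda,rw',
         '/home/user/Qubes-R3.0-rc2.iso,raw,xvdc:cdrom,r' ]
boot = "d"               # boot from the CD-ROM for the install
vif  = [ 'bridge=xenbr0' ]
sdl  = 1                 # display in an SDL window, as described below
```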
> > > > > Step 3: Keep running 'sudo xl mem-set dom0 2200' and 'sudo xl -vvv create
> > > > > ./install-qubesr3.cfg' until qemu starts up successfully (it doesn't seem
> > > > > to know how to play nicely with memory ballooning). This will bring up an
> > > > > SDL window - you will be running qemu upstream.
> > > > >
> > > > > Step 4: Run the Qubes installer. Automatic disk configuration will not
> > > > > work; you will have to manually create /boot and / partitions. Standard
> > > > > partitions worked for me.
> > > > >
> > > > > Step 5: After the first shutdown following the install, do step 3 again.
> > > > > Have it create the service and default domains. At the end, you will get
> > > > > an error message. Just close the window, click the Finish button, and log
> > > > > in. Only dom0 will come up; sys-net still needs a little reconfiguration
> > > > > to work.
> > > > >
> > > > > Step 6: Fixing things up:
> > > > >   qvm-prefs -s sys-net pcidevs "['00:04.0']"   (might already be set to this)
> > > > >   qvm-prefs -s sys-net pci_strictreset False   (this is what caused the
> > > > >                                                 error message)
> > > > >   edit /boot/grub2/grub.cfg - set the Linux kernel command line to include
> > > > >     'xen-pciback.passthrough=1 xen-pciback.hide=(00:04.0)'
> > > > >
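[Editor's note: for concreteness, the resulting kernel line inside the nested guest's /boot/grub2/grub.cfg might look roughly like the fragment below. Only the two xen-pciback options come from the step above; the menu entry name, file names, and remaining arguments are placeholders.]

```
menuentry 'Qubes (nested)' {
    multiboot /xen.gz placeholder_xen_args
    module /vmlinuz-placeholder root=/dev/mapper/placeholder ro xen-pciback.passthrough=1 xen-pciback.hide=(00:04.0)
    module /initramfs-placeholder.img
}
```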
> > > > > At this point, you could reboot and run step 3 again. sys-net will even
> > > > > start up now, and you can set the network adapter for manual
> > > > > configuration. However, networking will not work, because qemu upstream
> > > > > runs in dom0 and can't get through to the real sys-firewall. So, now it's
> > > > > time to use qemu traditional in a stub domain.
> > > > >
> > > > > Step 7: Tweak run-qubesr3.cfg to match your particular paths.
> > > > >
> > > > > Step 8: Keep running 'sudo xl mem-set dom0 2200' and 'sudo xl -vvv create
> > > > > ./run-qubesr3.cfg' until qemu starts up successfully.
> > > > >
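[Editor's note: the attached run-qubesr3.cfg is likewise not reproduced. The relevant difference from the install-phase config would be booting from disk and selecting the traditional device model in a stub domain, which xl supports via two config options; again a sketch with placeholder names and paths, not the actual attachment.]

```
# Sketch of a run-phase xl config: same disk, qemu-traditional in a stubdom
name      = "qtest"
builder   = "hvm"
memory    = 4096
nestedhvm = 1
disk = [ '/var/lib/qubes/appvms/qtest/root.img,raw,xvda,rw' ]
boot = "c"                                     # boot from disk this time
device_model_version = "qemu-xen-traditional"  # traditional qemu...
device_model_stubdomain_override = 1           # ...running in a stub domain
```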
> > > > > Problem: Qubes has replaced the display pipeline for its secure display
> > > > > setup, but the domain was started outside of the Qubes framework.
> > > > > Solution: _right_ after step 8, run 'xl list' to get the domain IDs for
> > > > > the HVM domain (n) and its stub domain (n+1). Then run these:
> > > > >   sudo /usr/sbin/qubesdb-daemon <HVM domain ID> qtest
> > > > >   sudo /usr/bin/qubes-guid -d <stub domain ID> -t <HVM domain ID> \
> > > > >     -N qtest -c 0x73d216 \
> > > > >     -i /usr/share/icons/hicolor/128x128/devices/appvm-green.png \
> > > > >     -l <HVM domain ID> -q
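[Editor's note: the ID lookup above can be scripted instead of read off by hand. A minimal sketch, assuming the post's convention that the stub domain ID is the HVM domain ID plus one; the helper name `domids` is hypothetical.]

```shell
#!/bin/sh
# Hypothetical helper: extract the HVM domain ID for a named domain from
# `xl list` output, and derive the stub domain ID as HVM ID + 1 (as the
# post assumes).  Usage:  xl list | domids qtest
domids() {
    awk -v name="$1" '$1 == name { print "hvm=" $2; print "stub=" $2 + 1 }'
}
```

The two IDs can then be substituted into the qubesdb-daemon and qubes-guid invocations above.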
> > > > >
> > > > > New problem: this clobbers the window manager. However, you can see Qubes
> > > > > start up in its window and interact with it. Now networking works just
> > > > > fine (manually configure networking if you did not before). A rough proof
> > > > > of concept for running nested Qubes.
> > > > >
> > > > > As I mentioned at the beginning, if libvirt can be updated to support the
> > > > > nestedhvm feature used in the attached xl config files, all of the nasty
> > > > > mucking about with 'xl create' can go away. Steps 5 and 6 will still be
> > > > > necessary (although possibly only the strict reset part of Step 6 is
> > > > > required) to deal with the reset issue.
> > > > >
> > > > > Once nested HVM support is in place, hopefully Qubes devs will find some
> > > > > benefit of a Qubes within Qubes setup, and this becomes something more
> > > > > than a stupid VM trick. Qubes starts up pretty fast in a VM, particularly
> > > > > the second time around.
> > > > >
> > > >
> > > > Hello,
> > > >
> > > > While admittedly a nice feature, we don't want to enable nested
> > > > virtualization support in the hypervisor, because IMHO it enlarges the
> > > > attack surface on the hypervisor due to the extra complexity associated
> > > > with each VMEXIT processing (see some of our early work on nested
> > > > virtualization from a few years back).
> > > >
> > > > Perhaps we could make it a boot option with a big warning? Although I'm
> > > > slightly against even that...
> > > >
> > >
> > > I agree with the security concerns. However, adding support to libvirt to
> > > pass the option through to xl does not mean it is enabled by default for
> > > all HVM domains. Instead, you have to use a custom config file to enable
> > > the feature. Implemented this way, it would only be available as an expert,
> > > "use at your own risk" type of feature - you would have to explicitly go
> > > out of your way to make use of it.
> > >
> >
> > What we don't want is (default) Xen *hypervisor* (aka xen.gz) to be compiled
> > with nested virtualization support. libvirt/xl can have the support to ask Xen
> > for this, no problem.
>
> I don't think that is an option. The feature is simply there. It can be
> enabled on a per-domain basis, and it is disabled by default.
>
Oh, really? I remember that at one point (perhaps Xen 4.1 or earlier?) it was
conditionally compiled... Xen is really going in the wrong direction then :/
Somebody should really fork it and stop adding all these new features, which is
so stupid from the security point of view :/

joanna.
-----BEGIN PGP SIGNATURE-----
iQIcBAEBAgAGBQJV5FFpAAoJEDOT2L8N3GcYt/EP/AzppNM2jkpY00tSgFXrADcM
2a+KCa9JK7E6dvn8bOv3FU6Iyx7GTxUZP+MkrozMe4BWhWmh3n1zK++s1UnzFVRN
ONZpdyoo6OlrkaMAk5ueBrQdBeAan4w4v9NFxDutzMO1edfAMhnDJR1NjEktIFDM
9FauXzMT4pzHtS2mH85uLHtM8G9BjfpuME7b4gRTjICIlmvDJudVELxRyfITNcT5
Y8N3s1gBm1bWMkoKUuxxHplrK5pwJYFrKxPesYF2QZ+JOEBgTqm8buCQyAsXDB9k
V89owkRr5eNIlhEmuD6kExgUKhggl2jTOBWBNmd2iafEg2uF4ja1jTdd/YnLtAgQ
Ols1xLKE2QgnsdeLWkf+a+eteo9kkLheTKIqRgSdlndD5tj6dmSv32nRoy1ECwul
GWHbjK0LdwKpiKErPetHcJe0Fl3J105W1fthRPJRkhQJD52zOVq1h7lnUV1v7ttT
XVf5xLOfrqp3A6h2G0WWvjY+UOc438C8PW7lWsIA2VU545NQVR5N+DMxd3ioEe4V
vHQJeE5A1kyjPwfKUP+1ayNmh3UKKJldprl+nnIG8qt8/WtO0k965OD5xny9Cfjj
LgidF1v+dgLaTeY+spC6I/NA/rGu2Vp/LivJoGS1+VAJYETxLZlM+LM5v3OQC4bO
AU6OwARAjwQ7ju5d3ZJe
=8StR
-----END PGP SIGNATURE-----
--
You received this message because you are subscribed to the Google Groups "qubes-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email to qubes-devel+unsubscribe@googlegroups.com.
To post to this group, send email to qubes-devel@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/qubes-devel/20150831130650.GA3208%40work-mutt.
For more options, visit https://groups.google.com/d/optout.