
List:       qubes-devel
Subject:    Re: [qubes-devel] DispVM design decisions for Qubes 4.0
From:       Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:       2016-05-17 11:07:58
Message-ID: 20160517110758.GF25975@mail-itl


On Tue, May 17, 2016 at 10:08:37AM +0200, Joanna Rutkowska wrote:
> On Mon, May 16, 2016 at 01:57:51PM +0200, Marek Marczykowski wrote:
> > On Mon, May 16, 2016 at 12:30:22PM +0200, Joanna Rutkowska wrote:
> > > So, I'm leaning towards changing the syntax for qrexec-client-vm (the tool we
> > > invoke in the srcvm) to the following:
> > > 
> > > qrexec-client-vm RPC_ACTION_NAME[+ARGUMENT]
> > 
> > This is very similar to what is discussed here:
> > https://github.com/QubesOS/qubes-issues/issues/910
> > 
> > But I wouldn't remove the target VM name from here. Simply pass an empty
> > string ('') as "let dom0 decide". See below.
> > 
> > > For services where the RPC precisely defines the destvm, this would be handled
> > > according to the policy, e.g.:
> > > 
> > > cat qubes.Gpg
> > > work-email  keys-itl-email  allow
> > > 
> > > And for those services where the policy doesn't specify the target VM
> > > explicitly (i.e. the 2nd argument is $anyvm, or more broadly speaking cannot be
> > > evaluated into a specific VM in the system, which might be the case when we
> > > introduce tags in Qubes 4), such as e.g.:
> > > 
> > > cat qubes.FileCopy
> > > $anyvm  $anyvm  allow
> > > 
> > > ...we would have the Dom0's qubes-rpc-multiplexer pop up a window asking
> > 
> > s/qubes-rpc-multiplexer/qrexec-policy/
> > 
> > > the user to provide the target VM manually (something the user is required to
> > > do anyway for such services, only that presently the VM-side script is asking
> > > for the target VM). For compatibility we can allow the qrexec-client-vm to take
> > > the targetvm argument and just ignore it (or provide it as a "hint" to the
> > > Dom0-asking program maybe?)
> > 
> > I wouldn't remove the option for the calling party to specify the target
> > VM name. There are multiple cases where it makes sense. For example you
> > can have a policy like this:
> > 
> > cat qubes.Gpg
> > devel    keys-devel  allow
> > release  keys-devel  allow
> > release  keys-release  allow
> > 
> > In other words: allow the devel VM to access only the devel keys, but
> > allow the release VM to access both the devel and release keys. Then have
> > a different target domain set in different places (here: keys-devel for
> > git and keys-release for rpm). Asking for the target VM in such a case
> > (even with a properly set default) would be a huge UX regression...
> > 
> > But in the case of an unspecified target domain, if the policy is
> > unambiguous, it could be used directly. If not - ask for the target VM in
> > dom0, probably using the policy as a hint - for example:
> > 
> > cat qubes.FileCopy
> > work work-email ask
> > work work-web ask
> > work work-vault ask
> > 
> > Then the target VM prompt should list just those three VMs, with
> > 'work-email' being the first one (default).
> > 
> 
> If anything should be used as a hint, it's the dstvm specified by the srcvm. The
> policy should always take priority over that.

OK. Let's suppose you've got a request with an empty target VM name ("let dom0
decide") and a policy like the one above. Or even like this:

cat qubes.FileCopy
work work-email allow
work work-web allow
work work-vault allow

What should the qrexec-policy tool do?
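
For concreteness, here is just a sketch of one possible behaviour (helper and
field names are made up, this is not a decision): collect the targets of all
rules matching the source VM, and prompt in dom0 only when that set is
ambiguous:

from collections import namedtuple

# Hypothetical parsed policy rule: "work work-email allow" -> Rule(...)
Rule = namedtuple("Rule", "source target action")

def pick_target(matching_rules, requested_target=None):
    candidates = [r.target for r in matching_rules
                  if r.action in ("allow", "ask")]
    if not candidates:
        return None                       # nothing allowed -> deny
    if requested_target and requested_target in candidates:
        return requested_target           # caller's explicit, allowed choice
    if len(set(candidates)) == 1:
        return candidates[0]              # policy is unambiguous
    # Ambiguous (like the qubes.FileCopy example above): ask the user in
    # dom0, defaulting to the first listed target.
    return ask_user_in_dom0(candidates, default=candidates[0])

def ask_user_in_dom0(candidates, default):
    # Placeholder - a real qrexec-policy would show a GUI prompt here.
    return default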

> > Also it isn't exactly what you wanted: you've written "for service
> > requests which otherwise don't specify arguments", but in fact you
> > wanted to "override regardless of the argument specified by the caller".
> > Otherwise the caller could specify its own policy, like "*:*"...
> 
> That's an irrelevant technicality, because we want the policy to always override
> whatever the srcvm specified in case of any conflict.

Yes, I understand what you wanted to achieve: override the argument. But
that isn't what you've written - that's all I'm pointing out. I fully agree
with such an approach (the ability to override the argument from the policy).
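
In code it would be something this trivial (just a sketch, to make sure we
are talking about the same thing):

def effective_argument(policy_argument, caller_argument):
    # The argument forced by the matched policy entry (if any) always wins
    # over whatever the calling VM passed after the '+' sign.
    return policy_argument if policy_argument is not None else caller_argument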

> > There is also another problem with such an approach - having a complex
> > argument (like a full network policy) in a single string may be hard to
> > manage and very error-prone. Also, currently we filter almost all
> > non-alphanumeric characters out of the argument, to ease secure service
> > implementation (no need to worry about shell special chars, path
> > traversal etc.). If we want to put a full firewall rule into the service
> > argument, that would be tricky (for example '/' and '*' are not allowed,
> > so 1.2.3.0/24:* would not be possible).
> > 
> 
> Right, and we have been having this discussion for at least a year now :)
> 
> I think dom0 should _not_ do any parsing of the argument, just treat it as
> an opaque string and pass it to the target VM's service. It might do some
> sanitization, that's ok. If '*' and '/' (understandably) should be sanitized out
> (b/c of the other way to specify the argument, i.e. in the policy file after the
> '+' sign), then we should pick some others.

I think the firewall is too complex to be handled by the very simple policy
format without complicating it. IMHO it's better to leave it separate (either
in firewall.xml or elsewhere, with a GUI similar to the current one) and
provide some way for the "netvm" to retrieve it (a qubes.GetFirewallRules
service?).
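
Just to illustrate the idea - this is only a sketch, not a proposed
implementation: the storage location, the QREXEC_REMOTE_DOMAIN variable being
available in dom0, and the output format are all assumptions here - such a
dom0-side service could be as simple as:

#!/usr/bin/env python3
# Hypothetical dom0 handler for a qubes.GetFirewallRules service (sketch).
# Assumptions: the calling VM's name arrives in QREXEC_REMOTE_DOMAIN, rules
# live in the 3.x-style per-VM firewall.xml, and the output is one
# "key=value ..." line per rule (the real wire format would need agreement).
import os
import sys
import xml.etree.ElementTree as ET

def main():
    src = os.environ.get("QREXEC_REMOTE_DOMAIN")
    if not src:
        sys.exit(1)
    path = "/var/lib/qubes/appvms/{}/firewall.xml".format(src)
    try:
        tree = ET.parse(path)
    except (OSError, ET.ParseError):
        print("action=accept")        # no stored rules - allow all (assumption)
        return
    for rule in tree.getroot().iter("rule"):
        print(" ".join("{}={}".format(k, v)
                       for k, v in sorted(rule.attrib.items())))

if __name__ == "__main__":
    main()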

> > > Where '$netvm' represents the actual netvm to which the srcvm is connected
> > > (i.e. the netvm should remain a property of the srcvm).
> > 
> > Having '$netvm' mean different things depending on the calling VM
> > may be misleading. I'd rather understand it as "the system default NetVM".
> > Maybe "$srcvm:netvm"? Later we may introduce other options like this,
> > for example "$srcvm:gpgvm".
> > 
> 
> I'm not terribly against that syntax, I just would like to note the fundamental
> difference between the netvm and what you described as gpgvm -- the former is a
> property of the VM, while the latter is just (one of the possible) dest VMs for
> some qrexec service request. Although... if we moved to qrexec-based networking,
> as I described above -- which I doubt we would do anytime soon, because of the
> fundamental disagreement about the argument parsing and the work needed to
> implement it -- then suddenly this distinction disappears. In that case, however,
> a generic term such as "netvm" also loses its meaning, and instead we might want
> to state something like: "the VM which is the target of a qubes.Xyz service
> request from VM abc".

Exactly. And at least for some services it would be convenient to set a
default target VM (per source VM), even when the policy allows some other
options too.

Hmm, actually something very similar is already possible: we do have a
"target=" option in the policy to override the target VM. So it may look
like this:

cat qubes.OpenURL
work  work-web  allow
work  $anyvm    allow,target=$dispvm

This means the VM 'work' will be able to open links in the 'work-web' VM, but
if no target is specified (or any other target is specified), links will be
opened in a new DispVM.

But still, I think it would be useful to have such a per-service default
target VM as a VM property - it would be more convenient and safer for the
user (harder to introduce an undesired change in the policy).
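
To make the resolution order explicit, roughly something like this (the
per-service default property is hypothetical, the rest reflects what we
discussed above):

def resolve_target(policy_override, requested, per_service_default):
    # 1. "target=" in the matched policy rule always wins;
    # 2. otherwise an explicit target named by the caller (already checked
    #    against the policy);
    # 3. otherwise a hypothetical per-service default stored as a VM property;
    # 4. otherwise None, meaning: ask the user in dom0.
    if policy_override is not None:
        return policy_override
    if requested:
        return requested
    if per_service_default is not None:
        return per_service_default
    return None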

> > > Such central configuration of all the VM interactions from within the Qubes
> > > policy, combined with our management framework, should make it much easier to
> > > offer preconfigured defaults, hopefully making it harder for users to shoot
> > > themselves in the foot.
> > > 
> > > BTW, I think it might be useful to also provide some kind of a policy
> > > evaluation tool, or some kind of a "simulator", to let the admins test the
> > > policies. One example of a policy evaluation tool might be a graphing tool
> > > (based on dot) for drawing all the allowed RPC invocations between VMs. Am I
> > > right that it should work in O(M*N^2) time, where M is the number of files in
> > > /etc/qubes-rpc/policy, and N is the number of VMs in the system?
> > > 
> > > The remaining questions are how to determine the DispVM's 1) label and 2)
> > > netvm? I don't have a strong opinion here, although I'm leaning towards having
> > > the DispVM inherit the two from its template.
> > > 
> > > joanna.
> > > 
> > > [1] https://groups.google.com/forum/#!topic/qubes-devel/uQJL7I70GQs
> > > [2] https://www.qubes-os.org/doc/qrexec3/
> > 
> 
> Given that we probably won't move networking to a qrexec service anytime soon, I
> think we should just assume the networking policy should be
> DispVM-template-specific (and not inherited from the VM requesting the DispVM
> start), and just remember that the DispVM-template would not need to be the real
> template VM. And as for the need to maintain multiple DispVM-templates (i.e. keep
> them updated), that could be solved by our magic mgmt stack now.

Not inherited at all, or not inherited by default with some setting to
change that? I'm asking because we currently have a "dispvm_netvm"
property, which sets the "NetVM used by DispVMs started from this VM" (can
anyone follow this sentence?). The default is "the same as the netvm
property" (which means inheriting the netvm connection).
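
In other words, roughly (just a sketch of the current semantics as I
understand them, attribute names simplified):

def dispvm_netvm_for(vm):
    # Use the explicit dispvm_netvm setting if present, otherwise fall back
    # to (i.e. inherit) the VM's own netvm.
    explicit = getattr(vm, "dispvm_netvm", None)
    return explicit if explicit is not None else vm.netvm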

What about the label? There was some discussion about that:
https://github.com/QubesOS/qubes-issues/issues/1788

Since having multiple kinds of DispVMs will be possible, maybe also make it
DispVM-template-specific? Then if you want DispVMs opened from your work VM
to be green, simply create a "dispvm-work" template with a green label, and
give the default DispVM a red label.
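
As for the policy evaluation / graphing tool mentioned above - a minimal
sketch (assuming the simple three-column policy format used in the examples
in this thread; '$' tokens and 'ask' rules are only labelled, not expanded
into concrete VMs) could look like this:

#!/usr/bin/env python3
# Sketch of a policy graphing tool (not an official tool): walk the policy
# directory, parse "srcvm dstvm action[,options]" lines, and emit a dot
# graph of allowed calls.
import os

POLICY_DIR = "/etc/qubes-rpc/policy"   # assumption: default policy location

def main():
    print("digraph qrexec_policy {")
    for service in sorted(os.listdir(POLICY_DIR)):
        path = os.path.join(POLICY_DIR, service)
        if not os.path.isfile(path):
            continue
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()
                if not line:
                    continue
                fields = line.split()
                if len(fields) < 3:
                    continue
                src, dst = fields[0], fields[1]
                action = fields[2].split(",", 1)[0]
                if action in ("allow", "ask"):
                    print('  "{}" -> "{}" [label="{} ({})"];'.format(
                        src, dst, service, action))
    print("}")

if __name__ == "__main__":
    main()

Piping the output through "dot -Tsvg" would already give a rough picture of
which VM can call what.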

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


