List:       cfrg
Subject:    Re: [Cfrg] What is the standard we are going to apply?
From:       Watson Ladd <watsonbladd@gmail.com>
Date:       2013-12-24 23:05:28
Message-ID: CACsn0c=-Zqu8=p20g-B1CXj1U-+xzaJH3wip0cQ-O-2z_KVzdg@mail.gmail.com

On Tue, Dec 24, 2013 at 2:27 AM, Yoav Nir <ynir@checkpoint.com> wrote:
> 
> On Dec 24, 2013, at 5:34 AM, Alyssa Rowan <akr@akr.io> wrote:
> 
> > 
> > You, John, and others have mentioned a strong desire for protocols
> > and/or primitives being evaluated to have well-vetted proofs:
> > in standard model if possible, else random oracle; and side-channel
> > resistance (i.e. suitability for constant-time implementation, etc).
> > 
> > That sounds to me like an excellent idea, wherever it is practical.
> > 
> > No matter what adversary might seek to interfere, and whether they're
> > RFC3514 compliant or not when doing it, a protocol or primitive with a
> > solid proof is more transparently, demonstrably effective than one
> > without one.
> 
> I agree that given two similar proposed algorithms or protocols, the one
> with the security proof is the better choice. But please let's not
> over-state what these proofs actually show. The assertion "X is secure" is
> not one that can be tested. So all security proofs end up with a model of
> the protocol that simplifies out some aspects of the protocol, and a set
> of assumptions that seems (to the researcher) to be reasonable, but may or
> may not be correct in the real world, and a class of attack that seems
> important to the researcher (but other types may exist). So a proof that
> this class of attack cannot succeed under certain assumptions is
> important, but the protocol may still be vulnerable to other attacks,
> especially when used in a certain context, where the context may
> compromise the (otherwise excellent) algorithm, or by implementation
> details that allow side-channel information leak.

This is very true. That's why proofs should reduce to accepted primitives,
like a block cipher modeled as a PRP or the hardness of Diffie-Hellman, and
if the protocol cannot be proven, the protocol should be simplified until it
can be. There are very standard models for the attacker: it can do anything
to the messages on the wire, and is limited only by computational
assumptions. If you can't prove the entire protocol secure, the protocol is
overly complex: there is probably a provably secure protocol achieving the
same outcome.
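
To make "reduce to accepted primitives" concrete: a theorem of that kind
typically has the following shape (schematic only; the constants and extra
terms depend on the particular protocol):

\[
\mathrm{Adv}^{\mathrm{protocol}}_{\Pi}(\mathcal{A})
  \;\le\;
  c_1 \cdot \mathrm{Adv}^{\mathrm{prp}}_{E}(\mathcal{B})
  \;+\;
  c_2 \cdot \mathrm{Adv}^{\mathrm{ddh}}_{G}(\mathcal{C})
  \;+\;
  \varepsilon
\]

where \mathcal{B} and \mathcal{C} are explicit adversaries built out of
\mathcal{A}, and \varepsilon collects statistical slack (birthday terms and
the like). Any attack on \Pi then yields a concrete attack on the block
cipher E or on Diffie-Hellman in the group G, which is exactly why a
reduction focuses review effort on the primitives and the stated assumptions.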

In particular, all the issues in the TLS record layer stemmed from not being
concerned with proofs. EtM (encrypt-then-MAC) was and is trivially provable,
while MtE (MAC-then-encrypt) wasn't, and later turned out to be a rather
complicated and subtle story. E&M (encrypt-and-MAC) requires an extra
constraint ruling out dopey MACs that leak the message. In 1995 EtM was
provably secure and the others weren't. Lesson learned? Apparently not.
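
As an illustration of why EtM composes so cleanly, here is a minimal
encrypt-then-MAC sketch in Python. It is illustrative only, not the actual
TLS record construction; it assumes the third-party 'cryptography' package
for AES-CTR and uses HMAC-SHA-256 from the standard library:

    # Minimal Encrypt-then-MAC sketch (illustrative; NOT the real TLS record
    # layer).  Requires the 'cryptography' package for AES-CTR.
    import hmac, hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def etm_seal(enc_key, mac_key, plaintext):
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ct = enc.update(plaintext) + enc.finalize()
        # The MAC covers nonce and ciphertext, so any tampering is detected
        # before the decryption path is ever exercised.
        tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag

    def etm_open(enc_key, mac_key, sealed):
        nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
        expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            return None        # reject forgeries without decrypting anything
        dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
        return dec.update(ct) + dec.finalize()

    # usage: with k1 = os.urandom(16), k2 = os.urandom(32),
    #   etm_open(k1, k2, etm_seal(k1, k2, b"msg")) == b"msg"

Contrast this with MAC-then-encrypt, where the receiver must decrypt before
it can verify anything, so padding and decryption behaviour become attack
surface.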

We don't yet know how to handle side-channel security formally. But we do
have primitives without known side channels, and protocols that don't demand
branching on secret data. This only becomes a protocol issue when the
protocol cannot be implemented without side channels.
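
As a small, concrete example of the "branches on secret data" point, compare
a naive MAC-tag check with a constant-time one; hmac.compare_digest is the
Python standard-library primitive for the latter:

    # A naive comparison returns at the first mismatching byte, so its
    # running time tells an attacker how long a correct prefix of the
    # secret tag they have guessed.
    import hmac

    def leaky_equal(tag, guess):
        if len(tag) != len(guess):
            return False
        for a, b in zip(tag, guess):
            if a != b:       # early exit: timing depends on secret contents
                return False
        return True

    def constant_time_equal(tag, guess):
        # examines every byte regardless of where the first mismatch occurs
        return hmac.compare_digest(tag, guess)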

> 
> We have had several primitives and protocols with well-vetted security
> proofs that were later shown to be vulnerable. For many things, especially
> complex ones, we don't have a better method of vetting than saying "100
> people looked at it, and 3 hackers + 2 cryptographers tried to break it
> for a whole week and they couldn't". Sad but true. The TLS renegotiation
> vulnerability was in a protocol that probably received more attention from
> researchers and hackers than any other. Thousands of people read the
> specification, implemented it, taught it, learned it, and wrote papers
> analyzing it and proving its security. And yet, when after 15 years two
> people separately found the vulnerability, it was jaw-droppingly obvious
> (after the fact).

I don't think anyone has proved the security of anything close to TLS. The
formatting of the messages alone is too nasty to formalize, and the key
confirmation is a mistake (it breaks the standard definitions of key
agreement with no gain). Then each of the key agreements needs to be
checked, and each ciphersuite, and the choice of which key agreement to use,
and then the fact that attacks like the RC4 biases and Lucky 13 exist needs
to be analyzed, etc. The renegotiation vulnerability would have been spotted
had a proof been attempted, as it would have shown up as a mutual
authentication failure.

Proofs let those 100 people focus on the assumptions in the proof. EAX/EAX'
is only the latest example of a proof making clear why a tiny change to a
construction was a bad idea, and sure enough, the changed version (EAX') was
broken.
> 
> What I'm getting at is that having someone present a primitive along with
> a security proof, and then having CFRG look at the proof and say "seems
> legit" is not a good enough process. As Stephen said in the other thread,
> we may be facing a time when NIST is no longer the gold-standard for
> vendors and standards writers. So we won't have the process we had 13
> years ago, where NIST says "here's a new block cipher, we call it AES and
> it rocks", and then we all implement it in our standards and in our
> products. CFRG as it currently operates, or CFRG plus the requirement for
> security proofs is not a suitable replacement. I don't have an answer as
> to what is a suitable replacement. NIST has the resources to put some
> people to work full time on analyzing protocols and primitives (part of
> that is by borrowing expertise from the NSA). I don't know how a volunteer
> organization like IETF/IRTF can duplicate that kind of effort.

The easy answer is "don't". Send the paper to CRYPTO, and wait a few years.
Also, implementing a block cipher is not enough: it needs a mode of
operation and a protocol around it to be useful. NIST hasn't done much in
the protocol arena: MQV certainly didn't originate with them, and trusting
NIST didn't save TLS. At the end of the day someone is going to be
evaluating the cryptographic protocols in RFCs, and whether that is the
CFRG, or the WG, or someone else, they need the ability to do it right. The
guidance and process provided so far have been inadequate, and this needs to
change.

In particular "primitives" aren't the issue. The TLS WG took a secure
MAC, a secure PRF, and a secure block cipher mode of operation, along
with RSA, and managed to make something that has had recurring issues
for years. None of the underlying primitives has been dented, but the
result has certainly not lived up to what it should. I don't think an
RFC or a BCP or an I-D can really fix the issues leading to this sort
of mistake.

What I do think will work is recognizing which protocols are "high-risk" and
acting to reduce that risk by simplifying them, demanding proofs (which also
has a simplifying effect), and making sure the assumptions being made about
the result are correct. DNSSEC and TLS would certainly have taken longer to
do this way, but I think the results would have been better for TLS; and for
DNSSEC, RSA was never an appropriate choice given the necessary key sizes
and where the keys appear in the protocol. No attacks yet, but when
"Operation Kilobit" happens (a public break of 1024-bit RSA), expect a lot
of late nights for a lot of people.

Sincerely,
Watson Ladd
> 
> Yoav
> 
> _______________________________________________
> Cfrg mailing list
> Cfrg@irtf.org
> http://www.irtf.org/mailman/listinfo/cfrg



-- 
"Those who would give up Essential Liberty to purchase a little
Temporary Safety deserve neither  Liberty nor Safety."
-- Benjamin Franklin

