
List:       binutils
Subject:    Re: [PING][PATCH] [RFCv2] Document Security process for binutils
From:       Siddhesh Poyarekar <siddhesh () gotplt ! org>
Date:       2021-01-27 6:44:21
Message-ID: b5ed1e4d-65b7-5ecd-14a8-ab92ce744c2d () gotplt ! org

On 1/27/21 11:06 AM, Mike Frysinger wrote:
> i've responded inline to points below, but i'll preface things to try and head
> off misunderstandings as it sounds like i'm all doom & gloom.  i'm not happy
> about the current state of the project wrt security, but i'm being practical
> given the engineering resources we have.  if we had programmers dedicated to
> the task of coming up with fundamental solutions to how these libraries work
> or are implemented and make it hard for incorrect code to be merged in the
> first place, then i'd be a lot more supportive of a formalized & dedicated
> security process (including private channels, embargoes, dedicated releases,
> and all that).  remember that we accept ports & contributions from anyone,
> not just corporations, and many of them don't have the time or resources or
> incentives or expertise to assess the security or robustness of their code.
> requiring them to go through an independent security audit (or equiv) before
> merging is untenable.

Got it.  I suppose my view is limited to that of Linux distribution 
maintainers, where much of this is already in place and where we already 
have to acknowledge, evaluate and backport fixes for security issues in 
binutils.

>> On 1/26/21 8:16 AM, Mike Frysinger wrote:
>>> i'm with Alan here with the current state of the world: it is not safe to
>>> run binutils (or gcc fwiw) on untrusted inputs unless the overall execution
>>> environment has been isolated/secured in someway.  i understand that some
>>> people will find this surprising, but that is the reality of the codebase
>>> today.  i've been telling people this in Gentoo for decades.  i don't see
>>> the situation changing until someone steps up to comprehensively tackle it.
>>
>> I don't disagree, but that seems more like an assessment of the
>> robustness in handling untrusted input than a design choice.
> 
> at this point, as long as resources aren't dedicated to changing the
> situation, it's a distinction without a difference.
> 
>> there's a distinction to be made between the tools and the libraries
>> shipped in binutils in terms of how we can dictate use cases.
> 
> i understand what you mean, but i disagree.  if {tool|library} can be fed
> untrusted input that turns into a crash, or into arbitrary code exec, it
> would be reported the same way.  from the binutils project pov, the same
> amount of effort will need to be expended.  maybe the CVE score would be
> different, but it would still prompt a CVE and expectation of a fixed
> release.  CVEs are like -Werror: people consider any output a failure.
> 
>>> i agree that we should have a document clearly defining the security
>>> posture of the project as people will go looking for it.  but trying to do
>>> embargoes or new branch releases for every bug with exploit possibilities
>>> will be useless drain on an already limited developer pool.  bugs should be
>>> treated as bugs which means using bugzilla to report them.
>>
>> From glibc experience, embargoes are probably the only additional
>> overhead.  AFAICT, we already backport security fixes to older branches
>> based on the severity of the fix.
> 
> there are no "older branches" in this world.  there is the main development
> branch, and there is an active release branch.  the release branch has bug
> fixes cherry-picked to it as people request them, and new versions trickle
> out as changes accumulate, and someone feels like kicking it out.
> 
>> If anything, the documentation would
>> help *limit* what gets called a security issue.  For example, this[1]
>> would have got promptly thrown out if there was a security process; I'm
> 
> we can already throw them all out: if you use binutils in contexts that are
> insufficiently secure by themselves, it's a failing in the environment.
> 
>> sure there are others that sneaked in earlier that resulted in pointless
>> churn under the pretext of it being a security issue.
> 
> i don't think this is accurate.  do you have examples ?  people reporting
> bugs and then having them be fixed, regardless of security implication, is
> not churn imo.  it's not like we'd ever argue "that input was specially
> crafted by you, therefore we don't care that it causes a crash".  files
> could just as easily be corrupted by your computer (e.g. filesystem).
> 
> if you look at the release history, it seems like the release schedule has
> been unperturbed by any reported issues.
> https://sourceware.org/pub/binutils/releases/?C=M;O=D

The churn is from a distribution perspective where backport decisions 
may be influenced by the potential security impact of a bug.  Having the 
upstream project clearly call out a bug as not being CVE-worthy based on 
defined criteria would make it easier for downstream to prioritize.  I 
suppose it could be argued that it's a downstream-only concern.

> that is insufficient.  creating the appearance of security when the project
> and codebase is not backing it up is worse.  we're polishing a turd and then
> selling it as kobe beef knowing full well it's a turd.
> 
> having dedicated resources such as yourself to manage logistics is important
> and engineers can't replace that.  but conversely, the engineering resources
> need to be in place too.  if either is missing, then it all falls down.

Ahh no, the logistics management under the current proposal could be 
handled by the distribution security teams since they're already doing it.

> that's like saying not running everything as root is a mitigation technique.
> i mean, i guess, but it's a fundamental defense that completely cuts off any
> other problems.  if you're in seccomp mode1 and get arbitrary code exec, then
> no one cares.  all you've got is a DoS at best.  and if we, as a project, say
> that a DoS is just another bug and not one worthy of security handling, then
> that simplifies quite a lot.

I totally agree; maybe the way I wrote it understated my complete 
agreement :)
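
To make that concrete, here is a minimal sketch (mine, not anything 
shipped in binutils today) of the kind of environment hardening being 
described: open the input first, fork a child, flip it into seccomp 
strict mode, and only then let it touch the untrusted bytes.  In strict 
mode the kernel permits only read, write, exit and sigreturn, so even 
arbitrary code execution inside the parser degrades to a DoS at worst.  
parse_untrusted () below is a hypothetical stand-in for the real work.

/* Sketch only: parse untrusted input in a child locked into seccomp
   strict mode.  parse_untrusted () is a hypothetical placeholder.  */
#include <fcntl.h>
#include <linux/seccomp.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical parser; read () is on the strict-mode whitelist.  */
static int
parse_untrusted (int fd)
{
  char buf[4096];
  return read (fd, buf, sizeof buf) < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}

int
main (int argc, char **argv)
{
  if (argc < 2)
    {
      fprintf (stderr, "usage: %s FILE\n", argv[0]);
      return 2;
    }

  /* Open the input before sandboxing; strict mode forbids open ().  */
  int fd = open (argv[1], O_RDONLY);
  if (fd < 0)
    {
      perror ("open");
      return 2;
    }

  pid_t pid = fork ();
  if (pid < 0)
    {
      perror ("fork");
      return 2;
    }
  if (pid == 0)
    {
      /* Child: from here on only read, write, exit and sigreturn are
         permitted; any other syscall kills the process, so an exploited
         parser is reduced to a DoS at worst.  */
      if (prctl (PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0)
        _exit (126);
      /* Use the bare exit syscall: strict mode allows exit but not
         exit_group, which glibc's _exit may issue.  */
      syscall (SYS_exit, parse_untrusted (fd));
    }

  int status = 0;
  waitpid (pid, &status, 0);
  return WIFEXITED (status) ? WEXITSTATUS (status) : EXIT_FAILURE;
}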

> as long as the libs are written in C, mitigation techniques are the only real
> defense option.  i assume switching to a memory safe language (e.g. Rust) is
> off the table.
> 
> or something like moving all arch-specific code to be declarative (cgen?),
> and then we only have a single common core to focus on securing.

+1

>> I am also going to look (maybe later this year)
>> at past fuzzing results to see if there's potential cleanup that we can
>> do across the library to make it safer.
> 
> imo this approach is woefully insufficient to justify a stronger security
> position.  if it were all low hanging fruit & easy fixes, i'm pretty sure
> we would have already gotten there.
> 
> i think Alan already said he's been responding to sanitizers & fuzzers for
> over a year without being terribly impressed.  i don't think that's an
> indictment of either tool (i love them both & actively use them), but it
> is a testament that they aren't the solution to our troubles.

No no, I don't want to run fuzzers all over again.  I meant to look at 
patterns across previous reports to see if there are systemic issues I 
could help solve in libbfd to make it safer.  Things like declarative 
architecture-specific code, safer file handling, consolidating common 
logic across architectures, etc.
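
To illustrate the "safer file handling" part, here is a rough sketch 
(hypothetical names, not existing BFD API) of the sort of consolidated, 
bounds-checked accessor I have in mind, so that every backend goes 
through one audited path instead of open-coding offset arithmetic:

/* Sketch of a consolidated, bounds-checked accessor for untrusted
   file contents; the names here are hypothetical, not BFD API.  */
#include <inttypes.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct input_view
{
  const uint8_t *data;   /* whole file read or mapped into memory */
  size_t size;           /* total number of valid bytes */
};

/* Copy LEN bytes at OFFSET into OUT, refusing reads past the end.
   Every caller gets the same wraparound-safe check instead of
   repeating "offset + len <= size" with possible overflow.  */
static bool
input_read (const struct input_view *in, size_t offset,
            void *out, size_t len)
{
  if (offset > in->size || len > in->size - offset)
    return false;               /* out of range: reject, don't crash */
  memcpy (out, in->data + offset, len);
  return true;
}

/* Example helper: read a 32-bit little-endian field safely.  */
static bool
input_read_u32_le (const struct input_view *in, size_t offset,
                   uint32_t *value)
{
  uint8_t b[4];
  if (!input_read (in, offset, b, sizeof b))
    return false;
  *value = (uint32_t) b[0] | ((uint32_t) b[1] << 8)
           | ((uint32_t) b[2] << 16) | ((uint32_t) b[3] << 24);
  return true;
}

/* Tiny usage example over a stack buffer standing in for file data.  */
int
main (void)
{
  const uint8_t file[] = { 0x7f, 'E', 'L', 'F', 0x01, 0x00, 0x00, 0x00 };
  struct input_view in = { file, sizeof file };
  uint32_t v;

  if (input_read_u32_le (&in, 0, &v))
    printf ("first word = 0x%08" PRIx32 "\n", v);
  if (!input_read_u32_le (&in, 6, &v))
    printf ("read past end rejected\n");
  return 0;
}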

Siddhesh