tcpdump mailing list archives

Re: openwrt Conclusions from CVE-2024-3094 (libxz disaster)


From: Denis Ovsienko <denis () ovsienko info>
Date: Mon, 1 Apr 2024 23:56:58 +0100

On Mon, 01 Apr 2024 09:53:38 -0400
Michael Richardson <mcr () sandelman ca> wrote:

The entire openwrt thread is at:
    https://lists.openwrt.org/pipermail/openwrt-devel/2024-March/042499.html
continuing at:
    https://lists.openwrt.org/pipermail/openwrt-devel/2024-April/042521.html


Daniel Golle <daniel () makrotopia org> wrote:
    > However, after reading up about the details of this backdoored
    > release tarball, I believe that the current tendency to use
    > tarballs rather than (reproducible!) git checkouts is also
    > problematic to begin with.  

    > Stuff like 'make dist' seems like a weird relic nowadays,
    > creates more problems than it could potentially solve,
    > bandwidth is ubiquitous, and we already got our own tarball
    > mirror of git checkouts done by the buildbots (see
    > PKG_MIRROR_HASH). So why not **always** use that instead of
    > potentially shady and hard to verify tarballs?  
I wonder if we should nuke our own make tarball system.

The short answer is "no".  Before I go into detail, please try not to
take the following comments personally, but to focus on the problem.
The problem seems to be a bit bigger than this project, so this in part
preemptively addresses arguments you did not make, but other people
have made in similar discussions of the same matter.

First of all, the root cause is not a technical problem: a widely used
software project got compromised, and on this occasion a developer was
the attack vector.  Some people feel more angry or scared about it than
usual (this is neither the first nor the last nor the biggest incident
of this kind, truth be told) and I sympathise with these temporary
emotions.  However, I do not sympathise with the widespread urge to
demonstrate control over the situation by any means, or to vent the
frustration right now at... something... anything really, it does not
matter, whatever happens to be in plain sight right now.

Likewise, the proposed move would not be a technical solution, or a
solution at all.  It would scratch a momentary emotional itch today
with no regard to the technical lessons learned yesterday and the
technical problems to be solved tomorrow.  Let's try to apply a bit of
critical
thinking to the matter:

* Speaking of Autoconf imperfections, please note that CMake has been
  available for many years.  That said, CMake project definitions
  are not immune to being used as a delivery vehicle for an exploit
  into an open source project.  CMake executables are not immune to
  being compromised.  make (whether GNU or not) is not immune.  The
  compiler is not immune.  The linker is not immune.  The libc is not
  immune.  The OS is not immune.  And (now we are getting to the
  difficult part) neither hardware nor VM hypervisors are immune to
  being compromised.  Let's leave Autoconf in peace, it is not the root
  cause.

* Speaking of imperfections of published signed tarballs, please note
  that the public git repositories have been available for many years.
  That said, a git repository is not immune to being used as a delivery
  vehicle for an exploit into an open source project.  Just labelling
  something with "stored securely in git" does not automatically
  eliminate the threat.  For the avoidance of doubt, git as a piece of
  software is not immune to being compromised.  And online services that
  host git repositories are not immune either.

  Considering the problem of getting the source code by some means, the
  only way to guarantee having the exact actual source code is actually
  to have the exact actual source code.  In git parlance that means
  actually having a local git clone that (possibly among many other
  things) includes the very specific revision of interest, which
  overshoots the problem space massively.  If you look at kernel.org
  and compare downloading a tarball of a specific kernel version with
  cloning a repository (which one? how deep?) and identifying the
  correct tag, the difference should be obvious.  These two solutions
  address two different problems; neither is the root cause and neither
  is immune to being compromised.

  To demonstrate the difference between the two solution spaces
  further, git repositories by design do not store file modification
  times or permissions other than the executable bit, so in practice
  two clones that are identical as far as git is concerned tend to be
  different as far as "ls -lR" is concerned.  Let's leave signed
  tarballs in peace.

  There is a valid concern whether a published tarball faithfully
  matches the result of "make releasetar" at the respective git tag.
  Let's not guess, let's prove it.  Would anybody like to implement a
  script that would compare the contents of
  https://www.tcpdump.org/release/ with the git repositories?  A rough
  sketch of the idea follows.
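
  For what it's worth, such a comparison could start along these lines
  in Python (the invocation is an assumption, and a real checker would
  need a list of expected differences, because a release tarball
  legitimately contains generated files, such as configure, that are
  absent from git).  It compares file contents only because, as noted
  above, git does not store modification times or most permission
  bits:

    #!/usr/bin/env python3
    # Sketch: compare the file contents of a release tarball with the
    # contents of the corresponding tag in a local git clone.
    import subprocess
    import sys
    import tarfile

    def tarball_contents(path):
        """Map member path (sans the top-level directory) to file bytes."""
        contents = {}
        with tarfile.open(path) as tar:
            for member in tar.getmembers():
                if not member.isfile():
                    continue
                parts = member.name.split("/", 1)
                if len(parts) < 2:
                    continue  # not under the top-level directory
                contents[parts[1]] = tar.extractfile(member).read()
        return contents

    def git_tag_contents(repo, tag):
        """Map path to file bytes for every blob reachable from the tag."""
        names = subprocess.check_output(
            ["git", "-C", repo, "ls-tree", "-r", "--name-only", tag],
            text=True).splitlines()
        return {name: subprocess.check_output(
                    ["git", "-C", repo, "show", "%s:%s" % (tag, name)])
                for name in names}

    def main():
        tarball, repo, tag = sys.argv[1:4]
        from_tar = tarball_contents(tarball)
        from_git = git_tag_contents(repo, tag)
        for name in sorted(from_tar.keys() | from_git.keys()):
            if name not in from_git:
                print("only in tarball (generated?): " + name)
            elif name not in from_tar:
                print("only in git: " + name)
            elif from_tar[name] != from_git[name]:
                print("differs: " + name)

    if __name__ == "__main__":
        main()

  Assuming the tags follow the "tcpdump-X.Y.Z" convention, the
  invocation would be along the lines of "compare.py
  tcpdump-4.99.4.tar.gz /path/to/tcpdump tcpdump-4.99.4".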

* The "all git repositories are already available" argument does not
  hold, for example, when one starts with a fresh install of NetBSD
  with just make, cc and a tarball of a pkgsrc release.  Before you can
  use git, you have to actually compile git, which means downloading a
  tarball of git (Can you get a working copy of a clone of the git
  repository of git without using git?  How?).  And before that it means
  downloading tarballs of all dependencies of git and building those.
  That's why other packages usually do not have git as a build
  dependency, but use release tarballs.  In some scenarios
  bootstrapping is an important factor.  The world does not entirely
  consist of high-end build servers.

* Speaking of "reproducible builds", building from a self-contained
  source package such as .src.rpm with a signed tarball (and any
  patches, etc.) in it is way more reproducible than building from a
  text file that says "in theory, once upon a time there was a
  repository online over there with a tag named such and such".  How
  much of that will hold in 5 years' time?  10 years' time?
  Repositories come and go.  Tags get created and deleted, by accident
  or
  deliberately.  Entire git hosting providers and parts of the Internet
  go offline on a regular basis.

  Let's recognise that knowing where the source supposedly is and
  actually having the actual source on the actual computer where it
  needs to be built can be (and sometimes are) two different things.
  This difference is easy to ignore, until it is not.  Release
  tarballs, among other things, are a time-proven means of providing
  the source for reproducible builds, and the best course of action is
  not to destroy that in the heat of the moment; a sketch of the kind
  of check this self-containment enables follows below.
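
  To illustrate what the self-contained variant buys, below is a
  minimal sketch in Python of the kind of pinned-digest check that
  PKG_MIRROR_HASH mentioned above and .src.rpm-style packaging make
  possible (the file name and the digest are placeholders, not real
  values).  The check needs nothing but the local artifact and the
  recorded digest, so it keeps working even after the upstream
  repository is long gone:

    #!/usr/bin/env python3
    # Sketch: verify that a source tarball on disk is byte-for-byte
    # the one that was pinned, before anything gets unpacked or built.
    import hashlib
    import sys

    # Hypothetical pin list; a real build system keeps such digests
    # alongside the package definitions.
    PINNED = {
        "tcpdump-X.Y.Z.tar.gz": "<placeholder sha256 hex digest>",
    }

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def main():
        path = sys.argv[1]
        name = path.rsplit("/", 1)[-1]
        expected = PINNED.get(name)
        if expected is None:
            sys.exit("no pinned digest for " + name)
        if sha256_of(path) != expected:
            sys.exit("digest mismatch for " + name)
        print(name + ": OK")

    if __name__ == "__main__":
        main()

  A digest proves only that the artifact has not changed since it was
  pinned; verifying the detached signature of a signed tarball on top
  of this additionally ties the artifact to the release key.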

* The root cause was a compromised developer.  The technique was
  obfuscation.  With that in mind, a lot of work has been done in
  tcpdump and
  libpcap to remove dead code and to clean up working code.  This is a
  long-term effort, plenty of hard work has been done on zero budget
  and some hard work still remains to be done, but the difference is
  already visible.  Let's continue this work.  Clean and maintainable
  code does not prevent compromising a developer in principle, but it
  makes an attempted obfuscation more likely to stand out.

  That said, compromising a developer would be a very serious blow to
  any project, and nobody would like to have such a problem in the
  first place.  Certain things could potentially be done in this
  project to minimise, if not eliminate, such a possibility, but this
  would be
  neither trivial nor quick, and it would cost, in terms of money and
  otherwise.  Is there a sufficiently funded demand for it from a
  trustworthy party?  Let me see it.

To sum it up, please do not make any major changes right now.  If
anybody wants to suggest such a change, please try to structure the
proposal along the following lines:

1. If the matter is truly urgent, please state why.
2. What specifically seems to be the problem?
3. What is the root cause of the problem?
4. What is the structure of the problem?
5. Which specific parts of the problem space do you suggest addressing
   first?
6. What is the proposed solution and how well does the solution space
   match the problem space?
7. How does the solution mitigate/reduce/eliminate the root cause of the
   problem?
8. What would be the best way to confirm the solution actually works as
   expected after it has been implemented?
9. What is the rollback plan, in case the solution did not work?
10. How is the proposed solution supposed to work long-term and
    what is the impact on backward compatibility, if any?

Thank you.

-- 
    Denis Ovsienko
_______________________________________________
tcpdump-workers mailing list -- tcpdump-workers () lists tcpdump org
To unsubscribe send an email to tcpdump-workers-leave () lists tcpdump org

