The XZ attack and the creation of an OSQI in the US
The recent attack on XZ Utils and liblzma has much of the software industry in a bit of a panic, and rightly so. The attack was very sophisticated, both technically and in terms of the social engineering and time commitment involved. It represents an escalation of a risk that had previously been considered largely theoretical, and it highlights the resources available to the attackers, even as their identity remains unknown.
We need a concerted effort to combat future iterations of this sort of attack, which are all but guaranteed. The creation of an Open Source Quality Institute (OSQI) represents the most promising approach I’ve seen for addressing this risk. The U.S. should create one as a new Federally Funded Research and Development Center (FFRDC), a sibling organization to the National Cybersecurity FFRDC (NCF).
The XZ Attack
A lot has been written about the attack itself. The super-short summary:
- XZ Utils and its liblzma library are existing components with widespread use, including in key system components of many mainstream Linux distributions. The project is maintained by Lasse Collin, its principal author.
- 2021-10: Someone (or someones) using the identity “Jia Tan” begins contributing to the XZ project. Their initial patches are innocuous.
- 2022-04 — 2022-05: Two new identities (likely the same person, or people employed by the same organization) begin pressuring Lasse Collin over the maintenance of the project.
- 2022-09: “Jia Tan” has become a co-maintainer of the project.
- 2023-06 — 2023-07: “Jia Tan” introduces two changes, innocuous on their own, which lay the groundwork for the upcoming attack.
- 2024-02: The attack begins in earnest, via more commits from “Jia Tan”.
- 2024-03: The attack is discovered, essentially by accident, by an engineer who noticed a performance anomaly while benchmarking unrelated software.
The most noteworthy point here is that the social groundwork for the attack began over two years before the technical attack started in full.
For a much more detailed timeline with lots of references and links to other descriptions, I highly recommend Russ Cox’s timeline of the attack. If you’re interested in understanding the technical details of the attack, his technical analysis of the attack is also excellent.
Supply Chain Attacks
The XZ attack is an example of a “supply chain attack”: a method of attacking a software system by finding (or in this case, inserting) a vulnerability in one of the components it includes or depends on.
Most people who work in the software world will tell you such an attack was inevitable, given the state of the open source ecosystem. Many projects are held together by one or two overworked maintainers, often operating outside of their day jobs. This includes projects, like XZ, which are included in core parts of our modern technology stack, like mainstream Linux distributions. Many of our largest corporations depend on these projects, whether they know it or not, and often provide no support back to the people creating that software. These maintainers are often subject to increasing demands on their (generally unpaid) time; demands which can become abusive. This was key to the social engineering aspects of the XZ attack.
Concerns about the exposure of the software supply chain to this sort of attack are not new. I became aware of the issue about 7 years ago, and it was not new then. We have seen related issues before, but the XZ attack is notable for a few reasons:
- It wasn’t a bug, it was an intentionally designed exploit.
- The exploit was very sophisticated, technically.
- It was added publicly.
- The exploit was effective even through two levels of indirection in the software stack, as illustrated in the sketch below.
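To make that indirection concrete: on many systemd-based Linux distributions, sshd links against libsystemd, which in turn links against liblzma, putting the compression library inside the remote-login daemon’s address space. Here is a minimal sketch of how to see that chain for yourself; it assumes a Linux system with ldd available and sshd installed at /usr/sbin/sshd (paths vary by distribution):

```python
import re
import subprocess

def linked_libs(path):
    """Return {library name: resolved path} for the shared libraries
    a binary or library links against, as reported by ldd."""
    out = subprocess.run(["ldd", path], capture_output=True, text=True).stdout
    return dict(re.findall(r"(\S+)\s+=>\s+(\S+)", out))

# First level of indirection: sshd links libsystemd on many distributions.
for name, libpath in linked_libs("/usr/sbin/sshd").items():
    if "libsystemd" in name:
        # Second level: libsystemd links liblzma, pulling the (potentially
        # compromised) compression code into sshd's process.
        for dep in linked_libs(libpath):
            if "liblzma" in dep:
                print(f"sshd -> {name} -> {dep}")
```

Note that the presence of this chain demonstrates the exposure, not a compromise: the backdoor itself shipped only in xz releases 5.6.0 and 5.6.1.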
When I was running for office two years ago, a previous high-profile vulnerability, Log4Shell, was fresh in the minds of many IT professionals. It had me thinking about what action government could take to improve the situation. I never came up with anything concrete, in part because it was difficult to see state-level action being the right scope.
Earlier this week, Tim Bray published a proposal for an Open Source Quality Institute which represents pretty much exactly what I wish I’d come up with.
The OSQI
Tim’s proposal is well-considered and well-written; you should read it. The key points, I think, are:
- This is a public organization operating in the public interest.
- They are not a standards body, regulatory agency, or similar; they have no “enforcement” function.
- Their primary output is code: mostly patches to existing software.
- That code output would likely include new build and test tools.
- The organization must emphasize transparency throughout.
The proposal as written doesn’t address how we’d create one of these in the U.S. (he’s Canadian, and is hoping several countries will implement the idea), but I think the U.S. already has a good model for it.
Creating a US OSQI
The U.S. organization with the closest mission today is the National Cybersecurity FFRDC, operated under the National Cybersecurity Center of Excellence (NCCoE) by NIST and the Department of Commerce. The NCF is the only FFRDC operated by the Department of Commerce, and the only one focused on cybersecurity today. The OSQI’s mission would be complementary to the NCF’s.
Placing the OSQI under the Department of Commerce fits well with the department’s goal of securing our economy, the same justification used for the entire NCCoE. Keeping it outside the Department of Defense and the Department of Homeland Security would go a long way towards alleviating common concerns in the open source community about accepting security fixes from intelligence organizations. And the same benefits of the FFRDC model NIST called out in its announcement of intent to form the NCF — independence, collaboration, efficiency, no bias towards any particular commercial interest — apply just as much here.
The NCF is operated by MITRE, which runs about a half-dozen of the FFRDCs. I have no particular feelings about MITRE one way or the other, but in my ideal version of this plan we’d see the OSQI follow another common FFRDC model: partnership with a university. Another close cousin of the OSQI and NCF is the Software Engineering Institute (SEI), which is operated by Carnegie Mellon University in Pittsburgh. My first thought, reflecting both professional experience and an Oregon bias, would be to develop the OSQI as an expansion of the existing Oregon State University Open Source Lab. OSUOSL provides services to approximately 160 open source projects today, including collaborations with the Linux Foundation and the Apache Foundation (and the Plan 9 Foundation!).
My back-of-the-envelope estimate for funding, based on Tim’s target size of 250 staff, would put the OSQI on the smaller end of the FFRDCs, from both a staffing and a budgetary perspective (certainly smaller than any of the National Labs). My initial guess is that it would be roughly a peer of the existing NCF or SEI. I’d quibble with Tim’s implication that 250 staff represents a meaningful floor: given the nature of the work, impact should scale more-or-less linearly with staff from what I’d expect is a much lower floor, although measuring that impact gets difficult at smaller sizes.
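For what it’s worth, the arithmetic behind that guess, using an assumed fully-loaded cost per staff-year (my assumption, not a figure from Tim’s proposal):

```python
# Back-of-the-envelope OSQI budget; the cost range is my assumption,
# not a figure from Tim's proposal or any FFRDC's published budget.
staff = 250
cost_low, cost_high = 250_000, 400_000  # assumed fully-loaded USD per staff-year

print(f"${staff * cost_low / 1e6:.1f}M - ${staff * cost_high / 1e6:.1f}M per year")
# -> $62.5M - $100.0M per year
```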
Conclusion
Supply chain risks have been a practical threat for a long time, and we’ve now seen explicit, intentional attacks making use of that channel. These attacks represent a significant economic and security risk, and they are almost certain to escalate. We cannot reasonably expect the open market, whose financial incentives created the situation that breeds these vulnerabilities, to solve this; a “public good” like this needs to be funded and executed by a public organization. The U.S. government has, in the FFRDCs, an existing framework for implementing such a public-interest organization performing work critical to our security.
The U.S. government should create an OSQI as an FFRDC under the Department of Commerce and NIST.