The Linux Plumbers Conference, the premier event for developers working deep in the Linux plumbing layer, is coming to Tokyo on December 11–13. This year offers a hybrid format, with sessions hosted onsite at Toranomon Hills Forum and remote participation available.
The ELISA Project will be there as part of the Safe Systems with Linux Microconference, where community members are collaborating with kernel developers to improve how Linux supports systems with safety-critical requirements. The MC will explore real progress in linking code, requirements, and tests to reduce technical debt, improve traceability, and support dependable kernel development. Early pilots in subsystems like TRACING have already uncovered and resolved previously unknown issues, a strong signal that this approach can benefit more of the kernel.
If you are attending LPC, remember to join the discussions at the Safe Systems with Linux Microconference.
Session Highlights:
Aspects of Dependable Linux Systems – Kate Stewart (Linux Foundation), Philipp Ahmann (Etas GmbH (BOSCH)) – 12 Dec 2025, 10:00
In regulated industries, Linux is widely used because of its strengths in areas such as dependability, reliability, and robustness. These industries follow best practices for requirements, design, verification, and change management. These practices are defined in standards that are typically not accessible to the open source kernel community.
However, since these standards represent best practices, they can be incorporated into structured development environments like the Linux kernel even without direct knowledge of the standards themselves. The kernel development process is trusted in critical infrastructure systems because it already covers many of these process elements, directly or indirectly.
The purpose of this session is to initiate a discussion on what is currently available and what may be missing in order to enhance the dependability and robustness of Linux kernel-based systems. How can the artifacts be connected? Where are the appropriate places to maintain them? And who is best placed to own each element of the development lifecycle?
NVIDIA Approach for Achieving ASIL B Qualified Linux: minimizing expectations from upstream kernel processes – Igor Stoppa (NVIDIA) – 12 Dec 2025, 10:10
Unlike the typical path chosen when attempting to use Linux in safety applications, the approach developed by NVIDIA strives to avoid placing any burden on upstream maintainers and developers. Upstream maintainers should not have to become safety experts, nor should the Linux kernel become encumbered by verbose descriptions of what the code does in order to achieve safety. We want to start a discussion about how we achieve this and how it can coexist with upstream processes.
Applying Program Verification to Linux Kernel Code: Challenges, Practices, and Automation – Keisuke Nishimura – 12 Dec 2025, 10:35
To maintain software safety, defining specifications and ensuring that implementations meet them are both important. The former has become popular in the Linux kernel in various ways [1,2], while the latter still depends on developers’ manual effort. Recent advances in techniques and tools, however, have made it feasible to systematically apply program verification to Linux kernel code.
In this talk, we share our experience and practices from ongoing work on verifying task-scheduler code of the Linux kernel. We illustrate the challenges we encountered and how verification can be effectively applied in practice, through case studies (e.g., [3,4]) where proving the correctness of certain kernel features resulted in uncovering real bugs (e.g., [5,6]). Furthermore, we present our work to automate this process as much as possible, making verification more practical and scalable. Our goal is to explore how verification can be made a practical part of the Linux kernel development process.
[1] https://lore.kernel.org/all/20250614134858.790460-1-sashal@kernel.org/ “Kernel API Specification Framework”
[2] https://lore.kernel.org/all/20250910170000.6475-1-gpaoloni@redhat.com/ “Add testable code specifications”
[3] Julia Lawall, Keisuke Nishimura, and Jean-Pierre Lozi. 2024. Should We Balance? Towards Formal Verification of the Linux Kernel Scheduler. SAS 2024.
[4] Julia Lawall, Keisuke Nishimura, and Jean-Pierre Lozi. 2025. Understanding Linux Kernel Code through Formal Verification: A Case Study of the Task-Scheduler Function select_idle_core. OLIVIERFEST ’25.
[5] https://lore.kernel.org/all/20231030172945.1505532-1-keisuke.nishimura@inria.fr/ “sched/fair: Fix the decision for load balance”
[6] https://lore.kernel.org/all/20231214175551.629945-1-keisuke.nishimura@inria.fr/ “sched/fair: take into account scheduling domain in select_idle_smt()”
Defining and maintaining requirements in the Linux Kernel – Chuck Wolber, Gabriele Paoloni (Red Hat), Kate Stewart (Linux Foundation) – 12 Dec 2025, 11:00
Last year in Vienna we held a session about “Improving kernel design documentation and involving experts”.
Following that session, the ELISA Architecture working group drafted an initial template for software requirements definition, began documenting the expected behaviour of several functions in the TRACING subsystem, contributed the results upstream, and started reviewing and adopting a framework for formally specifying kernel APIs (developed and proposed by Sasha Levin [1]).
This session aims to present the latest updates and to involve experts in defining the best next steps toward introducing and maintaining requirements in the kernel (and linking them to tests and other verification measures).
[1] https://lore.kernel.org/all/20250711114248.2288591-1-sashal@kernel.org/
KUnit Testing Insufficiencies – Matthew Whitehead (The Boeing Company) – 12 Dec 2025, 12:00
High-integrity applications require rigorous testing to ensure both reliability and compliance with industry standards. Current testing frameworks for the Linux kernel, such as KUnit, face challenges in scalability and integration, particularly in environments with strict certification requirements.
KUnit tests, which are currently the most widely accepted and used solution for testing Linux kernel source code, have a number of drawbacks in the way they are built and executed. The KUnit framework lacks feature parity with other modern unit test frameworks, making it difficult to achieve comprehensive and robust low-level test coverage. Furthermore, the way KUnit tests are built and executed limits the scalability needed to create and maintain the many thousands of tests that will be required to verify functionality in a robust, complete, and automatable way.
KUnit tests are integrated into the Linux binary. This requires building the kernel and running it to execute the tests. Additionally, the high volume of tests needed for adequate coverage would increase the size of the kernel beyond what is manageable. This makes it necessary to divide the tests so that multiple kernels with different sets of tests are built. This, in turn, necessitates additional analysis to prove that the changes made in each of these individual kernel builds do not negatively impact the fidelity of the tests for the targeted features. Considering the lengthy build and execution times, along with the need to build and analyze multiple kernels, it is evident that scaling up to the creation and maintenance of thousands of tests poses significant challenges.
In addition to the scalability issues, KUnit lacks essential features needed for testing highly reliable systems. It does not provide a consistent, maintainable, and automatable way to isolate small sections of code. This is crucial for low-level testing and coverage. For example, consider a function A that has dependencies on three other functions, B, C, and D. When testing function A, the results of functions B, C, and D may influence the execution path in function A. However, it is not desirable to actually test the implementation of functions B, C, and D. In most common unit test environments, it is straightforward to create either a fake or mock implementation of those functions in the test file, or to link to an object file that contains a fake or mock of those functions. In KUnit, achieving “mocking” (or at least a similar effect) requires creating a patch that modifies the kernel. The simplest approach is to use conditional compilation macros that are controlled through kernel configuration to generate mock versions of the tested function’s dependencies. Every time a mock or fake is used, it must be manually created and maintained through patches. When this effort is multiplied by thousands of tests, the maintenance burden becomes evident.
Topics for discussion:
- Addition of unit test capabilities (extending KUnit or otherwise) that allow the compilation and testing of small, isolated sections of the Linux kernel source out of tree.
- Ability to test Linux kernel source in tests that run in user space (requires the aforementioned compilation out of tree with external tools).
- Integration of mocking of test dependencies (preferably automatically).
Exploring possibilities for integrating StrictDoc with ELISA’s requirements template approach for the Linux kernel – Tobias Deiminger (Linutronix GmbH) – 12 Dec 2025, 12:25
The ELISA project currently works on bringing the Linux kernel closer to safety compliance by proposing enhancements to the kernel documentation. This includes a model expressed as requirement templates inlined into the source code. At the same time, comparable efforts with a similar goal are ongoing in the wider open-source ecosystem. For example, Zephyr OS uses the FLOSS StrictDoc model and tool to capture and process requirements. Another example is Linutronix, which achieved IEC 62443 4-2 security certification by using StrictDoc to trace requirements, design, and tests for its Debian-based IGLOS Secure Beacon embedded system.
This talk picks up the work of ELISA and compares it to a typical StrictDoc setup, with the intention of showing that both efforts could be joined. While ELISA focuses on the model and assumes tools will emerge from the community, StrictDoc both defines a similar model and provides mature tooling to validate and render requirements, design, and tests. We’ll see that the majority of the needs set by ELISA are already fulfilled by StrictDoc. Notably, ELISA’s SPDX-REQ-* tags can be represented and parsed with StrictDoc’s “Parsing SDoc nodes from source code comments” feature. The remaining gap is drift detection, i.e., storing a hash over the project ID, source file, requirement text, and proximal code so that a human can be prompted to check whether requirement and code still align when any of these change. StrictDoc knows metadata, content, and proximal code by function name, class name, and line ranges, but has no hash generation built in yet. However, StrictDoc is advanced in defining language constructs as part of the model (functions, classes, test cases). It is also advanced in applicability: for example, OEM requirements and OSS project requirements can be linked together in one compatible model and, in consequence, can be processed and validated by the same tool in a single run. Various input/output format converters exist, and customization of validation is next on the roadmap.
The talk concludes with the proposal that StrictDoc could close these gaps by implementing hash generation, optimizing parsing of ELISA requirement templates, and setting up conformity tests to maintain compatibility. ELISA could in turn list StrictDoc as one of its reference tools, and kernel developers will be invited to try it in their workflow.
BASIL: Open Source Traceability for Safety-Critical Systems – Luigi Pellecchia – 12 Dec 2025, 12:40
Ensuring traceability and compliance remains a major challenge in the development of safety-critical systems. BASIL — The FuSa Spice — is an open-source tool from the ELISA Project that helps engineers manage traceability across documentation, requirements, implementation, testing, and safety evidence. Designed to integrate seamlessly with existing tools and test infrastructures, BASIL offers a practical and extensible solution to support functional safety assessments and audits.
This talk introduces BASIL’s architecture and key features, illustrates its application in real-world safety workflows, and highlights opportunities for community collaboration. Attendees will discover how BASIL brings openness, automation, and transparency to traceability management in safety-critical software projects.
Tooling and Sharing Traceability Discussion – Luigi Pellecchia, Matthew Whitehead (The Boeing Company), Tobias Deiminger (Linutronix GmbH) – 12 Dec 2025, 12:55
Open Discussion based on previous agenda items
Wrap up and next steps – Kate Stewart (Linux Foundation), Philipp Ahmann (Etas GmbH (BOSCH)) – 12 Dec 2025, 13:20
Learn more about the sessions here.
To register, check here.