ELISA Seminar – Recap Notes – From Requirements to Code: Managing End-to-End Traceability with BASIL

By Ambassadors, Blog

This seminar explores BASIL, an open source requirements and traceability management tool under the ELISA Project. BASIL enables teams to connect specifications, requirements, test artifacts, documentation and source code using flexible traceability matrices while integrating with existing test infrastructures. In this session, Luigi Pellecchia, BASIL Maintainer and member of the ELISA Project Technical Steering Committee, presents how BASIL supports end-to-end traceability from requirements to code, improves collaboration and governance through role-based permissions, traceability-as-code, and AI-driven workflow guidance, and helps teams manage software quality evidence in a collaborative environment.

The session includes a live demonstration of BASIL, showcasing its web-based architecture, deployment options, and how users can create, map, and manage work items such as requirements, test specifications, and test cases. It also highlights integration with test management tools, external CI systems, and APIs, along with features for importing data, exporting traceability matrices, and automating workflows. The seminar further introduces advanced capabilities such as repository scanning and building traceability from distributed project assets, illustrating how BASIL can support complex, real-world development environments.

Learn more about BASIL.

What to Expect from the ELISA Project at Embedded World 2026

By Ambassadors, Blog, Industry Conference

The ELISA Project will be participating in the upcoming Embedded World Exhibition & Conference, taking place March 10–12, 2026 at Messezentrum Nürnberg, Germany.

Established in 2003, Embedded World has become one of the most important annual gatherings for the global embedded systems community. The event combines a large industry exhibition with a world-class conference program that bridges applied research and real-world industrial applications.

For the ELISA Project community, this event offers an opportunity to connect with engineers, researchers, and organizations working to enable safe use of Linux in safety-critical systems.

ELISA at Embedded World 2026

At this year’s event, the ELISA Project will engage with attendees through:

  • A conference session discussing approaches for assessing the safe usage of Linux

  • On-site discussions with ELISA ambassadors and community members

  • Opportunities to connect with companies building Linux-based safety-critical systems

If you are developing systems where safety, reliability, and open source intersect, this is a great chance to learn more about how the ELISA Project is advancing safety practices around Linux.

Conference Session: Assessing Safe Usage of Linux

A key highlight will be a talk by Kate Stewart from the Linux Foundation.

Approaches on Assessing Safe Usage of Linux

📅 March 10, 2026
⏱ 11:30 (30 minutes)

Linux has become one of the most widely used operating systems across industries—from deeply embedded devices in automotive, aerospace, and medical systems to servers powering global financial infrastructure.

While there are established mechanisms for maintaining and distributing security updates, the question remains:

After applying fixes and updates, how can we demonstrate that a Linux-based system is still safe to use in regulated environments?

In this session, Kate Stewart will explore:

  • Current approaches within the ELISA Project to evaluate Linux in the context of functional safety
  • Methods to support analysis and verification of Linux-based systems
  • Opportunities for automation and collaboration across the ecosystem
  • Emerging best practices for organizations building safety-critical Linux systems

The talk will provide insight into how the community is working to make Linux viable for safety-certified environments.

Learn more about the Embedded World Conference here.

Meet the ELISA Community

In addition to the conference session, several ELISA Project ambassadors and contributors will be attending Embedded World, including Philipp Ahmann (ETAS GmbH), Nicole Pappler (Alektometis), and Simone Weiß (Linutronix), along with many other members of the ELISA Project ecosystem.

They will be available throughout the event to discuss:

  • The ELISA Project’s mission and roadmap
  • Collaboration opportunities
  • Safety practices for Linux-based systems
  • How organizations can participate in the project

Let’s Connect

If you are attending Embedded World and are already working on Linux-based safety-critical applications, or are interested in learning more about the ELISA Project and its goals for 2026, we encourage you to connect with the team during the event.

You can:

  • Reach out directly to ELISA ambassadors onsite
  • Or contact the project team (info@elisa.tech) to schedule a meeting

Embedded World is a fantastic opportunity to exchange ideas, learn from industry leaders, and explore how open source and safety engineering can evolve together. See you there!

What do you mean when you say…?

By Ambassadors, Blog

This blog post, “What Do You Mean When You Say…? Introducing the ELISA Glossary for Safety-Critical Open Source,” was written by Simone Weiss, Linutronix.

You’re reading a blog post, and three sentences in, you encounter a term and wonder, “What does the author mean when they say that?” You could research it, but you keep reading, telling yourself, “I’ll figure it out later.” We’ve all been there.

The world of embedded and safety-critical open source uses specific terms that can make it hard to understand what’s meant. That’s why we created the ELISA Glossary—a single place for all those terms.

Take a look at the glossary here:
https://directory.elisa.tech/glossary/index.html

What Is the ELISA Glossary?

The ELISA Glossary is a collection of definitions for terms that frequently come up in the ELISA project. Each entry tries to provide not just the theoretical meaning but also how the term is used within ELISA.

You’ll find definitions covering:

  • Safety and certification concepts
  • Embedded and real-time software terms
  • Open-source processes and tools
  • Standards, specifications, and compliance-related language

The glossary is useful for things like:

  • Reading an ELISA blog post and needing a quick refresher
  • Joining a new working group and encountering unfamiliar terms
  • Ensuring consistent language across documents and discussions

The glossary is a work in progress. As tools evolve, standards shift, and best practices change, the glossary will continue to grow. We rely on community feedback – if there’s a term you think should be added or a definition that needs refinement, let us know!

Why the Glossary?

The ELISA Project brings together engineers, safety experts, and organizations working on Linux-based safety-critical systems. This diverse mix of industry, standards, and technical backgrounds is one of ELISA’s strengths—but it also means we use a language that’s not always obvious to newcomers, occasional contributors, or even long-time members diving into new topics.

Since ELISA began, we’ve created:

  • Technical documentation
  • Working group deliverables
  • Presentations

Certain terms pop up again and again, which is where the ELISA Glossary comes in—to help make those terms easier to understand, reference, and use consistently.

Explore the ELISA Glossary

https://directory.elisa.tech/glossary/index.html

Clear language may not solve all the challenges in safety-critical software, but it sure makes collaboration easier.

Recap of ELISA Working Group and Special Interest Group Annual Updates 2026

By Ambassadors, Blog, Working Group

On February 11–12, the ELISA Project community gathered for the 2026 Working Group (WG) and Special Interest Group (SIG) Annual Updates. Over two focused sessions, group leads shared key milestones from 2025, current technical priorities, and what lies ahead in 2026, along with concrete opportunities for collaboration and contribution.

The annual updates serve as a checkpoint for the project: a moment to reflect on progress, align on priorities, and welcome new contributors into the work of advancing Linux in safety-critical systems.

The first day opened with an ELISA Project overview from Technical Steering Committee Chair Philipp Ahmann (ETAS), highlighting overall progress and reinforcing ELISA’s mission to define and maintain common elements, processes, and tools that support safety certification for Linux-based systems.

The first day highlighted progress across ELISA’s core Working Groups:

Open Source Engineering Process – Paul Albertella (Codethink) shared updates on process alignment and best practices to support safety certification efforts.

Systems and Automotive – Philipp Ahmann discussed advancements in aligning Linux with functional safety requirements for automotive and system-level applications.

Safety Architecture – Gabriele Paoloni (Red Hat) presented ongoing architectural work supporting safety use cases.

Linux Features for Safety-Critical Systems – Alessandro Carminati (NVIDIA) outlined kernel and feature-level progress enabling dependable Linux deployments.

The second day focused on use-case driven Working Groups and SIGs:

Aerospace – Matthew Weber (The Boeing Company) shared updates on Linux in aerospace systems.

Space Grade Linux – Ramon Roche (The Linux Foundation) discussed the evolution of Space Grade Linux and its relationship with ELISA.

BASIL & Tools WG Evolution – Luigi Pellecchia (Red Hat) highlighted progress in tooling and traceability efforts.

Lighthouse SIG – Philipp Ahmann provided insights into cross-domain collaboration and coordination.

The event concluded with closing reflections and a forward-looking discussion on collaboration opportunities in 2026.

Continuing the Work

The WG & SIG Annual Updates are more than a status review; they are a coordination point for the year ahead. As Linux adoption in safety-critical systems continues to expand across automotive, aerospace, industrial, and emerging domains, ELISA remains committed to open collaboration, practical tooling, and shared technical foundations.

Thank you to all speakers, contributors, and attendees who helped make the 2026 updates a success.

We look forward to another year of advancing Linux in safety-critical environments together.

ELISA Project at FOSDEM 2026: Advancing Open Source in Safety-Critical Systems

By Ambassadors, Blog, Industry Conference

As open source software continues to move deeper into safety-critical systems, FOSDEM provides a unique space for the conversations that need to happen between developers, safety engineers, maintainers, and industry stakeholders. For the Enabling Linux in Safety Applications (ELISA) project, FOSDEM 2026 is an opportunity to engage directly with the open source community, share practical progress, and collaborate on the challenges of using Linux in systems where failure can have serious consequences.

ELISA’s mission is to make it easier for organizations to build and certify Linux-based safety-critical applications: systems whose failure could result in loss of human life, significant property damage, or environmental harm. By bringing these discussions to FOSDEM, ELISA helps connect real-world safety and certification needs with the developers and projects building the software at the core of these systems.

What ELISA Is Working On

ELISA brings together companies, developers, and safety experts to define and maintain a shared set of tools, processes, and best practices that help organizations demonstrate that Linux-based systems can meet functional safety requirements. Rather than positioning Linux as a standalone “safety solution,” ELISA focuses on how Linux can be used as a component within safety-critical systems, supported by appropriate system-level mitigations, documentation, and evidence.

A key part of this work is collaboration with certification authorities and standardization bodies across multiple industries. By engaging early and openly, ELISA helps clarify expectations around certification pathways, safety arguments, and compliance, reducing uncertainty for both developers and assessors. This approach enables reuse, transparency, and consistency across domains such as automotive, aerospace, railways, industrial automation, and medical systems.

ELISA at FOSDEM 2026

FOSDEM 2026 offers an ideal environment to continue these conversations. As a free, community-driven event that brings together thousands of open source developers from around the world, it allows ELISA to connect directly with the people building and maintaining the software used in safety-critical products.

Throughout the weekend, ELISA Project Ambassadors will be actively participating across the event, giving talks, joining technical discussions, and engaging with contributors in multiple developer rooms. Attendees can also meet the ELISA team at the Linux Foundation Europe stand (Building K, Level 2, Group A), where they will be available to discuss ongoing work, community activities, and ways to get involved in the project.

Several members of the ELISA Technical Steering Committee (TSC) will be present as well, providing an opportunity for in-depth conversations around safety concepts, certification challenges, and cross-industry collaboration.

Session Highlight:

Code, Compliance, and Confusion: Open Source in Safety-Critical Products

This talk examines the growing use of open source software in functionally safe systems, including platforms such as Linux, Zephyr, Xen, and automotive middleware. It looks at both the progress made in recent years and the persistent barriers to adoption, from certification uncertainty and fragmented governance to common misunderstandings around safety responsibility and system architecture. Learn more.

BOF/Unconference

In addition to talks, ELISA-related topics will be discussed in Birds of a Feather (BoF) sessions, which offer a more informal space for discussion and idea exchange.

One BoF will focus on Linux & Open Source Software for safety applications in Railways, exploring how large-scale reuse and collaborative development can support the sector’s growing software needs while meeting strict safety requirements. The discussion will also consider whether there is sufficient momentum to form a foundation-backed initiative to support OSS adoption in railways.

Another BoF, Safety-Critical Linux: Challenges across industries, will bring together participants from automotive, aerospace, medical devices, robotics, and rail. The session will explore shared challenges such as documentation, tooling, certification, and system design, and identify opportunities where cross-industry collaboration could reduce duplication and improve outcomes.

Join the Conversation at FOSDEM

FOSDEM 2026 is an opportunity to move beyond theory and engage in practical, technical discussions about open source in safety-critical systems. Whether you are building software, assessing safety cases, or defining certification strategies, ELISA invites you to take part in the conversations, meet the community, and help shape how Linux and open source software are used in systems that demand the highest levels of trust and reliability.

We look forward to connecting with you in Brussels.

Schrödinger’s test: The /dev/mem case

By Ambassadors, Blog

This blog was originally published by Alessandro Carminati, Principal Software Engineer at Red Hat, on his personal blog and is republished here with permission.

Why I Went Down This Rabbit Hole

Back in 1993, when Linux 0.99.14 was released, /dev/mem made perfect sense. Computers were simpler, physical memory was measured in megabytes, and security basically boiled down to: “Don’t run untrusted programs.”

Fast-forward to today. We have gigabytes (or terabytes!) of RAM, multi-layered virtualization, and strict security requirements… And /dev/mem is still here, quietly sitting in the kernel, practically unchanged… A fossil from a different era. It’s incredibly powerful, terrifyingly dangerous, and absolutely fascinating.

My work on /dev/mem is part of a bigger effort by the ELISA Architecture working group, whose mission is to improve Linux kernel documentation and testing. This project is a small pilot in a broader campaign: build tests for old, fundamental pieces of the kernel that everyone depends on but few dare to touch.

In a previous blog post, “When kernel comments get weird”, I dug into the /dev/mem source code and traced its history, uncovering quirky comments and code paths that date back decades. That post was about exploration. This one is about action: turning that historical understanding into concrete tests to verify that /dev/mem behaves correctly… Without crashing the very systems those tests run on.

What /dev/mem Is and Why It Matters

/dev/mem is a character device that exposes physical memory directly to userspace. Open it like a file, and you can read or write raw physical addresses: no page tables, no virtual memory abstractions, just the real thing.

Why is this powerful? Because it lets you:

  • Peek at firmware data structures,
  • Poke device registers directly,
  • Explore memory layouts normally hidden from userspace.

It’s like being handed the keys to the kingdom… and also a grenade, with the pin halfway pulled.

A single careless write to /dev/mem can:

  • Crash the kernel,
  • Corrupt hardware state,
  • Or make your computer behave like a very expensive paperweight.

For me, that danger is exactly why this project matters. Testing /dev/mem itself is tricky: the tests must prove the driver works, without accidentally nuking the machine they run on.

STRICT_DEVMEM and Real-Mode Legacy

One of the first landmines you encounter with /dev/mem is the kernel configuration option STRICT_DEVMEM.

Think of it as a global policy switch:

  • If disabled, /dev/mem lets privileged userspace access almost any physical address: kernel RAM, device registers, firmware areas, you name it.
  • If enabled, the kernel filters which physical ranges are accessible through /dev/mem. Typically, it only permits access to low legacy regions, like the first megabyte of memory where real-mode BIOS and firmware tables traditionally live, while blocking everything else.

Why does this matter? Some very old software, like emulators for DOS or BIOS tools, still expects to peek and poke those legacy addresses as if running on bare metal. STRICT_DEVMEM exists so those programs can still work: but without giving them carte blanche access to all memory.

So when you’re testing /dev/mem, the presence (or absence) of STRICT_DEVMEM completely changes what your test can do. With it disabled, /dev/mem is a wild west. With it enabled, only a small, carefully whitelisted subset of memory is exposed.
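
As a rough mental model, the whitelisting can be sketched in a few lines. This is a simplification under the assumption that only the low legacy megabyte is permitted; the real kernel decides per page through architecture-specific checks, and the exact ranges differ by platform:

```python
PAGE_SIZE = 4096
LEGACY_LIMIT = 1 << 20  # the first megabyte: real-mode BIOS, firmware tables

def strict_devmem_allows(start, length):
    """Toy model of the STRICT_DEVMEM filter: permit accesses that stay
    inside the legacy low-memory window, reject everything else.
    (The real kernel applies arch-specific, per-page range checks.)"""
    return 0 <= start and start + length <= LEGACY_LIMIT
```

Under this model, a read of the BIOS area at 0xF0000 passes, while any access at or above 1 MB is refused.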

A Quick Note on Architecture Differences

While /dev/mem always exposes what the kernel considers physical memory, the definition of physical itself can differ across architectures. For example, on x86, physical addresses are the real hardware addresses. On aarch64 with virtualization or secure firmware, EL1 may only see a subset of memory through a translated view, controlled by EL2 or EL3.

In this context, the main function of the STRICT_DEVMEM option is still to filter access to physical memory via /dev/mem: it helps implement architecture-specific rules that decide which physical address ranges userspace may legitimately access, preventing unsafe or insecure memory accesses.

32-Bit Systems and the Mystery of High Memory

On most systems, the kernel needs a direct way to access physical memory. To make that fast, it keeps a linear mapping: a simple, one-to-one correspondence between physical addresses and a range of kernel virtual addresses. If the kernel wants to read physical address 0x00100000, it just uses a fixed offset, like PAGE_OFFSET + 0x00100000. Easy and efficient.

But there’s a catch on 32-bit kernels: The kernel’s entire virtual address space is only 4 GB, and it has to share that with userspace. By convention, 3 GB is given to userspace, and 1 GB is reserved for the kernel, which includes its linear mapping.

Now here comes the tricky part: Physical RAM can easily exceed 1 GB. The kernel can’t linearly map all of it: there just isn’t enough virtual address space.

The extra memory beyond the first gigabyte is called highmem (short for high memory). Unlike the low 1 GB, which is always mapped, highmem pages are mapped temporarily, on demand, whenever the kernel needs them.

Why this matters for /dev/mem

/dev/mem depends on the permanent linear mapping to expose physical addresses. Highmem pages aren’t permanently mapped, so /dev/mem simply cannot see them. If you try to read those addresses, you’ll get zeros or an error, not because /dev/mem is broken, but because that part of memory is literally invisible to it.

For testing, this introduces extra complexity:

  • Some reads may succeed on lowmem addresses but fail on highmem.
  • Behavior on a 32-bit machine with highmem is fundamentally different from a 64-bit system, where all RAM is flat-mapped and visible.

Highmem is a deep topic that deserves its own article, but even this quick overview is enough to understand why it complicates /dev/mem testing.

How Reads and Writes Actually Happen

A common misconception is that a single userspace read() or write() call maps to one atomic access to the underlying device. In reality, the VFS layer and the device driver may split your request into multiple chunks, depending on alignment and page boundaries.

Why does this happen?

  • Many devices can only handle fixed-size or aligned operations.
  • For physical memory, the natural unit is a page (commonly 4 KB).

When your request crosses a page boundary, the kernel internally slices it into:

  1. A first piece up to the page boundary,
  2. Several full pages,
  3. A trailing partial page.

For /dev/mem, this is a crucial detail: A single read or write might look seamless from userspace, but under the hood it’s actually several smaller operations, each with its own state. If the driver mishandles even one of them, you could see skipped bytes, duplicated data, or mysterious corruption.

Understanding this behavior is key to writing meaningful tests.
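
The slicing described above is easy to model. Here is a sketch (illustrative only; split_into_chunks is a made-up helper, not kernel code) that mirrors how a transfer gets walked one page at a time:

```python
PAGE_SIZE = 4096

def split_into_chunks(offset, count, page_size=PAGE_SIZE):
    """Break an (offset, count) request into per-page pieces: a partial
    head up to the next page boundary, whole pages, and a partial tail."""
    chunks = []
    while count > 0:
        room = page_size - (offset % page_size)  # bytes left in this page
        n = min(room, count)
        chunks.append((offset, n))
        offset += n
        count -= n
    return chunks

# A 10 KB request starting 1 KB into a page spans three physical pages:
print(split_into_chunks(0x1400, 10 * 1024))
# → [(5120, 3072), (8192, 4096), (12288, 3072)]
```

Each tuple is one internal operation with its own state, which is exactly where a driver bug could skip or duplicate bytes.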

Safely Reading and Writing Physical Memory

At this point, we know what /dev/mem is and why it’s both powerful and terrifying. Now we’ll move to the practical side: how to interact with it safely, without accidentally corrupting your machine or testing in meaningless ways.

My very first test implementation kept things simple:

  • Only small reads or writes,
  • Always staying within a single physical page,
  • Never crossing dangerous boundaries.

Even with these restrictions, /dev/mem testing turned out to be more like defusing a bomb than flipping a switch.
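
The single-page restriction itself is easy to encode. As a sketch (the helper name is hypothetical, not from the actual test tool), a request qualifies for the simple tests only if it begins and ends inside the same page:

```python
PAGE_SIZE = 4096

def stays_in_one_page(offset, length, page_size=PAGE_SIZE):
    """True when [offset, offset + length) never crosses a page boundary."""
    if length <= 0:
        return False
    return offset // page_size == (offset + length - 1) // page_size
```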

Why “success” doesn’t mean success (in this very specific case)

Normally, when you call a syscall like read() or write(), you can safely assume the kernel did exactly what you asked. If read() returns a positive number, you trust that the data in your buffer matches the file’s contents. That’s the contract between userspace and the kernel, and it works beautifully in everyday programming.

But here’s the catch: We’re not just using /dev/mem; we’re testing whether /dev/mem itself works correctly.

This changes everything.

If my test reads from /dev/mem and fills a buffer with data, I can’t assume that data is correct:

  • Maybe the driver returned garbage,
  • Maybe it skipped a region or duplicated bytes,
  • Maybe it silently failed in the middle but still updated the counters.

The same goes for writes: A return code of “success” doesn’t guarantee the write went where it was supposed to, only that the driver finished running without errors.

So in this very specific context, “success” doesn’t mean success. I need independent ways to verify the result, because the thing I’m testing is the thing that would normally be trusted.

Finding safe places to test: /proc/iomem

Before even thinking about reading or writing physical memory, I need to answer one critical question:

“Which parts of physical memory are safe to touch?”

If I just pick a random address and start writing, I could:

  • Overwrite the kernel’s own code,
  • Corrupt a driver’s I/O-mapped memory,
  • Trash ACPI tables that the system kernel depends on,
  • Or bring the whole machine down in spectacular fashion.

This is where /proc/iomem comes to the rescue. It’s a text file that maps out how the physical address space is currently being used. Each line describes a range of physical addresses and what they’re assigned to.

Here’s a small example:
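
(Illustrative values only; the actual ranges, nesting, and labels vary by machine and architecture.)

```
00001000-0009fbff : System RAM
000a0000-000fffff : Reserved
  000f0000-000fffff : System ROM
00100000-7fedffff : System RAM
  01000000-01a00000 : Kernel code
fed00000-fed003ff : HPET 0
```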

By parsing /proc/iomem, my test program can:

  1. Identify which physical regions are safe to work with (like RAM already allocated to my process),
  2. Avoid regions that are reserved for hardware or critical firmware,
  3. Adapt dynamically to different machines and configurations.

This is especially important for multi-architecture support. While examples here often look like x86 (because /dev/mem has a long history there), the concept of mapping I/O regions isn’t x86-specific. On ARM, RISC-V, or others, you’ll see different labels… But the principle remains exactly the same.

In short: /proc/iomem is your treasure map, and the first rule of treasure hunting is “don’t blow up the ship while digging for gold.”
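
Parsing the map takes only a few lines. A minimal sketch (parse_iomem is a hypothetical helper; the real devmem_test tool has its own implementation):

```python
import re

IOMEM_LINE = re.compile(r"([0-9a-f]+)-([0-9a-f]+) : (.+)")

def parse_iomem(text):
    """Turn top-level /proc/iomem lines into (start, end, name) tuples.
    Indented lines are nested sub-ranges and are skipped here."""
    regions = []
    for line in text.splitlines():
        if line.startswith(" "):  # nested sub-range, e.g. "Kernel code"
            continue
        m = IOMEM_LINE.match(line)
        if m:
            regions.append((int(m.group(1), 16), int(m.group(2), 16), m.group(3)))
    return regions

# Made-up excerpt; in practice, read the real /proc/iomem
sample = (
    "00001000-0009fbff : System RAM\n"
    "000a0000-000fffff : Reserved\n"
    "00100000-7fedffff : System RAM\n"
    "  01000000-01a00000 : Kernel code\n"
)
ram = [(s, e) for s, e, name in parse_iomem(sample) if name == "System RAM"]
print(ram)
```

Filtering for "System RAM" (and then cross-checking against pages your own process owns) is how the test narrows the map down to regions it can touch.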

The Problem of Contiguous Physical Pages

Up to this point, my work focused on single-page operations. I wasn’t hand-picking physical addresses or trying to be clever about where memory came from. Instead, the process was simple and safe:

  1. Allocate a buffer in userspace, using mmap() so it’s page-aligned,
  2. Touch the page to make sure the kernel really backs it with physical memory,
  3. Walk /proc/self/pagemap to trace which physical pages back the virtual address in the buffer.

This gives me full visibility into how my userspace memory maps to physical memory. Since the buffer was created through normal allocation, it’s mine to play with, there’s no risk of trampling over the kernel or other userspace processes.
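
The pagemap step deserves a concrete illustration. Each entry in /proc/self/pagemap is a 64-bit word: bit 63 says whether the page is present, and bits 0–54 hold the page frame number (PFN). A sketch of the decoding (pagemap_entry_to_phys is a name I made up for illustration):

```python
PAGE_SIZE = 4096

def pagemap_entry_to_phys(entry, vaddr):
    """Decode one 64-bit /proc/self/pagemap entry into the physical
    address backing vaddr, or None if the page is not present."""
    if not (entry >> 63) & 1:      # bit 63: page present
        return None
    pfn = entry & ((1 << 55) - 1)  # bits 0-54: page frame number
    return pfn * PAGE_SIZE + (vaddr % PAGE_SIZE)

# Synthetic entry: present bit set, PFN 0x1234
entry = (1 << 63) | 0x1234
print(hex(pagemap_entry_to_phys(entry, 0x7f00_0000_0a10)))  # → 0x1234a10
```

Reading the real file means seeking to (vaddr // PAGE_SIZE) * 8 in /proc/self/pagemap and unpacking 8 bytes, which requires appropriate privileges on modern kernels.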

This worked beautifully for basic tests:

  • Pick a single page in the buffer,
  • Run a tiny read/write cycle through /dev/mem,
  • Verify the result,
  • Nothing explodes.

But then came the next challenge: What if a read or write crosses a physical page boundary?

Why boundaries matter

The Linux VFS layer doesn’t treat a read or write syscall as one giant, indivisible action. Instead, it splits large operations into chunks, moving through pages one at a time.

For example:

  • I request 10 KB from /dev/mem,
  • The first 4 KB comes from physical page A,
  • The next 4 KB comes from physical page B,
  • The last 2 KB comes from physical page C.

If the driver mishandles the transition between pages, I’d never notice unless my test forces it to cross that boundary. It’s like testing a car by only driving in a straight line: Everything looks fine… Until you try to turn the wheel.

To properly test /dev/mem, I need a buffer backed by at least two physically contiguous pages. That way, a single read or write naturally crosses from one physical page into the next… exactly the kind of situation where subtle bugs might hide.

And that’s when the real nightmare began.

Why this is so difficult

At first, this seemed easy. I thought:

“How hard can it be? Just allocate a buffer big enough, like 128 KB, and somewhere inside it, there must be two contiguous physical pages.”

Ah, the sweet summer child optimism. The harsh truth: modern kernels actively work against this happening by accident. It’s not because the kernel hates me personally (though it sure felt like it). It’s because of its duty to prevent memory fragmentation.

When you call brk() or mmap(), the kernel:

  1. Uses a buddy allocator to manage blocks of physical pages,
  2. Actively spreads allocations apart to keep them tidy,
  3. Reserves contiguous ranges for things like hugepages or DMA.

From the kernel’s point of view:

  • This keeps the system stable,
  • Prevents large allocations from failing later,
  • And generally makes life good for everyone.

From my point of view? It’s like trying to find two matching socks in a dryer while it is drying them.

Playing the allocation lottery

My first approach was simple: keep trying until luck strikes.

  1. Allocate a 128 KB buffer,
  2. Walk /proc/self/pagemap to see where all pages landed physically,
  3. If no two contiguous pages are found, free it and try again.

Statistically, this should work eventually. In reality? After thousands of iterations, I’d still end up empty-handed. It felt like buying lottery tickets and never even winning a free one.

The kernel’s buddy allocator is very good at avoiding fragmentation. Two consecutive physical pages are far rarer than you’d think, and that’s by design.
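
The lottery check itself is trivial once the PFNs are in hand (hedged sketch; the helper name is mine): walk the list of physical frame numbers backing the buffer and look for a neighbor that is exactly one frame away.

```python
def find_contiguous_pair(pfns):
    """Return the index of the first page whose successor in the buffer
    is also its physical neighbor (PFN + 1), or None if no pair exists."""
    for i in range(len(pfns) - 1):
        if pfns[i + 1] == pfns[i] + 1:
            return i
    return None

# In practice the PFNs come from /proc/self/pagemap; these are made up:
print(find_contiguous_pair([0x81F3, 0x4402, 0x4403, 0x9AB0]))  # → 1
```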

Trying to confuse the allocator

Naturally, my next thought was:

“If the allocator is too clever, let’s mess with it!”

So I wrote a perturbation routine:

  • Allocate a pile of small blocks,
  • Touch them so they’re actually backed by physical pages,
  • Free them in random order to create “holes.”

The hope was to trick the allocator into giving me contiguous pages next time. The result? It sometimes worked, but unpredictably. 4k attempts gave me >80% success. Not reliable enough for a test suite where failures must mean a broken driver, not a grumpy kernel allocator.

The options I didn’t want

There are sure-fire ways to get contiguous pages:

  • Writing a kernel module and calling alloc_pages().
  • Using hugepages.
  • Configuring CMA regions at boot.

But all of these require special setup or kernel cooperation. My goal was a pure userspace test, so they were off the table.

A new perspective: software MMU

Finally, I relaxed my original requirement. Instead of demanding two pages that are both physically and virtually contiguous, I only needed them to be physically contiguous somewhere in the buffer.

From there, I could build a tiny software MMU:

  • Find a contiguous physical pair using /proc/self/pagemap,
  • Expose them through a simple linear interface,
  • Run the test as if they were virtually contiguous.

This doesn’t eliminate the challenge, but it makes it practical. No kernel hacks, no special boot setup, just a bit of clever user-space logic.
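
In code, the software MMU amounts to little more than a page table with two entries. A self-contained sketch (the class and names are mine, not the devmem_test implementation), using an in-memory buffer to stand in for the mmap()ed pages:

```python
PAGE_SIZE = 4096

class SoftMMU:
    """Present two pages that are physically contiguous, but scattered
    inside a userspace buffer, as one linear two-page region."""
    def __init__(self, buf, first_idx, second_idx, page_size=PAGE_SIZE):
        view = memoryview(buf)
        self.page_size = page_size
        self.pages = [view[first_idx * page_size:(first_idx + 1) * page_size],
                      view[second_idx * page_size:(second_idx + 1) * page_size]]

    def read(self, offset, count):
        """Read count bytes at linear offset, walking page by page,
        just like a boundary-crossing /dev/mem transfer would."""
        out = bytearray()
        while count > 0:
            page, off = divmod(offset, self.page_size)
            n = min(self.page_size - off, count)
            out += self.pages[page][off:off + n]
            offset += n
            count -= n
        return bytes(out)

# Hypothetical pagemap result: pages 2 and 0 of a 3-page buffer turned
# out to be the physically contiguous pair.
buf = b"\xaa" * 4096 + b"\xbb" * 4096 + b"\xcc" * 4096
mmu = SoftMMU(buf, 2, 0)
data = mmu.read(4090, 12)  # crosses the simulated page boundary
```

A read through this view naturally straddles the boundary between the two physical pages, which is precisely the case the /dev/mem tests need to exercise.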

From Theory to Test Code

All this theory eventually turned into a real test tool, because staring at /proc/self/pagemap is fun… but only for a while. The test lives here:

github.com/alessandrocarminati/devmem_test

It’s currently packaged as a Buildroot module, which makes it easy to run on different kernels and architectures without messing up your main system. The long-term goal is to integrate it into the kernel’s selftests framework, so these checks can run as part of the regular Linux testing pipeline. For now, it’s a standalone sandbox where you can:

  • Experiment with /dev/mem safely (on a test machine!),
  • Play with /proc/self/pagemap and see how virtual pages map to physical memory,
  • Try out the software MMU idea without needing kernel modifications.

And expect it to still be a work in progress.

ELISA Project - Open Source Summit: Tokyo, Japan 2025

ELISA Project at Open Source Summit: Tokyo, Japan 2025

By Ambassadors, Blog, Critical Software Summit, Safety-Critical Software Summit

Open Source Summit is the place to connect directly with the people shaping open source – maintainers, developers, and community leaders, while learning from their experience and insights. It’s an opportunity to discover emerging technologies, explore practical solutions you can apply immediately, and collaborate on ideas and code that drive projects forward. Whether you’re looking to grow your skills, expand your network, or advance your career, the summit offers a unique environment to learn, contribute, and be part of the momentum powering the future of open source.

The ELISA Project will be represented by our community members in the Safety-Critical track.

This track explores the intersection of open source and safety standards, covering best practices for regulatory compliance, security updates, and safety engineering. Sessions will delve into requirements traceability, quality assessments, safety analysis methodologies, and technical development for safety-critical systems. Learn more.

Track Highlights:

1. Keynote: Space Grade Linux: Building a Safer, Open Source Future for Space Systems – Ramon Roche, General Manager, Dronecode Foundation

As launch cadence increases and development cycles tighten, the space industry turns to open source to meet the moment. Enter Space Grade Linux (SGL) — an initiative under the ELISA Project aimed at creating a reusable, safety-aware Linux foundation for spaceflight systems.

This talk will introduce the goals and current status of SGL, highlighting three foundational focus areas:
1. Kernel Configuration – Defining a shared starting point for space-focused Linux systems, emphasizing predictability, determinism, and traceability.
2. Booting into Linux – Exploring the safety-critical implications of system bring-up and strategies for improving reliability in space-grade deployments.
3. Userspace Strategy – Discussing early-stage decisions around minimal runtime environments, supervision, and what a safe, maintainable userspace might look like.

Attendees will get a hands-on overview of what’s already available in the GitHub repository, including a Yocto-based reference implementation and working kernel configuration. More importantly, they’ll learn how to get involved — through technical contributions, architecture discussions, or community collaboration.

2. A Human-Centric Quality Assurance Process for Open Source Software Projects – Wendi Urribarri & Carlos Ramirez, Woven by Toyota – Wednesday December 10, 2025 11:10 – 11:50 JST

As autonomous systems become part of our daily environments, ensuring software quality is critical, especially when defects can cause physical harm. In safety-critical domains like automotive, functional safety must be supported by development processes that ensure high quality and reliability, not only for embedded systems but also, in some cases, for software tools.

This talk presents an approach to support quality assurance of open source software projects. What sets this effort apart is the proposed integration of a human-centric quality strategy, rooted in human-error research informed by cognitive psychology and human factors, in the development process of these projects. We introduce a defect prediction engine designed to anticipate common error modes, enabling proactive defect prevention, focused code reviews, and targeted documentation checks. Our approach offers a fresh perspective on improving software quality across domains while aligning with the expectations of safety-critical frameworks.

3. Comparison and Proposal of Vulnerability Management Approaches in Yocto-Based Linux for the CRA – Akihiko Takahashi, Fujitsu Limited – Wednesday December 10, 2025 12:00 – 12:40 JST

Fujitsu has long provided multilateral support for SPDX, especially through activities in Yocto and SPDX. Since 2016, we have been among the maintainers of meta-spdxscanner, enabling SPDX functionality for the Yocto Project. In 2024, we joined OpenSSF to enhance the security and trustworthiness of the global software supply chain. This marked a step forward in our continued dedication to this mission.

Due to the EU CRA, manufacturers in the EU will be obligated to report vulnerabilities starting in September 2026. In the context of Yocto, several vulnerability management approaches are being considered, such as cve-check, yocto-vex-check, and third-party tools. However, as of now, there is no clearly established best practice.
In this session, we will apply these vulnerability management approaches to practical use cases relevant to manufacturers covered by the CRA. The comparison includes the use of SBOMs and VEX to evaluate the effectiveness of each method. Through this analysis, we will clarify the strengths and challenges of vulnerability management in Yocto-based Linux and propose which approach is most suitable depending on the context.

4. Driving Safety Forward: Lessons Learned From Deploying OSS in Real-world Automotive – Jaylin Yu, EMQ – Wednesday December 10, 2025 14:00 – 14:40 JST

While OSS in Automotive is seen as the holy grail to solve SDV complexity challenges with faster time to market and higher performance, it still lacks practical real-world examples and showcases that address OSS usage in compliance with the stringent safety and security demands of Automotive.
In this talk, the author shares his real-world story of bringing OSS into mass production vehicles. This includes the impact of a healthy open-source community and how academic research helped solve security gaps, leading to increased system stability. This also embraces the impact of the software supply chain, providing a proven approach, refined through failures, helping to lower dependency risk for MQTT-based remote vehicle diagnostics.
The session is rounded out by highlighting the link between system utilities and safety functions, covering time synchronization, dependency management, and data integrity within a Linux system, which impact the selection of a file system, and what happens when a customer suddenly requires STR.
The audience will leave the session with a holistic impression of what to consider when creating a secure, safe, OSS-based SDV automotive system.

5. Decoding Safe(ty) Linux Architectural Approaches for Critical Systems – Philipp Ahmann, Etas GmbH – Wednesday December 10, 2025 14:50 – 15:30 JST

For years, diverse interpretations of what it means to “enable Linux in safety applications” have existed – an observation spanning multiple industries but particularly pronounced in automotive. With its long history of Linux adoption (like AGL) and current software-defined vehicle (SDV) innovation challenges, the automotive sector is undergoing a transition, with both manufacturers and suppliers seeking to implement Linux in safety-critical production systems as well.

This presentation intends to resolve the confusion around the terminology “safety Linux” versus “safe Linux”, clarifying where safety responsibility is allocated to Linux itself versus handled at the system level. By examining architectural system concepts currently implemented in products or under development, the author cuts through marketing rhetoric to provide clear distinctions between approaches. It showcases solutions employed by distributors and identifies crucial elements for safety argumentation, such as watchdogs and monitoring.

Attendees will gain practical insights for evaluating safety approaches in Linux-based systems, including key questions to ask when assessing different safety concepts.

6. LF Energy 101: How Open Source Is Powering the Digital Energy Transition – Darshan Chawda & Nao Nishijima, Hitachi – Wednesday December 10, 2025 16:40 – 17:20 JST

The current energy sector must shift from legacy control systems, which are rigid and hardware-bound, to digital, software-defined systems that enable greater sustainability, resilience, and intelligence. To support this transformation, LF Energy, a Linux Foundation initiative, has empowered industrial partners for over seven years to collaborate through community-driven OSS projects, accelerating innovation across the digital energy ecosystem.

This talk offers a beginner-friendly introduction to LF Energy and its key projects, with a demo highlighting their role in virtualizing substations, forecasting energy, and simplifying operations through automation. These projects show how IT and AI technologies enhance grid safety, which is critical because failures in energy systems can disrupt public infrastructure. However, unlike pure IT systems, energy infrastructure relies heavily on physical hardware, making large-scale digital adoption more complex. LF Energy’s open innovation model, focused on IT/OT convergence, helps overcome these barriers by enabling redundancy, virtualization, and collaborative development, which leads to a more reliable and intelligent energy future.

Learn more about the event and register here.

ELISA Project - Blog: When Kernel Comments Get Weird: The Tale of `drivers/char/mem.c`

When Kernel Comments Get Weird: The Tale of `drivers/char/mem.c`

By Ambassadors, Blog, Working Group

This blog is written by Alessandro Carminati, Principal Software Engineer at Red Hat and lead for the ELISA Project’s Linux Features for Safety-Critical Systems (LFSCS) WG.

As part of the ELISA community, we spend a good chunk of our time spelunking through the Linux kernel codebase. It’s like code archeology: you don’t always find treasure, but you _do_ find lots of comments left behind by developers from the ’90s that make you go, “Wait… really?”

One of the ideas we’ve been chasing is to make kernel comments a bit smarter: not only human-readable, but also machine-readable. Imagine comments that could be turned into tests, so they’re always checked against reality. Less “code poetry from 1993”, more “living documentation”.

Speaking of code poetry, here’s one gem we stumbled across in `mem.c`:

```
/* The memory devices use the full 32/64 bits of the offset,
 * and so we cannot check against negative addresses: they are ok.
 * The return value is weird, though, in that case (0).
 */
```

This beauty has been hanging around since **Linux 0.99.14**… back when Bill Clinton was in his first year in office, “Mosaic” was the hot new browser, and the PDP-11 was still being produced and sold.

Back then, it made sense, and reflected exactly what the code did.

Fast-forward thirty years, and the comment still kind of applies,
but mostly in obscure corners of the architecture zoo.
What about the CPUs people actually use every day?

 

```
$ cat lseek.asm
BITS 64

%define SYS_read    0
%define SYS_write   1
%define SYS_open    2
%define SYS_lseek   8
%define SYS_exit   60

; flags
%define O_RDONLY    0
%define SEEK_SET    0

section .data
    path:    db "/dev/mem",0
section .bss
    align 8
    buf:     resq 1

section .text
global _start
_start:
    mov     rax, SYS_open
    lea     rdi, [rel path]
    xor     esi, esi
    xor     edx, edx
    syscall
    mov     r12, rax        ; save fd in r12

    mov     rax, SYS_lseek
    mov     rdi, r12
    mov     rsi, 0x8000000000000001
    xor     edx, edx
    syscall

    mov     [rel buf], rax

    mov     rax, SYS_write
    mov     edi, 1
    lea     rsi, [rel buf]
    mov     edx, 8
    syscall

    mov     rax, SYS_exit
    xor     edi, edi
    syscall
$ nasm -f elf64 lseek.asm -o lseek.o
$ ld lseek.o -o lseek
$ sudo ./lseek| hexdump -C
00000000  01 00 00 00 00 00 00 80                           |........|
00000008
$ # this is not what I expect, let's double check
$ sudo gdb ./lseek
GNU gdb (Fedora Linux) 16.3-1.fc42
Copyright (C) 2024 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./lseek...
(No debugging symbols found in ./lseek)
(gdb) b _start
Breakpoint 1 at 0x4000b0
(gdb) r
Starting program: /tmp/lseek

Breakpoint 1, 0x00000000004000b0 in _start ()
(gdb) x/30i $pc
=> 0x4000b0 <_start>:   mov    $0x2,%eax
   0x4000b5 <_start+5>: lea    0xf44(%rip),%rdi        # 0x401000
   0x4000bc <_start+12>:        xor    %esi,%esi
   0x4000be <_start+14>:        xor    %edx,%edx
   0x4000c0 <_start+16>:        syscall
   0x4000c2 <_start+18>:        mov    %rax,%r12
   0x4000c5 <_start+21>:        mov    $0x8,%eax
   0x4000ca <_start+26>:        mov    %r12,%rdi
   0x4000cd <_start+29>:        movabs $0x8000000000000001,%rsi
   0x4000d7 <_start+39>:        xor    %edx,%edx
   0x4000d9 <_start+41>:        syscall
   0x4000db <_start+43>:        mov    %rax,0xf2e(%rip)        # 0x401010
   0x4000e2 <_start+50>:        mov    $0x1,%eax
   0x4000e7 <_start+55>:        mov    $0x1,%edi
   0x4000ec <_start+60>:        lea    0xf1d(%rip),%rsi        # 0x401010
   0x4000f3 <_start+67>:        mov    $0x8,%edx
   0x4000f8 <_start+72>:        syscall
   0x4000fa <_start+74>:        mov    $0x3c,%eax
   0x4000ff <_start+79>:        xor    %edi,%edi
   0x400101 <_start+81>:        syscall
   0x400103:    add    %al,(%rax)
   0x400105:    add    %al,(%rax)
   0x400107:    add    %al,(%rax)
   0x400109:    add    %al,(%rax)
   0x40010b:    add    %al,(%rax)
   0x40010d:    add    %al,(%rax)
   0x40010f:    add    %al,(%rax)
   0x400111:    add    %al,(%rax)
   0x400113:    add    %al,(%rax)
   0x400115:    add    %al,(%rax)
(gdb) b *0x4000c2
Breakpoint 2 at 0x4000c2
(gdb) b *0x4000db
Breakpoint 3 at 0x4000db
(gdb) c
Continuing.

Breakpoint 2, 0x00000000004000c2 in _start ()
(gdb) i r
rax            0x3                 3
rbx            0x0                 0
rcx            0x4000c2            4194498
rdx            0x0                 0
rsi            0x0                 0
rdi            0x401000            4198400
rbp            0x0                 0x0
rsp            0x7fffffffe3a0      0x7fffffffe3a0
r8             0x0                 0
r9             0x0                 0
r10            0x0                 0
r11            0x246               582
r12            0x0                 0
r13            0x0                 0
r14            0x0                 0
r15            0x0                 0
rip            0x4000c2            0x4000c2 <_start+18>
eflags         0x246               [ PF ZF IF ]
cs             0x33                51
ss             0x2b                43
ds             0x0                 0
es             0x0                 0
fs             0x0                 0
gs             0x0                 0
fs_base        0x0                 0
gs_base        0x0                 0
(gdb) # fd is just fine rax=3 as expected.
(gdb) c
Continuing.

Breakpoint 3, 0x00000000004000db in _start ()
(gdb) i r
rax            0x8000000000000001  -9223372036854775807
rbx            0x0                 0
rcx            0x4000db            4194523
rdx            0x0                 0
rsi            0x8000000000000001  -9223372036854775807
rdi            0x3                 3
rbp            0x0                 0x0
rsp            0x7fffffffe3a0      0x7fffffffe3a0
r8             0x0                 0
r9             0x0                 0
r10            0x0                 0
r11            0x246               582
r12            0x3                 3
r13            0x0                 0
r14            0x0                 0
r15            0x0                 0
rip            0x4000db            0x4000db <_start+43>
eflags         0x246               [ PF ZF IF ]
cs             0x33                51
ss             0x2b                43
ds             0x0                 0
es             0x0                 0
fs             0x0                 0
gs             0x0                 0
fs_base        0x0                 0
gs_base        0x0                 0
(gdb) # According to that comment, rax should have been 0, but it is not.
(gdb) c
Continuing.
[Inferior 1 (process 186746) exited normally]
(gdb) 
```

Not so much. Seeking to `0x8000000000000001`
returns `0x8000000000000001`, not `0` as anticipated in the comment.
We’re basically facing the kernel version of that “Under Construction”
GIF on websites from the 90s, still there, but mostly just nostalgic
decoration now.

## The Mysterious Line in `read_mem`

Let’s zoom in on one particular bit of code in [`read_mem`](https://elixir.bootlin.com/linux/v6.17-rc2/source/drivers/char/mem.c#L82):

```
	phys_addr_t p = *ppos;
	/* ... other code ... */
	if (p != *ppos)
		return 0;
```

At first glance, this looks like a no-op; why would `p` be different from
`*ppos` when you just copied it?
It’s like testing if gravity still works by dropping your phone…
**spoiler: it does.**

But as usual with kernel code, the weirdness has a reason.

## The Problem: Truncation on 32-bit Systems

Here’s what’s going on:

- `*ppos` is a `loff_t`, which is a 64-bit signed integer.
- `p` is a `phys_addr_t`, which holds a physical address.

On a 64-bit system, both are 64 bits wide. The assignment is lossless,
the check can never trigger, and compilers just toss it out.

But on a 32-bit system, `phys_addr_t` is only 32 bits. Assign a big 64-bit
offset to it, and **boom**, the top half vanishes.
Truncated, like your favorite TV series canceled after season 1.

That `if (p != *ppos)` check is the safety net.
It spots when truncation happens and bails out early, instead of letting
some unlucky app read from la-la land.

## Assembly Time: 64-bit vs. 32-bit

On 64-bit builds (say, AArch64), the compiler optimizes away the check.

```
┌ 736: sym.read_mem (int64_t arg2, int64_t arg3, int64_t arg4);
│ `- args(x1, x2, x3) vars(13:sp[0x8..0x70])
│           0x08000b10      1f2003d5       nop
│           0x08000b14      1f2003d5       nop
│           0x08000b18      3f2303d5       paciasp
│           0x08000b1c      fd7bb9a9       stp x29, x30, [sp, -0x70]!
│           0x08000b20      fd030091       mov x29, sp
│           0x08000b24      f35301a9       stp x19, x20, [var_10h]
│           0x08000b28      f40301aa       mov x20, x1
│           0x08000b2c      f55b02a9       stp x21, x22, [var_20h]
│           0x08000b30      f30302aa       mov x19, x2
│           0x08000b34      750040f9       ldr x21, [x3]
│           0x08000b38      e10302aa       mov x1, x2
│           0x08000b3c      e33700f9       str x3, [var_68h]        ; phys_addr_t p = *ppos;
│           0x08000b40      e00315aa       mov x0, x21
│           0x08000b44      00000094       bl valid_phys_addr_range
│       ┌─< 0x08000b48      40150034       cbz w0, 0x8000df0        ;if (!valid_phys_addr_range(p, count))
│       │   0x08000b4c      00000090       adrp x0, segment.ehdr
│       │   0x08000b50      020082d2       mov x2, 0x1000
│       │   0x08000b54      000040f9       ldr x0, [x0]
│       │   0x08000b58      01988152       mov w1, 0xcc0
│       │   0x08000b5c      f76303a9       stp x23, x24, [var_30h]
[...]
```
Nothing to see here, move along.
But on 32-bit builds (like old-school i386), the check shows up loud and 
proud in the assembly. 
```
[0x080003e0]> pdf
┌ 392: sym.read_mem (int32_t arg_8h);
│ `- args(sp[0x4..0x4]) vars(5:sp[0x14..0x24])
│           0x080003e0      55             push ebp
│           0x080003e1      89e5           mov ebp, esp
│           0x080003e3      57             push edi
│           0x080003e4      56             push esi
│           0x080003e5      53             push ebx
│           0x080003e6      83ec14         sub esp, 0x14
│           0x080003e9      8955f0         mov dword [var_10h], edx
│           0x080003ec      8b5d08         mov ebx, dword [arg_8h]
│           0x080003ef      c745ec0000..   mov dword [var_14h], 0
│           0x080003f6      8b4304         mov eax, dword [ebx + 4] 
│           0x080003f9      8b33           mov esi, dword [ebx]     ; phys_addr_t p = *ppos;
│           0x080003fb      85c0           test eax, eax
│       ┌─< 0x080003fd      7411           je 0x8000410             ; if (!valid_phys_addr_range(p, count))
│     ┌┌──> 0x080003ff      8b45ec         mov eax, dword [var_14h]
│     ╎╎│   0x08000402      83c414         add esp, 0x14
│     ╎╎│   0x08000405      5b             pop ebx
│     ╎╎│   0x08000406      5e             pop esi
│     ╎╎│   0x08000407      5f             pop edi
│     ╎╎│   0x08000408      5d             pop ebp
│     ╎╎│   0x08000409      c3             ret
[...]
```

The CPU literally does a compare-and-jump to enforce it. So yes, this is a _real_ guard, not some leftover fluff.

## Return Value Oddities

Now, here’s where things get even funnier. If the check fails in `read_mem`, the function returns `0`. That’s “no bytes read”, which in file I/O land is totally fine.

But in the twin function `write_mem`, the same situation returns `-EFAULT`. That’s kernel-speak for “Nope, invalid address, stop poking me”.

So, reading from a bad address? You get a polite shrug. Writing to it? You get a slap on the wrist. Fair enough, writing garbage into memory is way more dangerous than failing to read it. Still, this asymmetry is probably something worth fixing up.

## Wrapping It Up

This little dive shows how a single “weird” line of code carries decades of context, architecture quirks, type definitions, and evolving assumptions.
It also shows why comments like the one from 0.99.14 are dangerous: they freeze a moment in time, but reality keeps moving.

Our mission in the ELISA Architecture WG is to bring comments back to life: keep them up-to-date, tie them to tests, and make sure they still tell the truth. Because otherwise, thirty years later, we’re all squinting at a line saying “the return value is weird though” and wondering if the developer was talking about the code… or just their day.

And now, a brief word from our *sponsors* (a.k.a. me in a different hat): When I’m not digging up ancient kernel comments with the Architecture WG, I’m also leading the Linux Features for Safety-Critical Systems (LFSCS) WG. We’re cooking up some pretty exciting stuff there too.

So if you enjoy the kind of archaeology/renovation work we’re doing there, come check out LFSCS as well: same Linux, different adventure.

ELISA Project Welcomes Simone Weiss to the Governing Board!

By Ambassadors, Blog

We are excited to announce that Simone Weiss, Product Owner at Elektrobit, has joined the Governing Board of the Enabling Linux in Safety Applications (ELISA) Project. She brings a wealth of experience in functional safety, embedded systems, and open source leadership that will help guide ELISA’s mission to enable the use of Linux in safety-critical applications. One of Simone’s first tasks will be to lead the creation of a glossary in the ELISA Project directory.

Elektrobit has been an active contributor to the ELISA Project for several years, and Simone’s appointment reflects the company’s commitment to advancing the use of open source technologies in industries such as automotive, industrial, medical, and beyond.

“It’s an honor to join ELISA’s Governing Board. I’m looking forward to working with the community to support collaboration between industry and safety experts and drive broader adoption of Linux in safety-critical domains.” – Simone Weiss, Elektrobit

The ELISA Governing Board plays a critical role in setting the project’s strategic direction, ensuring sustainability, and supporting the vibrant technical community that underpins ELISA’s success. With the addition of Simone, the board strengthens its collective expertise and reaffirms its dedication to transparency, collaboration, and safety excellence.

Simone recently traveled to Open Source Summit North America, which happened in Denver, Colorado in June, to attend her first in-person Governing Board meeting. 

ELISA Project Governing Board 2025

Please join us in welcoming Simone to the ELISA Project Governing Board!

Documenting the Design of the Linux Kernel - Chuck Wolber, The Boeing Company; Kate Stewart, The Linux Foundation; Gabriele Paoloni, Red Hat

Talk Highlights: Documenting the Design of the Linux Kernel – Chuck Wolber, The Boeing Company; Kate Stewart, The Linux Foundation; Gabriele Paoloni, Red Hat

By Ambassadors, Blog, Critical Software Summit, Industry Conference, Safety-Critical Software Summit

Open Source Summit North America, which happened on June 23-25 in Denver, Colorado, had a total of 1,535 in-person attendees (47% holding technical positions) representing 732 organizations. This year’s event featured vibrant conversations in the Safety-Critical Software track sponsored by ELISA Project member Honda.

Safety-critical systems — whether in automotive, industrial, medical, or aerospace — are increasingly adopting open source technologies. The sessions in this dedicated track tackled real-world challenges and shared solutions around functional safety, tool qualification, compliance, and certifiability of open source software.

Highlights included:

  • Panel discussions on bridging the gap between open source innovation and safety assurance

  • Technical deep dives into applying safety analysis methods to Linux-based systems

  • Case studies from the ELISA Project working groups showcasing progress in automotive, medical, and industrial domains

This week we are highlighting the talk “Documenting the Design of the Linux Kernel – Chuck Wolber, The Boeing Company; Kate Stewart, The Linux Foundation; Gabriele Paoloni, Red Hat” from the Open Source Summit, North America 2025.

Documenting the Design of the Linux Kernel – Chuck Wolber, The Boeing Company; Kate Stewart, The Linux Foundation; Gabriele Paoloni, Red Hat

As Linux adoption grows in safety-critical industries like aerospace and automotive, structured design documentation and traceability become increasingly important. This talk presented the ELISA Project’s efforts to reverse-engineer and document low-level developer intent within the Linux kernel using a new, machine-readable requirements template.

Building on earlier discussions at Linux Plumbers 2024 and the December ELISA Workshop at NASA Goddard, the session outlined a proposed framework for capturing “testable expectations” in line with kernel development norms. The goal is to support pass/fail test development, improve test precision using code coverage, and eventually link low-level requirements to higher-level system design.

The speakers showcased early examples from the kernel’s tracing subsystem, discussed the balance between testability and maintainability, and explained how the effort helps address kernel technical debt and reduce certification barriers. The proposal also seeks to avoid burdening maintainers by decoupling documentation from core development.

Key topics included:

  • A breakdown of the proposed requirement template structure and fields
  • Examples of real-world kernel functions instrumented with low-level requirements
  • Integration plans with KernelCI for test coverage and traceability
  • Challenges encountered, such as avoiding pseudo-code duplication and handling evolving code
  • Community feedback from upstream maintainers and next steps toward broader adoption

To learn more and get involved in the Safety Architecture Working Group, check here.

What’s Next?

We’re excited to continue the conversations sparked at OSSummit through our public working groups, monthly meetings and upcoming events. Join the ELISA Project at Open Source Summit Europe, happening on August 25-27 in Amsterdam, at the Safety-Critical Software Summit. Check out the schedule or visit the ELISA Project ambassadors and leaders at booth #29. Learn more here.

Learn more about the conference or register for it at the main Open Source Summit Europe page.

For more ELISA Project updates, subscribe to the LinkedIn page, YouTube channel, or join the community on our new Discord channel!