Key Takeaways from the Safety Critical Track at Open Source Summit Europe 2025 – 1


The ELISA Project participated in Open Source Summit Europe 2025 (August 25–27, Amsterdam), the premier gathering for open source developers, technologists, and community leaders. With over 2,000 attendees representing 900+ organizations, the event showcased the strength, diversity, and innovation of the ecosystem.

For ELISA (Enabling Linux in Safety Applications), the summit was an invaluable opportunity to engage with developers, architects, and functional safety experts working at the intersection of Linux and safety-critical systems. ELISA was featured prominently in the Safety-Critical Software Summit, where sessions explored topics such as kernel safety, automotive innovation, and compliance and trust in regulated environments.

Sessions covered a wide range of important topics, including kernel safety (identifying weaknesses, fault propagation, and Linux as a safety element out of context), automotive innovation (safe platforms, prototyping frameworks, and software-defined vehicles), and compliance and trust (continuous compliance, traceability, and statistical methods in safety analysis). These talks reflected the growing maturity of the ecosystem and highlighted the shared challenges the community is tackling, from technical methodologies to regulatory alignment.

This week we highlight two talks from the Safety-Critical Software Summit track:

Looking at Linux as a SEooC – Kate Stewart, The Linux Foundation; Nicole Pappler, AlektoMetis & Chuck Wolber, The Boeing Company

Linux is increasingly deployed in safety-critical systems as a Safety Element out of Context (SEooC), yet its scale and rapid evolution (thousands of contributors and near-continuous upstream change) pose unique assurance challenges. This talk explains what SEooC means in practice, why it should be understood as a “safety element with assumed context,” and the implications for integrators: a SEooC is not plug-and-play. System developers remain responsible for confirming compatibility, reviewing the safety manual and assumptions of use, ensuring traceability to their own requirements, configuring the element correctly, and validating it within their specific hazard and timing constraints. The talk frames the work through design assurance (hazard identification), design mitigation (requirements-based engineering), and implementation assurance, highlighting current gaps between kernel behavior and requirements-derived tests.

The session outlines community efforts to close those gaps: defining low-level Linux kernel requirements with maintainer sign-off; advancing coverage (statement, decision, MC/DC) using LLVM-based kernel coverage and object-code mapping; and packaging evidence with an SPDX functional-safety profile. Speakers also address non-determinism (focusing on deterministic outcomes, minimal configurations) and introduce knaf for call-tree analysis from specific entry points. 

Overall, these efforts show how scaling requirements, testing, and coverage within open collaboration can yield reusable evidence, strengthen kernel reliability, and align with a substantial portion of DO-178C DAL A objectives across industries.

Identifying Safety Weaknesses and Fault Propagation in the Linux Kernel – Igor Stoppa, NVIDIA

With growing interest in using Linux in safety-critical domains such as automotive, traditional functional safety practices need to be applied to an open source environment. One such practice is fault injection, where failures are deliberately introduced to study how the system reacts.

This talk by Igor Stoppa, NVIDIA, introduced a tool and methodology for injecting controlled faults into Linux kernel data structures. The goal is to uncover subtle forms of degradation that may not trigger a crash but can compromise safety goals, such as delayed system responses. By running repeatable experiments, the approach makes it possible to check whether safety mechanisms detect and report problems consistently and within required timing constraints.

The work highlights both the challenges of applying safety analysis to a large, fast-moving project like the Linux kernel and the opportunities to integrate such testing into the regular release process. Over time, this could provide valuable data on fault propagation, improve kernel reliability, and strengthen Linux’s role in safety-critical applications.

What’s Next?

The Safety-Critical Software track at Open Source Summit Europe 2025 highlighted the important progress being made toward making Linux a reliable choice in regulated and safety-sensitive domains. From exploring Linux as a Safety Element out of Context to fault injection techniques that expose hidden weaknesses, these discussions show how the community is tackling complex challenges with rigor and collaboration. 

To learn more, be sure to check our upcoming blogs where we will cover more sessions from the track. If you are interested in shaping this work, we invite you to join ELISA working groups and contribute to advancing safety practices in open source together.

When Kernel Comments Get Weird: The Tale of `drivers/char/mem.c`


This blog is written by Alessandro Carminati, Principal Software Engineer at Red Hat and lead for the ELISA Project’s Linux Features for Safety-Critical Systems (LFSCS) WG.

As part of the ELISA community, we spend a good chunk of our time spelunking through the Linux kernel codebase. It’s like code archeology: you don’t always find treasure, but you _do_ find lots of comments left behind by developers from the ’90s that make you go, “Wait… really?”

One of the ideas we’ve been chasing is to make kernel comments a bit smarter: not only human-readable, but also machine-readable. Imagine comments that could be turned into tests, so they’re always checked against reality. Less “code poetry from 1993”, more “living documentation”.

Speaking of code poetry, here’s one gem we stumbled across in `mem.c`:

```
/* The memory devices use the full 32/64 bits of the offset,
 * and so we cannot check against negative addresses: they are ok.
 * The return value is weird, though, in that case (0).
 */
 ```

This beauty has been hanging around since **Linux 0.99.14**… back when Bill Clinton was in his first year in the White House, “Mosaic” was the hot new browser, and the PDP-11 was still being produced and sold.

Back then, it made sense and reflected exactly what the code did.

Fast-forward thirty years, and the comment still kind of applies, but mostly in obscure corners of the architecture zoo. On the CPUs people actually use every day?

 

```
$ cat lseek.asm
BITS 64

%define SYS_read    0
%define SYS_write   1
%define SYS_open    2
%define SYS_lseek   8
%define SYS_exit   60

; flags
%define O_RDONLY    0
%define SEEK_SET    0

section .data
    path:    db "/dev/mem",0
section .bss
    align 8
    buf:     resq 1

section .text
global _start
_start:
    mov     rax, SYS_open
    lea     rdi, [rel path]
    xor     esi, esi
    xor     edx, edx
    syscall
    mov     r12, rax        ; save fd in r12

    mov     rax, SYS_lseek
    mov     rdi, r12
    mov     rsi, 0x8000000000000001
    xor     edx, edx
    syscall

    mov     [rel buf], rax

    mov     rax, SYS_write
    mov     edi, 1
    lea     rsi, [rel buf]
    mov     edx, 8
    syscall

    mov     rax, SYS_exit
    xor     edi, edi
    syscall
$ nasm -f elf64 lseek.asm -o lseek.o
$ ld lseek.o -o lseek
$ sudo ./lseek| hexdump -C
00000000  01 00 00 00 00 00 00 80                           |........|
00000008
$ # this is not what I expect, let's double check
$ sudo gdb ./lseek
GNU gdb (Fedora Linux) 16.3-1.fc42
Copyright (C) 2024 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./lseek...
(No debugging symbols found in ./lseek)
(gdb) b _start
Breakpoint 1 at 0x4000b0
(gdb) r
Starting program: /tmp/lseek

Breakpoint 1, 0x00000000004000b0 in _start ()
(gdb) x/30i $pc
=> 0x4000b0 <_start>:   mov    $0x2,%eax
   0x4000b5 <_start+5>: lea    0xf44(%rip),%rdi        # 0x401000
   0x4000bc <_start+12>:        xor    %esi,%esi
   0x4000be <_start+14>:        xor    %edx,%edx
   0x4000c0 <_start+16>:        syscall
   0x4000c2 <_start+18>:        mov    %rax,%r12
   0x4000c5 <_start+21>:        mov    $0x8,%eax
   0x4000ca <_start+26>:        mov    %r12,%rdi
   0x4000cd <_start+29>:        movabs $0x8000000000000001,%rsi
   0x4000d7 <_start+39>:        xor    %edx,%edx
   0x4000d9 <_start+41>:        syscall
   0x4000db <_start+43>:        mov    %rax,0xf2e(%rip)        # 0x401010
   0x4000e2 <_start+50>:        mov    $0x1,%eax
   0x4000e7 <_start+55>:        mov    $0x1,%edi
   0x4000ec <_start+60>:        lea    0xf1d(%rip),%rsi        # 0x401010
   0x4000f3 <_start+67>:        mov    $0x8,%edx
   0x4000f8 <_start+72>:        syscall
   0x4000fa <_start+74>:        mov    $0x3c,%eax
   0x4000ff <_start+79>:        xor    %edi,%edi
   0x400101 <_start+81>:        syscall
   0x400103:    add    %al,(%rax)
   0x400105:    add    %al,(%rax)
   0x400107:    add    %al,(%rax)
   0x400109:    add    %al,(%rax)
   0x40010b:    add    %al,(%rax)
   0x40010d:    add    %al,(%rax)
   0x40010f:    add    %al,(%rax)
   0x400111:    add    %al,(%rax)
   0x400113:    add    %al,(%rax)
   0x400115:    add    %al,(%rax)
(gdb) b *0x4000c2
Breakpoint 2 at 0x4000c2
(gdb) b *0x4000db
Breakpoint 3 at 0x4000db
(gdb) c
Continuing.

Breakpoint 2, 0x00000000004000c2 in _start ()
(gdb) i r
rax            0x3                 3
rbx            0x0                 0
rcx            0x4000c2            4194498
rdx            0x0                 0
rsi            0x0                 0
rdi            0x401000            4198400
rbp            0x0                 0x0
rsp            0x7fffffffe3a0      0x7fffffffe3a0
r8             0x0                 0
r9             0x0                 0
r10            0x0                 0
r11            0x246               582
r12            0x0                 0
r13            0x0                 0
r14            0x0                 0
r15            0x0                 0
rip            0x4000c2            0x4000c2 <_start+18>
eflags         0x246               [ PF ZF IF ]
cs             0x33                51
ss             0x2b                43
ds             0x0                 0
es             0x0                 0
fs             0x0                 0
gs             0x0                 0
fs_base        0x0                 0
gs_base        0x0                 0
(gdb) # fd is just fine rax=3 as expected.
(gdb) c
Continuing.

Breakpoint 3, 0x00000000004000db in _start ()
(gdb) i r
rax            0x8000000000000001  -9223372036854775807
rbx            0x0                 0
rcx            0x4000db            4194523
rdx            0x0                 0
rsi            0x8000000000000001  -9223372036854775807
rdi            0x3                 3
rbp            0x0                 0x0
rsp            0x7fffffffe3a0      0x7fffffffe3a0
r8             0x0                 0
r9             0x0                 0
r10            0x0                 0
r11            0x246               582
r12            0x3                 3
r13            0x0                 0
r14            0x0                 0
r15            0x0                 0
rip            0x4000db            0x4000db <_start+43>
eflags         0x246               [ PF ZF IF ]
cs             0x33                51
ss             0x2b                43
ds             0x0                 0
es             0x0                 0
fs             0x0                 0
gs             0x0                 0
fs_base        0x0                 0
gs_base        0x0                 0
(gdb) # According to that comment, rax should have been 0, but it is not.
(gdb) c
Continuing.
[Inferior 1 (process 186746) exited normally]
(gdb) 
```

Not so much. Seeking to `0x8000000000000001` returns `0x8000000000000001`, not `0` as the comment anticipates. We’re basically facing the kernel version of that “Under Construction” GIF on websites from the ’90s: still there, but mostly just nostalgic decoration now.

## The Mysterious Line in `read_mem`

Let’s zoom in on one particular bit of code in [`read_mem`](https://elixir.bootlin.com/linux/v6.17-rc2/source/drivers/char/mem.c#L82):

```
	phys_addr_t p = *ppos;
	/* ... other code ... */
	if (p != *ppos) return 0;
```

At first glance, this looks like a no-op; why would `p` be different from
`*ppos` when you just copied it?
It’s like testing if gravity still works by dropping your phone…
**spoiler: it does.**

But as usual with kernel code, the weirdness has a reason.

## The Problem: Truncation on 32-bit Systems

Here’s what’s going on:

- `*ppos` is a `loff_t`, which is a 64-bit signed integer.
- `p` is a `phys_addr_t`, which holds a physical address.

On a 64-bit system, both are 64 bits wide. The assignment is lossless, the comparison is always false, and compilers just toss the check out.

But on a 32-bit system, `phys_addr_t` is only 32 bits. Assign a big 64-bit
offset to it, and **boom**, the top half vanishes.
Truncated, like your favorite TV series canceled after season 1.

That `if (p != *ppos)` check is the safety net.
It spots when truncation happens and bails out early, instead of letting
some unlucky app read from la-la land.

## Assembly Time: 64-bit vs. 32-bit

On 64-bit builds (say, AArch64), the compiler optimizes away the check.

```
┌ 736: sym.read_mem (int64_t arg2, int64_t arg3, int64_t arg4);
│ `- args(x1, x2, x3) vars(13:sp[0x8..0x70])
│           0x08000b10      1f2003d5       nop
│           0x08000b14      1f2003d5       nop
│           0x08000b18      3f2303d5       paciasp
│           0x08000b1c      fd7bb9a9       stp x29, x30, [sp, -0x70]!
│           0x08000b20      fd030091       mov x29, sp
│           0x08000b24      f35301a9       stp x19, x20, [var_10h]
│           0x08000b28      f40301aa       mov x20, x1
│           0x08000b2c      f55b02a9       stp x21, x22, [var_20h]
│           0x08000b30      f30302aa       mov x19, x2
│           0x08000b34      750040f9       ldr x21, [x3]
│           0x08000b38      e10302aa       mov x1, x2
│           0x08000b3c      e33700f9       str x3, [var_68h]        ; phys_addr_t p = *ppos;
│           0x08000b40      e00315aa       mov x0, x21
│           0x08000b44      00000094       bl valid_phys_addr_range
│       ┌─< 0x08000b48      40150034       cbz w0, 0x8000df0        ;if (!valid_phys_addr_range(p, count))
│       │   0x08000b4c      00000090       adrp x0, segment.ehdr
│       │   0x08000b50      020082d2       mov x2, 0x1000
│       │   0x08000b54      000040f9       ldr x0, [x0]
│       │   0x08000b58      01988152       mov w1, 0xcc0
│       │   0x08000b5c      f76303a9       stp x23, x24, [var_30h]
[...]
```
Nothing to see here, move along.
But on 32-bit builds (like old-school i386), the check shows up loud and 
proud in the assembly. 
```
[0x080003e0]> pdf
┌ 392: sym.read_mem (int32_t arg_8h);
│ `- args(sp[0x4..0x4]) vars(5:sp[0x14..0x24])
│           0x080003e0      55             push ebp
│           0x080003e1      89e5           mov ebp, esp
│           0x080003e3      57             push edi
│           0x080003e4      56             push esi
│           0x080003e5      53             push ebx
│           0x080003e6      83ec14         sub esp, 0x14
│           0x080003e9      8955f0         mov dword [var_10h], edx
│           0x080003ec      8b5d08         mov ebx, dword [arg_8h]
│           0x080003ef      c745ec0000..   mov dword [var_14h], 0
│           0x080003f6      8b4304         mov eax, dword [ebx + 4] 
│           0x080003f9      8b33           mov esi, dword [ebx]     ; phys_addr_t p = *ppos;
│           0x080003fb      85c0           test eax, eax
│       ┌─< 0x080003fd      7411           je 0x8000410             ; if (!valid_phys_addr_range(p, count))
│     ┌┌──> 0x080003ff      8b45ec         mov eax, dword [var_14h]
│     ╎╎│   0x08000402      83c414         add esp, 0x14
│     ╎╎│   0x08000405      5b             pop ebx
│     ╎╎│   0x08000406      5e             pop esi
│     ╎╎│   0x08000407      5f             pop edi
│     ╎╎│   0x08000408      5d             pop ebp
│     ╎╎│   0x08000409      c3             ret
[...]
```

The CPU literally does a compare-and-jump to enforce it. So yes, this is a _real_ guard, not some leftover fluff.

## Return Value Oddities

Now, here’s where things get even funnier. If the check fails in `read_mem`, the function returns `0`. That’s “no bytes read”, which in file I/O land is totally fine.

But in the twin function `write_mem`, the same situation returns `-EFAULT`. That’s kernel-speak for “Nope, invalid address, stop poking me”.

So, reading from a bad address? You get a polite shrug. Writing to it? You get a slap on the wrist. Fair enough, writing garbage into memory is way more dangerous than failing to read it, but the inconsistency is probably something worth fixing up.

## Wrapping It Up

This little dive shows how a single “weird” line of code carries decades of context, architecture quirks, type definitions, and evolving assumptions.
It also shows why comments like the one from 0.99.14 are dangerous: they freeze a moment in time, but reality keeps moving.

Our mission in the ELISA Architecture WG is to bring comments back to life: keep them up to date, tie them to tests, and make sure they still tell the truth. Because otherwise, thirty years later, we’re all squinting at a line saying “the return value is weird, though” and wondering if the developer was talking about the code… or just their day.

And now, a brief word from our *sponsors* (a.k.a. me in a different hat): When I’m not digging up ancient kernel comments with the Architecture WG, I’m also leading the Linux Features for Safety-Critical Systems (LFSCS) WG. We’re cooking up some pretty exciting stuff there too.

So if you enjoy the kind of archaeology/renovation work we’re doing there, come check out LFSCS as well: same Linux, different adventure.

ELISA Project Welcomes Simone Weiss to the Governing Board!


We are excited to announce that Simone Weiss, Product Owner at Elektrobit, has joined the Governing Board of the Enabling Linux in Safety Applications (ELISA) Project. She brings a wealth of experience in functional safety, embedded systems, and open source leadership that will help guide ELISA’s mission to enable the use of Linux in safety-critical applications. One of Simone’s first tasks will be to lead the creation of a glossary in the ELISA Project directory.

Elektrobit has been an active contributor to the ELISA Project for several years, and Simone’s appointment reflects the company’s commitment to advancing the use of open source technologies in industries such as automotive, industrial, medical, and beyond.

“It’s an honor to join ELISA’s Governing Board. I’m looking forward to working with the community to support collaboration between industry and safety experts and drive broader adoption of Linux in safety-critical domains.” – Simone Weiss, Elektrobit

The ELISA Governing Board plays a critical role in setting the project’s strategic direction, ensuring sustainability, and supporting the vibrant technical community that underpins ELISA’s success. With the addition of Simone, the board strengthens its collective expertise and reaffirms its dedication to transparency, collaboration, and safety excellence.

Simone recently traveled to Open Source Summit North America, which happened in Denver, Colorado in June, to attend her first in-person Governing Board meeting. 

ELISA Project Governing Board 2025

Please join us in welcoming Simone to the ELISA Project Governing Board!

Arduino Portenta X8 as a Community Reference Hardware for Safe Systems – Highlights from the ELISA Project Workshop



At the ELISA Project Workshop held May 7-9, 2025, in Lund, Sweden, co-hosted with Volvo Cars, Arduino co-founder and Head of Research at Malmö University, David Cuartielles shared an insightful session on using the Portenta X8 as a reference hardware platform for building safe and secure embedded Linux systems.

In his presentation, David walked through Arduino’s journey into Linux-capable hardware, the motivations behind creating the Portenta X8, and how it came to be through European-funded research projects. With industrial-grade capabilities, real-time microcontroller support, and built-in fleet management, the Portenta X8 stands out as a robust platform for prototyping secure and sustainable embedded Linux systems.

David also shared his insights into sustainability challenges in hardware manufacturing, highlighting Arduino’s ongoing research into biocompatible PCBs using PLA-flax substrates. The talk offers insights into balancing innovation with ecological responsibility, and how that might impact Linux-compatible hardware in the future.

To learn more, watch the session here. Slides available here.


Recap of the ELISA Project Workshop 2025: Lund, Sweden


The ELISA Project’s workshop in Lund, Sweden brought together project members, contributors, and ecosystem partners for three days of focused collaboration and planning. From May 7 – 9, attendees convened at the Volvo Cars Lund Office to advance safety-critical Linux development and map out future goals.

On the afternoon of May 7, the workshop kicked off with a welcome note by Philipp Ahmann (ETAS GmbH), Kate Stewart (Linux Foundation), and Robert Fekete (Volvo Cars), followed by an “Ask Me Anything” panel on ELISA and OSS safety applications featuring Philipp Ahmann and Gabriele Paoloni (Red Hat). David Cuartielles then demonstrated the Arduino Portenta X8 as community reference hardware for safe systems, and a cross-community case study highlighted collaboration with AGL, Eclipse S-Core, KernelCI, Xen, Zephyr, and more. The day closed with discussions on ELISA’s interaction with adjacent communities including Eclipse, Linaro, Rust, SPDX, and Yocto before an offsite dinner at Stäket.

Day 2 began with a comparison of Safety Linux vs. Safe(ty) Linux led by Philipp Ahmann and Paul Albertella (Codethink). Olivier Charrier (Wind River) and Alessandro Carminati (Red Hat) then explored hardware-level integration in the Linux kernel. After lunch, a series of special topics covered PX4Space (Pedro Roque, KTH), SPDX Safety Profile (Nicole Pappler, AlektoMetis), Safe Continuous Deployment (Håkan Sivencrona, Volvo Cars), and Resilient Safety Analysis (Igor Stoppa, NVIDIA). The afternoon sessions on KernelCI, BASIL & Testing (Luigi Pellecchia, Gustavo Padovan) and Requirements Traceability (Kate Stewart, Gabriele Paoloni) concluded with an engaging networking session.

On the morning of May 9, attendees discussed the Trustable Software Framework (Paul Albertella, Daniel Krippner) and examined Rust’s role in safety-critical applications. The final session on Best Practices Standard, presented by Philipp Ahmann, Gabriele Paoloni, and Olivier Charrier, distilled key takeaways and action items for ELISA’s roadmap. The workshop ended with stronger community connections and a clear plan for the project’s next steps.

We extend our thanks to Volvo Cars Lund for hosting, to all speakers and participants for their insights, and to the ELISA Project community for making this gathering a success. 

Videos from the workshop are now available on the YouTube channel of the ELISA Project. Watch the full playlist here.

Slides can be accessed here at the ELISA Project directory.

Keep an eye out for details on the next in-person workshop and virtual participation options here!

Criteria and Process for Evaluating Open-Source Documentation


As the open source and safety (and security) communities collaborate more closely, there’s an opportunity to build trust by showcasing how open source development aligns with key safety principles. As part of the ELISA Seminar Series, Pete Brink, Principal Consultant at UL Solutions and ELISA Project ambassador, recently presented a process for evaluating open-source documentation, including evaluation criteria, designed to adapt to a variety of projects and contexts.

This video aims to introduce a flexible, practical framework for evaluating documentation that supports trustworthiness in development practices. The goal is to empower teams to highlight their commitment to quality and safety in a way that works for them. Watch here:


The ELISA Seminar Series focuses on hot topics related to ELISA’s mission to define and maintain a common set of elements, processes and tools that can be incorporated into Linux-based, safety-critical systems amenable to safety certification. Speakers are members, contributors and thought leaders from the ELISA Project and surrounding communities. Each seminar comprises a 45-minute presentation and a 15-minute Q&A, and it’s free to attend. You can watch all videos in the ELISA Seminar Series playlist on the ELISA Project YouTube channel here.

For more ELISA Project updates, follow @ProjectElisa, our LinkedIn page, or our YouTube channel.

How open projects rethink safety culture


Written by Paul Albertella, ELISA Project TSC member, Chair for Open Source Engineering Process Working Group and Consultant at Codethink

This blog originally ran on the Codethink website. For more content like this, click here.

In 2016, Codethink started out on a journey to discover how open source software can be safely used to build safety-critical systems — that is, in products where people might be harmed if the software fails to do its job correctly.

Free / libre open source software (FLOSS) projects like Linux have clearly demonstrated the value of collaboration in public when creating software that is — amongst many other things — trusted as the backbone of the web and millions of smart phones. FLOSS projects have also established the essential role of transparency and rapid software updates in dealing with cybersecurity threats. When it comes to safety, however, the difficulties of making a case for using FLOSS in a solution have long been a frustrating obstacle for product developers.

Immediately following Codethink’s announcement about our latest milestone in this journey, I took part in two workshops focussing on safety and open source. This gave me the opportunity to talk about the Trustable Software Framework (TSF) and how we are using it in our development of CTRL OS. I also learnt more from other open source projects about their approaches to creating software where trustability is just as important.

The workshops were hosted by Volvo Cars in the Swedish city of Lund, and our hosts also provided several enthusiastic participants. The events were organised by two open source projects that have common goals and challenges, but approach these from different perspectives and with different focuses. The Eclipse SDV project aims to build an automotive software stack to provide “an open technology platform for the software-defined vehicle of the future”. In contrast, the ELISA project is concerned with the use of Linux-based operating systems for safety applications in a range of different domains.


Day 1

Markus Bechter from BMW started the Eclipse SDV workshop by describing the approach to safety being developed for the Eclipse S-CORE or Safe Open Vehicle Core project. The intent is to establish a common set of development processes for components of this project, making the software amenable to safety certification using the ISO 26262 Automotive Safety Standard.

The Trustable Software Framework project was recently accepted into the Eclipse Foundation, so I gave the next presentation. TSF approaches the challenge of using FLOSS in safety more broadly: how can we make a case for using software that has not been developed following a process that conforms to an applicable safety standard? Since this describes the vast majority of existing FLOSS, including many of the tools and dependencies that S-CORE plans to use, an answer to this question is sorely needed, and TSF provides a methodology for making such a case.

After lunch, it was time to welcome a new set of participants and start the ELISA workshop. This began with an introduction to the project for newcomers (see my retrospective from last year’s workshop if you are also new to the project), followed by an Ask Me Anything discussion. Then we had a fascinating talk from David Cuartielles, a founder of the Arduino project who was recently honoured in the European Open Source Awards. After telling us about the latest Arduino (the Portenta X8) and the features of the boards that are relevant for trust, he went on to talk about a topic that he is passionate about: the DESIRE4EU project, which is exploring how to make printed circuit boards that are recyclable, in support of the European sustainable electronics goal.

The rest of the day focussed on the efforts of the ELISA Systems working group to describe and build systems involving Linux in combination with two other FLOSS components: the Zephyr RTOS and the Xen Hypervisor. This led naturally into a discussion of ELISA’s interactions with other adjacent open source communities.


Day 2

Philipp Ahmann and I started the second day with a discussion exploring some common misapprehensions about Linux and safety. We talked about some of the ‘routes’ to certification in the safety standards for pre-existing software, and why these are difficult to apply to open source software. We also explained why the notion of creating a ‘safe’ Linux is misleading, because safety can only really be understood in terms of a system, as opposed to an intrinsic property of a component. This led into discussions of various system models involving Linux, the use of complete redundant systems as part of a larger system design, and the role of hardware components in this, which was a perfect segue to the next session.

Olivier Charrier talked about the role of hardware integration in safety, describing how the responsibilities for achieving specific safety objectives as part of a system design are typically assigned to hardware and software components, and then refined or re-defined in a series of iterations to address the identified gaps. Alessandro Carminati then shared the results of a Linux Features working group investigation to build and analyse a minimal Linux configuration and identify a core set of features that must be considered for any Linux-based system.

After lunch we had a series of ‘special topic’ talks, beginning with interesting sessions on PX4Space — a flight control solution for drones that is being used to build robotic space vehicle solutions — and the SPDX Safety Profile, which extends the SPDX 3.0 ‘knowledge graph’ to include metadata relating to development processes for safety.

Håkan Sivencrona from Volvo then talked about Safe Continuous Deployment, emphasising the importance of building development processes that deliver an ongoing stream of ‘safe’ software deliveries using DevOps principles, not just one ‘blessed’ release that is never expected to change. Igor Stoppa’s talk on “Resilient Safety Analysis and Qualification” sparked a lively discussion, as he argued that any safety analysis of Linux must be based on a detailed understanding of the code, and that this might be a reason not to rely on more complex features or extensions of the kernel.

We then had a talk by Gustavo Padovan of the KernelCI project, which recently became an associate member of ELISA. He explained that a key goal of the project is to enable projects and organisations testing the kernel to share their results with the wider kernel community by providing a common framework for reporting results. Recent developments include kci.dev, a command line tool enabling developers and maintainers to interact with KernelCI, and a YAML config file format that enables Linux subsystems to share tailored test case executions with maintainers and the wider community.

The rest of the day focussed on requirements management and traceability, looking first at ELISA’s BASIL tool, and then at an initiative with the Linux Tracing subsystem to develop a low-level requirements specification approach. The latter involved documenting detailed requirements for each function in the kernel, which would be intended to support complete reimplementation of the functionality without reference to the code. One participant noted that this approach might enable the kernel to be re-written in Rust!

Image of a street lamp in Lund

Day 3

I kicked off the last day by reprising my presentation about the Trustable Software Framework (TSF) from the Eclipse workshop for the ELISA attendees. Once again, the enthusiastic engagement and insightful questions from the participants were very gratifying, and Daniel Krippner helped to illustrate how the framework may be applied in practice by talking through his use of it as part of the Eclipse uProtocol project. Daniel and I followed this with a quick discussion of how Rust is becoming increasingly relevant in the safety sphere, and how this may be relevant for ELISA.

The workshop wrapped up with a discussion on the Open Source Best Practices Standard, an initiative that was launched earlier this year. It included a live survey collecting input from the audience about their awareness of existing standards and suggestions for projects to be considered for examples of best practices.

Key Takeaways

I’ve attended numerous ELISA workshops since the first one in 2019, and it was wonderful to note how many passionate and enthusiastic newcomers we had attending this time. We also had participants from a variety of different backgrounds, including academics from the local university and engineers from the rail, medical and aeronautics industries, as well as the always-prevalent automotive specialists.

ELISA’s increasing engagement with other open source communities, including those from the Eclipse Foundation and Linux Foundation projects, is good to see. Equally encouraging is the growing interest in safety-related topics in these communities, building on their already well-established awareness of cybersecurity. After the enthusiastic reception that my talks had last week, I am hopeful that the Trustable Software Framework can help to continue this trend, giving all open source projects a way to start engaging with these topics and to share their thinking and strategies for building trust with other projects and communities.

Stay tuned here for links to the videos and presentations.

Additional Resources:

Automated Testing Summit (ATS) 2025

By Blog, Industry Conference

In March, the ELISA Project welcomed KernelCI, a community-based open source distributed test automation system focused on building a collaborative ecosystem around upstream kernel development, to our ecosystem. The primary goal of KernelCI is to use an open testing philosophy to ensure the quality, stability and long-term maintenance of the Linux kernel. KernelCI is currently working on improved LTS kernel testing and validation; consolidation of existing testing initiatives; quality-of-life improvements to the current service; expanded compute resources; and an increased pool of hardware to be tested. Learn more about why they joined the project here.

KernelCI will be hosting the Automated Testing Summit (ATS) 2025 on Thursday, June 26 from 9 am – 5 pm in Denver, Colorado, as part of Open Source Summit North America. The Automated Testing Summit is a technical conference focused on the key challenges, tools, and infrastructure involved in testing and quality assurance for the Linux ecosystem — with an emphasis on upstream kernel development, embedded systems, cloud environments and CI integration.

Modern software stacks grow increasingly complex and heterogeneous. Ensuring their stability requires scalable, reproducible, and automated testing solutions that can operate across diverse hardware platforms, kernel versions, and integration layers. ATS brings together engineers working on KernelCI, test frameworks, lab automation, CI/CD pipelines, fuzzing, performance analysis, and more.

The event is a platform for in-depth technical talks, demos, and collaboration sessions that tackle real-world problems in automated testing. Topics range from designing interoperable systems for sharing test results, to debugging kernel regressions across distributed hardware labs.

ATS is currently accepting speaking proposals. Submit a proposal here by Sunday, May 18.

How to Register: Pre-registration is required.

To register to attend in-person at Automated Testing Summit 2025, add it to your Open Source Summit North America registration.

To register to attend virtually, please register here.


NEW FREE COURSE: Understanding the EU Cyber Resilience Act (CRA)

By Announcement, Linux Foundation Education

This blog originally ran on the Linux Foundation Education website. For more content like this, click here.

Quickly Grasp the Key Requirements of the CRA with this Express Learning Video Course

OpenSSF and Linux Foundation Education have announced the launch of Understanding the EU Cyber Resilience Act (CRA) (LFEL1001), a new, free, Express Learning video course that covers:

  • Key requirements of the EU’s Cyber Resilience Act (CRA)
  • Digital product impacts
  • Compliance strategies
  • How to navigate uncertainties in the law, including for open source software

The course is ideal for anyone needing to adapt to these new legal requirements, especially decision-makers and software developers – including those working with open source software – whose products may be commercially available in the EU.

“The Cyber Resilience Act (CRA) is critically important for all software developers and their managers to understand. It imposes requirements on many kinds of software, including open source, that have never been regulated before. The CRA applies even if the software wasn’t developed in the EU,” said David A. Wheeler, PhD, Director of Open Source Supply Chain Security, OpenSSF. “This completely changes the software development landscape. You could risk its substantial penalties, but it’s wiser to gain an understanding of it.”

EU Law with Global Impact

The CRA is a landmark law that imposes new requirements on products with digital elements, including software, that are made commercially available within the European Union. It also imposes significant penalties for failure to comply in certain cases. Given the global nature of software and hardware development, many organizations and individuals not based in the EU will find themselves affected by the CRA.

Understanding the EU Cyber Resilience Act (CRA) (LFEL1001) will help those affected better understand and meet their obligations under the law and avoid the significant penalties the law can enforce. This includes the CRA’s requirements for developing secure software and managing vulnerability reports. The course will also note some of the uncertainties in the new law, explain how some are being addressed, and provide recommendations on how to deal with such uncertainties.

Understanding the EU Cyber Resilience Act (CRA) (LFEL1001) is a free, 90-minute, self-paced, e-Learning video course. Those who successfully complete the course receive a digital badge and certificate of completion.

Don’t Let the CRA Catch You Off Guard
Enroll Today!

New Initiative Seeks to Establish Open Source Software Best Practices Standard

By Blog, Industry Partners, Linux Foundation, News

In an era of rapid digital transformation, open source software has become the backbone of technological innovation across industries. Linux Foundation Europe is proud to partner with the Enabling Linux in Safety Applications (ELISA) Project to support an initiative aimed at addressing a critical challenge in the software ecosystem. As the demand for open source software in regulated and safety-critical systems increases (e.g. in the aerospace, automotive, and medical industries), the need for a robust, standardized approach to evaluating its quality and security has never been more urgent. This initiative promises to reshape how we assess and integrate open source software into mission-critical environments. Learn more in this blog article authored by Philipp Ahmann (ETAS GmbH) and Gabriele Paoloni (Red Hat).