All Posts By ELISA

Make Your Voice Heard – 2022 Open Source Jobs Report!

By Blog, LF Training & Certification

Written by Dan Brown, Senior Manager, Content & Social Media, Linux Foundation Training & Certification

The Linux Foundation has once again partnered with edX for the next iteration of the Open Source Jobs Report. The report examines the latest trends in open source careers, which skills are in demand, what motivates open source job seekers, and how employers can attract and retain top talent. Last year’s report can be found here. This year’s report will also examine the extent to which the “Great Resignation” has affected the technology industry.

The report is anchored by a survey exploring what hiring managers are looking for in employees, and what motivates open source professionals. All participants will receive a discount code for a Linux Foundation training course or certification exam upon survey completion.

We encourage you to share your thoughts and experiences. The survey takes around 10 minutes to complete, and all data is collected anonymously.

Check out the 2021 Open Source Jobs Report here.

Deterministic Construction Service

By Blog, Workshop

This blog previously ran on the Codethink website. Click here for more content like this.

Paul Albertella spoke at the ELISA November 2021 Workshop about how Codethink’s Deterministic Construction Service achieved ISO 26262 certification. In this article he explains the purpose of DCS and how it paves the way towards one of Codethink’s longer-term goals: establishing a viable approach to safety certification for Linux-based operating systems. Read more or watch the video from the ELISA Workshop below.

Background

Deterministic Construction Service (DCS) is Codethink’s design pattern for constructing critical software components. It defines a controlled process, based on an automated continuous integration (CI) workflow, for constructing and managing changes to software components, and to the tools used to build and verify them. A reference implementation of this design pattern was recently assessed by Exida and qualified using the ISO 26262 safety standard for use with automotive safety applications up to ASIL D.

DCS was made possible by many years of work on construction and integration tooling at Codethink, and builds directly on the previous efforts of open-source projects such as Baserock, BuildStream, Freedesktop SDK and Reproducible Builds. These projects have helped to establish and refine both the principles that inform DCS and the techniques used in its implementation.

As a tool, DCS is an important foundation for building a safety-certifiable Linux-based OS, but in creating and certifying it, Codethink had another goal: to validate the safety approach that we have been developing in collaboration with Exida and ELISA. This approach is called RAFIA, an acronym of Risk Analysis, Fault Injection and Automation; it was introduced in a previous article and further discussed in a second article.

Goals and principles

The goals of DCS are:

  • To construct software in such a way that it is consistently reproducible
  • To verify this property for a given set of inputs, for a given instantiation of DCS
  • To make use of this property to inform verification and impact analysis
  • To automate all of this as part of a continuous integration (CI) workflow

Reproducible, in this context, means that the outputs of the construction process (a binary fileset) for a given set of inputs (source code, dependencies, build instructions, etc.) for the target software (the components that we are constructing and verifying) must be demonstrably identical. That is to say, re-running the DCS process without explicitly changing any of the inputs must produce exactly the same set of binary outputs every time.

Inputs, in this context, means everything that is required to construct and verify the software, which includes:

  • the target software and its dependencies,
  • the actions required to construct and verify these,
  • the tools used to perform these actions, and
  • the execution environments for these actions

An instantiation of DCS is an implementation of the design pattern using a specific set of tools, configuration and infrastructure. The reference implementation, for example, is based on Codethink’s managed Gitlab service and its associated servers, resources (e.g. CI runners), and configurations (e.g. access control), together with a set of hosted git repositories. These repositories contain the component tools used to realise DCS, the build and test inputs for these – including safety-related criteria and tests for the DCS design pattern – and the automation scripts that implement the overall service logic.

DCS control structure

In order to have reproducible outputs, we must have clearly defined and consistent inputs. This means having fine-grained change and revision control over the corresponding files and their organisation into components, configurations, target systems, etc.

It is essential to track all inputs, including (but not limited to): the source code of the target software; any other build-time dependencies required to construct it; any run-time dependencies required to verify it; the configuration, calibration or test data used to inform or verify its behaviour; the tools used to perform construction and verification actions; the execution environments within which these actions are performed; the criteria that are used to evaluate verification actions; and instructions detailing the actions required to provide, build or verify all of these as part of an automated workflow.

We also need fine-grained control over our construction processes, which means that build actions must not only be consistently performed, but must be executed in a controlled environment to avoid the introduction of unspecified or unplanned inputs into the build process.

Purpose

DCS verifies that we have control over our construction process and all of its inputs by comparing the binary outputs of two completed construction pipelines (automated executions of the specified construction steps and actions). If the results are identical, then the inputs and build actions may be considered under control. If the results differ, then the cause of the difference must be investigated, to determine whether an unspecified or uncontrolled input is involved in the construction process.
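As a minimal sketch of this comparison step (the script and directory layout here are illustrative, not the actual DCS tooling), the check can be reduced to hashing every file produced by two pipeline runs and diffing the resulting manifests:

==================================================================
#!/usr/bin/env python3
"""Sketch: compare the binary outputs of two construction pipeline runs.

Illustrative only -- the real DCS tooling and output layout differ.
"""
import hashlib
import sys
from pathlib import Path


def manifest(output_dir: Path) -> dict:
    """Map each file path (relative to output_dir) to its SHA-256 digest."""
    result = {}
    for path in sorted(output_dir.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            result[str(path.relative_to(output_dir))] = digest
    return result


def main() -> int:
    run_a, run_b = Path(sys.argv[1]), Path(sys.argv[2])
    a, b = manifest(run_a), manifest(run_b)
    if a == b:
        print("Reproducible: outputs are bit-for-bit identical")
        return 0
    # Report anything missing or differing, to drive investigation of
    # unspecified or uncontrolled inputs.
    for name in sorted(set(a) | set(b)):
        if a.get(name) != b.get(name):
            print("DIFFERS:", name)
    return 1


if __name__ == "__main__":
    sys.exit(main())
==================================================================

Run against the output directories of two otherwise identical pipeline executions, a non-zero exit status would fail the CI job and trigger the investigation described above.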

Once we have control over our process and inputs, we can use the same principle to inform impact analysis for the constructed software. If a change to one of the inputs has no effect on the output binaries, then we can be confident that there will be no impact on the software’s behaviour or properties, which may avoid unnecessary re-testing. Similarly, if we were expecting a change to have an effect (e.g. to fix a bug) and the outputs are unchanged, then we know that the change has not taken effect, without having to re-test it.

These ‘no change’ cases may seem insignificant at first sight – after all, why would we want to make a change that has no effect? – but when maintaining complex software systems, the practice of regularly and systematically applying atomic changes can be invaluable. Not all changes to an input will affect the output for a given construction, because the source code may have conditional compilation sections, which are not used for a specific build. By atomically applying individual changes over time, instead of applying a large change set in one go, we are able to determine when a specific change does have an effect and use this to guide our impact analysis.

This becomes even more valuable if we are using artifact caching as part of a construction process. By storing the artifacts (binary outputs and intermediate objects) produced by previous build actions in a shared cache, we can dramatically reduce build times for large software components. But how can we be confident that these cached artifacts directly relate to our input files? Different caching solutions approach this in different ways, but by periodically rebuilding from source (e.g. with a weekend rebuild pipeline), and comparing the result with a build that uses cached artifacts, we can independently verify the integrity of our cache, regardless of the cache indexing strategy.

We can use the same principles to show that the property of reproducibility is independent of the specific instantiation of DCS – including host hardware, operating systems, compilers and other tools. This allows us to confirm that a new instantiation of DCS meets the design pattern requirements, by comparing the binary outputs for a reference build.

This approach can be extended to verify that a change to a tool used as part of the DCS instantiation, or as an input to the construction process, has no effect on the output. For tools that we expect to have no direct impact on the outputs, this is a confirmation of our analysis. For tools that we do expect to affect outputs (e.g. compilers), this is a confirmation that an upgraded tool has not introduced an unexpected change – or if a change is detected, to drive our analysis of its potential impact.

Using RAFIA for certification

Codethink’s DCS reference implementation was certified by Exida based on the ISO 26262 tool qualification requirements. This was achieved using safety argumentation that was developed using RAFIA, and a safety lifecycle built around the DCS controlled process itself.

We used STPA to analyse the risks associated with the specific purpose of DCS and to define safety requirements in the form of constraints. These were then used to derive tests to verify that a given DCS instantiation satisfies the applicable requirements, or to specify process requirements that must be applied by the user of DCS and verified as part of a safety assessment. We also identified loss scenarios that might lead to violation of constraints and developed fault injection tests to show that our mitigations were effective.

By applying this controlled process to all inputs to the certification assessment, we were able to demonstrate that we had addressed all of the applicable requirements in the standard, and provide evidence to support this. Controlled inputs included documentation and requirements as well as the build inputs. This included the documented STPA results, for which we developed a YAML data structure and validation tools, which have been shared in a new open source repository. For the reference implementation, all inputs were stored in git repositories managed by Gitlab.
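To give a flavour of what such validation can look like (the structure and field names below are invented for illustration; the actual schema is the one defined in that repository), a CI job might run a check along these lines:

==================================================================
#!/usr/bin/env python3
"""Sketch: validate STPA results stored as YAML.

The structure and field names here are hypothetical -- the real schema is
defined in the repository referred to above.
"""
import sys

import yaml  # pip install pyyaml

REQUIRED_FIELDS = {"id", "description", "unsafe_control_action", "tests"}


def validate(path: str) -> list:
    """Return a list of problems found in the STPA results file."""
    errors = []
    data = yaml.safe_load(open(path))
    for i, constraint in enumerate(data.get("constraints", [])):
        missing = REQUIRED_FIELDS - constraint.keys()
        if missing:
            errors.append("constraints[%d]: missing fields %s" % (i, sorted(missing)))
        if not constraint.get("tests"):
            errors.append("constraints[%d]: no verification tests linked" % i)
    return errors


if __name__ == "__main__":
    problems = validate(sys.argv[1])
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)
==================================================================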

This allowed us to map evidence to individual certification criteria based on the applicable safety standard (ISO 26262 for DCS). For each of these, we documented how the criteria were satisfied and provided links to documents, source code, tests or CI-generated output that provided supporting evidence.

By tracking all certification criteria, assertions, and evidence in the same manner as the software, all potential changes could be managed by the same CI-driven change control process. We could use CI to verify that supporting evidence links were valid and up to date, and to trace requirements to tests and to test results. We were also able to produce human-readable reports for safety assessors directly from the stored and generated evidence for a given release.
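As an illustration of that kind of CI check (the mapping file format below is hypothetical, not the format used in the DCS assessment), validating the evidence links and traces can be an ordinary scripted job:

==================================================================
#!/usr/bin/env python3
"""Sketch: CI check that every certification criterion has traced tests and
reachable evidence links.

The mapping file format is hypothetical and shown for illustration only.
"""
import sys
import urllib.request

import yaml  # pip install pyyaml


def check(mapping_file: str) -> list:
    """Return a list of failures for criteria with missing tests or dead links."""
    failures = []
    entries = yaml.safe_load(open(mapping_file))
    for entry in entries:
        criterion = entry.get("criterion", "<unnamed>")
        if not entry.get("tests"):
            failures.append("%s: no tests traced" % criterion)
        for url in entry.get("evidence", []):
            try:
                urllib.request.urlopen(url, timeout=10)
            except Exception as exc:
                failures.append("%s: %s unreachable (%s)" % (criterion, url, exc))
    return failures


if __name__ == "__main__":
    failures = check(sys.argv[1])
    print("\n".join(failures) or "All evidence links valid and traced")
    sys.exit(1 if failures else 0)
==================================================================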

Role in safety and future work

As we have seen, DCS allows us to:

  • Verify that we have control of all of our inputs, including dependencies
  • Avoid retesting or re-validating unchanged binaries
  • Identify and investigate differences when changes are detected
  • Verify that our process is isolated from environmental disturbances
  • Show that tool upgrades do not impact previously validated binaries

But how can DCS contribute to a broader safety process? And how will this help us to certify a Linux-based OS?

Safety standards such as ISO 26262 identify the key engineering processes that are expected to be used in the development of safety-relevant software, as well as organisational processes and controls (e.g. quality and safety management) that are required to ensure that the engineering processes are correctly and consistently followed. However, the standards provide only limited guidance on implementing such processes, as every organisation is expected to have its own particular approach and tools.

A deterministic construction process provides a foundation for many of these engineering processes, as well as a way to monitor and enforce the required organisational controls. The DCS design pattern defines a consistent, automated and verifiable foundation for implementing a safety lifecycle for large-scale and complex software components. It was developed with Linux and open source software in mind, but the principles can be applied to any software, and many aspects of the RAFIA process can be applied to hardware components as well.

DCS and RAFIA enable software components and their associated documentation, as well as the required engineering and organisational process criteria and automation tools, to be managed and maintained in close alignment with the software development process. They also support key processes as follows:

Change Management and Configuration Management are, as we have seen, fundamental parts of the DCS design pattern, and also key topics in safety standards. DCS allows us to verify that we have control over all changes to our software and its configurations. This can be especially important when components or component dependencies are provided by a supply chain.

Verification of the software (e.g. through testing and static analysis) is required to confirm that it satisfies both its functional requirements and its safety requirements. DCS gives us control over all of the inputs to verification actions, as well as the tools and execution environments that are used to perform them. Using it as part of RAFIA also allows us to identify and specify safety requirements for components and tools, and to manage these requirements in close coordination with the software.

Validation of the software as part of its target system is required to confirm that it fulfils its intended purpose in the system, including its role in fulfilling the system’s safety goals. This may require the software to be constructed for one or more system configurations (CPU architecture, hardware configurations, etc) and deployed to a specific test environment (e.g. a test rig). DCS provides a way to manage this, and RAFIA provides a way to validate safety mitigations at the system level, by defining fault injection cases for software components.

Impact analysis is required to determine whether a change to software may impact the target system’s safety goals. DCS can be used to drive this process for software components, by identifying when changes actually have an impact on the deployed binaries, and helping to identify the specific change that resulted in this impact.

Tool qualification is used to provide confidence in the use of software tools. As demonstrated by the DCS reference implementation, RAFIA can provide a basis for qualifying and validating open-source toolchains, and DCS can be used to control the specific source and configuration of tools that are used. DCS can also help to classify new tools by determining their impact on the constructed output, and to validate tool upgrades by determining their impact on a known and verified previous build.

On this basis, we believe that DCS represents a solid foundation for defining and constructing a Linux-based OS for a safety-related use case. Furthermore, the qualification of DCS demonstrates that the RAFIA approach can be used to provide the required safety argumentation and evidence to achieve a formal safety certification for tools based on open source software; we are confident that this can be extended to support the same goal for a safety-related system.

Established Working Group Updates

By Blog, Working Group, Workshop

Planning is currently underway for the ELISA Project Spring Workshop, which takes place virtually on April 5-7. If you haven’t yet, you can submit a CFP here (by Friday, March 4) or register to attend here.

As we prepare for the next workshop, we’ll be taking a look at the most popular sessions from the November event. A full recap by Philipp Ahmann, ELISA Project Ambassador and TSC member can be found here.

In this video, Shuah Khan, ELISA Project Chair of the Technical Steering Committee, kicks off the November Workshop with an overview of the TSC and introductions to a few of the Working Group Chairs for updates. Watch the video to learn more about the focused working groups for Safety Architecture, Tool Investigation and Code Improvement, Medical Devices and Automotive.

To join a working group click here: https://elisa.tech/community/working-groups/.

To attend the April 2022 Workshop, register here: https://events.linuxfoundation.org/elisa-workshop-spring/register/.

Linux in Safety Systems

By Blog, Workshop

Planning is currently underway for the ELISA Project Spring Workshop, which takes place virtually on April 5-7. If you haven’t yet, you can submit a CFP here (by Friday, March 4) or register to attend here.

As we prepare for the next workshop, we’ll be taking a look at the most popular sessions from the November event. A full recap by Philipp Ahmann, ELISA Project Ambassador and TSC member can be found here.

In this video, Christopher Temple, Lead Safety & Reliability Architect at Arm Germany GmbH, provides an overview of the challenges and solutions of Linux in safety systems.

How can we make Linux functionally safe for automotive?

By Ambassadors, Blog

Written by Jeffrey “Jefro” Osier-Mixon, ELISA Project Ambassador and member of the TSC and Senior Principal Community Architect at Red Hat

This blog originally ran on the Red Hat website. For more content like this, click here.

The automotive computing world, like many other industries, is going through a transformation. Traditionally discrete computing systems are becoming more integrated, with workloads consolidated into systems that look remarkably more like edge systems than embedded devices. The ideas driving this shift come from open source, but will Linux be part of this future, given that the existing standards for functional safety do not currently accommodate Linux-based operating systems?

Red Hat safety expert Gabriele Paoloni will present our safety methodology to the Automotive Linux Summit as an update to the presentation given at the recent ELISA Workshop. This methodology outlines the path Red Hat is taking toward creating an in-vehicle OS that incorporates modern ideas around workload orchestration, secure process isolation, and consolidation of mixed-criticality workloads, field-updatable and continuously certified for functional safety. We recognize that this is a huge task, but we believe it to be possible by adhering to these development pillars:

  • Open methodology development within ELISA
  • Participation in the ISO 26262 update process
  • Open code development within the CentOS Automotive SIG

ELISA (Enabling Linux in Safety Applications) is a vital part of the Linux universe, and it provides a community where those of us who care deeply about functional safety can address the challenges of certification and create solutions to resolve those challenges. ELISA is the cornerstone open source community for functional safety, and automotive is a big focus as the industry is clamoring for transformation and advanced computing facilities within the car. Red Hat recognizes that value and has emerged as a leader in the community.

ISO 26262 was originally developed in a time when automotive computing was managed through Electronic Control Units (ECUs), black boxes that had a deterministic output when given specific inputs. The standard is notably lacking support for pre-existing complex systems, including Linux. Red Hat is a member of an ongoing effort to update ISO 26262, known as ISO-PAS 8926, which has been accepted as a new working item proposal to the ISO committee.

Finally, Red Hat continues its commitment to work transparently as well as upstream first by forming an automotive special interest group (SIG) within CentOS. This Automotive SIG meets twice a month to collaboratively discuss automotive issues, including safety, and to produce a reference automotive OS based on CentOS Stream. We hope you will join us on this journey.

You can view Gabriele Paoloni’s “Functional Safety certification methodology for Red Hat In Vehicle OS” video from Automotive Linux Summit below.

Discovering Linux Kernel Subsystems Used by OpenAPS

By Blog, Working Group

Written by Shuah Khan, Chair of the ELISA Project Technical Steering Committee, and Milan Lakhani, member of the ELISA Medical Devices Working Group

Key Points

  • Understanding system resources necessary to build and run a workload is important.
  • Linux tracing and strace can be used to discover the system resources in use by a workload.
  • Once we discover and understand the workload needs, we can focus on them to avoid regressions and evaluate safety.

OpenAPS is an open source Artificial Pancreas System designed to automatically adjust an insulin pump’s insulin delivery to keep Blood Glucose in a safe range at all times.

It is an open and transparent effort to make safe and effective basic Automatic Pancreas System technology widely available to anyone with compatible medical devices who is willing to build their own system.

Broadly speaking, the OpenAPS system can be thought of as performing three main functions: monitoring the environment and operational status of devices, with as much data relevant to therapy as possible collected; predicting what should happen to glucose levels next; and enacting changes by issuing commands, emails and even phone calls.

The ELISA Medical Devices WG team has set out to discover the Linux kernel subsystems used by OpenAPS. Understanding the kernel footprint necessary to run a workload helps us focus on the subsystems and modules that make up that footprint for safety. We set out to answer the following questions:

  • What happens when an OpenAPS workload runs on Linux?
  • What are the subsystems and modules that are in active use when OpenAPS is running?
  • What are the interactions between OpenAPS and the kernel when a user checks how much insulin is left in the insulin pump?

So how do we discover the Linux kernel subsystems in use? The Linux kernel has several features and tools that can help discover which modules and functions are being used by an application at run-time. Using these tools, we can gather the system state while the OpenAPS workload is running to determine which parts of the kernel are being used.

Let’s talk a bit about kernel states. The kernel system state can be viewed as a combination of static and dynamic features and modules. Let’s first define what static and dynamic system states are and then explore how we can visualize the static and dynamic system parts of the kernel.

Static System View comprises the system calls, features, static modules and dynamic modules enabled in the kernel configuration. Supported system calls and kernel features are architecture dependent, and system call numbering differs between architectures. We can get the supported system call information using the auditd package: ausyscall --dump prints out the system calls supported on a system. (You can install the auditd package by running "sudo apt-get install auditd" on Debian systems.) The Linux kernel script scripts/checksyscalls.sh can be used to check whether the current architecture is missing any system calls compared to i386, and scripts/get_feat.pl can be used to list the kernel feature support matrix for a system; for example, get_feat.pl list --arch=arm lists the kernel feature support matrix of the 'arm' architecture.
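For the analysis later in this article it is convenient to turn that dump into a lookup table. A small sketch (assuming the dump contains lines of the form "<number> <name>"; any header or other lines are skipped):

==================================================================
#!/usr/bin/env python3
"""Sketch: turn `ausyscall --dump` output into a number -> name lookup table.

Assumes the dump contains lines of the form "<number> <name>"; header or
other lines are skipped.
"""
import subprocess


def load_syscall_table() -> dict:
    """Run ausyscall --dump and parse it into {syscall number: name}."""
    dump = subprocess.run(
        ["ausyscall", "--dump"], capture_output=True, text=True, check=True
    ).stdout
    table = {}
    for line in dump.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0].isdigit():
            table[int(fields[0])] = fields[1]
    return table


if __name__ == "__main__":
    table = load_syscall_table()
    print(len(table), "supported system calls, e.g. 0 ->", table.get(0))
==================================================================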

Dynamic System View comprises system calls, ioctls invoked, and subsystems used during the runtime. A workload could load and unload modules and also change the dynamic system configuration to suit its needs.

OpenAPS Static View

Let’s first look at the OpenAPS sources to understand the workload from a static view. The OpenAPS workload is a collection of Python libraries and packages: python-dev, software-properties-common, python-software-properties, python-numpy, python-pip, nodejs-legacy and npm. You can have a look at the OpenAPS repositories at https://github.com/openaps, the two main ones being https://github.com/openaps/oref0 and https://github.com/openaps/openaps. The initial dependencies that the user installs can be seen at https://github.com/openaps/docs/blob/master/scripts/quick-packages.sh

An easier way to understand its runtime characteristics is to watch the system state while the workload is running. We determined that the following methodology and tools would work well for observing the system activity.

What is the methodology?

The first step is gathering the default system state, such as the dynamic and static modules loaded on the system. The lsmod command prints out the dynamically loaded modules, and statically configured modules can be found in the kernel configuration file. Understanding the default system is necessary to determine the changes, if any, made by the workload.
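As a small illustration of this step (the capture file names are ours, not part of OpenAPS), the default and workload module lists can be compared like this:

==================================================================
#!/usr/bin/env python3
"""Sketch: compare two lsmod captures (default system vs. workload running).

Usage: python3 lsmod_diff.py lsmod_default.out lsmod_openaps.out
"""
import sys


def parse_lsmod(path: str) -> dict:
    """Return module name -> use count, skipping the lsmod header line."""
    modules = {}
    with open(path) as capture:
        for line in capture.readlines()[1:]:
            fields = line.split()
            if len(fields) >= 3:
                modules[fields[0]] = int(fields[2])
    return modules


if __name__ == "__main__":
    default, workload = parse_lsmod(sys.argv[1]), parse_lsmod(sys.argv[2])
    for name in sorted(set(default) | set(workload)):
        before = default.get(name, "Not loaded")
        after = workload.get(name, "Not loaded")
        if before != after:
            print("%s: %s -> %s" % (name, before, after))
==================================================================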

The next step is discovering the Linux kernel footprint used by OpenAPS: enable event tracing before starting the workload, and gather the dynamically loaded modules while the workload is running. Once the workload is stopped, gather the event logs and kernel messages.

Once we have the necessary information, we can extract the system call numbers from the event trace log and map them to the supported system calls.

Putting our methodology to the test

Our initial plan was to use strace to trace the system calls and signals used by OpenAPS commands. strace runs the specified command until it exits, intercepting and recording the system calls made by the process and the signals it receives, which gives insight into the syscalls the OpenAPS commands use. However, we quickly realized that OpenAPS employs setup scripts to launch its workload. As a result, using strace was not an option.

We modified the OpenAPS oref0-setup.sh script in https://github.com/OpenAPS/oref0.git to enable event tracing before OpenAPS starts its workload (processes, shell scripts). This approach gives us overall information about OpenAPS. We will develop a higher-level view first and then dive into individual OpenAPS commands.

==================================================================
diff --git a/bin/oref0-setup.sh b/bin/oref0-setup.sh
index 261da95b..5ae666e2 100755
--- a/bin/oref0-setup.sh
+++ b/bin/oref0-setup.sh
@@ -1269,6 +1269,11 @@ if prompt_yn "" N; then
 fi # from 'read -p "Continue? y/[N] " -r' after interactive setup is complete
+# ELISA enable event tracing
+echo "ELISA: Enable event tracing on all events"
+echo 1 > /sys/kernel/debug/tracing/events/enable
+cat /sys/kernel/debug/tracing/events/enable
+
 # Start cron back up in case the user doesn't decide to reboot service cron start
==================================================================

We were able to gather traces with the above modification to oref0-setup.sh. As mentioned earlier, the ausyscall tool dumps the mapping between syscall names and their syscall table numbers; the mapping is architecture dependent for some system calls. The trace data includes NR followed by a number; these are the syscalls that ran while tracing was on. Using the tracing and system call information, we determined the system calls used by the OpenAPS workload. In addition, we used scripts/checksyscalls.sh to check the system call support status on the Raspberry Pi.

What did we do to gather traces and system state?

  • Start the OpenAPS workload with the modification to enable tracing
  • Let the workload run for 30 minutes
  • Discard last 5 minutes of trace from analysis to account for interference (rsync and plugging devices) with trace file extraction
  • Stop OpenAPS.
  • Extract trace file from the system – cat /sys/kernel/debug/tracing/trace > trace.out
  • Run lsmod after OpenAPS workload starts to gather module information
  • Run "ausyscall --dump > syscalls_dump.out"

Analyzing traces:

  • Map the NR (syscall) numbers from the trace to syscalls from the syscalls dump.
  • Categorize the system calls and map them to Linux subsystems (a sketch of this mapping step follows the list).
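Here is a minimal sketch of that mapping step, assuming the trace and syscall dump were collected as described above (trace.out and syscalls_dump.out):

==================================================================
#!/usr/bin/env python3
"""Sketch: map the "NR <number>" entries in a trace capture to syscall names.

Usage: python3 map_syscalls.py trace.out syscalls_dump.out
The dump must come from `ausyscall --dump` on the same system/architecture
that produced the trace.
"""
import re
import sys
from collections import Counter


def load_syscall_names(dump_path: str) -> dict:
    """Parse "<number> <name>" lines from the ausyscall dump."""
    names = {}
    for line in open(dump_path):
        fields = line.split()
        if len(fields) == 2 and fields[0].isdigit():
            names[int(fields[0])] = fields[1]
    return names


def count_syscalls(trace_path: str) -> Counter:
    """Count every "NR <number>" occurrence in the trace file."""
    counts = Counter()
    for line in open(trace_path):
        for match in re.finditer(r"\bNR (\d+)\b", line):
            counts[int(match.group(1))] += 1
    return counts


if __name__ == "__main__":
    trace_file, dump_file = sys.argv[1], sys.argv[2]
    names = load_syscall_names(dump_file)
    for nr, count in count_syscalls(trace_file).most_common():
        print("%s: %d" % (names.get(nr, "unknown syscall %d" % nr), count))
==================================================================

The named system calls can then be grouped and mapped to kernel subsystems, as summarised in the findings below.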

Findings and observations:

Kernel module usage:

Module Name  | Usage count (OpenAPS) | Usage count (default)
cmac         | Not loaded            | 1
ecdh_generic | 1 (bluetooth)         | 2 (bluetooth)
ecc          | 1 (ecdh_generic)      | 1 (ecdh_generic)
spidev       | 2                     | Not loaded
i2c_bcm2835  | 1                     | Not loaded
spi_bcm2835  | 0                     | Not loaded
i2c_dev      | 2                     | Not loaded
ipv6         | 26                    | 24

Subsystem usage:

Subsystem    | # of calls
kmem_*       | 200990
mm_*         | 182471
sched_*      | 195241
rcu_*        | 223011
irq_*        | 1781503
kmalloc      | 79801
cpu_idle     | 62623
rss_stat     | 22130
ipi_*        | 42514
sys_*        | 148034
vm_*         | 60489
task_*       | 72813
timer_       | 58572
hrtimer_*    | 152271
softirq_*    | 28860
workqueue_*  | 6129
writeback_*  | 3933
ext4_*       | 34461
jbd2_*       | 2062
block_*      | 3590
dwc_otg*     | 27701
arch_timer   | 13446

Updated System view:

Conclusion

This tracing activity was a good way of identifying which parts of the kernel are used by OpenAPS. It helped to generate the updated system view above, and it supports our goal of doing an STPA analysis of OpenAPS operating system activity. We plan to theoretically analyse how these different subsystems can interact unsafely, while using fault injection and mocked components to collect more traces.

As mentioned earlier, the approach we have used so far gives us high-level information about OpenAPS as a whole. Its limitation is that it doesn't tell us the system usage of individual OpenAPS commands. For example, we cannot clearly identify which system calls are invoked when a user queries the insulin pump status.

We are in the process of gathering fine-grained information about individual OpenAPS commands and important use cases, for example which subsystems are used when a user queries the insulin pump status. We are using the strace command to trace the OpenAPS commands and will share our findings in our next blog on this topic.

SPDX-License-Identifier: CC-BY-4.0

This document is released under the Creative Commons Attribution 4.0 International License, available at https://creativecommons.org/licenses/by/4.0/legalcode. Pursuant to Section 5 of the license, please note that the following disclaimers apply (capitalized terms have the meanings set forth in the license). To the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.

To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.

The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.

Analyzing Open Source Interactions in Linux Based Medical Devices

By Blog, Working Group

Written by Kate Stewart and Jason Smith, ELISA Project Medical Devices Working Group Members

The ELISA Project has several working groups with different focuses, including Automotive, Linux Features for Safety-Critical Systems, Medical Devices, Open Source Engineering Process, Safety Architecture, and Tool Investigation and Code Improvement. The Medical Devices Working Group consists of experts in Linux, medical, and functional safety applications who work together on activities and deliverables intended to help the safe development of medical devices that include Linux-based software. These activities include, but are not limited to, white papers describing best practices and safety requirements for medical devices using operating systems such as Linux, and conducting safety analyses of open source medical device projects that use Linux, such as OpenAPS.

The safety of medical devices is very important, and can be influenced to a great extent by any software that is contained in the medical device.  Failure of software in a medical device can unfortunately cause harm to persons or worse, as demonstrated in the incidents involving the Therac-25 several decades ago.  Therefore, if a medical device is using an operating system such as Linux, the performance and safety of Linux then comes under scrutiny.

In the context of medical device safety standards such as IEC 62304, when Linux is incorporated into a medical device, it is considered to be something called Software of Unknown Provenance (SOUP).  In this case, the medical device manufacturer incorporating Linux into their device did not develop Linux and therefore does not fully know what level of quality processes were used to develop Linux in the first place.  Standards like IEC 62304 allow the usage of SOUP such as Linux; however, IEC 62304 requires that risks associated with the failure of SOUP have been considered and addressed by the manufacturer.

The Medical Devices Working Group is in the process of developing a white paper summarizing requirements from IEC 62304 pertaining to SOUP to assist medical device manufacturers.  If you have experience in Linux, medical, or functional safety applications, the Medical Devices Working Group welcomes your input on this white paper.

One of the interesting challenges with medical devices is that often most of the source for the system is restricted, and not openly available.  This presents a challenge when trying to do analysis on how Linux is being used in such systems.   

The OpenAPS project is a hobbyist project to create a feedback system between an insulin pump and a glucose monitor, helping people with Type 1 diabetes build systems to manage their blood glucose levels. Because the project is open source, we can see the code and have a starting point for analysis.

The Medical Devices Working Group has been using System Theoretic Process Analysis (STPA) to analyze the system, which the OpenAPS community calls a “rig”, and the Linux system interactions within it. A rig consists of a Raspberry Pi (running Linux and the algorithms), a glucose monitor (commercial), an insulin pump (commercial), and some data logging device. How to set up a rig and use it is documented by the OpenAPS project, which has significantly aided our analysis.

At this point, we’ve applied the STPA analysis through a couple of levels and have iterated on the analysis a few times (the STPA process helped us identify some factors we’d not considered when diagramming the system initially). The team is now working on collecting traces of the system interacting with the Linux kernel. Tracing will let us continue to take the STPA analysis into the kernel subsystems.

We are interested in learning of other open source projects using Linux in the context of a medical device.   If you know of such a project, or are interested in working with our team of volunteers,  please feel free to reach out at medical-devices@lists.elisa.tech.

2022 Predictions for the ELISA Project

By Blog, News

Happy New Year! We hope that everyone in the ELISA ecosystem and community had a wonderful and safe holiday season. As we take a look at the blank slate of the new year, the Linux Foundation’s Shuah Khan, ELISA Project Chair of the Technical Steering Committee, shares a few of the predictions of what the project will achieve in 2022. She chats with Swapnil Bhartiya, The Fourth Industrial Revolution Creator and Host, in this video. Watch it or read the predictions below.

Swapnil Bhartiya:

Hi, this is your host, Swapnil Bhartiya. And welcome back to TFiR’s predictions for 2022. And today we have with us, once again, Shuah Khan, Linux fellow and chair of the ELISA Project Technical Steering Committee. Shuah, it’s great to have you on the show.

Shuah Khan:

Thank you for having me.

Swapnil Bhartiya:

Before we ask you to grab your crystal ball and share your predictions, I want to know a little bit about the project. Tell us what is the project all about?

Shuah Khan:

The ELISA project is all about enabling Linux in safety critical applications. What that means is that at the ELISA project, what we are doing is we are bringing safety experts and Linux experts together to collaborate on developing best practices and resources for people that are enabling Linux in their products, in their safety critical applications.

Swapnil Bhartiya:

Now, if I ask you, please grab the crystal ball and tell me what predictions you have for 2022.

Shuah Khan:

My first prediction is ELISA community will keep growing. We’ll continue to add new members and will continue to engage with the kernel and safety communities.

My second prediction is we will expand our critical spaces. Right now, we focus on several like medical and automotive. We will expand into other industries by adding members from aviation and industrial spaces.

Swapnil Bhartiya:

Thanks for sharing these two predictions. Now, if I ask you what is going to be the focus of the project in 2022?

Shuah Khan:

The focus for us in 2022 is continuing to work with Automotive Grade Linux (AGL) and AUTOSAR in the automotive space and continuing to engage the kernel and safety communities. Our second focus is to continue harmonizing best practices for our members. We want to be able to make best practices, processes, and resources available to our members that are enabling Linux in safety critical applications.

Swapnil Bhartiya:

Shuah, thank you so much for sharing these predictions and also, the focus for the project for 2022. And as usual, I would love to have you back on the show. Thank you.

Shuah Khan:

Thank you, Swapnil.

Recap of the ELISA November Workshop

By Blog, Workshop

Written by Philipp Ahmann, ELISA Project Ambassador and TSC member

The 8th ELISA Workshop, which took place on November 8-10, had 158 registrants who learned more about the various working groups and networked with ambassadors. Participation in the technical discussions was active and steady, which helped with planning for next year and proved that there is continuing interest and motivation in these topics. Additionally, new ideas were shared that can serve as sources of inspiration and critical thinking, e.g. talks about STPA, the z-model, and open source and the community problem, along with new approaches to safety and the importance of processes and testing.

Overall, ELISA workshops have more of a conference character, presenting the major achievements and results from the last quarter. Interested newcomers as well as community participants receive a very good status update on the direction in which ELISA is moving on its way to enabling Linux in safety applications.

On the other hand, this brilliant forum for syncing up and aligning in a virtual format misses a bit of the in-person discussion and brainstorming that made ELISA’s first workshops so enjoyable. With a community spread across the world, longer working sessions are rare and almost impossible, and prime time slots are used in order to reach as many people as possible.

As this year’s workshops come to a close, we hope that travelling and hybrid workshops become possible again. In that type of format, the core community can hold joint working sessions, while talks and presentations can still attract newcomers and other interested people who want an overview of what is going on in the community and what to target next. We hope to see you in person next year if possible.

In case you missed the November virtual workshop, let us take a look at a few selected sessions from the conference. 

Day 1 started with the newcomer and welcome session. It provided an overview of what Linux is and how safety comes into the picture to form the name ELISA. The statistics shown during the talk on the usage of Linux in industry are quite impressive and speak for themselves. Linux is everywhere. (Or almost everywhere.)

Additionally, new details about two new working groups were presented – the Linux Features for Safety Critical Systems Working Group and the Open Source Engineering Process Working Group. Originally, these two groups started as one, but as the group grew and evolved two streams were observed and needed. Both are in good hands: Paul Albertella from Codethink will continue to focus on the actual open source development process, and Elana Copperman from Mobileye/Intel will concentrate on Linux features for safety-critical systems.

How important such Linux features and development processes are was illustrated by the presentation about Linux in Safety Systems given by Christopher Temple from Arm. He illustrated quite well the different levels of complexity a system can have. He also reminded the audience that there can be a difference between simply following a safety integrity standard for the necessary processes and actually creating a safe product: you may be able to show standard conformance without having safety properly implemented, or create a safe system that is not certifiable or in conformance with a safety standard.

On the second day, the slides about Certification Using the New Approach to Safety presented by Paul Albertella showed that modern safety certification and assessment needs to move towards tool support and automation, along with proper tool classification and qualification, as a major step towards efficient certification and generation of evidence, irrespective of whether you are running proprietary or open source software.

In another session, Lukas Bulwahn from Elektrobit helped the participants understand the z-model, which illustrates the challenge of pre-existing software: the logical path is to write requirements, move over to integration and verification, derive the software architectural design, and proceed to software unit verification, finally creating a “z” by arriving at the already existing software unit design and implementation. This can help in approaching existing stacks that do not fit strict recommendations such as those demanded by e.g. ISO 26262. That said, it has to be kept in mind that while the “z” fills the gap and can create a complete “v” in the end, you may still run into issues during an ASPICE assessment.

Day two was concluded by Shaun Mooney from Codethink, who gave insights on how to use STPA for ISO 26262. STPA has been used within the Medical Devices working group for a long time and recently also found its way into the Automotive working group, where it serves as an alternative to a traditional HARA. A very important element of STPA, among others, is the identification of unsafe control actions (UCAs) to unveil potential harms/hazards and risks in a very structured and visual way.

Day three was a nice mixture of technical insights into the Linux kernel and new approaches and directions towards safety qualification of Linux applications. The need to reconcile Linux software development within the community with the strict requirements of safety integrity standards in order to arrive at a certifiable product was brought to the point by Lukas Bulwahn’s talk, which was intended to encourage critical thinking on safety integrity standards and the community problem. Let us hope that the work of ELISA can make a difference and that our effort will lead towards a solution to this problem, eventually even with updated or new safety integrity standards that include the state-of-the-art software development processes and quality measures so much needed for complex systems.

As with previous workshops, the last session wrapped up with goal setting for the next quarter, along with the request to not let the discussions stop here…

If you have read this far, you seem to be really interested in joining the ELISA community, so don’t forget to register for the mailing list of your choice.

A short TL;DR summary of the workshop:

  • Fewer registrants, but a very stable number of attendees throughout the workshop, on the level of the last workshop
  • Good “take home” messages that make you think about the challenges Linux and open source communities face in approaching safety integrity standards
  • New approaches in the fields of architectural analysis, tools, development process, testing and engineering show where Linux and open source need to go different ways, and where safety integrity standards need to evolve to keep up with the complexity of software written by a large-scale community.
  • The ELISA community would benefit from a hybrid approach enabling in-person working sessions, to let the workshop be a workshop and have less of a conference style.
  • The ELISA community is growing and reaching a point where harmonization is needed. Brainstorming times are over, and everybody is sharing concepts and proposals for how to achieve the goal of enabling Linux in safety applications.

Videos and recordings of the workshop presentations can be found here.

Lock in Best Pricing of the Year on Linux Foundation Training & Certification for Cyber Monday

By Blog, LF Training & Certification

Written by Dan Brown, Senior Manager, Content & Social Media, LF Training

As we approach a new year, this is the perfect time to consider what you want your career to look like in 2022. Job openings are at record highs, and this is especially true in the IT field, where the 2021 Open Source Jobs Report found that 92% of hiring managers are unable to find enough talent to meet their organizations’ needs. A primary mission of The Linux Foundation is helping close the talent gap so the industry has the talent necessary to carry out digital transformation activities and continue innovating, while also creating accessible pathways for anyone who wants to start an IT career to do so.

We are excited to once again offer our best pricing of the year on our entire catalog of training courses, certification exams, bundled programs, and bootcamps for Cyber Monday. From now through December 6, 2021, all these fantastic offerings covering hot topics like cloud computing, system administration, networking, blockchain, web development, embedded systems, and more are available at significantly reduced cost. As the home of some of the most important open source technologies like Kubernetes, Linux, Node.js, Hyperledger, and more, The Linux Foundation provides vendor-neutral training directly from the experts helping build these projects.

This year’s Cyber Monday offers include:

Bootcamps (Save 65%. Use Code: CYBER21BC)

PowerBundles (Save 65%. Use Code: CYBER21PB)

Pricing: $399 (regularly $1150)

  1. LFS258+CKA with LFD259+CKAD
  2. LFS200+LFCA with LFS201+LFCS 
  3. LFS258+CKA+LFS260+CKS
  4. LFS250+KCNA+LFS258+CKA
  5. LFW211+JSNAD+LFW212+JSNSD

Bundles (Save 65%. Use Code: CYBER21BUN)

Pricing: $199 (regularly $575), except LFCA+LFS200 and KCNA+LFS250, which are $105 (regularly $299)

  1. CKA+LFS258 – Kubernetes Admin
  2. LFCS+LFS201 – Linux SysAdmin
  3. CKS+LFS260 – Kubernetes Security
  4. CKA+CKS – Kubernetes Administration and Security
  5. LFD259+CKAD – Kubernetes Developer
  6. JSNAD+LFW211 – Node.js Application Developer
  7. JSNSD+LFW212 – Node.js Services Developer
  8. LFS272+CHFA – Hyperledger Fabric Admin
  9. LFD272+CHFD – Hyperledger Fabric Developer
  10. LFS232+CFCD – Cloud Foundry Developer

Bundles (Save 65%. Use Code: CYBER21NEW)

Pricing: $105 (regularly $299) for LFCA+LFS200 and KCNA+LFS250; $149 (regularly $425) for LFCA+KCNA

  1. LFCA+ LFS200 – Entry-Level IT
  2. KCNA+LFS250 – Entry-Level Cloud Native
  3. KCNA + LFCA – Entry-Level IT and Cloud Native

Certifications (Save 50%. Use Code: CYBER21CC)

Pricing: $187.50 (regularly $375) for most certifications; $125 (regularly $250) for LFCA and KCNA

View the certification catalog

You can check out the full details of everything that is on offer on our Cyber Monday Landing Page. Begin your journey to a long-term, successful career in IT today!