
Xen Project: How we do functional safety


In May, the ELISA Project hosted its 7th Workshop with 239 participants from 37 different countries. For a complete recap of the workshop, click here. Today, we’ll take a look at one of the sessions about the Xen Project, led by Artem Mygaiev, Director of Technology Solutions at EPAM Systems, and Stefano Stabellini, Principal Engineer at Xilinx.

Tailored versions of the Xen Hypervisor have been used in mission-critical systems for years, but this was never the case for Xen’s mainline. Since 2019, a Special Interest Group in the Xen Project has been working to identify and eliminate obstacles on the way to safety-certifying Xen. In this video, Artem and Stefano talk about their approach, their progress so far and their collaboration with other groups within the Linux Foundation.

Click here to learn more about the ELISA Project, here for the Working Groups and here to join our mailing list.

The Safety Architecture Working Group: Achievements & Plans


The ELISA Project has several working groups, each dedicated to a focus area or use case. In today’s blog, we’ll take a look at the Safety Architecture Working Group, which aims to determine the critical Linux subsystems and components that support safety functions, define the associated safety requirements and scalable architectural assumptions, and deliver the corresponding safety analyses for their individual qualification and their integration into a safety-critical system.

Gabriele Paoloni, Governing Board Chair for the ELISA Project, leads the Safety Architecture Working Group and recently gave an update about their mission, achievements and roadmap at the last ELISA Project Workshop. You can watch the presentation below.

ELISA Project Workshop May 2021: Safety Architecture Working Group Update

If you have questions or would like to join the Safety Architecture Working Group, they meet weekly on Tuesdays from 8-9 am ET (2-3 pm CET). Subscribe to the mailing list here: https://lists.elisa.tech/g/safety-architecture.

We invite you to get your hands dirty with the Automotive Working Group!


Written by Philipp Ahmann, ELISA Project Ambassador and Manager at ADIT

Where it all started – The automotive WG 

The ELISA Project was launched two years ago by the Linux Foundation. We had our first workshop in person at the BMW training center (Munich, Germany), and the majority of participants with an automotive focus were screaming, “Enable Linux in safety applications within the car!” But what happened then?

Since then, the following workshops, as well as our weekly meetings, have had a strong focus on automotive use cases. There were a lot of participants and a lot of interest, but not a lot of volunteers to help with tasks. We kept receiving requests from Toyota, Suzuki, BMW and Automotive Grade Linux (AGL)… In response, the Automotive Working Group was established a little more than a year after the launch of the ELISA Project.

From the beginning, while looking for datasheets, reference designs, documentation, and technical concepts, the words “NDA” and “IP” were always on our minds. As a result, we approached the work cautiously as a group:

  • Concentrated on what ISO 26262 showcases about functional safety;
  • Focused our work on a simulation that is open to everybody;
  • Stopped saying “could” and “should” and started using practical examples; and
  • Paused lengthy discussions about problems that are not Linux-specific.

Gaining momentum – The telltale use case

Following these principles, the Automotive Working Group started making progress. We got a good mixture of safety expertise, Linux know-how and automotive backgrounds. We also frequently talk about new things with the curiosity and questioning mindset of a child, which has helped us create a healthy learning environment that is engaging and productive.

Based on the use case introduced by Suzuki and AGL, we decided to concentrate on the enablement of telltales (often also written as “tell-tales”) based on a Linux instrument cluster. Thanks to AGL, a demo and some high-level ideas were already available.

As we continued our momentum as a group, we recognized that we were spreading our key learnings across different formats – a bit of source code in a Git repository, diagrams in PlantUML, PowerPoint, or other tools. Documentation was spread over presentations and Google Docs, so it was hard to create materials and engage interested participants outside the working group. We were determined to keep our momentum and began leveraging tools that would enable others to reproduce and understand our work.

Public means public – The tools

Functional safety projects typically have a very limited set of tools used in the development flow, each of which has run through a tool qualification. This is expensive because of the license fees for proprietary tools. Putting everything in plain text gives good version control and a good baseline, which is key. But monolithic documents make it hard to maintain relationships and traceability – you may even find yourself lost in long text passages.

To make documentation reviews easier and put them under proper version control, we moved from initial sketches in Google Docs to documentation in GitHub. We initially kept requirements in GitHub as well, but we saw that they were hard to maintain, relate to each other and trace. So we transitioned to maintaining them in Freeplane with a plugin developed by Jochen Kall, the Automotive WG lead. This plugin includes, for example, an export script that renders the requirements in Markdown; a ReqIF exporter is also under preparation.
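
Purely to illustrate the export idea, here is a toy Python sketch; the real plugin operates on Freeplane mind-map nodes, and the data structure and sample requirement text below are invented for the example:

# Toy illustration of rendering a requirement tree as Markdown headings
# and text. The actual Freeplane plugin works on mind-map nodes; the field
# names and the sample requirement here are invented.
def render_markdown(requirement, depth=1):
    lines = ["#" * depth + " " + requirement["title"], "", requirement["text"], ""]
    for child in requirement.get("children", []):
        lines.extend(render_markdown(child, depth + 1))
    return lines

sample = {
    "title": "REQ-1: Telltale rendering",
    "text": "The telltale shall be rendered on the instrument cluster.",
    "children": [
        {"title": "REQ-1.1: Failure detection",
         "text": "A watchdog shall detect rendering failures.",
         "children": []},
    ],
}
print("\n".join(render_markdown(sample)))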

Similar to the text, the working group also converted its architectural diagrams. We worked to move the initial sketches from slide decks and presentations into a storable format. In this case, PlantUML was efficient and easy for us to use.

After this, we recognized that the use case designs ran into the same issue – no relationships between the elements of individual PlantUML diagrams – so it was time to change tools again. The OSS tool we use now is Papyrus, based on Eclipse. The files are stored in XML format and can therefore also be put under proper version control.

In the end, all of this hard work has led us to a steady set of tools:

  • GitHub for all source code and documentation;
  • Freeplane to maintain requirements (storable in version control and exportable to text, which is also stored in version control); and
  • Papyrus for Eclipse.

We are aware that the tools we currently use will not survive a safety assessment out of the box, but that is not our intention. The generated artifacts should be shareable so that they can be re-used by others in their established infrastructure. We are also aiming to enable others to build safe Linux-based systems and follow the development process of the safety integrity standards accordingly. However, in the end, our telltale example will remain an example. A fully qualified product is out of scope for the ELISA Project.

What’s next

So, here we are. Out of creativity and a storming team spirit, we have settled down and started to standardize the tools we use. Version control, review and traceability have become major elements of our work.

The practical demo provided by AGL was enhanced to serve the fundamental demands of the telltale use case, with a watchdog and a safety app as a codebase. The build can be reproduced with the help of a Docker image, and the binary can run on QEMU.

We still have a long way to go but our goals for the next quarter are:

  • The source code analysis and interaction with the ELISA Architecture Working Group will be enhanced;
  • The use case will be benchmarked against the AUTOSAR Adaptive safety requirements and their demands on the operating system;
  • Documentation needs to reach a draft state good enough to share with an external audience and to withstand critical questions; and
  • The existing kernel config will be cleaned up towards a slim config (by throwing out unused things), and our changes will be fed back to AGL.

To learn more about the Automotive Working Group, please subscribe to the mailing list, join our weekly calls and become an active member. Never underestimate what you can achieve with a group. We are happy to welcome additional contributors – get ready to get your hands dirty and have fun with a passionate group of people. 

A Recap of the 7th ELISA Workshop

By Blog, Workshop

Written by Gabriele Paoloni, Chair of the ELISA Project Governing Board and Lead Software Architect at Intel, and Paul Albertella, Contributor and Member of the ELISA Project and Consultant at Codethink

The latest ELISA workshop, hosted virtually on May 18-20, was a great reflection of how fast the community has grown and evolved over the last few months. Participation was almost double that of the previous workshop in February, with 239 participants from 37 different countries. Additionally, we’ve seen more collaboration with other groups such as AUTOSAR and AGL. The existing working groups have been exploring an extensive range of topics and initiatives, and there are plans to add new working groups to help take some of these forward.

A number of presentations focused on the challenges of qualifying or certifying Linux for functional safety, the limitations of the established routes presented in standards such as IEC 62304, IEC 61508 and ISO 26262, and innovative approaches to addressing these. One proposed strategy included a more comprehensive look at the Linux architectural design, using test and tracing techniques to verify system behaviour against a derived model. Another proposal focused on top-down hazard analysis to define safety requirements, statistical analysis of tests on historical kernel versions to show where Linux satisfies these requirements, and fault injection techniques to validate the safety mechanisms of the wider system.

There were also talks on how some of these ideas are being applied in the working groups, focusing on collaborative efforts in the Automotive, Safety Architecture and Development Process groups based on the Telltale use case. Other interesting sessions focused on technologies with possible applications for functional safety, including an introduction to real-time configurations for Linux and the use of authorisation hooking in security modules.

Discussions during these sessions made it clear that the community has a lot of new ideas to explore over the coming months and a lot of new participants eager to get involved. Work continues on the ELISA technical strategy, which will provide important direction for this work, but there’s also a need to consolidate the innovative ideas and methodologies for qualifying Linux into the current working group activities and to evaluate the need for new working groups. As ELISA becomes more mature, we need to define and refine the publication strategy for the outputs of the working groups. There are also plans to develop ‘onboarding’ material for the project to help new participants start contributing more quickly.

You can view some of the presentation materials here by clicking on each session. Some of the videos will also be accessible in the next few weeks.

Tuesday, May 18

Shuah Khan, the Chair of the ELISA Project Technical Steering Committee, kicked off the workshop with an overview of the project, the working group activities and the recent whitepaper summarizing their interactions and deliverables.

As the different working group updates were presented, it became clear that there is a great deal of collaboration between each group:

  • The Automotive WG refined the safety concept following feedback from the Safety Architecture WG and is working with the Tools Subgroup to optimize the active Kernel image footprint;
  • The Safety Architecture WG is working with the Development Process WG on safety analyses and on a new hybrid qualification approach;
  • The Medical Device WG is coming to a point where they need to hand over the safety requirements to the Safety Architecture WG for deeper Kernel analyses; and
  • The Tools WG released a static code analysis framework that can be used along the qualification activities of the different WGs.

Additionally, Artem Mygaiev and Stefano Stabellini gave an introduction and update about the Functional Safety Special Interest Group (SIG) in the Xen project. This session was engaging as we shared feedback and ideas about functional safety from different perspectives. 

Wednesday, May 19

Philipp Ahmann introduced the engagement between the Automotive WG and the AUTOSAR Adaptive consortium. We have many common interests and goals that should easily help us build a solid foundation for future collaboration.

Then Roberto Paccapeli and Vito Magnanimo presented the current limitations of ISO 26262 in qualifying a complex pre-existing SW component, like Linux, and the need to overcome such limitations.

Gabriele Paoloni and Daniel Bristot de Oliveira presented an innovative approach (the Hybrid Approach) that could be used as a scalable way to qualify Linux for use in automotive safety-critical applications; hence a proposal to overcome the above-mentioned limitations.

Elana Copperman and Gabriele Paoloni presented the out-of-context analysis of the Linux Watchdog subsystem as a practical example of applying the Hybrid Approach, and how this is beneficial in the context of the Automotive WG’s Telltale use case.

Finally, Thomas Gleixner introduced the Linux Real-Time project, the challenges they faced to meet timing constraints and all the different solutions they put in place to overcome them. It was a really nice tour of the project with lots of possible intersections with functional safety systems.

Thursday, May 20

On the last day, Shuah Khan and Elana Copperman presented the work done to analyze Kernel configuration parameters (Kconfig) and their impact on Functional Safety, starting from some similar work done for Security (CWE).

Chris Temple then presented an overview of the possible SW qualification routes in Functional Safety, ranging from ISO 26262 to IEC 61508, reinforcing the current limitations of safety standards with respect to the qualification of complex SW components, as already discussed on the previous day.

Following this, Paul Sherwood and Paul Albertella presented yet another approach to overcoming such limitations: an in-context approach based on a mix of safety analysis, testing of historical kernel versions and fault injection. This approach sparked a lot of interest, and it was widely agreed that it should be considered and discussed further across the different ELISA WGs.

STPA diagram from New Approach presentation

The final day closed with some wrap-up sessions discussing future activities to advertise ELISA and encourage new members to join, ELISA goals for the next quarter and a few stats about the current workshop. 

It was wonderful to get together virtually as a community. With more than 200 participants, we hope that attendees were engaged in our work, and we welcome their thoughts and participation in any of our technical meetings and working groups. Click here to learn more about the ELISA Project, here for the Working Groups and here to join our mailing list.

Interview with Shuah Khan, Kernel Maintainer & Linux Fellow

Shuah Khan, Kernel Maintainer, Linux Fellow and Chair of the ELISA Project Technical Steering Committee

Jason Perlow, Director of Project Insights and Editorial Content at the Linux Foundation, had an opportunity to speak with Shuah Khan about her experiences as a woman in the technology industry. She discusses how mentorship can improve the overall diversity and makeup of open source projects, why software maintainers are important for the health of open source projects such as the Linux kernel, and how language inclusivity and codes of conduct can improve relationships and communication between software maintainers and individual contributors. This blog originally ran on the Linux Foundation website. For more content like this, click here.

JP: So, Shuah, I know you wear many different hats at the Linux Foundation. What do you call yourself around here these days?

SK: <laughs> Well, I primarily call myself a Kernel Maintainer & Linux Fellow. In addition to that, I focus on two areas that are important to the continued health and sustainability of the open source projects in the Linux ecosystem. The first one is bringing more women into the Kernel community, and additionally, I am leading the mentorship program efforts overall at the Linux Foundation. And in that role, in addition to the Linux Kernel Mentorship, we are looking at how the Linux Foundation mentorship program is working overall, how it is scaling. I make sure the LFX Mentorship platform scales and serves diverse mentees and mentors’ needs in this role. 

The LF mentorship program includes several projects in the Linux kernel, LFN, HyperLedger, Open MainFrame, OpenHPC, and other technologies. The Linux Foundation’s Mentorship Programs are designed to help developers (many of whom are first-time open source contributors) gain the necessary skills to experiment, learn, and contribute effectively to open source communities.

The mentorship program has been successful in its mission to train new developers and make these talented pools of prospective employees, trained by experts, available to employers. Several graduated mentees have found jobs. New developers have improved the quality and security of various open source projects, including the Linux kernel. Several Linux kernel bugs were fixed, a new subsystem mentor was added, and a new driver maintainer is now part of the Linux kernel community. My sincere thanks to all our mentors for volunteering to share their expertise.

JP: How long have you been working on the Kernel?

SK: Since 2010 or 2011, when I got involved in the Android Mainlining project. My first patch removed the Android pmem driver.

JP: Wow! Is there any particular subsystem that you specialize in?

SK: I am a self-described generalist. I maintain the kernel self-test subsystem, the USB over IP driver and its usbip tool, and the cpupower tool. I contributed to the media subsystem, working on the Media Controller Device Allocator API to resolve shared device resource management problems across device drivers from different subsystems.

JP: Hey, I’ve actually used the USB over IP driver when I worked at Microsoft on Azure. And also, when I’ve used AWS and Google Compute. 

SK: It’s a small niche driver used in cloud computing. Docker and other containers use that driver heavily. That’s how they provide remote access to USB devices on the server to export devices to be imported by other systems for use.

JP: I initially used it for IoT kinds of stuff in the embedded systems space. Were you the original lead developer on it, or was it one of those things you fell into because nobody else was maintaining it?

SK: Well, twofold. I was looking at USB over IP because I like that technology. It just so happened the driver was brought from the staging tree into the mainline kernel, and I volunteered at the time to maintain it. Over the last few years, we discovered some security issues with it, because it handles a lot of userspace data, so I had a lot of fun fixing all of those. <laugh>

JP: What drew you into the Linux operating system, and what drew you into the kernel development community in the first place?

SK: Well, I have been doing kernel development for a very long time. I worked on the LynxOS RTOS a while back, and then HP-UX when I was working at HP, after which I transitioned into doing open source development — the OpenHPI project, to support HP’s rack server hardware — and that allowed me to work much more closely with Linux on the back end. And at some point, I decided I wanted to work with the kernel and become part of the Linux kernel community. I started as an independent contributor.

JP: Maybe it just displays my own ignorance, but you are the first female, hardcore Linux kernel developer I have ever met. I mean, I had met female core OS developers before — such as when I was at Microsoft and IBM — but not for Linux. Why do you suppose we lack women and diversity in general when participating in open source and the technology industry overall?

SK: So I’ll answer this question from my perspective, from what I have seen and experienced, over the years. You are right; you probably don’t come across that many hardcore women Kernel developers. I’ve been working professionally in this industry since the early 1990s, and on every project I have been involved with, I am usually the only woman sitting at the table. Some of it, I think, is culture and society. There are some roles that we are told are acceptable to women — even me, when I was thinking about going into engineering as a profession. Some of it has to do with where we are guided, as a natural path. 

There’s a natural resistance to choosing certain professions that you have to overcome first within yourself and externally. This process is different for everybody based on their personality and their origin story. And once you go through the hurdle of getting your engineering degree and figuring out which industry you want to work in, there is a level of establishing credibility in those work environments you have to endure and persevere. Sometimes when I would walk into a room, I felt like people were looking at me and thinking, “why is she here?” You aren’t accepted right away, and you have to overcome that as well. You have to go in there and say, “I am here because I want to be here, and therefore, I belong here.” You have to have that mindset. Society sends you signals that “this profession is not for me” — and you have to be aware of that and resist it. I consider myself an engineer that happens to be a woman as opposed to a woman engineer.

JP: Are you from India, originally?

SK: Yes.

JP: It’s funny; my wife really likes this Netflix show about matchmaking in India. Are you familiar with it?

SK: <laughs> Yes, I enjoyed the series, and also A Suitable Girl, a documentary film that follows three women as they navigate decisions about their careers and family obligations.

JP: For many Americans, this is our first introduction to what home life is like for Indian people. But many of the women featured on this show are professionals, such as doctors, lawyers, and engineers. And they are very ambitious, but of course, the family tries to set them up in a marriage to find a husband for them that is compatible. As a result, you get to learn about the traditional values and roles they still want women to play there — while at the same time, many women are coming out of higher learning institutions in that country that are seeking technical careers. 

SK: India is a very fascinatingly complex place. But generally speaking, in a global sense, having an environment at home where your parents tell you that you may choose any profession you want to choose is very encouraging. I was extremely fortunate to have parents like that. They never said to me that there was a role or a mold that I needed to fit into. They have always told me, “do what you want to do.” Which is different; I don’t find that even here, in the US. Having that support system, beginning in the home to tell you, “you are open to whatever profession you want to choose,” is essential. That’s where a lot of the change has to come from. 

JP: Women in technical and STEM professions are becoming much more prominent in other countries, such as China, Japan, and Korea. For some reason, in the US, I tend to see more women enter the medical profession than hard technology — and it might be a level of effort and perceived reward thing. You can spend eight years becoming a medical doctor or eight years becoming a scientist or an engineer, and it can be equally difficult, but the compensation at the end may not be the same. It’s expensive to get an education, and it takes a long time and hard work, regardless of the professional discipline.

SK: I have also heard that women also like to enter professions where they can make a difference in the world — a human touch, if you will. So that may translate to them choosing careers where they can make a larger impact on people — and they may view careers in technology as not having those same attributes. Maybe when we think about attracting women to technology fields, we might have to promote technology aspects that make a difference. That may be changing now, such as the LF Public Health (LFPH) project we kicked off last year. And with LF AI & Data Foundation, we are also making a difference in people’s lives, such as detecting earthquakes or analyzing climate change. If we were to promote projects such as these, we might draw more women in.

JP: So clearly, one of the areas of technology where you can make a difference is in open source, as the LF is hosting some very high-concept and existential types of projects such as LF Energy, for example — I had no idea what was involved in it and what its goals were until I spoke to Shuli Goodman in-depth about it. With the mentorship program, I assume we need this to attract fresh talent — because as folks like us get older and retire, and they exit the field, we need new people to replace them. So I assume mentorship, for the Linux Foundation, is an investment in our own technologies, correct?

SK: Correct. Bringing new developers into the fold is the primary purpose, of course — and at the same time, I view the LF taking on mentorship as providing a neutral, level playing field across the industry for all open source projects. Secondly, we offer a self-service platform, LFX Mentorship, where anyone can come in and start their project. So when the COVID-19 pandemic began, we expanded this program to help displaced people — students, et cetera — and less visible projects that don’t typically get as much funding or attention as a Kubernetes or the Linux kernel does. Among the COVID mentorship program projects we are funding, I am particularly proud of supporting a climate change-related project, Using Machine Learning to Predict Deforestation.

The self-service approach allows us to fund and add new developers to projects where they are needed. The LF mentorships are remote work opportunities that are accessible to developers around the globe. We see people sign up for mentorship projects from places we haven’t seen before, such as Africa, and so on, thus creating a level playing field. 

The other thing that we are trying to increase focus on is how do you get maintainers? Getting new developers is a starting point, but how do we get them to continue working on the projects they are mentored on? As you said, someday, you and I and others working on these things are going to retire, maybe five or ten years from now. This is a harder problem to solve than training and adding new developers to the project itself.

JP: And that is core to our software supply chain security mission. It’s one thing to have this new, flashy project, and then all these developers say, “oh wow, this is cool, I want to join that,” but then, you have to have a certain number of people maintaining it for it to have long-term viability. As we learned in our FOSS study with Harvard, there are components in the Linux operating system that are like this. Perhaps even modules within the kernel itself, I assume that maybe you might have only one or two people actively maintaining it for many years. And what happens if that person dies or can no longer work? What happens to that code? And if someone isn’t familiar with that code, it might become abandoned. That’s a serious problem in open source right now, isn’t it?

SK: Right. We have seen that with SSH and other security-critical areas. What if you don’t have the bandwidth to fix it? Or the money to fix it? I ended up volunteering to maintain a tool for a similar reason when the maintainer could no longer contribute regularly. It is true; we have many drivers where maintainer bandwidth is an issue in the kernel. So the question is, how do we grow that talent pool?

JP: Do we need a job board or something? We need X number of maintainers. So should we say, “Hey, we know you want to join the kernel project as a contributor, and we have other people working on this thing, but we really need your help working on something else, and if you do a good job, we know tons of companies willing to hire developers just like you?” 

SK: With the kernel, we are talking about organic growth; it is just like any other open source project. It’s not a traditional hire and talent placement scenario. Organically, they have to have credibility, and they have to acquire it through experience and relationships with people on those projects. We just talked about it at the previous Linux Plumbers Conference; we do have areas where we really need maintainers, and the MAINTAINERS file does show areas where help is needed.

To answer your question, it’s not one of those things where we can seek people to fill that role, like LinkedIn or one of the other job sites. It has to be an organic fulfillment of that role, so the mentorship program is essential in creating those relationships. It is the double-edged sword of open source; it is both the strength and weakness. People need to have an interest in becoming a maintainer and also a commitment to being one, long term.

JP: So, what do you see as the future of your mentorship and diversity efforts at the Linux Foundation? What are you particularly excited about that is forthcoming that you are working on?

SK: I view the Linux Foundation mentoring as a three-pronged approach to provide unstructured webinars, training courses, and structured mentoring programs. All of these efforts combine to advance a diverse, healthy, and vibrant open source community. So over the past several months, we have been morphing our speed mentorship style format into an expanded webinar format — the LF Live Mentorship series. This will have the function of growing our next level of expertise. As a complement to our traditional mentorship programs, these are webinars and courses that are an hour and a half long that we hold a few times a month that tackle specific technical areas in software development. So it might cover how to write great commit logs, for example, for your patches to be accepted, or how to find bugs in C code. Commit logs are one of those things that are important to code maintenance, so promoting good documentation is a beneficial thing. Webinars provide a way for experts short on time to share their knowledge with a few hours of time commitment and offer a self-paced learning opportunity to new developers.

Additionally, I have started the Linux Kernel Mentorship forum for developers and their mentors to connect and interact with others participating in the Linux Kernel Mentorship program and graduated mentees to mentor new developers. We kicked off Linux Kernel mentorship Spring 2021 and are planning for Summer and Fall.

A big challenge is that we are short on mentors to be able to scale the structured program. Solving the problem requires help from LF member companies and others to encourage their employees to mentor; “it takes a village,” as they say.

JP: So this webinar series and the expanded mentorship program will help developers cultivate both hard and soft skills, then.

SK: Correct. The thing about doing webinars, if we are talking about this from a diversity perspective, is that people might not have time for a full-length mentorship, typically a three-month or six-month commitment. This might help them expand their resources for self-study. When we ask developers for feedback about what else they need to learn new skill sets, we hear that they don’t have the resources or the time for the self-study needed to become open source developers and software maintainers. This webinar series covers general open source software topics such as the Linux kernel and legal issues. It could also cover topics specific to other LF projects such as CNCF, Hyperledger, LF Networking, etc.

JP: Anything else we should know about the mentorship program in 2021?

SK: In my view,  attracting diversity and new people is two-fold. One of the things we are working on is inclusive language. Now, we’re not talking about curbing harsh words, although that is a component of what we are looking at. The English you and I use in North America isn’t the same English used elsewhere. As an example, when we use North American-centric terms in our email communications, such as when a maintainer is communicating on a list with people from South Korea, something like “where the rubber meets the road” may not make sense to them at all. So we have to be aware of that.

JP: I know that you are serving on the Linux kernel Code of Conduct Committee and actively developing the handbook. When I first joined the Linux Foundation, I learned what the Community Managers do and our governance model. I didn’t realize that we even needed to have codes of conduct for open source projects. I have been covering open source for 25 years, but I come out of the corporate world, such as IBM and Microsoft. Codes of Conduct are typically things that the Human Resources officer shows you during your initial onboarding, as part of reviewing your employee manual. You are expected to follow those rules as a condition of employment. 

So why do we need Codes of Conduct in an open source project? Is it because these are people who are coming from all sorts of different backgrounds, companies, and ways of life, and may not have interacted in this form of organized and distributed project before? Or is it about personalities, people interacting with each other over long distance, and email, which creates situations that may arise due to that separation?

SK: Yes, I come out of the corporate world as well, and of course, we had to practice those codes of conduct in that setting. But conduct situations arise that you have to deal with in the corporate world. There are always interpersonal scenarios that can be difficult or challenging to work with — the corporate world isn’t better than the open source world in that respect. It is just that all of that happens behind a closed setting.

But there is no accountability in the open source world because everyone participates out of their own free will. So on a small, traditional closed project, inside the corporate world, where you might have 20 people involved, you might get one or two people that could be difficult to work with. The same thing happens and is multiplied many times in the open source community, where you have hundreds of thousands of developers working across many different open source projects. 

The biggest problem with these types of projects when you encounter situations such as this is dealing with participation in public forums. In the corporate world, this can be addressed in private. But on a public mailing list, if you are being put down or talked down to, it can be extremely humiliating. 

These interactions are not always extreme cases; they could be simple as a maintainer or a lead developer providing negative feedback — so how do you give it? It has to be done constructively. And that is true for all of us.

JP: Anything else?

SK: In addition to bringing our learnings and applying this to the kernel project, I am also doing this on the ELISA project, where I chair the Technical Steering Committee, where I am bridging communication between experts from the kernel and the safety communities. To make sure we can use the kernel the best ways in safety-critical applications, in the automotive and medical industry, and so on. Many lessons can be learned in terms of connecting the dots, defining clearly what is essential to make Linux run effectively in these environments, in terms of dependability. How can we think more proactively instead of being engaged in fire-fighting in terms of security or kernel bugs? As a result of this, I am also working on any necessary kernel changes needed to support these safety-critical usage scenarios.

JP: Before we go, what are you passionate about besides all this software stuff? If you have any free time left, what else do you enjoy doing?

SK: I read a lot. COVID quarantine has given me plenty of opportunities to read. I like to go hiking, snowshoeing, and other outdoor activities. Living in Colorado gives me ample opportunities to be in nature. I also like backpacking — while I wasn’t able to do it last year because of COVID — I like to take backpacking trips with my son. I also love to go to conferences and travel, so I am looking forward to doing that again as soon as we are able.

Talking about backpacking reminded me of the two-day, 22-mile backpacking trip during the summer of 2019 with my son. You can see me in the picture above at the end of the road, carrying a bearbox, sleeping bag, and hammock. It was worth injuring my foot and hurting in places I didn’t even know I had.

JP: Awesome. I enjoyed talking to you today. So happy I finally got to meet you virtually.

The ELISA Workshop: Functional Safety at Xen Project


Written by George Dunlap, Xen Project Advisory Board Chair

Tailored versions of the Xen Hypervisor have been used in mission-critical systems for years, but this was never the case for Xen’s mainline. In 2019, a Xen Project Functional Safety Special Interest Group was formed to identify and eliminate obstacles to safety-certifying Xen.

Safety certification is one of the essential requirements for software used in highly regulated industries. Besides technical and compliance issues (such as ISO 26262 vs IEC 61508), transitioning an existing project to become more easily safety-certifiable requires significant changes to the development practices of an open source project.

At the upcoming ELISA Workshop on May 18-20, Artem Mygaiev, Director, Technology Solutions, EPAM Systems, and Stefano Stabellini, Principal Engineer, Xilinx, will lay out some challenges of making safety certification achievable in open source. The talk, scheduled for May 18 at 7:30 am PDT, will primarily focus on the necessary process and tooling changes, and on the community challenges that can prevent progress. Additionally, the talk will offer an in-depth review of how the Xen Project is approaching this challenging goal and try to derive lessons for other projects and contributors.

This talk will provide real-life perspectives from open source community members on achieving safety certification. Audiences will gain a clear understanding of the obstacles the group faced and how they are overcoming them, as well as how to set realistic expectations when embarking on this task. Add this talk to your schedule here: https://sched.co/j3SO.

The ELISA Workshop is free and open to the public. Check out the schedule and register today: https://events.linuxfoundation.org/elisa-workshop/.

ELISA Project Mentorships – Apply Today!


The ELISA Project is sponsoring two part-time summer mentorships, which run from June 1 to November. ELISA Project Ambassador Lukas Bulwahn will be mentoring both projects.

Linux Kernel: Checkpatch Documentation

Previous mentees have been evaluating, revisiting and improving the checkpatch script and its various rules. Towards the end of their mentorships, they also started to document the rules and their rationales (with references to previous discussions and documentation), but not all rules are fully documented yet. The task in this mentorship is to continue evaluating the rules, identifying the known typical false-positive cases, writing the documentation of the rules and explaining the rules’ rationales and known false positives. Apply here: https://mentorship.lfx.linuxfoundation.org/project/a6565ff5-b07c-4c04-98db-3a470917d497

Linux Kernel: Mining for Maintainers

Jonathan Corbet identified in his article MAINTAINERS truth and fiction [https://lwn.net/Articles/842415/] that about 2,800 files in the kernel repository have no dedicated maintainer in the MAINTAINERS file (see https://lwn.net/Articles/842606/ for the full list of files). Jonathan Corbet set out the call for action: “the vast majority are header files under include/, most of which probably do have maintainers and should be added to the appropriate entries.” The task in this mentorship is to follow this call for action and add the header files under include/ to the appropriate entries. Apply here: https://mentorship.lfx.linuxfoundation.org/project/8f69e012-08d0-4e2b-baa7-9143b5f98823

The deadline to submit applications is Friday, May 14. Submit your application today!

Safety-related Software, Linux, and Certification


Contributed by Jason R. Smith, Principal Engineer, UL LLC and ELISA Ambassador

In my nearly 16 years as a certification engineer focusing on safety-related software and functional safety, on many occasions I have found myself working with a client with safety-related software who is not only going through the certification process for the first time, but is also incorporating third-party software such as Linux into their application.  Even before I have to answer questions like “How long will this take?”, I’ll often have to answer an even more fundamental question: “Is it even possible to certify this application?”

 

Jason Smith, certification engineer, expressing doubt

My typical response usually starts with the dubious phrase, “Yes, but…”

After first explaining that functional safety standards require the software to be developed in accordance with a software development life cycle like the V-Model, my attention then focuses on the third-party software: it wasn’t developed by the client, it wasn’t tested by the client, it doesn’t have its own certification, and the client doesn’t know much about its inner workings.  It is what some of us certification engineers call SOUP, i.e. Software of Unknown Provenance.

Soup

Also SOUP

So, what is required of SOUP? Much of it depends on the application. Standards intended to be applied to high-complexity systems, such as IEC 61508 and ISO 26262, require either proof of certification or the submittal of evidence that demonstrates a more or less equivalent level of confidence to certification. However, some standards used in the appliance or medical sectors, such as UL 1998 or IEC 62304 (generally for systems of lower complexity), allow a different approach that effectively treats SOUP “as is”.

The SOUP Approach

The SOUP approach employed in standards such as UL 1998 or IEC 62304 focuses on a few topics:

  • Information about the SOUP such as a detailed description of its purpose, its function, its available interfaces, and its version are available and understood by the client;
  • The client has conducted a fault analysis that treats the SOUP as a component of the system, has analyzed how failures of the SOUP could impact the safety of the system, and has measures in place to address those failures;
  • The client has sought out information pertaining to any known issues or bugs related to the SOUP, has analyzed that information, and has shown that those known issues or bugs do not impact the safety of the system; and
  • The client has conducted and can show evidence of appropriate verification and testing activities, proving that the SOUP and any measures that have been implemented to address failures of the SOUP work correctly in the context of the application.

The ELISA Project is currently working on a white paper that explains this approach in further detail and describes what resources are available to further facilitate this approach for applications that employ Linux. If you are interested in reading more or contributing to this white paper, it is located on GitHub here.

 

Making CodeChecker Ready for Kernel Developers


Contributed by Jay Rajput, ELISA 2020-2021 Mentee

The following is a brief report of my project carried out as a part of the ELISA/LFX mentorship program. 

The primary goal of the mentorship is to extend the CodeChecker report converter to support a variety of tools, such as:

  • Coccinelle
  • Smatch
  • Sphinx
  • Kernel-Doc
  • Sparse

Motivation

Many developers contribute to the Linux kernel, and these kernel developers are not exempt from typical programming errors in their patches, such as null pointer dereferencing or array buffer overflows. Thus, the kernel community has developed code analyzers, such as Sparse, Coccinelle and Smatch, for reporting such potential error patterns.

The tools mentioned above are some of the well-known tools for analyzing the code of the Linux kernel. However, these tools only print their warnings and errors on the command-line interface. Below is some example output:

arch/x86/kernel/signal.c:338:9: warning: incorrect type in argument 1 (different address spaces)
arch/x86/kernel/signal.c:338:9:    expected void const volatile [noderef] __user *ptr
arch/x86/kernel/signal.c:338:9:    got unsigned long long [usertype] *
arch/x86/kernel/signal.c:338:9: warning: cast removes address space ‘__user’ of expression
arch/x86/kernel/signal.c:338:9: warning: cast removes address space ‘__user’ of expression
arch/x86/kernel/pci-dma.c:20:26: warning: symbol ‘dma_ops’ was not declared. Should it be static?
arch/x86/kernel/pci-dma.c:27:5: warning: symbol ‘panic_on_overflow’ was not declared. Should it be static?
arch/x86/kernel/pci-dma.c:31:5: warning: symbol ‘iommu_merge’ was not declared. Should it be static?

It is tedious for developers to look up and keep track of all the errors through a text file or the terminal, and manually searching for the line of an error is also a tedious job. Furthermore, sending the findings or comments to another developer is not feasible. Thus, it becomes very difficult for developers to keep track of all the errors. CodeChecker offers a nice and convenient web interface for viewing all the errors and even giving them tags such as confirmed, false positive, etc.

CodeChecker’s report converter tool makes all the reports produced by the code-analyzing tools viewable in a nice and simple web interface. It also provides the functionality to comment on bugs and mark them as confirmed, false positive, etc. Thus, I wanted to extend the report converter to support the tools mentioned above and also implement the functionality for importing and exporting changes to and from CodeChecker.

Report Converters for Analyzer Tools

My first and primary task was to create report converters for the tools mentioned above. These report converters parse the output of the tools, using regular expressions, into the format:

File Path | Line Number | Column | Error Message | Checker Name

Once the report converter has parsed the output file of a tool, it stores the results in plist files, which can be opened and viewed in the browser. All the plist files are stored in a folder specified by the user when running the report converter.
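
To illustrate the parsing step, here is a minimal, hypothetical Python sketch; the regular expression and field names are my own simplification for Sparse-style messages, not CodeChecker’s actual implementation:

import re

# Hypothetical simplification, not CodeChecker's actual code: one regular
# expression splits a Sparse-style message into the fields listed above.
MESSAGE_PATTERN = re.compile(
    r"^(?P<path>[^:]+):(?P<line>\d+):(?P<col>\d+): "
    r"(?P<severity>warning|error): (?P<message>.+)$")

def parse_line(line, checker_name="sparse"):
    """Return the parsed fields of one report line, or None for other lines."""
    match = MESSAGE_PATTERN.match(line.strip())
    if match is None:
        return None
    return {
        "file_path": match.group("path"),
        "line": int(match.group("line")),
        "column": int(match.group("col")),
        "message": match.group("message"),
        "checker": checker_name,
    }

print(parse_line(
    "arch/x86/kernel/pci-dma.c:20:26: warning: "
    "symbol 'dma_ops' was not declared. Should it be static?"))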

CodeChecker also provides a feature to store these plist files on the CodeChecker server, where we can then perform multiple operations on the reports, like marking their status (confirmed, false positive, etc.) and adding comments on each bug.

Importer and Exporter for CodeChecker

The CodeChecker command-line interface provides a variety of options for listing and filtering all the runs/results present on the CodeChecker server. I have added two more commands to the CodeChecker CLI for importing and exporting results to and from the CodeChecker server.

The export command lets the user export the findings, i.e. the comments and review statuses, of one or more reports specified by the user. Below is a sample of the exported output:

{
    "comments": {
        "c54004ae9ecfb34b396b46d9e08c4291": [
            {
                "id": 7,
                "author": "Anonymous",
                "message": "This is a confirmed Bug here",
                "createdAt": "2020-11-28 00:05:02.034035",
                "kind": 0
            },
            {
                "id": 6,
                "author": "Anonymous",
                "message": "I am doubtful about this bug",
                "createdAt": "2020-11-28 00:01:48.190914",
                "kind": 0
            }
        ]
    },
    "reviewData": {
        "00eab39f7bb399d446e0794025ab3958": {
            "status": 1,
            "comment": "This is for the exporter function testing",
            "author": "Anonymous",
            "date": "2020-12-20 23:01:02.669476"
        }
    }
}

The import command is used for importing the comments and review statuses sent by another user into the CodeChecker server. When importing comments, for each comment we check whether the date, kind, and message of the existing comment on the server match those of the incoming report; if any of them differ, we replace the existing comment with the incoming one. Similarly, for a review status, if the date of the review status differs, we update the review status on the server with the incoming review.
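
Expressed as a small Python sketch (the field names follow the sample JSON above; this only illustrates the rules just described, not CodeChecker’s actual code):

# Illustration of the merge rules described above, not CodeChecker's code;
# the field names follow the sample JSON shown earlier.
def should_replace_comment(existing, incoming):
    """An incoming comment replaces the server's comment if the
    date, kind, or message differ."""
    return any(existing[field] != incoming[field]
               for field in ("createdAt", "kind", "message"))

def should_update_review(existing, incoming):
    """A review status is updated only when its date differs."""
    return existing["date"] != incoming["date"]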

Use Case of Review Exchange

Consider two users of the system, John and Maria. Both of them have the output of Coccinelle on the Linux kernel.

They run the Coccinelle report converter on the output as follows:

report-converter -t coccinelle -o ./codechecker_coccinelle_reports ./coccinelle_reports.out

The details of how to use the report converters for all the tools can be found in the Report Converter README.

They store the findings on the CodeChecker server using the command:

CodeChecker store ./codechecker_coccinelle_reports -n coccinelle

Assume that there are 10 errors in the Coccinelle run, with report IDs 1 to 10. John performs the following changes:

  1. Marks report 1 as false positive and comments “This is not an actual bug”
  2. Marks report 2 as Confirmed.
  3. Comments on report 3: “I am not sure about the status of this bug”

Maria makes the following changes in her copy on her machine:

  1. Marks report 3 as Confirmed and comments “This is a confirmed bug”
  2. Comments on report 4: “This error needs to be handled”

Now, John runs the export command and obtains a JSON file named coccinelle.json:

CodeChecker cmd export -n coccinelle 2>/dev/null | python -m json.tool > coccinelle.json

John sends the obtained file to Maria via email or any other communication medium. Maria downloads this file and imports the findings into her CodeChecker server:

CodeChecker cmd import -i coccinelle.json

Now, the reports on Maria’s server will be:

Report 1:
Tag: False Positive
Comment: This is not an actual bug

Report 2:
Tag: Confirmed

Report 3:
Tag: Confirmed
Comments: [I am not sure about the status of this bug, This is a confirmed bug]

Report 4:
Comment: This error needs to be handled

Pull Requests

  • Coccinelle Parser: Coccinelle report converter tool for parsing coccinelle output of kernel sources. 

https://github.com/Ericsson/codechecker/pull/2949

https://github.com/Ericsson/codechecker/pull/2955

https://github.com/Ericsson/codechecker/pull/2979

  • Smatch Parser: Smatch report converter tool for parsing Smatch output of kernel sources.

https://github.com/Ericsson/codechecker/pull/2968

https://github.com/Ericsson/codechecker/pull/2980

  • Kernel-Doc Parser: Kernel-Doc report converter tool for parsing Kernel-Doc output of kernel sources.

https://github.com/Ericsson/codechecker/pull/2981

  • Sphinx Parser: Sphinx report converter tool for parsing Sphinx output of kernel sources.

https://github.com/Ericsson/codechecker/pull/3017

  • Fix CodeChecker’s cmd results: Comments in the cmd results command were not fetched properly and even showed empty strings in some cases. Added a separate comments section to the details of the results command.

https://github.com/Ericsson/codechecker/pull/3075

  • Importer & Exporter commands: an exporter command for exporting the comments and review statuses of given or all runs into a JSON file, and an importer command for importing the findings sent by another developer from a JSON file.

https://github.com/Ericsson/codechecker/pull/3116

  • Sparse Parser: Sparse report converter tool for parsing Sparse output of kernel sources.

https://github.com/Ericsson/codechecker/pull/3160

Future Work

  • Currently, the import and export commands of CodeChecker are limited to the command-line interface. I would like to implement a feature to make them available in the web interface as well.
  • I would like to extend CodeChecker’s report converter tools to provide proper warning classes for all the report converters.
  • I would like to add support for multiple users within a single instance of CodeChecker arriving at different assessments and then moderating or reviewing them in some controlled way.

Acknowledgment

I would like to thank my mentor Lukas Bulwahn for giving me this opportunity and helping me come up with workflows and ideas for fulfilling my goals. My heartfelt gratitude goes to the maintainers of CodeChecker, especially Márton Csordás, for being patient with me during code reviews and providing valuable feedback.

A look back at the 6th ELISA Workshop


By Elana Copperman at Mobileye and Philipp Ahmann at ADIT

More than 120 registered participants, half of them first-time joiners, can look back on three days of the 6th ELISA workshop, again held virtually due to the pandemic. It was filled with sessions focusing, of course, on Linux and safety, but also on medical and automotive use cases, as well as on the role that testing, tooling and development processes play in achieving the ELISA deliverables. These deliverables should make it easier to enable Linux in safety applications, which is the actual mission of the ELISA Project.

As this was the third virtual workshop that ELISA has held, the learnings from the past showed: a virtual get-together, multiple hosts and cloud recordings of the sessions supported the rich experience of the workshop.

The virtual format also lowers the hurdle to participate, especially as the workshops are open to all and free of charge. This again led to a higher average number of participants per session compared to our previous workshops and confirms the interest in a functional safety product based on Linux.

During the workshop, besides the regular content such as working group updates and goal setting, completely new areas of interest were presented by members and external speakers. Topics included cybersecurity expectations in the automotive world, code coverage of glibc and Intel’s Linux test robot. The impact on the Linux (kernel) community was addressed by talks about measuring code review in the Linux kernel, statistics on patch integration and the kernel testing reference process.

The Safety Architecture and Automotive working groups agreed on their communication interface by sharing requirements and concepts for the Linux architecture. This gave these two groups the momentum they needed to make progress on their goals. Finally, collaboration and contributions from all our ELISA members resulted in publishing source code and documentation on the ELISA GitHub.

Tuesday Feb 2 2021

The first day began with updates from the ELISA Working Groups. As ELISA continues to chart new territory, the collaboration between WGs is being defined. In addition to the new interfaces described by Philipp Ahmann between the Architecture and Automotive WGs, we are beginning a joint effort between the Architecture and Development Process WGs to set up a database of kernel configurations/features amenable to safety analysis.

In addition, two presentations from Intel/Mobileye focused on static analysis for compliance with MISRA and on proposed test strategies for safety qualification and FFI evidence.

Both talks were very insightful, with a lot of feedback from the audience and participants, really giving the end of the day a workshop feel. They included exactly the discussions that are needed when working on a Linux system ready for safety applications.

Looking forward to an even more exciting day tomorrow!

Wednesday Feb 3 2021

Andreas Gasch and Joyabrata Ghosh kick-started today’s sessions with a presentation on Cybersecurity Expectations in the Automotive World. It certainly was interesting to see how the cybersecurity community is coming around full circle to align with standards, processes, documentation and management much closer to what we work with in the safety community. Perhaps in the future, we will (eventually) see cybersecurity and safety join forces for risk management and qualification.

Eli Gurvitz then provided a report from the Code Coverage Metrics for GLibC mentorship project (by Ashutosh Pandey) on code coverage analysis for glibc, generating quite some ripples and interest in joining the “Fun and Happiness” group, aka the Tool Investigation and Code Improvement Sub-WG, for further work in this area. Kudos, Ashutosh, for a great job!


Spirited discussions came up in the session on how to handle documentation in Git and GitHub. It was clear that a properly version-controlled documentation repository is needed, but whether the standard format should be reST, Markdown or LaTeX could not be concluded, which was reminiscent of the starting phase of ELISA. With the Automotive and Medical Device WGs starting to add their documents to the ELISA Git repository, ELISA is taking a good iterative approach to making its work visible and structured to meet the expectations of safety experts and the safety standards.

Day #2 was wrapped up with an extended session on defining kernel configurations for safety-critical applications. It focused on the “top down” alignment of the CONFIGs and their analysis within the context of Shuah’s work on CWE classifications for safety. The medium-term (~6 month) goal is to establish a basic set of configurations and document effectively how integrators can potentially use those configurations in the safety analysis of their specific use case. There are a lot of challenges in this area.

Thursday Feb 4 2021

The third workshop day is typically focused on setting goals for the next three months. In addition, ELISA members spend time collaborating in the Development Process WG’s long, deeper-dive working sessions. These sessions really turn the event from a conference into a workshop. Other sessions, such as the goal setting for the next quarter, also leave enough room for alignment among the active ELISA members, including the Technical Steering Committee. These sessions see fewer first-time attendees. We encourage new members to attend the goal-setting sessions in the hope that they might be able to engage and collaborate with us on achieving our goals for the next quarter. We sincerely hope all the first-time attendees will join us at our upcoming 7th workshop in May 2021.

Closing thoughts

Recapping the three days of this workshop, it is nice to see that the ELISA Project is making steady progress and providing enough technical content that the different working groups are starting to align and work together more effectively. The Development Process WG has become large enough to spin out smaller teams to focus on WG goals. The ELISA workshops are instrumental for discussing current work and for collecting feedback to gain valuable insights and generate new ideas.