In this blog post, we explain how we handle security vulnerabilities in our software products. We believe transparency is valuable for our users, and we hope that sharing our process helps other organizations implement a similar one or improve their existing processes.
We make software products, and we have a process for handling security vulnerabilities in them. The current process began when we discovered some security issues in 2019 and decided to inform our customers and create public Common Vulnerabilities and Exposures (CVE) entries for them. Those steps were formalized into a proper process in 2021, and since then we've handled 14 CVEs this way (about 3-4 per year across 2 products, CFEngine and Mender).
We strongly believe it's better to actively look for and fix security vulnerabilities, and to be open about them, informing our users and customers in a detailed and structured way. Security-conscious customers understand and appreciate this, and for others it has an educational effect: it shows the reality of software security and the importance of regularly updating networked software, which could otherwise be exploited by bad actors.
Step 0 - Discovery
For the purposes of this post, we won't go into much detail on how to find security vulnerabilities; that's a huge topic in itself. Suffice it to say, we have several activities that help us find them, including:
- Hiring 3rd party security researchers to perform penetration testing on our products.
- Inviting and rewarding bug bounty hunters to find and disclose vulnerabilities.
- Periodic tasks for developers / security team members to test our software and look for vulnerabilities or potential improvements.
- Running automated scanning tools, such as Acunetix, GitHub CodeQL, Cppcheck, Trivy, and others.
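As a rough illustration of the last point, below is a minimal sketch of how scanners like these could be wired into CI so that findings fail the build. It assumes trivy and cppcheck are installed and on the PATH; the actual scanners, flags, and severity thresholds will differ per project.

```python
#!/usr/bin/env python3
"""Minimal sketch: run a couple of scanners and fail CI on findings.

Assumes `trivy` and `cppcheck` are installed; adjust the commands per project.
"""
import subprocess
import sys

# Each entry is a scanner invocation that returns a non-zero exit code
# when it finds problems (both tools support this natively).
SCANS = [
    # Scan the repository for known-vulnerable dependencies.
    ["trivy", "fs", "--exit-code", "1", "--severity", "HIGH,CRITICAL", "."],
    # Static analysis of C sources; treat reported errors as failures.
    ["cppcheck", "--error-exitcode=1", "--enable=warning", "src/"],
]

def main() -> int:
    failed = False
    for cmd in SCANS:
        print(f"Running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

A wrapper like this is just one way to do it; most of these tools also ship their own native CI integrations.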
Step 1 - Disclosure
When a potential vulnerability has been discovered, disclosing it is the next step. Our employees can do this in different ways (creating a ticket, sending a Slack message), but externals, such as bug bounty hunters, need a place to contact us to responsibly disclose the issue. We've chosen the standard security.txt format to provide contact information and additional details around disclosing security vulnerabilities.
We've also set up redirects from our other websites, so it should be easy for externals to find:
cfengine.com/.well-known/security.txt
In this file, externals can find instructions for how to contact us, which kinds of issues are likely to get a reward, etc.
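The security.txt format itself is defined in RFC 9116 and is just a small set of key-value fields. As a hypothetical illustration (the example.com values below are placeholders, not our actual file), a minimal file could look like this:

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
Policy: https://example.com/security-policy
```

For web-based services, the standard location is /.well-known/security.txt, served over HTTPS.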
Step 2 - Inbox and triage
Disclosures end up in the security team's email list, where we acknowledge receipt of the report and discuss it internally. We then either create an internal ticket to handle the vulnerability, or we reject the submission.
Rejected submissions
When you have a program like this, with monetary rewards, you get various submissions that don't really highlight exploitable security issues (sometimes dubbed "beg bounty" submissions, often from anonymous Gmail accounts). Thus, a good portion of submissions are rejected, such as:
- Issues that have already been disclosed (only the first disclosure gets a reward).
- Suggestions about configuration or headers for HTTP servers and email, or other optional features / configurations we could use.
- Low-impact information disclosure, especially when that information is easily available elsewhere (source code on GitHub, the HTTP server's version number, etc.).
It's worth noting that even though you receive many low-value submissions, the program has still been worth it, at least for us:
- External submissions are responsible for both the "worst" (lowest-value) and the "best" (highest-severity) reports.
- Bug bounty hunters have found more severe issues than the professional 3rd party security consulting firms we pay to find them.
- There is no clear pattern separating good from bad submissions; we couldn't, for example, simply block all Gmail addresses. Good submissions of important security issues come from Gmail accounts, sometimes anonymous ones, and from different parts of the world.
- The more severe issues have often been submitted by individuals using basic tools (Google Docs and a web browser, for example) who are not necessarily tied to professional security research firms and don't necessarily have formal certifications or experience.
Based on this, it has been worth it to keep submissions as open as possible, allowing anyone to report a potential issue, and to treat dealing with the bad submissions as part of the job. The worst-case scenario would be preventing someone from reporting a severe security issue that could damage us, our users, and our customers.
Accepted submissions
For the submissions we accept, we create an internal ticket based on a template. The template lists all the steps we need to perform (including some optional ones, for example steps that only apply to externally reported issues).
For the record, here is the overview of all the steps currently in the template, in our internal issue tracker:
Each step has additional instructions inside. We're not going to go through all of them here, but you might find the overview valuable or at least interesting.
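As a purely hypothetical sketch (assembled from the steps described in the rest of this post, not the literal contents of our internal template), such a checklist might look roughly like this:

```
- [ ] Confirm and reproduce the reported issue
- [ ] Create a fix ticket for the development team
- [ ] Request a CVE ID from MITRE
- [ ] Determine affected versions and severity
- [ ] Release fixed versions
- [ ] Notify customer security contacts (with a public disclosure date)
- [ ] Publish the public announcement
- [ ] Submit the announcement URL to MITRE
- [ ] Add CVE references to the changelogs
- [ ] Pay out the bug bounty reward (externally reported issues only)
- [ ] Plan systemic follow-ups (tests, tools, process changes)
```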
Step 3 - Remediation
At this point we need to understand and fix the issue. We create an internal ticket for the development team, and do some other activities in parallel (such as requesting a CVE ID from MITRE). When fixing security vulnerabilities, it is a good idea to communicate closely and ensure that:
- Both the developers and the security team understand the issue well.
- The fix is correct and adequate; it should remediate the vulnerability for all users (new users, upgrading users, on-prem users, SaaS users, etc.).
- Appropriate follow-ups are performed, such as trying to systemically prevent similar issues in the future (through new tests, tools, or process changes).
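As an illustration of the last point, a systemic follow-up can be as simple as a new automated test that encodes the property the vulnerability violated. The sketch below is hypothetical (the endpoint URL and expected behavior are made up for illustration), but it shows the general idea for an access-control issue:

```python
"""Hypothetical regression test: an API endpoint must reject
unauthenticated requests. The URL and expected status codes are made up;
a real test would target the endpoint the vulnerability affected."""
import unittest
import urllib.error
import urllib.request

API_URL = "http://localhost:8080/api/admin/users"  # hypothetical endpoint

class TestUnauthenticatedAccess(unittest.TestCase):
    def test_admin_endpoint_requires_auth(self):
        request = urllib.request.Request(API_URL)
        try:
            urllib.request.urlopen(request)
        except urllib.error.HTTPError as err:
            # The server must refuse the request outright.
            self.assertIn(err.code, (401, 403))
        else:
            self.fail("Unauthenticated request was not rejected")

if __name__ == "__main__":
    unittest.main()
```

Running a test like this in CI makes it much harder for the same class of bug to quietly reappear.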
Step 4 - Customer communication
We communicate and work with our enterprise customers to give them a chance to upgrade or mitigate security issues before they become publicly known. For each account we have at least one, preferably 2 or more, security contacts to reach out to via email. The email contains much the same information as the upcoming announcement, as well as the date when it will go public (normally in 1 month).
Step 5 - Public announcement
The public announcement is often identical, or very close, to the email sent to security contacts. We typically use the same structure each time:
- Introduction / Description - States the type of vulnerability / weakness, the affected versions, and how it was discovered.
- Impact - Gives you a good idea of how serious the issue is, both in terms of the consequences (the actual risk) and the conditions required for it to occur.
- Detection - Provides scripts, commands, or instructions for checking whether you are affected (see the sketch after this list).
- Remediation - Provides instructions on how to fix or mitigate the issue. Normally, this involves upgrading, but sometimes there are other mitigations or manual workarounds available.
- Contact - Ensures readers know how to contact us in case they have additional questions or need assistance.
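To make the Detection section more concrete: the check often boils down to comparing the installed version against the first fixed release. A minimal, hypothetical sketch (the `some-product --version` command and the version numbers are placeholders, not a real detection script from one of our announcements) could look like this:

```python
"""Hypothetical detection sketch: compare an installed version against
the first fixed release. The product command and version numbers are
made up; a real announcement would list the actual fixed versions."""
import subprocess

FIXED_VERSION = (3, 22, 1)  # hypothetical first fixed release

def parse_version(text: str) -> tuple:
    """Extract a dotted version like '3.21.0' into a tuple of ints."""
    for token in text.split():
        parts = token.strip().split(".")
        if len(parts) >= 2 and all(p.isdigit() for p in parts):
            return tuple(int(p) for p in parts)
    raise ValueError(f"No version found in: {text!r}")

def main() -> None:
    # 'some-product --version' stands in for the real product command.
    output = subprocess.run(
        ["some-product", "--version"], capture_output=True, text=True
    ).stdout
    installed = parse_version(output)
    if installed < FIXED_VERSION:
        print(f"Affected: {installed} is older than {FIXED_VERSION}")
    else:
        print("Not affected")

if __name__ == "__main__":
    main()
```

In real announcements, the detection steps are specific to the product and the vulnerability, and may check configuration or logs rather than just version numbers.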
As an example, you can see our most recent CVE announcement here: CVE-2024-55959.
Once the announcement is published online, we submit the URL to MITRE so the CVE database is updated. We also add references to the CVE to our changelogs, so users can easily see which versions fixed which vulnerabilities.
Results and learnings
You can see all the CVEs we've published for Mender and CFEngine on our blogs:
https://mender.io/blog/tag/cve
Since 2021, there have been 14 of them, which has allowed us to practice and improve our process over time. Here are some learnings to take away:
- Ensure it is easy for externals to report security vulnerabilities - we recommend using the security.txt standard.
- Paying rewards for responsibly disclosed security issues is a great way to incentivize external users and security researchers to find and report security vulnerabilities.
- Handling vulnerabilities in an organized and transparent manner is important for reducing security risks and building customer trust. Security-conscious customers greatly appreciate this.
- The types of vulnerabilities you will see depend a lot on what kind of software you are writing, and will probably be different from the ones you read about online (e.g. issues in the Linux kernel, curl, OpenSSL, etc.).
- Look for systemic ways to eliminate / prevent security issues, not just one-off fixes.
What's next
We are currently working with HackerOne to start using their platform for bug bounties. With this move, we're hoping that the increased attention and talent, together with higher rewards than before, can help identify and fix previously undetected bugs and security issues. If there is interest, we can share some of our learnings on this later this year, along with other aspects of security, in further blog posts on the Northern.tech blog. If you're interested in reading more posts like this, follow us on LinkedIn.