Publication policy

Disclaimer: The original of this page was written in Dutch. This page has been automatically translated into other languages using DeepL, which may result in differences in nuance, tone and meaning. When in doubt, always consult the Dutch version first. Because translations are costly, this page may lag behind the Dutch version in content. We consider the Dutch version of this page to be leading.

The Internet Cleanup Foundation has a code of conduct that lays out the basics about what measurements are and are not published.

This page explores this further with practical examples.

Public measurements with a public purpose

In principle, Basic Security measures security at a basic level, for example by checking where common security measures are lacking. These measurements can be freely published without increasing the risk of misuse beyond what it already is.

Basicsecurity.com’s database contains more than 10 million unique measurements. All measurements in the database can be fully published, or leaked, at any time without creating new risks that an attacker can exploit.

An organization with an adequate security policy will always score green on Basic Security: the measurements themselves are not newsworthy or shocking, but a failure to apply basic measures sometimes is.

The measurements of Basic Security are in the area of ethical hacking and security testing. They strongly resemble the first steps of professional security testing, such as mapping the attack surface: domains, ports and software. This is public information that is easy to collect on a large scale, and many companies offer this service at very low cost or even publicly and for free.
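The "mapping the attack surface" step mentioned above can be sketched in a few lines. This is an illustrative sketch only, not how any particular measurement platform works: it checks which of a handful of common ports accept a TCP connection on a given host. The port list and service names are assumptions for the example.

```python
import socket

# Hypothetical minimal surface check: which common ports accept a TCP
# connection on a given host? Real measurement platforms do this at
# scale with dedicated tooling; this sketch only illustrates the idea.
COMMON_PORTS = {21: "ftp", 22: "ssh", 80: "http", 443: "https"}

def open_ports(host: str, timeout: float = 2.0) -> dict[int, str]:
    """Return the common ports on `host` that accept a TCP connection."""
    found = {}
    for port, service in COMMON_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found[port] = service
        except OSError:
            pass  # closed, filtered, or unreachable
    return found
```

Because only connection attempts are made and nothing is sent, this stays within the "public information" category the text describes.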

Basic Security never publishes serious or critical vulnerabilities. It leaves this to other organizations, such as professional security testing companies like Secura, Zerocopter or volunteer organizations like the DIVD.

Publishable findings

The Internet Cleanup Foundation publishes information about vulnerabilities and missing security measures, with the goal of getting these vulnerabilities fixed. The code of conduct determines whether a vulnerability can be published, weighing the public interest, proportionality and subsidiarity.

Publishable vulnerabilities are publicly available to everyone on the Basic Security website. The data can also be accessed automatically.

Publishable vulnerabilities are rated by risk. This rating differs from the risk ratings common in the security world because the spectrum of measurements is different. The spectrum on Basic Security has four grades, scaled by what could potentially go wrong. This contrasts with typical security tests, which weigh the same findings against what is actually going wrong right now. Where Basic Security rates something as “high,” a professional security test would rate it as “info,” “low” or, in special cases, “medium,” depending on the purpose of the test.

Gradations on Basic Security shift over time, to slowly raise the bar of basic security. Where a measurement is orange this year, it may be red next year to draw more attention to it. For example, adoption of security.txt only became mandatory in mid-2023, so around that time not all organizations had adopted the standard yet. By mid-2024, however, this is no longer acceptable, because organizations have had a year to apply it. There is more on this in the measurement policy.
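The idea of a grade that shifts over time can be sketched as a function of the date. The dates below mirror the security.txt example (mandatory mid-2023, strictly enforced a year later); the exact cut-off dates and grade names are hypothetical, not the published policy.

```python
from datetime import date

# Illustrative sketch only: a measurement's grade depends on the date,
# so the bar rises over time. Dates are hypothetical examples based on
# the security.txt timeline described in the text.
def security_txt_grade(present: bool, today: date) -> str:
    if present:
        return "green (ok)"
    if today < date(2023, 7, 1):
        return "green (low)"   # not yet mandatory
    if today < date(2024, 7, 1):
        return "orange"        # introduction period
    return "red"               # organizations have had a year to comply
```

The same missing file thus produces a harsher grade each year, which is exactly the "raising the bar" mechanism the text describes.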

These are the four gradations we apply:

Red: This represents a high-risk vulnerability: an imperfection that needs to be fixed. Consider, for example, weak encryption, the leakage of version information (which tells an attacker what to exploit), and the like.

Orange: this often concerns settings and administrative deficiencies. This grade is also used for new measurements that should eventually become red but are introduced as orange, to make the measurement visible and actionable for the responsible organizations.

Green (low): A measurement involving a security deviation that has very little impact (yet), because few people can encounter the problem, and those who do are often already exposed to a wide range of more serious issues. This level is also used for measurements that may become legal obligations in the future but are not yet widely applied today.

Green (ok): A correct measurement: security is ensured on this point.

Findings are always provided with documentation, references and, where possible, a second-opinion link with which a retest can be performed. These are also described in the measurement policy. This makes the relevance of a new finding clear and allows immediate verification that the vulnerability has been fixed.

Examples of publishable vulnerabilities include: missing encryption, missing DNSSEC, missing security.txt, the physical location of a service, application of RPKI, and public version information.
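One of the measurements above, the missing security.txt, can be illustrated concretely. RFC 9116 requires a security.txt file to contain at least the Contact and Expires fields. The sketch below only inspects the text of such a file; fetching it from /.well-known/security.txt is left out, and the parsing is deliberately simplified.

```python
# Minimal sketch of one publishable measurement: does a security.txt
# file contain the fields RFC 9116 requires (Contact and Expires)?
# This simplified parser ignores comments and blank lines.
REQUIRED_FIELDS = {"contact", "expires"}

def missing_security_txt_fields(text: str) -> set[str]:
    """Return the required field names absent from a security.txt body."""
    seen = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        field, _, _ = line.partition(":")
        seen.add(field.strip().lower())
    return REQUIRED_FIELDS - seen
```

An empty result means the basic requirement is met; a non-empty result is exactly the kind of "missing measure" finding that can be published without creating new risk, since the file is public by design.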

All information on the websites, including all measurements, is subject to our disclaimer. Acting on these findings is therefore always at your own expense and risk. We do our best to always publish correct and up-to-date measurements; read more about this in the measurement policy.

How new measurements are published

Before a new measurement is published, it is first announced to umbrella organizations so that organizations can take action on it. The measurement results themselves are often not yet available. At least one month later, the measurement becomes visible and public.

This is followed by a trial period in which the measurement is rated orange at most. During this trial period, everyone can react to the measurement and the measurement may be tightened. This period usually lasts a month or two. After this period, the measurement is re-scaled and possibly set to red.

Example scenario: a publishable vulnerability

An organization needs a website. This website is delivered by a development agency. The agency uses standard software and focuses primarily on functionality and information.

The website goes live, but security has not yet been addressed. For example, the site is not yet accessible over an encrypted connection (1: https), there is no guarantee that the domain name belongs to the IP address of the site (2: DNSSEC), and there is a public management panel (3: login panel) showing that the site runs software version 7.0.2 (4: version information).

This gives an attacker all kinds of footholds that could easily have been avoided. An attacker needs little or no effort to find them, and attackers have long since automated this knowledge at scale to do as much damage as possible.

In this case study, an attacker can exploit the vulnerabilities in the following ways: an attacker on the same network (such as a Wi-Fi hotspot) can read along with the traffic because https is missing (1: https), on the same network the attacker can redirect traffic to a fake page (2: DNSSEC), anyone in the world can attempt to log in to the management page (3: login panel), and anyone in the world can look up and apply publicly known vulnerabilities (4: version information).
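Check (4) in the scenario above, version disclosure, can be sketched as a simple inspection of response headers. The header names and the version pattern below are common conventions, not an exhaustive rule set, and the CMS name in the example is made up.

```python
import re

# Sketch of check (4): does a response header leak a software version?
# "Server" and "X-Powered-By" commonly carry strings like "Apache/2.4.41"
# or "PHP/7.0.2"; the regex looks for a dotted version number.
DISCLOSING_HEADERS = ("server", "x-powered-by", "x-generator")
VERSION_RE = re.compile(r"\d+\.\d+(\.\d+)?")

def leaked_versions(headers: dict[str, str]) -> list[str]:
    """Return header values that appear to disclose a version number."""
    leaks = []
    for name, value in headers.items():
        if name.lower() in DISCLOSING_HEADERS and VERSION_RE.search(value):
            leaks.append(f"{name}: {value}")
    return leaks
```

The check itself reveals nothing new: it only reports what the server already tells every visitor, which is why such a finding is publishable.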

Attackers already have this information, so it is important that the people who must defend against them have it too. Defending against all of these attacks is grounded in openness and good craftsmanship.

When known security frameworks and standards are skillfully followed, none of these vulnerabilities are present. Think of ISO 27002, NEN 7510, the Baseline Information Security Government and the like. Note that such frameworks are not very practical and leave it to everyone to weigh measures and solutions themselves. As a result, in the same situation a novice will write a plan with different content than a professional would. With Basic Security we set a certain bar that is relevant in all cases. Some find this bar too high, others too low. In any case, it is the basics.

Non-publishable vulnerabilities

Basic security focuses on basic security requirements.

Foundation volunteers also sometimes find serious vulnerabilities. This is a by-product and not the goal of the foundation. Actively searching for serious vulnerabilities is done by the friendly organization DIVD; Basic Security itself never actively searches for serious vulnerabilities.

Should we accidentally find a serious vulnerability, below is how we handle it.

When a serious vulnerability is suspected, the approach that provides the greatest safety in the shortest term is determined on a case-by-case basis. A number of options are considered:

  1. If it is a new vulnerability that is not yet known (no CVE), the options depend on where the vulnerability was identified and how severe it is.
  2. If it is a known vulnerability, the ground rules of Coordinated Vulnerability Disclosure (CVD) are followed. A check is made to see whether the organization has such a process in place.
  3. Forwarding the report through the Security Hotline is considered.
  4. Communication surrounding the serious vulnerability is weighed against the CVD ground rules. Because finding and publishing new vulnerabilities is not our goal, communication about new vulnerabilities, if it happens at all, will likely go through the party to whom it was reported, the Security Hotline, the DIVD or a similar organization.

Examples of non-publishable vulnerabilities are: SQL injection, cross-site scripting, use of weak passwords, leaked passwords, remote code execution, buffer overflows and CVEs.

Example scenario: a serious vulnerability

This example is based on a true scenario and a known vulnerability: a disabled module (PHP) caused files containing passwords to be directly readable from the front page of a site. This is a configuration error.

A volunteer is browsing information on basicsecurity.com. The volunteer sees a site, “oud.belangrijkeoganization.nl” (“oud” means “old”), which may therefore have weak security. The volunteer becomes interested and visits the site.

Upon visiting the site, it appears that there is something wrong with the configuration: the site’s source code is on display, instead of a beautiful website. This source code contains a database password and links to sensitive files.

The volunteer knows this is not intended and consults with other volunteers on what is wise to do. Since this concerns an individual and known vulnerability, the CVD rules are followed.

The volunteer contacts the security organization belonging to the website. Information about the vulnerability is exchanged and the organization follows the internal processes associated with CVD and vulnerability resolution.

The organization resolves the vulnerability within the foreseeable future. Sometimes the organization chooses to reward the reporter with a goodie or an entry in its hall of fame.