TL;DR: It depends.

People who discover bugs and security vulnerabilities and want to improve security by publishing their findings generally face a substantial task in managing competing interests. Publishing your findings can help others learn from a single mistake: by installing a known patch, learning which mistakes not to make when building systems, knowing which vendors to avoid, or taking other measures. However, publishing can also introduce risk by informing people how to abuse vulnerabilities. It’s generally considered good practice to inform those who you know are vulnerable before publication, to allow them to take steps to prevent harm. That process can be more time consuming than finding the bugs themselves and can even present a risk for the one reporting the bugs, for example when companies threaten legal action.

To improve the situation, in 2011 I started trying to convince some big, risk-averse organizations to publicly adopt a policy that makes it easy to know where to report issues, makes some basic promises about the reporting process and provides some trust that no legal action will be taken when reporting issues. My thought was that if I could convince one or two banks, telcos or the government to adopt such a policy, I could use that to convince others that smart and risk-averse people had concluded this is actually a good idea. I had no idea that by 2012 what was then called responsible disclosure would be adopted as official government policy in the Netherlands, as well as by the six largest consumer telecom companies and basically the entire banking sector.

This idea was later improved upon as what’s now called Coordinated Vulnerability Disclosure (CVD). CVD rightfully corrects the wrong suggestion (never the intention!) that the main responsibility lies with the reporter of the vulnerability, formalizes the quite common process of multi-stakeholder vulnerability disclosure, and adds many other improvements. The process even has a formal standard now: ISO/IEC TR 5895:2022. Both the official government policy and the ISO standard are really clear: the assumption is that reporting vulnerabilities to vulnerable organizations is a method to reduce risk when publishing information about those vulnerabilities.

Somewhat in parallel, bug bounties started to gain popularity. There are many variations, but a bug bounty generally provides some reward for people reporting bugs under certain conditions. Those conditions can be used to steer people towards looking for and reporting more valuable vulnerabilities with higher frequency. But other rules, such as inviting only select people to participate or requiring non-disclosure agreements for participation, are also common. I tend to explain (oversimplify) bug bounties as “no cure, no pay” pentests. In practice they can be really valuable in some situations, such as a backup for issues your normal pentesters have missed.

Things started to change when bug bounties and CVD started to mix. Mixing sounds reasonable on the surface. Why not also reward those who report vulnerabilities? And if you have outsourced the bug bounty process, the same party can also help with handling CVD reports. But that’s not all that happens when these two concepts get mixed. Many organizations have also adopted requirements such as agreeing to a non-disclosure agreement before you are even allowed to report a vulnerability. That is madness! If someone spots a vulnerability in your systems and you place requirements on being allowed to warn you, what do you want them to do if they don’t want to be burdened by those requirements? Not warn you? Publish the findings without warning you in advance? Introducing incentives and limitations to the reporting process that might make sense in a bug bounty can hurt the CVD process. I’m not agreeing to an NDA to be allowed to help you fix an issue I already know about. That doesn’t mean I don’t want to give you time to fix your issues if you’re reasonable about it.

To find out how common this issue is, I browsed four big bug bounty platforms (Intigriti, HackerOne, Bugcrowd and Zerocopter, in no particular order) and checked a sample of their programs to see how high the bar is for being allowed to report a vulnerability. In this sample, 73% of the organizations listed by the platforms did not allow people who find bugs to report them without agreeing to some form of non-disclosure agreement. With a positive exception for Zerocopter, organizations listed on the other platforms as having a CVD program still require some form of NDA for reporting. There are some nuances to be made, such as some platforms promoting a lighter NDA where reporters are (by default) allowed to publish after a reported bug is fixed; I still counted that as an NDA, because it’s possible for reports to remain unresolved indefinitely. Organizations refusing to fix vulnerabilities can actually increase the need for publishing, to warn others who might be harmed by the unfixed vulnerability.

So, what does that look like in practice?

| Name | Platform | Contract required | NDA required |
| --- | --- | --- | --- |
| Visma alternatief | Intigriti | Yes | Yes |
| Universitätsspital Zürich | Intigriti | Yes | Yes |
| HERE Technologies | Intigriti | Yes | Yes |
| UZ Leuven | Intigriti | Yes | Yes |
| Fireblocks MPC | HackerOne | Yes | Yes |
| Standard Notes | HackerOne | Yes | No (request instead of agreement) |
| Adevinta | Zerocopter | Yes (but anonymous) | No |
| Air France/KLM | Zerocopter | Yes (but anonymous) | No |
| Autotrader | Zerocopter | Yes (but anonymous) | No |
| Bunq | Zerocopter | Yes (but anonymous) | No |
| BUX | Zerocopter | Yes (but anonymous) | No |
| Cars Guide | Zerocopter | Yes (but anonymous) | No |
| Catawiki | Zerocopter | Yes (but anonymous) | No |
| CEVA Logistics | Zerocopter | Yes (but anonymous) | No |
| CMA CGM | Zerocopter | Yes (but anonymous) | No |
| Dataswitcher | Zerocopter | Yes (but anonymous) | No |


  • I did not sample any non-public programs on the platforms so the results might be biased towards more open programs.
  • Bugcrowd has a default disclosure policy that customers can overrule. However, the default disclosure policy includes the term “Program Owners commit to allowing researchers to publish mutually agreed information about the vulnerability after it has been fixed.” I have classified this default policy as a non-disclosure term because it does not allow publication of bugs that are not (completely) fixed.
  • For Zerocopter there are two statements that contradict each other. When submitting a report you have to agree to the CVD statement that includes a request to not “Reveal the vulnerability or problem to others until it is resolved.”. However a reporter also has to agree to Zerocopter’s Terms & Conditions for Researchers that includes the exception: “The Confidentiality clause does not apply to researchers that report vulnerabilities as part of a Responsible Disclosure/Coordinated Vulnerability Disclosure process. We however kindly ask you to use common sense and take into account the sensitive nature of (publicly disclosing) the information.”. I read that as overruling the earlier non-disclosure requirement. This could be clarified by not requiring the reporter to agree with the CVD policy. A CVD policy should not be a contract, but a one-sided statement about the organization’s own policy.

When it comes to lowering the threshold for reporting vulnerabilities only Zerocopter seems to be doing a decent job. Many programs on most other platforms seem to confuse bug bounties and CVD at the cost of actual CVD.

The most important feedback I have gotten when raising this with some of the platforms is that it’s up to their customers how to set the program rules. I guess that’s partially right, but given what Zerocopter has done, it’s obviously not the entire truth. I hope the platforms will step up and protect their workforce against unreasonable working conditions. Protecting your workforce doesn’t just mean paying out bounties; it also means allowing them to complain when they are treated unfairly and to publish their findings to make society a bit more secure.