Google has launched a new vulnerability rewards program to pay researchers who find security flaws in its open-source software or in the building blocks that its software is built on.
It’ll pay anywhere from $101 to $31,337 for information about bugs in projects like Go, Angular, and Fuchsia, or for vulnerabilities in the third-party dependencies included in those projects’ codebases.
While Google obviously wants bugs in its own projects fixed, perhaps the most interesting part is the bit about third-party dependencies. Developers often pull in code from open-source projects so they don’t have to keep reinventing the same wheel.
But because developers often automatically import that code, and any updates to it, those dependencies open the door to supply chain attacks: hackers don’t target the code directly controlled by Google but instead go after the third-party dependencies it relies on.
As SolarWinds showed, this type of attack isn’t limited to open-source projects, and there have been several reports of big businesses putting their security at risk through dependencies. There are ways to mitigate this attack vector; Google, for instance, has begun vetting and distributing a subset of popular open-source packages. Still, it’s nearly impossible to vet all the code a project pulls in. Incentivizing the community to check both dependencies and first-party code helps Google cast a wider net.
According to Google, payouts from the Open Source Software Vulnerability Rewards Program will depend on the severity of the bug and the importance of the project it was found in. There are also some rules around bounties for supply chain vulnerabilities: researchers have to notify whoever actually maintains the third-party project before telling Google, and they have to show that the issue affects Google’s project. If the bug is in a part of a library the company isn’t using, it won’t be eligible for the program.
Google also says it doesn’t want people poking around at the third-party services or platforms it uses for its open-source projects. So if you find an issue with how one of its GitHub repositories is configured, that’s fair game; if you find a problem with GitHub’s login system, that’s not covered.

Open-source software brings challenges for testers, particularly in the world of Web3, whether it’s integrating with libraries and systems beyond the test team’s control or attempting to reproduce complex, energy-intensive networks like public blockchains in a staging environment.
Open-source testing in the form of bug bounties can help broaden the scope of your testing and bring in specialist support.
Bounty programs are not a replacement for professional testing; they are tools in the test team’s armory. They can, however, bring in specialist skills and geographically localized expertise that would be hard to build within your own team.
Bounty programs are most likely to succeed when testers are involved in determining scope, triaging bugs, and working with the community.
They are also valuable tools for upskilling and developing mobile and security testing skills.
Open-source software has transformed the way we work as testers and developers. We are more likely than ever to use open-source libraries and packages, which means bugs can be introduced via dependencies the team cannot control.
And now we are entering a world of open-source testing, too. Increasingly, open-source projects (and plenty of closed-source ones) are creating bug bounty programs and inviting people outside the organization to take part in their quality and security strategy.
The growing significance of the blockchain-based Web3 ecosystem shows how vital community test programs are, with recent cases of bugs discovered by open-source testers saving tens of millions of dollars.
Some within the testing community see this trend as a threat; in reality, it is an opportunity. Bug bounties and open-source test contributions are excellent tools for test teams, and there is every reason for testers to embrace this trend rather than worry about it.
There are two primary challenges: one around decision-making and another around integrations. On decision-making, the process varies by project. With something like Rails, a responsible group of people agrees on a release timetable; in the decentralized ecosystem, however, the community may make these decisions. The DeFi protocol Compound, for example, found itself in a situation last year where, to get a specific bug fixed, token-holders had to vote in favor of the proposal.
These integrations often cause problems for testers, even if their product is not open-source. Developers include packages or modules written and maintained by volunteers outside the company, where there is no SLA in force and no way to claim compensation if your application is down because a third-party open-source library has not been patched, or if your build script pulls in a later version of a package that is not compatible with the application under test. Packages that handle connections to a database or an API are particularly vulnerable points.
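To make that failure mode concrete, here is a minimal sketch (not taken from any particular project) of a pre-test guard a team could add. It uses the npm semver package to show how a loose version range lets a later release slip into a fresh install, and it fails fast if the installed version of a dependency drifts outside the range the suite was tested against. The package name pg and the version ranges are placeholders.

```typescript
// Minimal sketch of a pre-test dependency guard; names and ranges are illustrative.
import semver from "semver";
import { createRequire } from "node:module";

const require = createRequire(import.meta.url);

// A manifest entry such as "^8.7.0" accepts any later 8.x release,
// so a fresh install can silently pull in a version the suite never saw.
const declaredRange = "^8.7.0";
console.log(semver.maxSatisfying(["8.7.0", "8.11.0"], declaredRange)); // "8.11.0"

// The range the integration tests are known to pass against.
const testedRange = ">=8.7.0 <8.8.0";

// Read the version that was actually installed into node_modules.
// (Packages with a strict "exports" map may block this deep import.)
const installed: string = require("pg/package.json").version;

if (!semver.satisfies(installed, testedRange)) {
  // Fail fast instead of debugging mysterious integration failures later.
  throw new Error(`pg ${installed} is outside the tested range ${testedRange}`);
}
```

Running a check like this at the start of a pipeline turns a silent version drift into an explicit, explainable failure, which is usually much cheaper to investigate.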
Bug bounty programs are a way of crowd-sourcing testing. The author James Surowiecki popularised the idea, in his book The Wisdom of Crowds, that the more people who have their eyes on a particular problem, the more likely they are to find the right solution. In very complex systems with multiple dependencies and integrations, a small loophole can cause the loss of millions of dollars, and it becomes increasingly unlikely that a single tester or test team will have the specialist knowledge and foresight to anticipate every possible case. So financially incentivizing the wider community to search for bugs is becoming increasingly popular.
You can financially incentivize bug searches by publishing the terms and conditions and the reward table on your own website. More commonly, though, platforms like BugCrowd, HackerOne, and ImmuneFi handle the process for you and provide a one-stop shop for testers and security researchers keen to demonstrate their prowess and earn rewards.
For commercial software, the decision to run a program and offer particular rewards is made centrally. The process is different for open source, especially within the Web3 ecosystem, where the foundation or DAO that governs the protocol will vote on a specific proportion of the treasury being released to fund a bug bounty.

By contrast, the Boson Protocol program covers all of the protocol’s smart contracts, with a comparable bounty of $50,000, but excludes all associated websites and non-smart-contract assets. In this case, the bounty program is run directly rather than via an intermediary.
The benefit of open-sourcing testing, even on closed-source projects, is that it widens the bug-catching net and allows a much larger number of people to contribute to the security of a system, rather than relying on a project’s formally employed test team to cover all the bases. A popular open-source project is usually supported by a core development team, including testers, but, like most closed-source projects, it may not have the specialist skills that are only needed now and again in the software development lifecycle.
Many businesses already hire specialist services, for instance, to do penetration testing. So you can think of a bug bounty as a sort of ongoing penetration test, where you only pay for the time and expertise of the professional if they find a vulnerability.
But more than anything, and whatever your project, crowd-sourced testing brings in a variety of approaches, ways of thinking, and skill sets that would be impossible to find in a single person or team. A successful product or application will have tens of thousands, perhaps millions, of users, who will use it in different ways, take different routes through it, and run it on different hardware. Access to a larger pool of skills and perspectives is a valuable resource when channeled correctly.
The disadvantages lie mainly in the extra time and effort spent marketing your bounty program to people with the relevant skills. And if you are not careful about defining the scope of the bounty in advance, your business, foundation, or project may end up paying out for bugs you have already found.
It is difficult to replicate the conditions of a production environment in staging: in production you have thousands of validators and users who may interact with the system in ways you have not thought of, which makes it effectively impossible to reproduce. Take the Bitcoin blockchain, for instance; it would cost millions of dollars in electricity alone to run an accurate simulation of the live network.
Web3 systems are composable: they fit together like Lego bricks. For example, a token built on the ERC20 standard developed for the Ethereum blockchain can be dropped into any wallet, as can an NFT built on the ERC721 standard. As a result, a developer can write a smart contract that creates a derivative on a decentralized exchange, use that derivative to generate yield on a separate savings protocol, and then use the generated yield as collateral on yet another protocol. This interdependency carries risk, especially if one key component goes wrong.
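To illustrate what that composability rests on, here is a minimal sketch, assuming the ethers.js v6 library: any ERC20-conforming token exposes the same small interface, so the same few lines can query any token a wallet, exchange, or lending protocol might plug in. The RPC endpoint and addresses below are examples only and are not part of the article.

```typescript
// Minimal sketch: one ERC20 interface fragment works for any conforming token,
// which is what lets wallets and protocols snap together like Lego bricks.
import { Contract, JsonRpcProvider, formatUnits } from "ethers"; // ethers v6

const erc20Abi = [
  "function balanceOf(address owner) view returns (uint256)",
  "function decimals() view returns (uint8)",
  "function symbol() view returns (string)",
];

// Example public endpoint and token (DAI on Ethereum mainnet); swap in your own.
const provider = new JsonRpcProvider("https://cloudflare-eth.com");
const token = new Contract(
  "0x6B175474E89094C44Da98b954EedeAC495271d0F",
  erc20Abi,
  provider
);

async function printBalance(holder: string): Promise<void> {
  const [raw, decimals, symbol] = await Promise.all([
    token.balanceOf(holder),
    token.decimals(),
    token.symbol(),
  ]);
  console.log(`${formatUnits(raw, decimals)} ${symbol}`);
}

// Usage: printBalance("0x...any holder address...");
```

The flip side of that shared interface is that a flaw in one widely reused standard or component can ripple through every protocol built on top of it.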
The fact that millions of dollars sit in these open-source protocols is itself a risk: it acts as a honeypot. The rewards on offer in existing bug bounty programs can sometimes look absurdly high, but the cost-benefit calculation makes sense if a successful bounty hunter finds a bug before it is exploited.
If bounty programs are to succeed, testers should be involved in defining their scope and determining how they are run. The main thing is either to take charge of the program as a team or to work very closely with the people in your organization who set it up. You also need to agree on who will triage the tickets and how bounty hunters will interact with your team. Testers must help define the scope of any program so that rewards are not offered for unimportant issues; where the test team chooses to keep responsibility for bug reports, that area can be excluded. It makes more sense to ring-fence bug bounties for areas with likely edge cases or where a specific type of expertise is needed.
Google has had some form of vulnerability rewards program for more than a decade, but it’s good to see the company taking action on a problem it has been raising the alarm about. Earlier this year, in the wake of the Log4Shell exploit found in the popular open-source Log4j library, Google said the US government needs to be more involved in finding and dealing with security issues in critical open-source projects. Since then, as BleepingComputer notes, the company has temporarily bumped up payouts for people who find bugs in specific open-source projects like Kubernetes and the Linux kernel.