I really enjoy casual security research and bug hunting against uniquely network-positioned targets: things like security appliances or massive enterprise software suites that are normally difficult for people to get hands-on experience with. Because of this, I keep an eye out for interesting targets in a category I’ve dubbed “secretly runs the world”. Some researchers choose targets based on user count or attack surface, but I enjoy finding unresearched targets that are critical to business functionality, mostly business-to-business software that can be hard to obtain as an individual (this is one of my favorite parts of penetration testing). I also follow a standardized disclosure process, work on publishing vulnerabilities or exploits, and try to stay in line with industry practice for responsible disclosure.
Recently a security appliance was in the news after ISPs and governments in places with heavy censorship were found to be using it specifically to target and block access to “alternative lifestyles” and LGBTQ+ content. Almost immediately after its widespread deployment came to light, a slew of unauthenticated code executions appeared that really caught my attention for how comically simple they were. That’s when I decided to poke at this target myself: a product built for large-scale data collection and interception of web traffic.
It didn’t take me long. In a few hours of work with a friend, we had uncovered a small handful of unauthenticated time-based blind SQL injections, a huge number of authenticated remote code executions, and some pretty trivial ways to bypass the censorship engine. Neat! I got my notes and proof-of-concepts in order and started setting up the disclosure.
But then the real dilemma hit me: do I really want a couple of resume bullet points badly enough to help secure a product that actively censors and potentially silences marginalized groups?
For me the choice was an obvious no.
This is a sort of call to arms: adopt a much more nuanced approach to disclosure, one that considers whether it is even ethical to give free security reviews to companies and organizations actively making unethical choices.
I have started weighing not only which targets I want to research, but whether I’m accidentally giving free security audits, advice, and labor to systems actively causing harm in the world. This doesn’t stop me from finding vulnerabilities in these systems, but it does mean I’ve decided not to report the issues through traditional disclosure channels and not to collect bug bounty money for them.
It’s up to us not to help protect systems used to hurt our world and our communities. So for now, these vulnerabilities will probably exist eternally, and deciding not to report them is the ethical thing to do.