The ethical dilemmas at the heart of cybersecurity
Marcus’s story
Marcus stared at the terminal, his hands hovering over the keyboard like a pianist about to perform.
He’d found it by accident, a SQL injection vulnerability in his university’s student portal. One misplaced quote mark in a search field, one poorly sanitized input, and suddenly he was looking at database tables he shouldn’t be able to see. One of them: student_grades.
The cursor blinked.
Marcus was a senior computer science major. 3.2 GPA. Good, not great. His friend Jamie, brilliant but unfocused, was failing calculus for the second time; one more F and he’d lose his scholarship. Marcus knew Jamie’s student ID. He knew the table structure now. Changing a grade would take maybe thirty seconds. A single UPDATE query.
That’s all it would take.
“You’re not actually going to do it, are you?” Priya asked from across the computer lab. She’d been watching him for the past ten minutes, recognized that look.
“I found a vulnerability,” Marcus said quietly. “A bad one. Anyone could access any student’s records. Change grades. View financial aid info. Everything.”
“So report it.”
“I am. I’m just… thinking.”
But he wasn’t just thinking about reporting it.
Jamie’s scholarship paid for everything: tuition, housing, books. Without it, Jamie would have to drop out, move back home, and probably never finish his degree. All because of one failing grade in a class that had nothing to do with his major.
The system was unfair. Marcus had the power to fix it.
Just one query.
“Don’t,” Priya said, reading his mind. “Whatever you’re thinking, don’t.”
Marcus’s fingers moved to the keyboard.
Terrence’s story
Terrence had worked as a teller at his local bank for three years. Three years of “Thank you for banking with us” and “How can I help you today?” Three years of watching rich people’s accounts while his own checking balance hovered around $127.
The vulnerability was laughably simple.
The bank’s internal system had a search function for customer accounts. It was supposed to be restricted: tellers could only view accounts for customers they were actively helping. But someone had misconfigured the access controls. If Terrence modified the URL parameters in the address bar, he could search any account. View any balance. See routing numbers, account numbers, transaction histories.
Everything.
He’d discovered it two months ago while troubleshooting a customer issue. Typed in the wrong account number, hit enter, and suddenly he was looking at someone else’s information. An account he had no business accessing.
He should have reported it immediately.
He didn’t.
Instead, Terrence started documenting. Screenshots. Account numbers. Routing numbers. Balances. He told himself he was just… cataloging the vulnerability. Building a case to show management how serious it was.
But that wasn’t why he’d created the encrypted USB drive.
Or why he’d been talking to James from IT Solutions LLC, a man who’d slid him a business card at a bar in Federal Hill three weeks ago. A man who’d said he “worked with data brokers” and “paid well for quality information.”
“Banking information is premium,” James had said over whiskey. “Account numbers, routing numbers, balances. That kind of data? We’re talking $50 to $100 per account, depending on the balance. You give me a thousand accounts? That’s fifty grand, minimum. Maybe more.”
Fifty thousand dollars.
Terrence made $36,000 a year.
He looked at the USB drive on his desk. 2,847 accounts documented. Organized by balance. The highest belonged to a hedge fund manager in his town: $4.2 million in checking alone.
His phone buzzed. Text from James: Still interested? Meeting tonight. Cash upfront.
Terrence’s daughter needed braces, $4,500 the insurance wouldn’t cover. His car was making a grinding noise he couldn’t afford to fix. His rent had gone up $200 a month.
The bank wouldn’t miss the information. The accounts wouldn’t be emptied, just… shared. And besides, if the bank cared about security, they wouldn’t have left such an obvious vulnerability, right?
His phone buzzed again: Last chance. Need your answer.
Terrence picked up the USB drive.
The ethical labyrinth
These two stories, one a moment of temptation, the other something more calculated, illustrate the grey areas that define cybersecurity ethics. The uncomfortable truth is that capability doesn’t equal permission, and good intentions don’t erase legal consequences.
Let’s break down what’s really at stake.
The power-responsibility gap
Marcus found a real vulnerability, the kind that responsible disclosure protocols are designed to address. In cybersecurity, responsible disclosure means discovering a security flaw and reporting it privately to the affected organization, giving them time to fix it before making the vulnerability public.
The standard timeframe? 90 days from the initial report.
But Marcus’s dilemma wasn’t just about reporting; it was about whether he should use the vulnerability first. And here’s where ethics and law diverge sharply.
Ethically: Some might argue helping a friend in need is justifiable. The system failed Jamie. Marcus has the power to correct an injustice.
Legally: Under the Computer Fraud and Abuse Act (CFAA), accessing a computer system without authorization or exceeding authorized access is a federal crime. First-time offenders face up to 5 years in prison, even if they don’t cause damage or steal anything.
The law doesn’t care about your intentions. It cares about authorization.
The grey hat dilemma
Marcus operates in what the security community calls “grey hat” territory, the space between ethical white hat hackers who work with explicit permission and malicious black hat hackers who exploit systems for personal gain.
Grey hat hackers discover vulnerabilities without permission but typically report them rather than exploit them maliciously. They see themselves as doing a public service.
The problem? Grey hat hacking is still illegal. As the Electronic Frontier Foundation notes, “Despite the value computer security professionals provide by testing software and networks for exploitable vulnerabilities, research activities can violate a number of complicated or obscure regulations and statutes.”
Even if Marcus only intended to report the vulnerability, he had already accessed data without authorization the moment he exploited the flaw to browse those tables; using it to change grades would cross a far brighter line. Unauthorized access is unauthorized access, regardless of outcome.
Terrence’s darker path
Terrence’s situation is less ambiguous but more insidious. He’s not a grey hat; he’s contemplating becoming a black hat, motivated by financial desperation.
Let’s be clear about what Terrence is considering:
- Unauthorized access: He’s systematically accessing customer data he has no legitimate business reason to view
- Data theft: He’s copying private financial information to an external drive
- Trafficking: He’s planning to sell that information to criminals
This isn’t a grey area. This is federal wire fraud, identity theft facilitation, and CFAA violations that carry sentences of up to 10 years for first-time offenders, potentially 20 years if the data is used to cause serious harm.
The individuals whose data Terrence sells could face:
- Identity theft
- Unauthorized account access
- Drained accounts
- Destroyed credit
- Years of financial recovery
Financial desperation is real. Medical bills, rent increases, inadequate wages: these are systemic problems that deserve systemic solutions. But Terrence’s proposed solution creates new victims while enriching actual criminals.
The responsible disclosure framework
So what should Marcus do? Let’s walk through proper responsible disclosure:
Step 1: Stop testing. The moment Marcus confirmed the vulnerability existed, he should have stopped. Responsible disclosure policies emphasize minimal testing: just enough to confirm the flaw, not enough to gather unnecessary data.
Step 2: Document carefully. Marcus should document the following (a sample report skeleton follows this list):
- What the vulnerability is (SQL injection in student portal)
- Where it’s located (specific URL and input field)
- Potential impact (unauthorized access to student records)
- Steps to reproduce (sanitized proof-of-concept that doesn’t include real data)
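A bare-bones version of that report might look something like this; the URL and contact line are hypothetical placeholders, not the university’s real portal:

```
Subject: [Vulnerability report] SQL injection in student portal search

Summary: The search field on the student portal does not sanitize input,
allowing SQL injection.

Location: https://portal.example.edu/search (hypothetical URL), the
"keyword" input field.

Impact: Unauthorized read access to database tables, including student
records, grades, and financial aid information.

Reproduction: Submitting a single quote character (') in the search field
returns a raw database error. No student data was retained during testing.

Contact: [researcher name and email]
```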
Step 3: Report privately. Most organizations have established channels:
- Vulnerability disclosure policies
- Bug bounty programs (though universities rarely have these)
Step 4: Wait. Give the organization time to patch. The standard is 90 days, though this can be negotiated.
Step 5: Consider public disclosure. After the vulnerability is patched (or after the agreed timeline expires), security researchers may publish details to help the wider community learn from the flaw.
What Marcus should not do:
- Use the vulnerability for personal benefit
- Access more data than necessary to confirm the flaw
- Share the vulnerability publicly before the organization has time to patch
- Hold the vulnerability “hostage” in exchange for money
Legal protections and safe harbor
Some organizations offer safe harbor provisions that protect good-faith security researchers from legal action, provided they follow responsible disclosure procedures.
HackerOne’s Gold Standard Safe Harbor statement protects ethical hackers from liability when hacking in good faith, provided they:
- Make good faith efforts to comply with the disclosure policy
- Do not access more data than necessary
- Do not harm the organization or its users
- Report the vulnerability responsibly
But here’s the catch: these protections only apply if the organization has published a policy granting that authorization in the first place. They don’t retroactively excuse unauthorized access.
The “I was helping” defense doesn’t work
Let’s address the common justification: “I was doing them a favor by finding the vulnerability!”
In United States v. Kane (2011), a man exploited a software bug in a video poker machine. He argued he wasn’t hacking because he was just pressing buttons. The court ultimately dismissed the CFAA hacking charge on technical grounds, but he still faced wire fraud charges.
In the case of Ahmed Al-Khabaz, a student at Montreal’s Dawson College accessed a system without authorization to verify a vulnerability had been patched. His college expelled him. Security professionals debated whether his actions were justified, but the legal reality was clear: unauthorized access is unauthorized, regardless of motivation.
The infamous case of Aaron Swartz highlights how prosecutors can weaponize the CFAA. Swartz downloaded academic articles from JSTOR using MIT’s network. He faced 13 felony counts carrying up to 35 years in prison and $1 million in fines. The case ended tragically with Swartz’s suicide in 2013, sparking widespread calls for CFAA reform that have largely gone unheeded.
What about Terrence?
For Terrence, there’s no ethical justification. None.
His financial struggles are real, but his proposed solution:
- Violates federal law
- Betrays the trust of thousands of customers
- Enables identity theft and fraud
- Enriches actual criminals
- Will likely end with him in federal prison
What Terrence should do:
Immediately:
- Stop accessing unauthorized accounts
- Secure or destroy the USB drive (don’t sell it, don’t keep it)
- Report the vulnerability to the bank’s security team or IT department
Long-term:
- Seek financial counseling about legitimate options for his situation
- Look into hardship programs for medical expenses
- Consider reporting the data broker’s solicitation to law enforcement
What will likely happen if he sells:
- The bank will eventually discover the data breach through fraud patterns
- Internal audit logs will trace access back to Terrence’s employee ID
- He’ll face federal charges: computer fraud, wire fraud, identity theft conspiracy
- He’ll lose his job, face criminal prosecution, and likely serve time
- His daughter’s future will be damaged by having a father in federal prison
The irony? The money he thought would save his family will destroy it.
The broader questions
These stories force us to confront uncomfortable questions:
Is security research without permission ever justified? Legally, no. Ethically, the community is divided. Many argue that coordinated disclosure strikes the right balance, but it requires the researcher to stop the moment they find something and report it immediately, without further exploration.
Should good intentions matter? The law says no. Ethics says… maybe. But intentions are hard to prove and easy to fake. A consistent legal standard protects both organizations and legitimate researchers.
What about corporate negligence? If an organization leaves glaring vulnerabilities unpatched, are they partly responsible? Ethically, perhaps. Legally, that doesn’t give anyone permission to exploit those vulnerabilities. Two wrongs don’t make a right, and all that.
Where’s the line between testing and exploitation? This is the million-dollar question. Generally:
- Testing: Confirming a vulnerability exists using minimal access
- Exploitation: Using the vulnerability to access, modify, or exfiltrate data
The moment Marcus thought about changing grades, he crossed from testing to exploitation.
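To see where that line sits in practice, here is a hedged sketch; the table name comes from Marcus’s story, and everything else is invented for illustration:

```python
# Testing: the smallest input that proves the flaw exists. A single quote
# that triggers a raw database error confirms unsanitized input. Nothing
# is read, nothing is changed.
probe = "'"

# Exploitation: input crafted to actually read or modify data. The moment
# a payload like this runs, it is unauthorized access under the CFAA,
# whatever the intent behind it. (Hypothetical schema, shown only to mark
# the line, not as a recipe.)
exploit = "x'; UPDATE student_grades SET grade = 'B' WHERE student_id = 1; --"
```

The difference is not sophistication; it is what the input does once the database executes it.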
The hard truth
Just because you can hack something doesn’t mean you should.
Marcus has the technical skill to change grades. Terrence has access to valuable data. Neither has the right to use these capabilities.
The cybersecurity field desperately needs ethical researchers who find vulnerabilities and report them responsibly. But that work must be done within legal and ethical boundaries. Otherwise, we’re just criminals with better justifications.
For Marcus: Report the vulnerability. Don’t change the grade. Help Jamie study for the makeup exam or appeal to the professor or academic committee through legitimate channels. Jamie’s scholarship is at risk because of Jamie’s grades, not because of an unjust system Marcus needs to hack.
For Terrence: Report the vulnerability. Destroy the USB drive. Walk away from the meeting. The short-term cash isn’t worth the long-term consequences. And those consequences will come.
The next time you find yourself with access you shouldn’t have, or the ability to do something technically possible but morally questionable, remember: capability isn’t permission.
And if you’re building security systems, remember: people will be tempted. Build systems that make it hard to do the wrong thing and easy to do the right thing. Implement proper access controls, audit logs, and separation of duties. Don’t put your bank tellers in Terrence’s position. Don’t build student portals with Marcus’s vulnerability.
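As a minimal sketch of what those controls look like, here is a hypothetical example using an in-memory SQLite database; every table, function, and value is invented for illustration, not taken from any real bank or university system:

```python
import sqlite3
from datetime import datetime, timezone

# Toy in-memory schema so the sketch runs end to end.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE courses (title TEXT);
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE active_sessions (teller_id INTEGER, account_id INTEGER);
    CREATE TABLE access_log (teller_id INTEGER, account_id INTEGER,
                             allowed INTEGER, at TEXT);
    INSERT INTO courses VALUES ('Calculus I');
    INSERT INTO accounts VALUES (42, 127.00);
    INSERT INTO active_sessions VALUES (7, 42);
""")

def search_courses(keyword: str):
    # Against Marcus's flaw: a parameterized query binds user input as
    # data, never splices it into the SQL text, so a stray quote mark
    # cannot break out of the query.
    return conn.execute(
        "SELECT title FROM courses WHERE title LIKE ?", (f"%{keyword}%",)
    ).fetchall()

def view_account(teller_id: int, account_id: int):
    # Against Terrence's flaw: a teller may open an account only through
    # an active customer session, not via whatever identifier arrives in
    # a URL parameter.
    allowed = conn.execute(
        "SELECT 1 FROM active_sessions WHERE teller_id = ? AND account_id = ?",
        (teller_id, account_id),
    ).fetchone() is not None

    # Audit log: record every attempt, allowed or denied. This is the
    # trail that traces thousands of lookups back to a single employee ID.
    conn.execute(
        "INSERT INTO access_log VALUES (?, ?, ?, ?)",
        (teller_id, account_id, allowed, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

    if not allowed:
        raise PermissionError("not authorized for this account")
    return conn.execute(
        "SELECT balance FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()

print(search_courses("' OR 1=1 --"))  # []: the payload is treated as plain text
print(view_account(7, 42))            # (127.0,): session exists, access logged
# view_account(7, 99) would raise PermissionError, and still be logged.
```

The audit table matters as much as the check itself: access controls stop most of the wrong things, and logs make the rest visible.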
Because in cybersecurity, the question isn’t just “Can we stop the hackers?”
It’s “Can we stop ourselves?”
Marcus’s and Terrence’s stories are both fictional, but the ethical dilemmas they face are very real. If you work in cybersecurity or have access to systems and data, you’ll face choices like these. Choose wisely. The decision you make in that moment defines who you are, and could define the rest of your life.

