It's More Than Saying No
What to do when leadership asks your offensive security team to do work it was never designed for.
One of the strange side effects of building a strong offensive security program is that people start bringing it all kinds of security work. Not because that work belongs there but because they trust the team will do it right.
That may sound great, but this work disrupts the mode of thinking that makes offensive security valuable in the first place. The kind of work a high-impact offensive team produces depends on long stretches of exploration, curiosity, and investigation. When those stretches are repeatedly broken by reactive tasks, the team gradually stops operating like an offensive capability and starts operating like a general security utility.
Instead of discovering unknown unknowns, the team begins focusing on known unknowns, or worse, validating known knowns. Instead of following instincts about where risk might actually live, the team responds to tasks framed by someone else’s model of the system.
This doesn’t happen through one bad decision. It happens through accumulation. And it happens because once a team builds a reputation for thoughtful, high-impact work, leadership trusts it enough to route nearly anything its way.
The “Saying No” Misconception
Over the years, I’ve developed a bit of a reputation for being willing to say no to work that leadership wants my teams to take on. But the more I thought about that framing, the more I realized it wasn’t quite right.
I rarely actually say no.
What I usually do instead is reshape the work.
When requests come in that don’t quite belong with my offensive team, I work with the stakeholder and with my own team to reframe the problem so the work still delivers the outcome they need while aligning with how the team actually creates value. The goal isn’t to reject the work. It’s to protect the conditions that allow the offensive team to operate in the way it was built to.
A big part of this is proactive engagement. I spend a fair bit of time interacting with my leadership and the teams most likely to bring requests. Sometimes that’s explicit. Often it’s not. Through the way we engage and the work we deliver, the team continually reinforces what it is optimized to do and the value that comes from operating that way.
At the same time, I’m fairly stubborn about protecting the team’s time and the environment we’ve built—so when it comes down to it, I will say no. The kind of work an offensive research team does is fragile. Once it gets replaced by reactive tasks, it’s very difficult to rebuild.
Intuition-Driven Work Requires Space
In a previous post I wrote about intuition-driven offensive security and why some of the most important discoveries don’t begin with a perfectly defined scope or checklist. They begin with a feeling. A suspicion. Something that doesn’t quite sit right.
Experienced security engineers and researchers develop these instincts over time; others seem to have them natively. They understand where systems tend to break and where assumptions are most likely to fail.
But those instincts only become valuable if people have the space to pursue them.
Real offensive work is rarely linear. Researchers follow threads, explore strange behaviors, and dive into rabbit holes that sometimes reveal whole warrens of issues. That kind of exploration doesn’t happen easily when every other week is driven by inbound requests framed by someone else’s urgency and mental model.
When that happens, the team is no longer operating on its own adversarial hypotheses. It’s operating on someone else’s idea of where risk is supposed to be or what “good” looks like. Over time, that produces a more reactive and ultimately less effective offensive capability.
If the goal is intuition-driven truth finding, then protecting the environment that allows it to thrive becomes part of the job.
Shape the Work Before the Request Arrives
The lowest-friction way to handle misaligned work is to prevent it from showing up in the first place.
By the time a request lands on your plate, the expectation has already been framed. The scope reflects someone else’s picture of what the work should look like. The risk model reflects what they currently understand, and more importantly, what they don’t. If you start engaging at this stage, you’re already reacting.
A more effective approach is to get ahead of it.
This means paying attention to where leadership and partner teams are likely to feel uncertainty. What are they about to release? Where are new trust boundaries being introduced or assumptions changing? What news cycles or trends are about to land on their radar?
Once you see those signals, you don’t wait. You start looking.
You can form an initial point of view on where risk might actually live. Pull in a team leader or two and decide whether there’s real risk worth exploring. If so, align the appropriate team member(s) and let them loose. By the time someone reaches out with “can you take a look at this?”, you’re already in a position to respond with, “we’ve been digging into that area, and here’s what we’re seeing.” Or, “we’ve been tracking this and don’t see meaningful risk because…”
That shifts the interaction. Instead of inheriting a task, you’re bringing insight. Instead of accepting someone else’s framing, you’re offering a more informed view of where risk actually is. And importantly, the team stays anchored in the kind of work it was built to do.
Be Clear About What the Team Is Built to Do
Another reason misaligned work shows up is that many organizations don’t have a clear mental model of what offensive security is optimized for.
If that model isn’t made explicit, the team gradually gets pulled into adjacent roles. It becomes the place for QA support, rushed penetration tests, control validation, or general security cleanup.
Those things can be important, but they are fundamentally different from the work an intuition-driven offensive capability is designed to perform.
Offensive security at its best is optimized for depth, not coverage. It follows signals rather than strict scopes and seeks to understand systems deeply enough to identify where assumptions break.
Being explicit about that helps prevent drift. It clarifies where the team creates the most leverage and where other functions are better positioned to help. Redirecting work becomes less about refusing requests and more about aligning the problem with the right capability.
Translate the Request, Not Just the Task
Even with proactive engagement and clear expectations, requests will still arrive that don’t quite fit.
Most of these requests are imperfect expressions of a real concern. If you respond only to the surface wording, you risk solving the wrong problem. Let’s look at this with a simple example.
Leadership asks, “Can your team take a look at X before it releases?”
At face value, that sounds like a request for a traditional penetration test: a shallow, time-boxed validation of a specific component. But that’s not how my teams typically operate. Our focus is on escaped risk—issues that have made it through the SDLC and are exposed in production.
So instead of immediately saying yes or no, I start by asking questions.
What is actually worrying you about this feature?
Is there a particular class of attack you’re concerned about?
Did something about the design trigger someone’s spidey sense?
Once those concerns are clearer, the work can be reframed.
Instead of a narrow pre-release test, the engagement becomes an investigation into the trust boundaries around that feature or a deeper exploration of how it could be abused. The scope shifts from validating a component to understanding the underlying risks. The timeline the stakeholder expects may not change, but you can be explicit that what you deliver on that date isn’t your final word: “On that date I will share our latest opinion, but that won’t necessarily mean we’re done exploring or that our opinion won’t change the next day.”
That approach aligns with how the team creates value, and the irony is that it almost always gives the stakeholder a better answer than the one they originally asked for.
Over time, this changes how stakeholders engage. They stop bringing vague “can you just look at this?” requests and start arriving with specific concerns to work through together. That’s a healthier pattern, because offensive security is most valuable when it helps make sense of the unknown, not when it acts as a broad-coverage safety net.
Protect the Conditions for Exploration
There’s a dynamic that leaders sometimes underestimate: the work a team repeatedly accepts quietly shapes what that team becomes.
If offensive security consistently performs validation work, it will be seen as a validation function. If it consistently produces adversarial insight into systemic risk, that becomes its identity instead.
Offensive research requires a particular environment to sustain that identity. Researchers need time to understand systems deeply, freedom to pursue unusual signals, and space to follow threads that may or may not lead somewhere important.
Those conditions disappear when every other week is dominated by reactive work. Curiosity gives way to responsiveness, and the instinct to explore weakens.
So the real challenge isn’t getting better at saying no. It’s shaping how the team engages with and is seen by the organization so that the work arriving is more likely to align with its purpose.
When that happens, saying no becomes a small correction inside a system that is already mostly aligned.
An offensive security program shouldn’t exist simply to absorb whatever security work appears. Its role is to discover what matters most, especially when those things aren’t obvious, convenient, or already sitting in someone else’s queue.
At its core, offensive security isn’t a task execution function.
It’s a truth-finding one.