Intuition-Driven Offensive Security
Building a program for critical risk discovery
Thinking Outside the Timebox
Most organizations still treat offensive security like a transaction: scope a test, set a date, get a list of bugs, check a box. Everyone knows the game—limited time, limited scope, limited depth. The result is shallow findings packaged as progress while real risks stay hidden. But attackers don’t care about your scopes or deadlines. They care about how systems actually work and where assumptions break.
When I left consulting to build an offensive security program, I wanted something different: not checklists and arbitrary constraints, but intuition, deep understanding, and technical truth. The objective is to surface significant risks, not to pad bug counts or generate reports.
This program is not intended to be the standard “product security” function. There is already a team responsible for SDLC coverage: threat models, design reviews, SAST/DAST, security QA, and security sign-off before release. That is critical work, but out of necessity it tends to be checklist-driven and deadline-bound, regardless of the team’s talent and capabilities. My team brings a different perspective and focus: finding high-impact issues by going deep rather than trying to find everything or manufacture a sense of coverage. We do not want to miss critical truths just because a release was scheduled for next week or a fixed methodology checklist had to be satisfied.
But you need more than philosophy to achieve this. To guide the team—and reassure leadership—I base this approach on three principles. They may sound simple, but taken together they change how offensive security is done.
Principle #1: Deep Understanding of the Target
The starting point is simple: before trying to break a system, I want my team to understand parts of it better than its developers do. That means interacting with the product as a typical user would, digging through the code, mapping the direct input controls, and identifying the places where the system can be influenced indirectly. We look for the seams: where code meets configuration, where assumptions stretch thin, where convenience outweighs caution.
Deep understanding is not about memorizing architecture diagrams. It is about observing behavior in practice. A login screen is never just a login screen. It’s a bundle of authentication logic, backend APIs, cookie handling, error states, and user flows that developers assume will always be used correctly. Our job is to look for where those assumptions break down.
When offensive security engineers take the time to explore like this, they begin to see patterns that are invisible in a 40-hour test. The real attack surface is not the obvious entry points but the forgotten interactions and edge cases no one designed for.
Principle #2: Seeking Technical Truth
Security work often defaults to assumptions. Vendors say their product encrypts data, so you believe it. A code review checklist says “sanitize input,” so you assume it’s happening. The problem is that attackers do not operate on assumptions; they operate on what is true.
Truth-finding means verifying claims until they collapse or hold up under scrutiny. Instead of accepting “data is encrypted,” we need to know exactly how, when, and with what key management. Instead of assuming access controls are in place, read the code and test what happens when the system is under stress or misused.
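The “verify, don’t assume” step can be sketched in code. The snippet below is a minimal, hypothetical heuristic of my own construction (the function names and the 7.5-bit threshold are illustrative, not a standard tool): real ciphertext is statistically close to random bytes, so a quick byte-entropy check can expose a field that is merely encoded when a vendor claims it is “encrypted at rest.”

```python
import base64
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: near 8.0 for ciphertext/random data,
    typically 3-5 for natural-language or structured text."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def looks_encrypted(blob: bytes, threshold: float = 7.5) -> bool:
    """Heuristic: ciphertext should be indistinguishable from random
    bytes, so its byte-level entropy sits near the 8-bit ceiling."""
    return shannon_entropy(blob) >= threshold

# A value a vendor might claim is "encrypted at rest" -- but it is only
# base64-encoded, which any attacker reverses instantly:
stored = base64.b64encode(b"user=alice;role=admin;" * 40)
print(looks_encrypted(base64.b64decode(stored)))  # False: plaintext entropy is low

# Genuinely random data (a stand-in for real ciphertext) passes:
print(looks_encrypted(os.urandom(4096)))          # True
```

A check like this is only a first filter, of course; the follow-up questions are the ones that matter: which algorithm, which mode, and who holds the keys.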
Some of the most impactful discoveries come not from exploiting obvious bugs but from uncovering where the system’s supposed guarantees don’t hold. That is the difference between bug-hunting and truth-seeking. A bug may be patched tomorrow. A truth, once revealed, often changes how the organization thinks about its security model.
Principle #3: Risk Hunting, Not Bug Counting
Traditional penetration tests are incentivized to produce long reports. More findings look like more value, even if many of them are low-impact. This creates a perverse outcome where teams spend time cataloging minor misconfigurations while the real risk hides deeper down.
A critical risk hunting program flips that equation. We care less about how many issues we find and more about whether we uncover the ones that matter. A single insight into how authentication can be bypassed is worth more than a dozen cross-site request forgery reports.
This requires discipline. Skipping over shallow findings feels uncomfortable, especially when stakeholders expect a thick report. But the payoff is real: when you focus on overall severity, you shift the organization’s attention to where attackers would focus. You’re no longer feeding a metrics machine; you’re helping harden against real adversaries.
How We Engage
Building this program meant not only shaping principles, but also creating an operating model that lets them thrive. In practice, the team engages in three main ways:
Engineering-Driven: When engineering teams build something new and high-risk—whether it’s a new product, a major feature, or a sensitive integration—we evaluate it for a deep-dive assessment. We operate outside the release process, avoiding the deadline pressures that force shallow reviews and a false sense of coverage.
Business-Driven: Sometimes a disclosure against another company, a public hacking competition like Pwn2Own, or even a news article raises a business concern. We turn those into focused assessments: how would this attack manifest here, and what’s the true impact if it did?
OffSec-Driven: This is where the bulk of the work comes from, and where I see the most value. The team is full of experienced hunters. I want them chasing their instincts, identifying and going down rabbit holes, and exploring areas of the code or product they believe could yield critical issues. In practice, this is where the deepest truths about our systems are uncovered.
Each of these engagement models gives the team the space to do what I consider real offensive security: follow intuition, find the truth, and focus on impact.
Shifting the Narrative
The industry has spent too long treating offensive security like a series of discrete engagements—a netpen here, an application assessment there. The result is a cycle of shallow testing, shallow findings, and shallow defenses. Intuition-driven, truth-seeking, risk-hunting programs offer a way beyond.
By pursuing deep understanding of systems, verifying technical truths, and focusing on impactful risks, we give defenders something they can actually act on. We stop pretending that a clean pentest report equals security and start producing insights that align with how real attackers operate.
The ultimate goal is simple: to replace security theater with security truth. If you are still measuring security assurance by report thickness, start counting truths instead.