Planting a Prairie That Nobody Lives In

A natural history diorama is a sealed glass box containing a world that doesn't exist — and yet, for a second, your brain says place. Deception operations work the same way. This is a post about the craft of building worlds convincing enough that attackers forget they're standing in a museum.

There’s a hall in the Field Museum in Chicago where you can stand three feet from a bull elk and feel the pine needles under your boots. The elk is dead, of course — has been for decades. His hide is real but draped over a sculptured form, like a Sunday shirt hung on a wire hanger. The backdrop behind him is flat acrylic on plaster, but the light is painted so carefully that your eye reads it as distance, as atmosphere, as the particular amber of late afternoon in the northern Rockies. The grasses at his feet are dried and pinned to plywood with museum-grade adhesive. Nothing in that box moves. Nothing breathes. And yet — for a second or two, before your rational mind catches up — something in you says place. Not “display.” Not “exhibit.” Place.

The last time I stood in front of that diorama, something caught wrong. A seam in the backdrop, maybe, where two panels of painted sky met at a slightly different hue. Or the glass eye — that milky, too-perfect sphere that reflects the overhead track lighting instead of the painted sun. I don’t remember what specifically broke the spell. I just remember the moment it broke: one second I was in a forest, and the next I was in a room, looking at dead animals and paint.

That’s what happens when an attacker fingerprints your honeypot.

One wrong timestamp. One default Cowrie banner still reading Debian 5.0. One domain registered three days ago with no Wayback Machine history and a WHOIS record that smells like fresh printer ink. And the environment stops being a place and becomes a display. The attacker surfaces from the flow state we talked about in my last piece — she stops thinking about where she’s going and starts thinking about where she is. The spell breaks. The seam shows. The glass eye catches the light.

In “The Sam Reich Problem,” I wrote about why deception works — about the cognitive architecture that makes human beings so beautifully susceptible to constructed environments. Flow states, choice architecture, the way a well-designed world flatters your sense of agency even as it narrows your options. That post was about psychology. This one is about craft. How you actually build the diorama. How you stretch the hide over the form, paint the backdrop, pin the grasses to the floor. Two levels of construction: the individual specimen — a deceptive persona — and the full habitat — a deceptive enterprise. And — critically — how you keep the whole thing alive long enough to matter, at a scale that doesn’t eat your team from the inside out.

I’ve been building a framework for generating these environments programmatically — a project I’ve been calling Ossian, after the legendary Gaelic poet who turned out to be an elaborate fabrication himself. James Macpherson published his “translations” of Ossian’s third-century verse in the 1760s, and it took decades for the literary establishment to accept that the bard never existed. The poems were Macpherson’s own invention, dressed in the authority of antiquity. It’s one of history’s great deception operations, and the name felt right.

But frameworks are infrastructure. What I want to talk about here is the thinking — the design philosophy that makes the difference between a deception environment that captures an attacker for weeks and one that gets fingerprinted in minutes.


The Specimen

Or: How to Build a Person Who Doesn’t Exist

MITRE’s D3FEND taxonomy has a formal definition for a decoy persona — a synthetic identity created to support defensive operations. But “synthetic identity” makes it sound clinical, like something grown in a petri dish under fluorescent light. What we’re actually doing is closer to method acting. We’re building a character who needs to survive scrutiny from an audience that wants to catch her lying.

What the Audience Actually Checks

Here’s the thing about an attacker running OSINT on a person: she’s not going down a checklist. She’s building a mental model. She encounters a name — say, on a phishing lure, or in a LinkedIn connection request, or in the employee directory of a target she’s reconning — and she starts assembling a picture. Every new data point either coheres with the picture forming in her head or it contradicts it. Her process isn’t verification so much as falsification. She’s testing a hypothesis she’s already forming, and the hypothesis is: this person is real.

Her first move is cross-platform username correlation. Tools like Sherlock, SpiderFoot, and Maltego can trace a name across hundreds of platforms in seconds. And this is where most deception personas die in the first five minutes: they exist on LinkedIn and nowhere else. No GitHub. No Twitter. No Stack Overflow. No comments on Hacker News. No old Tumblr account from 2014 that they clearly forgot about. No half-abandoned personal site with a blog post from three years ago that trails off mid-paragraph because they got busy and never came back.
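As a rough sketch of what that first pass automates (the platform list and URL templates below are illustrative assumptions, not the actual configuration of Sherlock or any other tool):

```python
# Sherlock-style username enumeration: given one handle, build the candidate
# profile URLs an investigator would probe for existence. Real tools cover
# hundreds of platforms; these five templates are illustrative only.
PLATFORM_TEMPLATES = {
    "github": "https://github.com/{}",
    "twitter": "https://twitter.com/{}",
    "hackernews": "https://news.ycombinator.com/user?id={}",
    "tumblr": "https://{}.tumblr.com",
    "personal_site": "https://{}.dev",
}

def candidate_profiles(handle: str) -> dict[str, str]:
    """Return platform -> candidate profile URL for one handle."""
    return {site: tpl.format(handle) for site, tpl in PLATFORM_TEMPLATES.items()}

# A persona that resolves on only one of these surfaces is the single
# mushroom with no mycelium underneath it.
urls = candidate_profiles("jdoe_dev")
```

The point of the sketch is the shape of the check, not its coverage: the attacker's tooling turns one name into dozens of yes/no existence probes in seconds.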

The thing about real people is that we leave trails. Not curated presences — trails. Scattered, inconsistent, sometimes embarrassing digital breadcrumbs that accumulate over years like sediment. Think of it like mycelium — the vast underground fungal network that connects what looks like separate mushrooms on the surface. One mushroom might be a LinkedIn profile. Another might be a GitHub contribution graph. Another might be an email address that shows up in a data breach from 2018. Above ground, they look like separate organisms. Below ground, they’re one network, one identity, one root system. A persona that’s a single mushroom with no mycelium underneath it is immediately suspect to anyone who knows how forests actually work.

Then there’s the face problem. For a while, StyleGAN-generated portraits seemed like a gift to deception operators — photorealistic faces of people who don’t exist, available at the click of a button. That window has closed. Hany Farid’s lab at UC Berkeley and others have demonstrated automated detection of GAN-generated faces at accuracy rates exceeding 99%, keying on centering artifacts and the uncanny symmetry that generators produce. Newer diffusion models avoid some of these tells, but the detection arms race is moving faster than the generation. The safest face, it turns out, is one that was never generated at all: a photograph of a landscape, a pet, an abstract avatar, a company logo. Plenty of real human beings don’t use headshots on professional platforms. The absence of a face is less suspicious than the presence of a fake one.

Temporal analysis is quieter but just as lethal. A “Senior Security Engineer with 8 years of experience” whose LinkedIn account was created last month doesn’t pass the smell test. Neither does a GitHub profile with a long commit history but an account creation date from six weeks ago. These aren’t things an attacker consciously checks in every case — but when something else feels slightly off, when the hypothesis starts to wobble, these are the threads she pulls.
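A minimal sketch of that temporal check, with a plausibility threshold that is entirely my own assumption:

```python
from datetime import date

def temporally_plausible(claimed_years_experience: int,
                         account_created: date,
                         today: date = date(2026, 1, 1)) -> bool:
    """Flag personas whose claimed career predates their digital footprint
    by an implausible margin. The 25% threshold is an illustrative knob."""
    account_age_years = (today - account_created).days / 365.25
    # Require the oldest visible account to cover at least a quarter of
    # the claimed career; a month-old account and "8 years of experience"
    # fails immediately.
    return account_age_years >= claimed_years_experience * 0.25

assert not temporally_plausible(8, date(2025, 12, 1))  # month-old account
```

The thread-pulling the paragraph describes is exactly this arithmetic, done by eye rather than by code.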

Writing style is one most operators forget entirely. Stylometry tools — computational analysis of vocabulary, sentence structure, and linguistic fingerprints — achieve 80–85% accuracy linking anonymous text to known authors, even on relatively small writing samples. If every persona you create writes with the same cadence, the same slightly formal tone, the same fondness for em dashes and semicolons, that’s a fingerprint as readable as a watermark. Your personas need distinct voices, not just distinct names.
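A toy version of the fingerprint, to make the risk concrete. Real stylometry (Burrows' Delta and its descendants) uses hundreds of features; these four are illustrative:

```python
import re

def style_fingerprint(text: str) -> dict[str, float]:
    """Crude stylometric features: enough to show why personas written by
    the same hand (or the same prompt) cluster together."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "em_dash_rate": text.count("\u2014") / max(len(words), 1),
        "semicolon_rate": text.count(";") / max(len(words), 1),
    }

def style_distance(a: dict[str, float], b: dict[str, float]) -> float:
    """L1 distance between fingerprints. A near-zero distance across
    supposedly independent personas is the watermark."""
    return sum(abs(a[k] - b[k]) for k in a)
```

If every persona's blog posts score nearly identically on features like these, an analyst doesn't need to prove authorship; the cluster itself is the tell.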

And then there’s the social graph. Clusters of fake accounts that only connect to each other form visible islands in network analysis — neat, self-referential little archipelagos floating in the ocean of organic social connections. Real people have messy, sprawling networks that cross industries and geographies and make no particular topological sense. Your personas need to connect outward, into the real graph, or they’ll read as what they are: a diorama full of taxidermied animals all facing the same glass.

The Sixty-Two Dimensions (and Why You Need Maybe Twelve)

The framework I’ve been prototyping tracks sixty-two-plus attributes per persona. Name, email, employment history, technical skills, writing voice, personality parameters, location, timezone, hobbies, social connections — the full biography of a person who doesn’t exist. And for internal consistency tracking, that level of detail matters. But here’s the diorama principle at work: the visitor sees the front of the elk. The back can be plaster and wire. The anatomy has to be consistent — if the antlers belong to a mule deer and the hide belongs to a whitetail, a hunter standing at the glass will notice. But the internal armature, the bolts holding the form to the platform, the pipe carrying the support rod through the belly? Nobody sees those. They just have to not break.

In practice, an attacker performing an OSINT pass on a persona will check maybe eight to twelve things. Everything else matters only insofar as it prevents contradictions in the surfaces she actually examines.

The load-bearing attributes — the ones you get right or you don’t bother — are these: name, email, and profile photo consistency across every platform the persona touches. Employment history that doesn’t contain impossible overlaps or unexplained gaps. Technical skills that match the claimed role — a “Senior ML Engineer” who has no Python repositories and follows zero ML researchers on any platform is a ghost in a lab coat. Location coherence — timezone of activity, area code of phone number, city on LinkedIn, all telling the same story. And activity cadence — the persona has to show signs of life at intervals that feel human, not metronome-regular and not dead silent.

Below that are the medium-visibility attributes that matter for sustained engagement: writing voice that shifts appropriately by platform (LinkedIn is more polished than Slack; commit messages are terse; personal emails are casual), interaction patterns that draw from diverse sources rather than just other personas in the same operation, and work schedule realism. Commits at 3 a.m. on Christmas morning are suspicious for someone claiming to be a nine-to-five developer in Denver. People have rhythms. Your personas need rhythms too.
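A sketch of what "rhythms" means in generated activity. The distributions below (working hours, weekend quiet, bursty daily counts) are illustrative assumptions, not calibrated to any real dataset:

```python
import random
from datetime import datetime, timedelta

def human_activity_times(start: datetime, days: int, seed: int = 7) -> list[datetime]:
    """Generate activity timestamps with a human rhythm: weekdays, working
    hours, jittered counts -- not metronome-regular, not dead silent."""
    rng = random.Random(seed)
    events = []
    for d in range(days):
        day = start + timedelta(days=d)
        if day.weekday() >= 5 and rng.random() > 0.15:
            continue  # weekends mostly (not entirely) quiet
        for _ in range(rng.randint(0, 5)):  # bursty daily counts
            hour = int(rng.triangular(8, 19, 11))  # peaks mid-morning
            events.append(day.replace(hour=hour, minute=rng.randrange(60)))
    return sorted(events)
```

Generating timestamps this way, in the persona's claimed timezone, is what keeps the Denver developer from committing at 3 a.m. on Christmas morning.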

And below that are the low-visibility attributes — personality parameters useful for prompting an LLM to maintain voice consistency, hobbies and interests useful for generating natural-looking activity gaps (she didn’t post last week because she was hiking in Glacier, and her Strava says so). These are the plaster and wire inside the form. The attacker never sees them. But if you don’t track them, the contradictions in the visible surfaces will accumulate like compound interest.

Scaling the Repertory Company

One persona, maintained by one dedicated analyst, can be exquisitely detailed. She can have a backstory that holds up under cross-examination, a LinkedIn network that sprawls convincingly, a GitHub contribution graph that tells a story of career growth. She can be a masterwork.

Twenty personas across five deceptive organizations, maintained by a team of three, will develop inconsistencies within weeks.

This is the scaling problem, and it’s less a technical challenge than an exercise in running a repertory theater company. One actor can improvise a character for a two-hour show. But put twenty actors on the same stage, each playing a different role in a different production running simultaneously, and somebody is going to walk into the wrong scene wearing the wrong costume. You need a script. You need a stage manager. You need systems.

The consistency engine I’ve been prototyping uses each persona’s full attribute profile as a system prompt, generating emails, social posts, commit messages, and Slack responses in a consistent voice. The key constraint is that the LLM must never break character — which means the system prompt needs explicit guardrails, and a validation layer needs to check every output for consistency violations before it reaches any surface an attacker might touch. A persona who suddenly starts sounding like a different person — or worse, like an AI chatbot with its default politeness intact — is a glass eye catching the track lighting.

Cross-platform consistency validation runs automated checks across every touchpoint: if LinkedIn says Stanford but the personal website says MIT, that’s an alert. If the claimed timezone is Pacific but the commit history shows activity patterns centered on Eastern, that’s an alert. These contradictions are invisible individually but compound over time, especially when different team members update different surfaces without checking each other’s work.
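A minimal sketch of that validator: diff the same attribute across every surface and alert on disagreement. The surface names and attributes are hypothetical:

```python
def consistency_alerts(surfaces: dict[str, dict[str, str]]) -> list[str]:
    """Compare each attribute across every surface a persona touches.
    `surfaces` maps surface name -> {attribute: observed value}."""
    alerts = []
    attrs = {a for obs in surfaces.values() for a in obs}
    for attr in sorted(attrs):
        seen = {s: obs[attr] for s, obs in surfaces.items() if attr in obs}
        if len(set(seen.values())) > 1:  # any disagreement is an alert
            alerts.append(attr + ": " + ", ".join(
                f"{s}={v}" for s, v in sorted(seen.items())))
    return alerts

alerts = consistency_alerts({
    "linkedin": {"education": "Stanford", "timezone": "US/Pacific"},
    "website":  {"education": "MIT"},
    "github":   {"timezone": "US/Eastern"},  # inferred from commit hours
})
```

Run on every update to any surface, this catches the Stanford/MIT contradiction before an attacker does, and before two team members' edits drift apart.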

And then there’s the bus factor. If the only analyst who knows “Sarah Chen’s” full backstory leaves the team, that persona becomes an uncontrolled asset — a character still on stage with no one remembering her lines. Centralized persona registries with full attribute records, interaction logs, and platform credentials aren’t optional. They’re the script that lets the show go on when the original actor leaves the company.

A note on platform Terms of Service, because I think the community undersells this risk: creating fake profiles on LinkedIn, GitHub, Twitter, and every other major platform violates their Terms of Service. LinkedIn alone removed twenty-one million fake accounts in the first half of 2022. Meta has explicitly told law enforcement that fake accounts violate ToS even when created for investigative purposes. The Supreme Court’s Van Buren v. United States (2021) narrowed the Computer Fraud and Abuse Act in ways that suggest ToS violations alone may not constitute federal crimes, but platforms retain civil enforcement options and will ban aggressively. My recommendation leans toward owned-infrastructure personas as the default — email addresses on your own domains, profiles on your own web properties, repositories on self-hosted GitLab instances — supplemented by minimal third-party presence only where legally defensible and operationally necessary. Flag the risk. Let your legal counsel make the call. Don’t let your deception operation become somebody else’s case study in platform abuse.


The Habitat

Or: How to Build a Company That Nobody Works For

If building a persona is taxidermy — stretching a convincing skin over a carefully shaped form — then building a deceptive enterprise is constructing the entire diorama. The backdrop, the lighting, the terrain, the ecosystem. Not just one elk but a whole meadow: the grass, the wildflowers, the distant treeline, the suggestion of a creek just out of frame. A world that reads as place.

A skilled threat actor validating a target organization before committing resources runs something like a due diligence process. She checks surfaces roughly in order of effort, and the early ones matter most.

Domain registration is the foundation — WHOIS records, DNS configuration, MX records, SPF and DKIM. A domain with no mail exchange record in 2026 is like a house with no mailbox: technically possible, immediately odd. The website behind that domain needs content depth, a technology stack that makes sense for the claimed industry (BuiltWith and Wappalyzer will betray a WordPress site pretending to be a custom SaaS platform), and an SSL certificate with some history in the Certificate Transparency logs. A LinkedIn company page needs followers, an employee count that coheres with the website’s “About” section, and a posting cadence that suggests someone in marketing still has a pulse. Google search depth matters — does the company appear in results beyond its own properties? A real company leaves citations in other people’s blog posts, directory listings, the occasional press mention. And the Wayback Machine is the Rosetta Stone: a company claiming to be founded in 2023 with no archived web presence before last month is a fiction wearing a date stamp like a borrowed watch.

These first five surfaces — domain, website, LinkedIn, Google footprint, and Wayback history — are the load-bearing walls. The research consistently shows that roughly ninety percent of adversary investigations stop here. Most attackers aren’t filing FOIA requests or calling registered agents or cross-referencing state secretary of state databases. They’re running fast OSINT passes, building a gut-level read, and deciding whether the target is worth the investment. Get these five right, and you’ve built a diorama that will fool most visitors. The remaining surfaces — business registration, GitHub presence, cloud infrastructure on Shodan, job postings, social media, press coverage, partner ecosystems — are trim work. They increase fidelity, and they matter against the most meticulous adversaries, the nation-state APTs running months-long reconnaissance operations. But they have diminishing returns, and for most threat models, they’re the painted mountains in the distance rather than the elk in the foreground.
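One way to operationalize that prioritization is a weighted checklist. The weights below are my own illustrative assumptions, chosen to reflect the claim that the first five surfaces carry most of the verdict:

```python
# Weighted surface checklist for a deceptive enterprise. Load-bearing
# surfaces dominate; "trim work" matters mainly against patient adversaries.
SURFACE_WEIGHTS = {
    "domain_whois_dns":      0.25,
    "website_depth":         0.20,
    "linkedin_page":         0.20,
    "google_footprint":      0.15,
    "wayback_history":       0.10,
    # trim work
    "business_registration": 0.03,
    "github_presence":       0.03,
    "press_and_partners":    0.04,
}

def credibility_score(passed: set[str]) -> float:
    """0.0-1.0: which surfaces would survive an adversary's due diligence."""
    return round(sum(w for s, w in SURFACE_WEIGHTS.items() if s in passed), 2)
```

Scoring this way makes the diminishing returns visible: nailing the five load-bearing surfaces gets you to 0.9 under these weights, and everything else buys the last tenth.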

The Aging Problem

This is the single biggest operational constraint in enterprise deception, and it deserves a moment to sit with, because it’s one of those problems that’s simple to state and genuinely hard to solve: you can’t build a company yesterday.

Aging a deceptive enterprise is like tending a stretch of land you’ve been asked to make look like it’s been a meadow for twenty years. You can seed it. You can water it. You can accelerate some processes — Wayback Machine snapshots can be requested manually, SSL certificates can be issued months ahead of operational deployment, LinkedIn pages can be pre-populated with a year of back-dated activity. But there is an irreducible temporal floor below which the footprint reads as freshly minted. The soil needs time to develop structure. The root systems need seasons to interlock. Seeds need time to grow.

A domain with six months of DNS history, three months of indexed content, and a handful of Wayback snapshots is the minimum viable aged presence. A year is comfortable. Two years is luxurious and nearly bulletproof against anything short of a state intelligence service with a grudge.

SSL certificate history in Certificate Transparency logs is permanent and public — a first-issued date that contradicts the company’s claimed founding is a seam in the backdrop that can’t be painted over. Social media account creation dates are visible on some platforms, which means accounts need to exist during the aging window. And the SEO footprint — domain authority, backlinks, directory listings — takes months to build organically. A site with zero backlinks and a domain authority of one after supposedly two years of operation is an empty suit walking through a crowded room, and everyone in the room can tell.

The implication is strategic: deception operations need lead time. If you’re standing up infrastructure in response to an active threat, you’ve already lost the aging game. The organizations that do this well are the ones that plant seeds before they need the harvest — that maintain a portfolio of aged domains, pre-positioned personas, and dormant-but-credible enterprise shells that can be activated when the operation requires them. It’s infrastructure gardening. You tend it before you need it, or you don’t have it when you do.

Convincing Imperfection

Here’s where the staged-home metaphor earns its keep. When a real estate agent prepares an empty house for showing, she doesn’t furnish it with museum pieces. She furnishes it with IKEA and coffee-table books and a half-read novel on the nightstand, because “someone lives here” is a feeling, not a fact. The buyer doesn’t examine the provenance of the bookshelf. She registers, at a level below conscious analysis, that the space feels inhabited. The coffee mug by the sink. The sneakers by the door. The refrigerator that has actual food in it instead of a single bottle of sparkling water and a decorative lemon.

Real companies have mess. A blog with a broken image link on a post from eight months ago. A careers page listing a role that was clearly copy-pasted from another posting and still has the wrong team name in the job description. A GitHub repo with a stale pull request that nobody’s reviewed since October. An About page that mentions an advisor whose LinkedIn profile doesn’t exist yet, because she just hasn’t made one, because she’s a real person in a small company and small companies are informal like that. These artifacts of human imperfection are texture. They’re the dried grass pinned to the diorama floor. They don’t just avoid suspicion — they actively generate the feeling of place.

What you don’t fake: press coverage from real publications, awards from real organizations, partnerships with real companies. Fabricating a TechCrunch article is fraud and trivially falsifiable. Claiming a partnership with a named vendor creates liability and can be debunked with a single email. The rule is clean: your deceptive enterprise’s entire external narrative should be unfalsifiable, not falsely verifiable. “Featured in leading industry publications” is fine. “Winner, TechCrunch Disrupt 2025” is a tripwire you’ve laid across your own path.

Code repositories need the same texture. Merge conflicts resolved messily. A TODO: refactor this comment from four months ago. Dependency versions one minor release behind current. Pristine code with perfect formatting and zero technical debt is as uncanny as a house with no dust. Nobody lives like that. No codebase survives contact with real developers and comes out clean.

Feeding the Sourdough

Now we arrive at the hardest truth in this entire discipline: the maintenance cost dominates the creation cost.

Spinning up a deceptive enterprise in six weeks is achievable with good automation and a well-resourced team. Keeping it alive for six months — generating blog posts, updating LinkedIn, making GitHub commits, refreshing job postings, responding to the occasional email inquiry from a confused recruiter or an automated sales bot — is where operations collapse. Every surface you create is a surface you have to feed. And a blog that published twice a month for three months and then went silent is worse than a blog that never existed, because the silence is itself a signal. It’s a sourdough starter left on the counter too long: the culture died, and anyone who opens the jar can tell.

LLM-powered content generation is the most viable path to sustainable cadence — but it introduces its own detection surface. AI-generated blog posts carry stylistic signatures. AI-generated commit messages tend toward an unnatural consistency, a politeness and completeness that no human developer under deadline has ever exhibited. The mitigation is what I think of as variation injection: deliberately introducing inconsistency in style, quality, and timing. Some blog posts should be better than others. Some commit messages should be lazy one-liners. One month should have four posts and the next should have one. The imperfections aren’t bugs. They’re the texture that makes the diorama breathe.
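A sketch of variation injection applied to a publishing calendar. The count distribution and quality tiers are illustrative assumptions:

```python
import random

def monthly_post_plan(months: int, seed: int = 3) -> list[dict]:
    """Variation injection: irregular monthly counts and uneven quality,
    so the cadence lacks the metronomic signature of pure automation."""
    rng = random.Random(seed)
    plan = []
    for m in range(months):
        # Skewed toward one or two posts, with occasional silence or bursts.
        n_posts = rng.choice([0, 1, 1, 2, 2, 3, 4])
        quality = [rng.choice(["polished", "ok", "rushed"])
                   for _ in range(n_posts)]
        plan.append({"month": m, "posts": n_posts, "quality": quality})
    return plan
```

The "rushed" tier matters as much as the schedule: it's the prompt-level instruction to produce the occasional lazy one-liner or mediocre post that a deadline-stressed human would actually ship.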

The minimum viable enterprise — for most deception operations, against most threat models — is more modest than you’d think: a credible domain and website, three to five employee personas with consistent LinkedIn and email presence, a LinkedIn company page, and one technical surface like a GitHub organization or published API documentation. That’s the load-bearing structure. Everything else is fidelity enhancement, scaled to the sophistication of the adversary you expect to face. A ransomware gang running opportunistic operations won’t check Crunchbase. A nation-state APT conducting targeted pre-compromise reconnaissance will check Crunchbase, and the business registration, and the registered agent, and whether anyone in the company has ever presented at a conference. Design for your threat model, not for your anxiety.

And shared infrastructure patterns make this scalable: multiple deceptive enterprises can share backend systems — monitoring, LLM consistency engines, content generation pipelines, scheduling — while presenting completely distinct external surfaces. Same stage crew, different shows. The audience never sees the rigging.


The Glass Eyes

Or: The Things That Break the Spell

Come back to the museum with me. The thing that broke the spell — the glass eye, the seam in the sky — was always specific. It wasn’t that the whole diorama looked fake. It was that one element, one specific detail, caught wrong. And once you see it, you can’t unsee it. The elk is dead again. The forest is paint again. You’re standing in a room.

Honeypot fingerprinting works the same way. The attacker isn’t looking at the whole environment and thinking “fake.” She’s moving through it, building her model, and she trips over one thing that doesn’t belong. This section is a catalog of glass eyes.

Default configurations are the single most common failure mode. Cowrie ships with a Debian 5.0 filesystem that hasn’t been current in over a decade. Conpot’s default web page contains the word “Technodrome” and a Last-Modified header from 1993 — a timestamp old enough to have its own kids in college. VulnCheck’s research found roughly 236,000 supposed Confluence honeypots sharing identical session IDs, which is less “hiding in a crowd” and more “wearing the same name tag at a party of a quarter million.” Using defaults is building a diorama from a kit and leaving the instruction manual on the floor inside the glass case.

Protocol fingerprinting is subtler and harder to avoid. Cowrie’s SSH implementation uses Python’s Twisted Conch library, which handshakes differently from OpenSSH in ways that are invisible to casual connections but immediately obvious to purpose-built detection tools. Vetterl and Clayton’s research, presented at USENIX WOOT, found 2,844 honeypots by sending a single non-standard SSH packet and watching how the other side responded. JA3 and JA3S TLS fingerprinting reveals implementation differences at the cryptographic layer. These tells are the equivalent of the diorama’s glass eye — invisible from across the room, unmistakable up close.

Timing anomalies reveal the scaffolding underneath. Low-interaction honeypots respond too fast — there’s no disk seek, no process scheduling, no realistic I/O latency, because there’s no real system doing real work. Or they respond too uniformly, without the natural jitter that comes from a genuine operating system juggling competing processes. LLM-powered honeypots solve some of these problems and introduce new ones: the two-to-five-second inference latency of a language model responding to a shell command has a distinct temporal signature that looks nothing like a human typing and nothing like a scripted response. It’s a new kind of glass eye — convincing in content, uncanny in timing.
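One mitigation is to model latency explicitly rather than answering as fast as the code can. A sketch, with parameters that are illustrative rather than measured from any real host:

```python
import random

def simulated_response_delay(base_ms: float = 40.0, rng=None) -> float:
    """Model a real host's response time: a base cost plus log-normal
    jitter, approximating disk seeks and scheduler contention. A pure
    low-interaction emulator answers in near-zero, near-constant time."""
    rng = rng or random.Random()
    # Log-normal gives the heavy right tail real I/O exhibits: most
    # responses fast, occasional slow outliers, never perfectly uniform.
    return base_ms + rng.lognormvariate(1.0, 0.6)

rng = random.Random(42)
delays = [simulated_response_delay(rng=rng) for _ in range(1000)]
```

This addresses the too-fast and too-uniform tells; it does nothing for the LLM inference-latency signature, which has to be masked separately (for example by streaming output at plausible speeds).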

The Christmas tree problem: a single host exposing SSH, FTP, HTTP, HTTPS, SMTP, MySQL, RDP, and Telnet simultaneously is either a honeypot or a catastrophically misconfigured server. Either way, a careful attacker walks away. Real systems specialize. A web server is a web server. A mail server is a mail server. Hanging every ornament on one tree is a signal, not a strategy.

Shodan’s Honeyscore system and historical IP tagging add another layer: IPs previously identified as honeypots are flagged in the database even after reconfiguration. The internet has a memory, and that memory is indexed. IP rotation isn’t optional for long-running deception operations — it’s hygiene.

And at the social layer, the glass eyes are patterns rather than individual artifacts: identical writing style across supposedly independent personas, connection graphs that form closed loops instead of sprawling organically, LinkedIn endorsements that only come from other accounts created the same week, GitHub stars that appear in synchronized bursts from profiles with no other activity. Each individual tell is minor. The pattern is a neon sign.
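The closed-loop tell in particular is cheap to state in code. A minimal sketch: a persona cluster is an island if it has internal edges but no edge crossing into the organic graph:

```python
def is_island(edges: list[tuple[str, str]], personas: set[str]) -> bool:
    """True if the persona set connects only to itself: internal edges
    exist, but nothing reaches the real social graph outside."""
    internal = [e for e in edges
                if e[0] in personas and e[1] in personas]
    external = [e for e in edges
                if (e[0] in personas) != (e[1] in personas)]
    # An archipelago: links among the islands, nothing crossing the water.
    return bool(internal) and not external

fakes = {"p1", "p2", "p3"}
assert is_island([("p1", "p2"), ("p2", "p3"), ("p3", "p1")], fakes)
assert not is_island([("p1", "p2"), ("p2", "alice")], fakes)
```

Real network analysis uses richer signals (clustering coefficients, account-age correlation, burst timing), but this binary version is the shape of the pattern an analyst sees at a glance.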

The countermeasure philosophy is the diorama philosophy: customize everything, introduce realistic imperfection, and populate with the evidence of lived-in life. Bash history files. Half-written emails in the drafts folder. Browser bookmarks that include a mix of work resources and recipe blogs. A .bash_history that contains a typo and a sudo command the user clearly ran twice because she forgot the first one didn’t have the right flags. Deploy on production-grade infrastructure, in the same cloud providers and ASNs and naming conventions as your real assets. Mix decoys with real systems. Rotate IPs. Feed the sourdough.

Detecting a staged home that nobody lives in works the same way: you open the medicine cabinet and it’s empty. The books on the shelf are arranged by color instead of any reading logic anyone would actually follow. The refrigerator has nothing in it but that bottle of sparkling water and that single decorative lemon. The individual elements are all “correct.” The pattern is wrong. The absence of mess is its own kind of evidence.


The Ethics of Building Worlds for Other People to Get Lost In

In the Sam Reich piece, I wrote about Sam’s attention to consent — the moral seriousness of designing environments for other human beings to inhabit. I want to extend that argument here, because it matters, and because I think the deception community sometimes treats the ethics section of a talk as the slide you rush through to get to the demo.

Deception operations weaponize cognition. The same theory-of-mind work that makes a persona convincing — modeling how another human being will perceive, evaluate, and react to a constructed identity — is the same theory-of-mind work that makes a spear-phishing campaign effective. The tools are morally symmetric. The craft is morally symmetric. The ethics live entirely in the application, and that means you have to hold them consciously, because the craft itself won’t hold them for you.

The legal landscape is straightforward in broad strokes and deeply uncertain in the details. Honeypots deployed on your own infrastructure are generally legal under CFAA § 1030(f), which explicitly exempts lawful monitoring of computer systems. Fake accounts on third-party platforms violate Terms of Service on every major platform, and while Van Buren suggests ToS violations alone may not constitute federal crimes, the civil exposure and the operational risk of platform-initiated removal make this a space where conservatism is the only prudent posture. The Active Cyber Defense Certainty Act — which would have provided clearer legal authorization for certain defensive deception activities — was introduced in Congress and never enacted. GDPR treats IP addresses as personal data, which complicates honeypot deployments in European jurisdictions in ways that aren’t fully resolved.

I want to be honest about this: I’m not a lawyer, and the honest summary of the legal landscape is that the law hasn’t caught up to the practice. Until it does, practitioners need to stay conservative, keep legal counsel close, and document every decision with the assumption that someone will eventually ask them to justify it in writing.

But the risks that keep me up at night aren’t the legal ones. They’re the operational ones.

Friendly fire is the most immediate: an employee stumbles into an internal honeypot and gets flagged as an insider threat. A SOC analyst sees the alert, doesn’t recognize it as a deception asset, and initiates an investigation that damages a career and erodes trust. The mitigation is deconfliction — a centralized registry of every deception asset in the environment, accessible to every analyst who might encounter one, with clear escalation procedures that distinguish “someone triggered a canary” from “someone is exfiltrating data.” This sounds obvious. It is not always done.

Deception bleed is subtler: a fictitious entity becomes entangled with real operations. A deceptive domain starts receiving legitimate business inquiries. A persona gets quoted in a real article. A fake company appears in a vendor management system because someone copied a URL without checking. The boundary between the diorama and the museum floor starts to dissolve, and suddenly you’re not sure which elk is real. The mitigation is strict naming conventions, network isolation, and documented exit strategies — a plan for how you dismantle every piece of the operation when it’s served its purpose, down to the last DNS record and the last LinkedIn connection.

And compromised honeypots are a liability problem hiding inside a security tool: if an attacker roots your honeypot and uses it as a command-and-control node or a botnet staging ground, the traffic originates from your infrastructure. Honeywall architectures, egress filtering, and continuous monitoring aren’t enhancements — they’re prerequisites.
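As a sketch of the default-deny posture this implies, the check below permits a honeypot’s outbound traffic only toward a narrow management allowlist. The subnet and ports are invented examples; a real deployment would enforce this at the honeywall or firewall layer, not in application code.

```python
"""Sketch of a default-deny egress policy for a honeypot.

The collector subnet and port numbers are invented examples.
"""
import ipaddress

# Only traffic back to the (hypothetical) honeywall collector is permitted.
ALLOWED_DESTS = [ipaddress.ip_network("10.99.0.0/24")]
ALLOWED_PORTS = {514, 5044}  # e.g. syslog and a log-shipper port


def egress_allowed(dst_ip: str, dst_port: int) -> bool:
    """Default-deny: allow only listed ports toward listed subnets."""
    ip = ipaddress.ip_address(dst_ip)
    return dst_port in ALLOWED_PORTS and any(ip in net for net in ALLOWED_DESTS)


# A rooted honeypot reaching for arbitrary C2 infrastructure is blocked:
egress_allowed("203.0.113.7", 443)   # False
# Shipping capture logs to the collector is allowed:
egress_allowed("10.99.0.10", 514)    # True
```

The point of writing it as an allowlist rather than a blocklist is the liability argument from the paragraph above: if the honeypot is compromised, you want the attacker’s outbound options to be zero by construction, not zero minus whatever you forgot to enumerate.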

The proportionality question is the one I think about most. Zhu’s “Doctrine of Cyber Effect,” published in 2023, defines five ethical principles for defensive deception: goodwill, deontology, no-harm, transparency, and fairness. Neil Rowe’s utilitarian argument from the Naval Postgraduate School adds that defensive deception is often morally justified precisely because defenders are operating at a structural disadvantage — the attacker chooses the time, the place, and the terms of engagement, and deception is one of the few tools that shifts some of that asymmetry back. Just war theory requires proportionality and necessity. These frameworks don’t give you clean answers, and I’d distrust anyone who claimed they did. But they give you the right questions to be asking, which is what ethical frameworks are actually for.

The diorama metaphor one last time: a natural history museum builds dioramas to educate and preserve. The elk behind the glass is a representation — it serves a purpose, it’s built with care, and nobody confuses it for the wild. The museum’s intention is legible. Its methods are documented. The visitors understand, on some level, that they’re looking at a construction, and they value it because of the craft involved, not despite it. Deception operations should aspire to the same clarity of purpose: built with intention, documented thoroughly, bounded by ethics, and never confused — by the organization deploying them — for the real thing.


The Craft

I’ve been thinking about the people who built those Field Museum dioramas. Not the curators or the fundraisers but the artists — the ones who spent months in the field in Montana and Tanzania and the Yucatán, sketching landscapes, pressing plants between sheets of wax paper, recording the exact quality of light at four in the afternoon in September. They didn’t build fictions. They built translations — careful, painstaking reproductions of real places, compressed into a glass box, optimized to trigger recognition in a visitor who might never set foot in the original landscape. The craft was in the faithfulness. The magic was in the compression.

The best deception operations work the same way. You’re not building fantasies. You’re building translations of real organizational life — compressed, simplified, instrumented — optimized to trigger recognition in an adversary who’s seen a thousand real networks and knows what place feels like. The persona doesn’t need to be a real person. She needs to feel like one. The enterprise doesn’t need to be a real company. It needs to have the texture of one — the mess, the rhythm, the particular quality of half-finished work and human imperfection that distinguishes a living organization from a hollow shell.

The craft is in the details. The craft is in the consistency. The craft is in the imperfection.

And the craft, like the sourdough, requires feeding.


This post draws on the MITRE Engage framework and D3FEND taxonomy, empirical cyber deception research from the Tularosa Study (Ferguson-Walter et al., 2021), interpersonal deception theory (Buller & Burgoon, 1996), cognitive deception models (Cranford & Gonzalez, 2021), Rowe’s honeypot fidelity metrics (Naval Postgraduate School), honeypot fingerprinting research (Srinivasa et al., 2023; Vetterl & Clayton, 2018), the ethics of cyber deception (Rowe, NPS; Zhu, “Doctrine of Cyber Effect,” 2023), SANS sock puppet methodology (Gill), the Coalition of Cyber Investigators’ Socket Theory framework, and practitioner writing from Lesley Carhart and CounterCraft. The diorama comparison is the author’s own conceit. The Ossian reference is to James Macpherson’s 1760s literary fabrication, and any resemblance to ongoing projects is entirely intentional.