What DARE Got Wrong About Behavior Change (And What Security Awareness Should Steal From Public Health)

DARE failed for the same reason most security awareness training fails — both treat behavior change as an individual problem. Public health figured out decades ago that it's a community one.

If you went to school in the United States at any point between 1983 and roughly 2015, you probably remember it: the police officer in your classroom, the red-and-black logo, the pledge you signed. D.A.R.E. — Drug Abuse Resistance Education — was in more than 75% of American school districts at its peak, lavishly funded, politically bulletproof, and beloved by administrators who liked having something to point to.

It also didn’t work. Not even a little.

The 1994 meta-analysis by Ennett, Tobler, Ringwalt, and Flewelling — the one that effectively ended DARE’s credibility in the research community, even as the program chugged along in schools for another two decades on political momentum — found a weighted mean effect size of 0.06 on drug use behavior across eight rigorous studies. For every outcome considered, DARE’s effect sizes were substantially smaller than those of programs emphasizing social and general competencies and interactive teaching strategies. West and O’Neal’s 2004 follow-up was worse: the overall weighted effect size came in at Cohen’s d = 0.023 — statistically nonsignificant, a hair above zero. By Cohen’s conventional benchmarks, that number would need to be nearly nine times larger just to reach d = 0.2, the threshold for a “small” effect.
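
To make those magnitudes concrete, the comparison fits in a few lines. Here is a minimal sketch in Python, using the effect sizes reported in the meta-analyses above against Cohen’s standard benchmarks (the benchmarks are conventional; nothing else is assumed):

```python
# Cohen's conventional benchmarks for a standardized mean difference (d):
# 0.2 = small, 0.5 = medium, 0.8 = large.
SMALL = 0.2

studies = {
    "Ennett et al. 1994 (drug use behavior)": 0.06,
    "West & O'Neal 2004 (overall)": 0.023,
}

for label, d in studies.items():
    # How many times larger the observed effect would need to be
    # just to clear the lowest conventional bar.
    print(f"{label}: d = {d}; needs {SMALL / d:.1f}x to reach 'small' (d = 0.2)")
```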

The program wasn’t just ineffective. According to economist Edward Shepard’s cost analysis, it was costly, ineffective, and possibly counterproductive — students received no measurable benefit while schools spent real money and real classroom hours on delivery. Every hour in a DARE session was an hour not spent on something that might have actually helped.

Here’s what I want to sit with for a moment, though — because “DARE failed” is old news, and that’s not really what this post is about.

The more interesting question is why it failed. And when you look at that answer honestly, something uncomfortable comes into focus: the security awareness industry is running the same play, and making the same foundational error.

Not just using the wrong emotion. Operating at the wrong unit of analysis entirely.


The DARE playbook, in five moves

DARE had a theory of change. It went like this: give young people accurate information about the dangers of drugs; deliver it through a credible authority figure; make the threat vivid enough to frighten; rehearse a refusal script; then expect them to say no when the moment comes. Knowledge plus fear plus a rehearsed script equals behavior change.

It’s intuitive. It’s logical. It’s wrong.

The program’s failure wasn’t about insufficient funding or poor implementation. It was a theory-of-change failure at the root. DARE assumed that behavior is primarily an individual-level phenomenon — that if you load the right facts into a person’s head, the right choices will follow. What the research actually shows, across four decades of prevention science, is that behavior is primarily social and community-level. It’s shaped by what people around you are doing, by what you believe those people expect from you, by whether your community treats safe behavior as normal, and by whether the social infrastructure exists to make safe choices feel natural rather than effortful.

When those community conditions are in place, information helps. When they’re not, information — even well-designed, fear-free, competence-building information — hits a ceiling.

DARE gave kids facts about drugs and asked them to say no to their friends using a script they’d practiced in a classroom. It gave them the what and skipped the who — the social identity, the community belonging, the sense that the people around them shared the norm they were being asked to uphold.


Sound familiar?

Here’s the security awareness version of the DARE playbook:

  • Deploy annual compliance training
  • Use breach statistics to establish threat severity (“your organization faces X attacks per day”)
  • Show employees footage of what ransomware looks like
  • Walk them through phishing indicators
  • Make them click “I understand” at the end
  • Mark them complete in the LMS

Bada, Sasse, and Nurse synthesized the awareness literature in 2019 and found that most cybersecurity awareness campaigns are ineffective at changing behavior because they’re built on fear appeals and compliance-based models rather than motivational models. That reads like a description of DARE transposed into IT. The mechanism is the same: load the threat information, arouse the fear, expect the behavior to follow.

It doesn’t follow. Not reliably. Not durably.

And here’s the deeper structural reason why.


Fear is a terrible driver — but that’s almost beside the point

Fear is not inert. It does things to us. The problem is that what it does isn’t always what you want.

The research on Protection Motivation Theory (PMT) makes a crucial distinction between two cognitive processes that activate when someone encounters a fear appeal. The first is threat appraisal: how bad is this thing, and how likely is it to happen to me? The second is coping appraisal: is there something I can actually do about it, and can I do it?

Fear appeals tend to be very good at the first one. They’re optimized, by design, to make threats feel vivid and real and personal.

But research consistently shows that efficacy is a stronger predictor of protective action than threat severity. Knowing the threat is real matters less than believing you’re capable of responding to it. And when the behavior that reduces fear is avoidance, disengagement, or denial — rather than the secure behavior you want — fear has done the opposite of its intended job. It has motivated the wrong response. People who feel overwhelmed by threats they don’t have the skills or support to address don’t double down on security hygiene. They mentally leave the building.
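
PMT doesn’t hand you a single canonical equation, but the asymmetry is easy to sketch. Below is a toy model in Python (the weights and thresholds are my illustrative assumptions, not values from the literature) showing why raising threat appraisal without raising coping appraisal tips people toward avoidance rather than action:

```python
def predicted_response(threat_appraisal: float, coping_appraisal: float) -> str:
    """Toy PMT model. Both inputs range 0.0-1.0.

    The 0.4/0.6 weights and the 0.5 threshold are illustrative
    assumptions only; they encode the finding that efficacy (coping)
    predicts protective action more strongly than threat severity.
    """
    if threat_appraisal > 0.7 and coping_appraisal < 0.3:
        # Vivid threat, no felt ability to respond: the fear gets
        # managed instead of the danger -- avoidance, denial, disengagement.
        return "maladaptive (avoidance/denial)"
    protection_motivation = 0.4 * threat_appraisal + 0.6 * coping_appraisal
    return "protective action" if protection_motivation > 0.5 else "no action"

# A fear-first campaign raises threat appraisal but not coping appraisal:
print(predicted_response(threat_appraisal=0.9, coping_appraisal=0.2))
# -> maladaptive (avoidance/denial)

# Skills-forward training raises coping; the same threat now motivates action:
print(predicted_response(threat_appraisal=0.9, coping_appraisal=0.8))
# -> protective action
```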

I’ve watched this happen. You send a phishing simulation, a non-zero percentage of employees click, and instead of becoming a learning moment it becomes a shame spiral. The security team announces another month’s click rate. Leadership is briefly alarmed. Nothing structurally changes. The same simulation runs next month.

That’s DARE all over again: different costume, same playbook.

But here’s the thing: even if you fix the fear problem — even if you build skill-based, competence-forward training that does everything right at the individual level — you’re still playing a limited game. Because the research that eventually buried DARE didn’t just point toward better individual interventions. It pointed toward a different unit of intervention entirely.


The community is the mechanism

In 1986, H. Wesley Perkins and Alan Berkowitz published findings that would quietly reshape how prevention scientists thought about behavior change. Studying drinking behavior on college campuses, they found that most students significantly overestimated how much their peers drank and how comfortable those peers were with heavy drinking. The perceived norm — “everyone does this” — was dramatically more permissive than the actual norm. And that misperception, not the absence of information about alcohol’s harms, was what drove risky behavior.

The implication was almost too clean: you don’t have to change people’s values. You don’t have to scare them. You just have to correct the misperception of what’s normal. When students learned that their peers were actually more moderate than they’d assumed, their own behavior shifted toward the actual norm.

This is social norms theory, and its effect sizes on substance use prevention consistently dwarfed DARE’s.
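
The mechanism is simple enough to simulate. In the toy sketch below (Python; every parameter is an illustrative assumption), agents drift toward whatever norm they believe their peers hold, so correcting an inflated perceived norm shifts actual behavior without changing anyone’s values or knowledge:

```python
import random

def simulate(perceived_norm: float, steps: int = 50, seed: int = 0) -> float:
    """Agents nudge their own behavior toward the norm they believe
    their peers hold. Returns mean behavior on a 0-1 scale."""
    random.seed(seed)
    # Actual starting behavior is moderate, echoing Perkins and Berkowitz:
    agents = [random.uniform(0.2, 0.5) for _ in range(100)]
    for _ in range(steps):
        agents = [a + 0.1 * (perceived_norm - a) for a in agents]
    return sum(agents) / len(agents)

# Inflated perceived norm ("everyone drinks heavily"):
print(f"perceived norm 0.80 -> mean behavior {simulate(0.80):.2f}")
# Corrected perceived norm (peers are actually moderate):
print(f"perceived norm 0.35 -> mean behavior {simulate(0.35):.2f}")
```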

Perkins and Berkowitz were observing something that Albert Bandura had been building theoretical scaffolding around for years — the distinction between self-efficacy and collective efficacy. Self-efficacy is “I can do this.” Collective efficacy is “we can do this together, and we have a shared expectation that we will.” Communities with high collective efficacy don’t just produce individuals who make better choices. They produce social environments in which better choices are the path of least resistance — where the community norm does the behavioral work that individual willpower alone cannot sustain.

Robert Sampson, Stephen Raudenbush, and Felton Earls’ landmark 1997 paper in Science showed this operating at the neighborhood level: collective efficacy predicted violent crime rates better than concentrated poverty, residential instability, or individual risk factors. The community was the unit of intervention. Not the person.

Public health eventually absorbed this lesson and built around it. Community health worker models — where people from within a community deliver health education and support rather than outside experts descending to lecture — consistently outperform top-down delivery, precisely because the messenger is part of the mechanism. Trust is not a delivery vehicle for the real intervention. Trust is the intervention.


What security awareness looks like when it takes this seriously

Most security awareness programs are structured as individual-to-individual transactions: security expert delivers information to employee, employee is expected to update behavior. The community layer — if it exists at all — is treated as ambient background, not as the primary lever.

The programs that work differently share a recognizable shape. They operate on the norm, not just the individual. They build peer educator networks — security champions programs, Slack channels where the curious ask questions and get human answers, blameless postmortems that make vulnerability-reporting feel safe rather than punishing. They treat the organization as a social fabric with its own norms, and they invest in shaping those norms rather than trying to install correct behavior one employee at a time.

Information Sharing and Analysis Centers are, structurally, a community health model: threat intelligence exchanges that work because members trust each other enough to disclose sensitive breach data. That trust doesn’t come just from contracts. It’s built, slowly, through relationships and reciprocity — the same way community health networks are built.

At my org, I run a recurring security awareness event called “Hacky-Hour”. The format matters less than what it’s trying to do: it’s a norm-shaping intervention. Security is positioned as something people in this organization are curious and capable about, not anxious and compliant about. Games instead of slides. Scenarios that put people in the analyst role, not the victim role. The experience of collective competence rather than collective exposure.

“Falling in Love with Digital Privacy” — the February edition, built around privacy and the skeptical approach we encourage — was explicitly designed around the social norms insight. The goal wasn’t to make individuals more afraid of social media. It was to make security curiosity feel normal, to give people shared language and shared reference points, to make “I asked the security team about that thing I saw” a natural thing to say to a colleague.

That’s the community mechanism at work. Not in a grand public health infrastructure sense, but in the way it’s actually available to security teams inside organizations: building the social conditions in which secure behavior propagates through relationships rather than through mandate.


Security awareness is a public health problem

The HHS 405(d) program and the CyberGreen Institute have both argued explicitly for a public health framing of cybersecurity — and the structural analogy holds. Just as a doctor controls only a fraction of the conditions that determine a patient’s health outcomes, a security team controls only a fraction of the risk flowing through a network of human beings making decisions every day. Individual-level interventions — even excellent ones — are bounded by the social environment in which they operate.

Public health figured out the DARE problem. It learned that information campaigns don’t change behavior at scale. What changes behavior at scale is operating on the community: making the safe choice the visible choice, correcting misperceived norms, building peer-to-peer trust networks, delivering continuous micro-interventions woven into the social fabric of daily life rather than one annual downpour.

Security awareness is about twenty years behind on this lesson. The cost is measurable — in breached healthcare systems, in credential-stuffed accounts, in the slow-burn cynicism of employees who have clicked “I understand” on the same training for the fifth year running and know, better than anyone, that it’s just theater.


The close

DARE’s organizational response to the research that revealed its failure was instructive: denial, litigation threats, accusations of bias, attempts to suppress publication. It took years for the evidence to catch up with the program’s political momentum.

I think about that when I watch annual compliance training roll out for another year at organizations where the CISO already knows it doesn’t work very well, because there’s budget for it and a vendor relationship and a checkbox somewhere that needs checking.

We are not stuck with that. Prevention science already did the hard work of figuring out what moves people. The answer isn’t just better individual training. It’s building the community conditions in which secure behavior is the norm — visible, expected, reinforced by relationships rather than enforced by policy.

The locked door treats people as threats to be managed. The open hand treats them as the community they actually are.

Security awareness is a public health problem. Public health figured this out. So can we.


If you’re building or rebuilding a security awareness program and want to think through the behavior science together, reach out — this is the work I find most alive.