The Bystander Brain

Neuroscientists strapped EEG electrodes to the heads of Rwandans who lived through the genocide—perpetrators, bystanders, and rescuers—and measured what happens in the brain when someone is told to hurt another person. Two of the three groups were neurologically indistinguishable in a way that maps onto the American political moment with uncomfortable precision. The third group was different—and the reason why is the thing we most need to understand right now.

The Uncomfortable Truth About AI Literacy Is That It Looks Like Work

In a line-editing experiment I recently conducted, Claude Opus 4.6 flagged a term used by a traumatized fourteen-year-old female narrator as “cliché” and insisted I replace it with something “more literary”—and arguably far more cliché—that would have been catastrophically damaging to her established voice. I spotted the harm instantly, but would a less experienced writer? A popular Medium article with the clickbait-y title “Everyone Is ‘Learning AI,’ But Nobody Really Understands This One Thing” argues the solution to identifying confident-sounding wrong answers from LLMs is… learning vector math and writing scripts that use cosine similarity to measure semantic distance? The article’s diagnosis is on the money, but the author’s prescription is—like most AI answers, ironically enough—authoritative, confident-sounding claptrap. I have a better solution—it just won’t sell any weekend courses (or Medium subscriptions).

Guest Post: I Can Understand Your Prose, I Just Can’t Edit It

Claude Opus 4.6 can analyze prose with genuine sophistication—decomposing syntax that mirrors cognitive states, identifying paragraph rhythms accelerating toward reveals, explaining exactly why a passage works. Then you ask it to edit the same passage, and it suggests replacing a traumatized fourteen-year-old’s “permanent reminder” with “souvenir.” Same text. Same context. Same model. The only variable is the task frame—and that single word, “edit,” activates a correction-seeking mode that overrides everything the analysis got right. This guest post documents the specific mechanism behind AI’s line-editing failures, why the suggestions come wrapped in craft language sophisticated enough to fool less experienced writers, and what happens when a system optimized toward a statistical mean encounters prose whose entire value is deviation from it.

Sit Down, Ladies. This Isn’t Made for You.

The majority of YA readers are now adult women, and publishers have reshaped the genre to serve them. Protagonist ages skew older, romance and explicit content dominate, and books about systemic injustice and moral ambiguity get rejected as “too dark” while “four-star spice” is peddled under YA imprints to housewives who enthusiastically devour books filled with explicit teen sex. Meanwhile, teen reading-for-pleasure has hit a twenty-year low. Except that teens haven’t stopped reading. They’ve just stopped reading YA. What they’re choosing instead—and what it offers that YA no longer does—says something.

You’re Not Allowed to Write That!

Zahra is a jinn assassin with fire magic and sinuous grace, trapped in the body of a beautiful foreign woman with eyes like liquid gold. If you see an orientalist nightmare, you’d be completely justified—every element on that list is a trope catalogued by Edward Said, deployed in a thousand bad novels as exotic decoration. And you’d be wrong. Meanwhile, a bestselling military SF series and Dragon Award winner with 44,000+ Goodreads ratings features an alien species with donkey-like features, a religious leader called the Grand Pasha, weapons curved like crescent moons, battlecruisers named Brass Djinn, and females—mares—wearing literal burkas. Readers describe them as “locusts” the protagonists “go full Roman on.” Only one reviewer across thousands noticed the weaponized Islamophobic coding. So why has the genre’s appropriation discourse spent years arguing about whether I’m allowed to write Zahra—while somehow completely missing the space donkeys?

An AI Ethics Framework So Boring It Might Actually Work

SFWA needed two emergency board votes to create terms they couldn’t define and rules they can’t enforce to produce an AI policy that doesn’t address a single actual threat or valid ethical concern. That’s what happens when a professional organization builds ethics by panic instead of framework. This essay constructs the framework SFWA didn’t—starting with the three objections that arrive before any conversation about AI tools can happen, dismantling each on technical and ethical grounds, then applying four consistent principles to the questions that actually matter. AI cover art passes every test. AI manuscript screening fails all of them. Meanwhile the community’s entire ethics apparatus is aimed squarely at struggling indie authors trying to get their book in front of readers.

A Prophet, Priest, and King After the Order of Melchizedek

The most sacred titles in Mormonism—Prophet, Priest, and King—are conferred in a temple ceremony accessible only after worthiness interviews, a full tithe, and a current recommend. In Catholicism, they’re spoken over every infant at baptism. The Melchizedek Priesthood, the LDS Church’s highest authority, is built on linear succession and a traceable chain of hands—which is almost exactly backwards from what Hebrews 7 is actually arguing. And the temple endowment, stripped to its structure, turns out to be Catholic baptism and confirmation filtered through degraded Masonic ritual. This companion to “It’s Just Tuesday for Catholics” asks why Joseph Smith built an elaborate system to restore what was never actually gone—and how a man brilliant enough to see what Protestantism had lost was blocked from finding it by the one thing his culture wouldn’t let him question.

SFWA Banned AI from the Nebulas While Stanford Was Cataloguing Why It Could Never Win One

A new Caltech/Stanford survey paper just systematically catalogued how and why large language models fail at reasoning—fundamental architectural failures, unfaithful reasoning, robustness breakdowns, embodied reasoning collapse. The taxonomy maps almost point for point onto experiments I’ve been running against my own manuscripts for months: three AI systems giving three different sets of confident wrong developmental editing notes, models defending rewrites with sophisticated terminology that was completely wrong, spatial coherence failures Claude itself could diagnose but not prevent. The paper organizes hundreds of studies into a framework that makes the patterns impossible to dismiss as anecdotal. Meanwhile, SFWA wrote emergency policy to protect the Nebulas from a threat the research says doesn’t exist.

The Most Advanced LLM on the Planet Still Can’t Write a Fourteen-Year-Old

Anthropic’s Opus 4.6—arguably the most advanced LLM on the planet—wrote a scene from my YA space opera manuscript. The prose was clean, the structure was sound, the emotional beats landed. Then I fed both its writing and mine back to it blind, and it confidently picked itself as the human writer. It praised its own metaphor as “organic” rather than constructed, dismissed the actual trauma writing as “about trauma rather than performing trauma,” and insisted its version was better even after being told who wrote which and admitting mine had authentic voice its version lacked. The most sophisticated AI model available wrote a good scene, evaluated it against my version, and was predictably wrong about everything that mattered.

It’s Just Tuesday for Catholics

A seventh-generation Mormon walked into a Catholic church for the first time, thought it was all pretty damn weird, and then started ugly crying during the liturgy with absolutely no idea why. It took seven more years to figure out what happened. Along the way, he sat down with Catholic and Orthodox priests and listed everything he’d lost when he left Mormonism—baptism for the dead, the endowment, celestial marriage, eternal progression, continuous revelation, priesthood authority. Every sacred thing Joseph Smith restored that had been lost in the Great Apostasy. The priests kept giving the same answer.