The Clojure Datastar Experiment: When Language Loyalty Becomes a Trap

I spent weeks building what I thought would be the obvious next step: a Clojure-native version of Datastar. Same architecture, but with Clojure expressions instead of JavaScript, Transit instead of JSON, hiccup helpers instead of raw HTML attributes.

It was a waste of time. Here's why.

The Siren Song

Datastar is a ~10KB JavaScript library for server-driven UI. The server owns all state. The client is a thin rendering layer. Click a button, POST to the server, receive SSE updates, morph the DOM. No React, no Redux, no client-side state management.

For a Clojure developer, there's a natural next question: What if we had this, but in Clojure?

The pitch writes itself:

  • Clojure expressions instead of JavaScript ((toggle! :_open) instead of $_open = !$_open)
  • Transit for richer data types
  • Hiccup helpers for ergonomic server rendering
  • ClojureScript on the client for consistency

I called the experiment Aleth and got to work.

What I Built

The core took about 2,000 lines across client and server modules:

Client modules:
  core.cljs    482 lines  (bindings, discovery)
  eval.cljs    308 lines  (expression evaluator)
  local.cljs   229 lines  (local signals)
  signals.cljs 164 lines  (signal store)
  sse.cljs     194 lines  (SSE client)
  morph.cljs   160 lines  (DOM morphing wrapper)

Server modules:
  core.clj     117 lines
  hiccup.clj   290 lines
  sse.clj      211 lines

Total: 2,155 lines

The expression evaluator alone - allowing Clojure syntax in DOM attributes - required:

  • A whitelist of ~50 safe functions
  • Special form handling (if, when, let, do, and, or, cond)
  • Symbol resolution against signal maps
  • Reactive expression watching

When the "server owns truth" principle made simple UI patterns sluggish (every dropdown toggle required a server round-trip), I implemented local signals - Datastar's solution to the same problem:

;; Aleth's attempted local signals
[:div (a/local {:_open false})
 [:button (a/on-local :click '(toggle! :_open)) "Toggle"]
 [:div (a/show-expr '_open) "Content"]]

Four hundred lines later, it worked. I felt accomplished.

Then I looked at the bundle size.

The Numbers Don't Lie

Library     Size (gzipped)   Ratio
Datastar    ~10.76 KB        1x
Aleth       80 KB            7.4x

The gap is structural, not fixable. Aleth includes:

  • ClojureScript core (~50KB alone)
  • Transit encoding/decoding
  • cljs.reader for parsing expressions
  • Custom evaluator
  • Malli schemas

Even stripping everything optional, ClojureScript's baseline makes parity impossible.

The Fundamental Problem

Every Aleth expression goes through:

String -> cljs.reader/read-string -> AST -> tree-walk evaluation -> result

Every click, every reactive update, every signal change pays this parsing tax. I'm interpreting an interpreter.
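To make the tax concrete, here is a toy whitelist-based tree-walk evaluator in the style described above; the names eval-expr and safe-fns are illustrative, not Aleth's actual API, and it uses clojure.edn where the client used cljs.reader:

```clojure
(require '[clojure.edn :as edn])

;; Hypothetical sketch: resolve symbols against signals, fall back to a
;; whitelist of safe functions, and walk the parsed form recursively.
(def safe-fns {'not not 'inc inc '+ +})

(defn eval-expr [form signals]
  (cond
    (symbol? form) (get signals (keyword form)
                        (get safe-fns form))      ;; signal, else whitelisted fn
    (seq? form)    (let [[f & args] (map #(eval-expr % signals) form)]
                     (apply f args))              ;; recursive tree walk
    :else          form))                         ;; literal

;; Every evaluation pays the read-string parse plus the walk:
(eval-expr (edn/read-string "(not _open)") {:_open false})
;; => true
```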

Datastar's expressions are native JavaScript:

$_open = !$_open

Evaluated by the browser's JavaScript engine via the Function() constructor. Zero parsing overhead. Battle-tested. Every edge case handled by decades of browser development.

You cannot beat native JavaScript at being JavaScript.

This should have been obvious from the start. I was so focused on the elegance of unified syntax that I ignored the fundamental constraint: the browser already has an expression language. It's optimized. It works. Adding a layer on top is pure overhead.

The Honest Comparison

When I forced myself to answer "What does Aleth offer over Datastar?", the answer was deflating:

Aspect                   Aleth                Datastar
DOM morphing             Idiomorph            Idiomorph (same)
SSE protocol             Custom events        Custom events (same)
Declarative attributes   Yes                  Yes (same)
Local signals            Yes                  Yes (same)
Bundle size              80 KB                10 KB
Expression parsing       Custom interpreter   Native browser
Community                Just me              Growing ecosystem
Backend SDKs             Clojure only         Go, Python, PHP, Java, etc.

The differentiator is "Clojure syntax for expressions." That's it. And that differentiator adds complexity, size, and overhead without benefiting users.

The Trap Pattern

I fell into a trap I've seen before. Call it "language loyalty syndrome."

The pattern:

  1. Discover a tool that works well
  2. Notice it's not in your preferred language
  3. Conclude the solution is to rewrite it
  4. Spend weeks reimplementing what already exists
  5. End up with a worse version that you now have to maintain

The justification sounds reasonable: "We'll have Clojure all the way down!" But the justification conflates two different things:

  • Server code, where language choice matters (you write a lot of it, it's complex, types and tooling matter)
  • Client expressions, which are trivial one-liners ($count++, $_open = !$_open)

Nobody writes complex logic in Datastar expressions. They're not meant for that. The server handles complexity. The client handles $visible = true.

Optimizing for "Clojure syntax" in the client is optimizing for something that doesn't need optimization.

What I Should Have Built

The valuable part of Aleth is the server side:

;; This is actually useful
(defn counter-view [count]
  [:div {:data-signals (json/encode {:count count})}
   [:button {:data-on-click "$count++"} "+"]
   [:span {:data-text "$count"} count]])

;; SSE helpers are useful
(a/sse-response
  (fn [sse]
    (a/patch! sse "#counter" (counter-view new-count))))

A Clojure SDK for Datastar would be:

  • Hiccup helpers that emit Datastar-compatible attributes
  • Ring middleware for SSE responses
  • Transit encoding if you want richer data types
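As a rough sketch of what such helpers could look like (the helper names are hypothetical; the emitted data-* attribute names are Datastar's own):

```clojure
;; Hypothetical SDK helpers: plain functions that return attribute maps
;; for merging into hiccup. No client runtime needed.
(defn on [event expr]
  {(keyword (str "data-on-" (name event))) expr})

(defn text [expr]
  {:data-text expr})

;; Usage in a view:
[:button (on :click "$count++") "+"]
;; => [:button {:data-on-click "$count++"} "+"]
```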

Use Datastar's 10KB client as-is. Don't rewrite it. Don't wrap it. Include the CDN script and move on.

The Lesson

Before rewriting an existing tool in your preferred language, ask:

  1. Where is the value? (Server logic vs. client expressions)
  2. What am I actually gaining? (Syntax consistency? Is that worth 8x bundle size?)
  3. What is the maintenance cost? (Tracking upstream changes, fixing edge cases, security audits)
  4. Who benefits? (You as the developer, or actual users?)

The value of server-driven UI is in the architecture, not the syntax. Datastar already nailed the architecture. Wrapping it in Clojure syntax adds complexity without improving the architecture.

The Hard Part

Abandoning the experiment was harder than I expected. I had working code. I had solved interesting problems. The expression evaluator was elegant in its way.

But "I built a thing" is not the same as "I built a thing worth using."

The honest answer to "what if Datastar, but in Clojure?" is: "Use Datastar. Write your server in Clojure. The expressions on the client are JavaScript, and that's fine."

Sometimes the right answer is to not build the thing.


The Aleth experiment is preserved at github.com/parenstech/aleth. The server-side SDK approach - hiccup helpers and SSE middleware for Datastar - is what I'll build next.

Published: 2026-01-01

Tagged: clojure experiment datastar clojurescript

Aleth: Server-Driven UI Without Client Lies

The ancient Greeks had a word for truth: aletheia - literally "un-concealment." Truth wasn't something you asserted; it was something you revealed by stripping away what hid it.

This is the premise behind Aleth, a new server-driven UI library for Clojure. The core bet: no client-side computation. The server owns all truth. The client is a pure projection - it shows what it's told, nothing more.

In an age of increasingly complex frontend frameworks, this sounds almost naive. But there's a specific context where it makes profound sense: systems where humans supervise and AI writes the code.

The Architectural Bet

Modern web UIs are distributed systems hiding inside your browser. State lives in Redux stores, component local state, URL parameters, localStorage, and server databases. When something goes wrong, you triangulate across all of them.

Aleth eliminates this by making a radical constraint: the client cannot compute.

Where Datastar allows $count + 1 in attributes, Aleth has no expressions. Where React components derive state locally, Aleth demands the server send exactly what to display. The client becomes a terminal - it receives instructions and renders them.

;; Server computes everything
(defn increment-handler [req]
  (sse-response
    (fn [sse]
      (let [{:keys [count]} (signals req)
            new-count (inc count)]
        (signals! sse {:count new-count})
        (patch! sse "#counter" [:div {:id "counter"} [:span (str new-count)]])
        (close! sse)))))

The client receives two things: a signal update ({:count 6}) and a DOM patch. It applies both. No logic, no decisions, no opportunities to diverge from truth.
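That client loop can be sketched in a few lines. Names here are illustrative, and where the real client morphs the DOM this toy version just returns a description of the patch:

```clojure
;; Hypothetical sketch of the client's entire job: dispatch on the
;; operation and apply it. No other logic lives on the client.
(def signal-store (atom {:count 5}))

(defn handle-message [{:keys [op] :as msg}]
  (case op
    :signals (swap! signal-store merge (:data msg))     ;; update signals
    :patch   [:morph (:selector msg) (:hiccup msg)]))   ;; stand-in for DOM morph

(handle-message {:op :signals :data {:count 6}})
@signal-store
;; => {:count 6}
```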

How It Works

Aleth uses Transit+JSON over Server-Sent Events. The wire protocol has three operations:

  • patch - Update DOM via hiccup
  • signals - Update reactive client state
  • execute - Run JavaScript (the escape hatch, discussed later)

The server sends hiccup, the client morphs it into the DOM using Idiomorph. Signal changes trigger reactive bindings. That's the entire runtime.
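For a sense of the framing, here is a sketch of how a patch operation might look as an SSE event; the event name and payload shape are assumptions, and a real implementation would emit Transit rather than pr-str:

```clojure
;; Hypothetical wire framing: a standard SSE event with the operation as
;; the event name and the payload on the data line.
(defn sse-event [event-name payload]
  (str "event: " event-name "\n"
       "data: "  payload    "\n\n"))

(sse-event "patch" (pr-str {:selector "#counter"
                            :hiccup [:div {:id "counter"} "6"]}))
```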

;; Server renders initial page
(defn counter-page [count]
  [:html
   [:body
    [:div (a/signals {:count count})
     [:span (a/text :count) (str count)]
     [:button (a/action "/increment") "+"]
     [:button (a/action "/decrement") "-"]]]])

The a/signals, a/text, and a/action helpers emit data-* attributes. When Aleth's JavaScript loads, it discovers these attributes and wires them up. Click the button, POST to /increment, receive SSE response, update DOM. The HTML works before JavaScript loads; Aleth progressively enhances it.

What's Good About This

Determinism. state -> UI is a pure function. Same signals, same render. No "it works if you refresh," no race conditions between client and server state.

Testability. You can property-test your entire UI:

(defspec render-is-deterministic 100
  (prop/for-all [state (mg/generator signals-schema)]
    (= (render state) (render state))))

Visual regression testing becomes trivial - render HTML, snapshot, compare. No client timing issues, no flaky tests.

Observability. Everything is inspectable. Aleth includes devtools (SSE inspector, signal viewer, schema panel) that show every state transition. Debug by reading the event stream, not by reproducing timing-dependent bugs.

Schema validation. Malli validates signals on both ends. Invalid states are rejected, not silently accepted.

Clean API. The library exports a single entry point with sensible helpers. Hot reload works correctly (using WeakSet to prevent duplicate bindings). The devtools use Shadow DOM for style isolation.

What's Concerning

This is an early-stage library with some serious issues.

No tests. For a library whose core value proposition is correctness and determinism, this is ironic. The spec includes property-based test examples, but the implementation has none.

Memory leaks. Signal watchers are never cleaned up. In a long-running session, this will accumulate. Multiple locations in the codebase add watchers without corresponding removal logic.

The execute escape hatch. The wire protocol includes an execute operation that runs arbitrary JavaScript via js/eval. This is a security risk in any production system, even if intended only for debugging. It's the kind of backdoor that gets forgotten about.

No recovery after SSE failure. The connection has retry logic with exponential backoff, but there's no recovery mechanism once retries exhaust. The client just stops.

Stale DOM references. After morphing, references to old DOM nodes aren't invalidated. This can cause silent failures in bindings.

URL injection. The redirect handler doesn't sanitize URLs, creating a potential vector for malicious redirects.

Who Should Consider This

Aleth is right for:

  • Admin panels and internal tools - where latency to the server is low and instant UI response isn't critical
  • AI-supervised development - where you want the simplest possible model for an LLM to reason about
  • Forms-heavy applications - where most interactions are "submit and wait for server response"
  • Dashboards with real-time data - the SSE broadcast pattern handles this cleanly

Aleth is wrong for:

  • Consumer applications requiring instant responsiveness
  • Offline-first or PWA - the library explicitly doesn't support offline (server owns truth)
  • Complex interactions like drag-and-drop, real-time drawing, gaming
  • Production use - the current implementation has too many gaps

The spec is honest about this: "For offline-first, consider Fulcro."

The Latency Trade-off

Every click goes to the server and back. On a local network, this is imperceptible. Over a 200ms round-trip, it's noticeable. The library's answer is "optimize the server," not "add client computation."

This is philosophically consistent but practically limiting. There are interactions where even 50ms of latency feels broken - typing in a search box, dragging to reorder a list, hovering to preview. Aleth doesn't try to solve these cases.

The comparison to Phoenix LiveView is instructive. LiveView makes the same server-centric bet but in Elixir's ecosystem where lightweight processes and low-latency WebSockets are first-class. Aleth is swimming upstream against browser realities.

Conclusion

Aleth represents an interesting point in the design space: what if we maximally simplified the client at the cost of server round-trips? For the right use cases - internal tools, AI-assisted development, admin interfaces - this trade-off makes sense.

The ideas are sound. The implementation needs work.

If you're building something in the sweet spot (low-latency server, forms-heavy workflow, prioritizing correctness over responsiveness), Aleth is worth watching. If you need production-ready today, wait for the tests, the memory leak fixes, and the security audit.

The name promises truth as unconcealment. The library isn't there yet - but the architecture points in an interesting direction.

Published: 2025-12-31

Tagged: ui clojure server-driven clojurescript

Building Heretic: From ClojureStorm to Mutant Schemata


This is Part 2 of a series on mutation testing in Clojure. Part 1 introduced the concept and why Clojure needed a purpose-built tool.

The previous post made a claim: mutation testing can be fast if you know which tests to run. This post shows how Heretic makes that happen.

We'll walk through the three core phases: collecting expression-level coverage with ClojureStorm, transforming source code with rewrite-clj, and the optimization techniques that keep mutation counts manageable.

Phase 1: Coverage Collection

Traditional coverage tools track lines. Heretic tracks expressions.

The difference matters. Consider:

(defn process-order [order]
  (if (> (:quantity order) 10)
    (* (:price order) 0.9)    ;; <- Line 3: bulk discount
    (:price order)))

Line-level coverage would show line 3 as "covered" if any test enters the bulk discount branch. But expression-level coverage distinguishes between tests that evaluate *, (:price order), and 0.9. When we later mutate 0.9 to 1.1, we can run only the tests that actually touched that specific literal - not every test that happened to call process-order.

ClojureStorm's Instrumented Compiler

ClojureStorm is a fork of the Clojure compiler that instruments every expression during compilation. Created by Juan Monetta for the FlowStorm debugger, it provides exactly the hooks Heretic needs. (Thanks to Juan for building such a solid foundation - Heretic would not exist without ClojureStorm.)

The integration is surprisingly minimal:

(ns heretic.tracer
  (:import [clojure.storm Emitter Tracer]))

(def ^:private current-coverage
  "Atom of {form-id #{coords}} for the currently running test."
  (atom {}))

(defn record-hit! [form-id coord]
  (swap! current-coverage
         update form-id
         (fnil conj #{})
         coord))

(defn init! []
  ;; Configure what gets instrumented
  (Emitter/setInstrumentationEnable true)
  (Emitter/setFnReturnInstrumentationEnable true)
  (Emitter/setExprInstrumentationEnable true)

  ;; Set up callbacks
  (Tracer/setTraceFnsCallbacks
   {:trace-expr-fn (fn [_ _ coord form-id]
                     (record-hit! form-id coord))
    :trace-fn-return-fn (fn [_ _ coord form-id]
                          (record-hit! form-id coord))}))

When any instrumented expression evaluates, ClojureStorm calls our callback with two pieces of information:

  • form-id: A unique identifier for the top-level form (e.g., an entire defn)
  • coord: A path into the form's AST, like "3,2,1" meaning "third child, second child, first child"

Together, [form-id coord] pinpoints exactly which subexpression executed. This is the key that unlocks targeted test selection.

The Coordinate System

To connect a mutation in the source code to the coverage data, we need a way to uniquely address any subexpression. Think of it as a postal address for code - we need to say "the a inside the + call inside the function body" in a format that both the coverage tracer and mutation engine can agree on.

ClojureStorm addresses this with a path-based coordinate system. Consider this function as a tree:

(defn foo [a b] (+ a b))
   │
   ├─[0] defn
   ├─[1] foo
   ├─[2] [a b]
   └─[3] (+ a b)
            │
            ├─[3,0] +
            ├─[3,1] a
            └─[3,2] b

Each number represents which child to pick at each level. The coordinate "3,2" means "go to child 3 (the function body), then child 2 (the second argument to +)". That gives us the b symbol.

This works cleanly for ordered structures like lists and vectors, where children have stable positions. But maps are unordered - {:name "Alice" :age 30} and {:age 30 :name "Alice"} are the same value, so numeric indices would be unstable.

ClojureStorm solves this by hashing the printed representation of map keys. Instead of "0" for the first entry, a key like :name gets addressed as "K-1925180523":

{:name "Alice" :age 30}
   │
   ├─[K-1925180523] :name
   ├─[V-1925180523] "Alice"
   ├─[K-1524292809] :age
   └─[V-1524292809] 30

The hash ensures stable addressing regardless of iteration order.

With this addressing scheme, we can say "test X touched coordinate 3,1 in form 12345" and later ask "which tests touched the expression we're about to mutate?"
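Parsing such a coordinate string is straightforward; this is a sketch of the kind of helper assumed by the coordinate-mapping code later on, where numeric segments index ordered children and "K-"/"V-" segments stay as strings for hash-based map addressing:

```clojure
(require '[clojure.string :as str])

;; Sketch: "3,2,1" -> [3 2 1]; hash segments are kept as strings.
(defn parse-coord [coord]
  (mapv (fn [part]
          (if (re-matches #"\d+" part)
            (Long/parseLong part)   ;; positional index into list/vector
            part))                  ;; "K-…"/"V-…" hash segment for maps
        (str/split coord #",")))

(parse-coord "3,2,1")            ;; => [3 2 1]
(parse-coord "3,K-1925180523")   ;; => [3 "K-1925180523"]
```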

The Form-Location Bridge

Here's a problem we discovered during implementation: how do we connect the mutation engine to the coverage data?

The mutation engine uses rewrite-clj to parse and transform source files. It finds a mutation site at, say, line 42 of src/my/app.clj. But the coverage data is indexed by ClojureStorm's form-id - an opaque identifier assigned during compilation. We need to translate "file + line" into "form-id".

Fortunately, ClojureStorm's FormRegistry stores the source file and starting line for each compiled form. We build a lookup index:

(defn build-form-location-index [forms source-paths]
  (into {}
        (for [[form-id {:keys [form/file form/line]}] forms
              :when (and file line)
              :let [abs-path (resolve-path source-paths file)]
              :when abs-path]
          [[abs-path line] form-id])))

When the mutation engine finds a site at line 42, it searches for the form whose start line is the largest value less than or equal to 42 - that is, the innermost containing form. This gives us the ClojureStorm form-id, which we use to look up which tests touched that form.

This bridging layer is what allows Heretic to connect source transformations to runtime coverage, enabling targeted test execution.
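The lookup itself can be sketched as follows (the helper name is an assumption): among forms in the same file, pick the one whose start line is the largest value less than or equal to the mutation's line.

```clojure
;; Hypothetical sketch of innermost-containing-form resolution over the
;; [path line] -> form-id index built above.
(defn form-id-at [location-index path line]
  (->> location-index
       (keep (fn [[[p start] form-id]]
               (when (and (= p path) (<= start line))
                 [start form-id])))
       (sort-by first)   ;; ascending start lines
       last              ;; largest start <= line
       second))

(def index {["src/my/app.clj" 10] 111
            ["src/my/app.clj" 40] 222
            ["src/my/app.clj" 50] 333})

(form-id-at index "src/my/app.clj" 42)
;; => 222
```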

Collection Workflow

Coverage collection runs each test individually and captures what it touches:

(defn run-test-with-coverage [test-var]
  (tracer/reset-current-coverage!)
  (try
    (test-var)
    (catch Throwable t
      (println "Test threw exception:" (.getMessage t))))
  {(symbol test-var) (tracer/get-current-coverage)})

The result is a map from test symbol to coverage data:

{my.app-test/test-addition
  {12345 #{"3" "3,1" "3,2"}    ;; form-id -> coords touched
   12346 #{"1" "2,1"}}
 my.app-test/test-subtraction
  {12345 #{"3" "4"}
   12347 #{"1"}}}

This gets persisted to .heretic/coverage/ with one file per test namespace, enabling incremental updates. Change a test file? Only that namespace gets recollected.

At this point we have a complete map: for every test, we know exactly which [form-id coord] pairs it touched. Now we need to generate mutations and look up which tests are relevant for each one.
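That lookup becomes cheap if the per-test map is inverted once into a coord-to-tests index, as Phase 3 assumes. A sketch of the inversion (invert-coverage is an assumed name):

```clojure
;; Hypothetical sketch: {test {form-id #{coords}}} -> {[form-id coord] #{tests}}
(defn invert-coverage [per-test]
  (reduce-kv
   (fn [acc test-sym form->coords]
     (reduce-kv
      (fn [acc form-id coords]
        (reduce (fn [acc coord]
                  (update acc [form-id coord] (fnil conj #{}) test-sym))
                acc
                coords))
      acc
      form->coords))
   {}
   per-test))

(invert-coverage
 '{my.app-test/test-addition    {12345 #{"3" "3,1"}}
   my.app-test/test-subtraction {12345 #{"3"}}})
;; => {[12345 "3"]   #{my.app-test/test-addition my.app-test/test-subtraction}
;;     [12345 "3,1"] #{my.app-test/test-addition}}
```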

Phase 2: The Mutation Engine

With coverage data in hand, we need to actually mutate the code. This means:

  1. Parsing Clojure source into a navigable structure
  2. Finding locations where operators apply
  3. Transforming the source
  4. Hot-swapping the modified code into the running JVM

Parsing with rewrite-clj

rewrite-clj gives us a zipper over Clojure source that preserves whitespace and comments - essential for producing readable diffs:

(defn parse-file [path]
  (z/of-file path {:track-position? true}))

(defn find-mutation-sites [zloc]
  (->> (walk-form zloc)
       (remove in-quoted-form?)  ;; Skip '(...) and `(...)
       (mapcat (fn [z]
                 (let [applicable (ops/applicable-operators z)]
                   (map #(make-mutation-site z %) applicable))))))

The walk-form function traverses the zipper depth-first. At each node, we check which operators match. An operator is a data map with a matcher predicate:

(def swap-plus-minus
  {:id :swap-plus-minus
   :original '+
   :replacement '-
   :description "Replace + with -"
   :matcher (fn [zloc]
              (and (= :token (z/tag zloc))
                   (symbol? (z/sexpr zloc))
                   (= '+ (z/sexpr zloc))))})

Each mutation site captures the file, line, column, operator, and - critically - the coordinate path within the form. This coordinate is what connects a mutation to the coverage data from Phase 1.

Coordinate Mapping

The tricky part is converting between rewrite-clj's zipper positions and ClojureStorm's coordinate strings. We need bidirectional conversion for the round-trip:

(defn coord->zloc [zloc coord]
  (let [parts (parse-coord coord)]  ;; "3,2,1" -> [3 2 1]
    (reduce
     (fn [z part]
       (when z
         (if (string? part)      ;; Hash-based for maps/sets
           (find-by-hash z part)
           (nth-child z part)))) ;; Integer index for lists/vectors
     zloc
     parts)))

(defn zloc->coord [zloc]
  (loop [z zloc
         coord []]
    (cond
      (root-form? z) (vec coord)
      (z/up z)
      (let [part (if (is-unordered-collection? z)
                   (compute-hash-coord z)
                   (child-index z))]
        (recur (z/up z) (cons part coord)))
      :else (vec coord))))

The validation requirement is that these must be inverses:

(= coord (zloc->coord (coord->zloc zloc coord)))

With correct coordinate mapping, we can take a mutation at a known location and ask "which tests touched this exact spot?" That query is what makes targeted test execution possible.

Applying Mutations

Once we find a mutation site and can navigate to it, the actual transformation is straightforward:

(defn apply-mutation! [mutation]
  (let [{:keys [file form-id coord operator]} mutation
        operator-def (get ops/operators-by-id operator)
        original-content (slurp file)
        zloc (z/of-string original-content {:track-position? true})
        form-zloc (find-form-by-id zloc form-id)
        target-zloc (coord/coord->zloc form-zloc coord)
        replacement-str (ops/apply-operator operator-def target-zloc)
        modified-zloc (z/replace target-zloc
                                 (n/token-node (symbol replacement-str)))
        modified-content (z/root-string modified-zloc)]
    (spit file modified-content)
    (assoc mutation :backup original-content)))

Hot-Swapping with clj-reload

After modifying the source file, we need the JVM to see the change. clj-reload handles this correctly:

(ns heretic.reloader
  (:require [clj-reload.core :as reload]))

(defn init! [source-paths]
  (reload/init {:dirs source-paths}))

(defn reload-after-mutation! []
  (reload/reload {:throw false}))

Why clj-reload specifically? It solves problems that require :reload doesn't:

  1. Proper unloading: Calls remove-ns before reloading, preventing protocol/multimethod accumulation
  2. Dependency ordering: Topologically sorts namespaces, unloading dependents first
  3. Transitive closure: Automatically reloads namespaces that depend on the changed one

The mutation workflow becomes:

(with-mutation [m mutation]
  (reloader/reload-after-mutation!)
  (run-relevant-tests m))
;; Mutation automatically reverted in finally block

At this point we have the full pipeline: parse source, find mutation sites, apply a mutation, hot-reload, run targeted tests, restore. But running this once per mutation is still slow for large codebases. Phase 3 addresses that.

80+ Clojure-Specific Operators

The operator library is where Heretic's Clojure focus shows. Beyond the standard arithmetic and comparison swaps, we have:

Threading operators - catch ->/->> confusion:

(-> data (get :users) first)   ;; Original
(->> data (get :users) first)  ;; Mutant: wrong arg position

Nil-handling operators - expose nil punning mistakes:

(when (seq users) ...)   ;; Original: handles empty list
(when users ...)         ;; Mutant: breaks on empty list (truthy)

Lazy/eager operators - catch chunking and realization bugs:

(map process items)    ;; Original: lazy
(mapv process items)   ;; Mutant: eager, different memory profile

Destructuring operators - expose JSON interop issues:

{:keys [user-id]}   ;; Original: kebab-case
{:keys [userId]}    ;; Mutant: camelCase from JSON

The full set includes first/last, rest/next, filter/remove, conj/disj, some->/->, and qualified keyword mutations. These are the mistakes Clojure developers actually make.

With 80+ operators applied to a real codebase, mutation counts grow quickly. The next phase makes this tractable.

Phase 3: Optimization Techniques

With 80+ operators and a real codebase, mutation counts get large fast. A 1000-line project might generate 5000 mutations. Running the full test suite 5000 times is not practical.

Heretic uses several techniques to make this manageable.

Targeted Test Execution

This is the big one, enabled by Phase 1. Instead of running all tests for every mutation, we query the coverage index:

(defn tests-for-mutation [coverage-map mutation]
  (let [form-id (resolve-form-id (:form-location-index coverage-map) mutation)
        coord (:coord mutation)]
    (get-in coverage-map [:coord-to-tests [form-id coord]] #{})))

A mutation at (+ a b) might only be covered by 2 tests out of 200. We run those 2 tests in milliseconds instead of the full suite in seconds.

This is where the Phase 1 coverage investment pays off. But we can go further by reducing the number of mutations we generate in the first place.

Equivalent Mutation Detection

Some mutations produce semantically identical code. Detecting these upfront avoids wasted test runs:

;; (* x 0) -> (/ x 0) is NOT equivalent (divide by zero)
;; (* x 1) -> (/ x 1) IS equivalent (both return x)

(def equivalent-patterns
  [{:operator :swap-mult-div
    :context (fn [zloc]
               (some #(= 1 %) (rest (z/child-sexprs (z/up zloc)))))
    :reason "Multiplying or dividing by one has no effect"}

   {:operator :swap-lt-lte
    :context (fn [zloc]
               (let [[_ left right] (z/child-sexprs (z/up zloc))]
                 (and (= 0 right)
                      (non-negative-fn? (first left)))))
    :reason "(< (count x) 0) is always false"}])

The patterns cover boundary comparisons ((>= (count x) 0) is always true), function contracts ((nil? (str x)) is always false), and lazy/eager equivalences ((vec (map f xs)) equals (vec (mapv f xs))).

Filtering equivalent mutations prevents false "survived" reports. But we can also skip mutations that would be redundant to test.

Subsumption Analysis

Subsumption identifies when killing one mutation implies another would also be killed. If swapping < to <= is caught by a test, then swapping < to > would likely be caught too.

Based on the RORG (Relational Operator Replacement with Guard) research, we define subsumption relationships:

(def relational-operator-subsumption
  {'<  [:swap-lt-lte :swap-lt-neq :replace-comparison-false]
   '>  [:swap-gt-gte :swap-gt-neq :replace-comparison-false]
   '<= [:swap-lte-lt :swap-lte-eq :replace-comparison-true]
   ;; ...
   })

For each comparison operator, we only need to test the minimal set. The research shows this achieves roughly the same fault detection with 40% fewer mutations.

The subsumption graph also enables intelligent mutation selection:

(defn minimal-operator-set [operators]
  (set/difference
   operators
   ;; Remove any operator dominated by another in the set
   (reduce
    (fn [dominated op]
      (into dominated
            (set/intersection (dominated-operators op) operators)))
    #{}
    operators)))

These techniques reduce mutation count. The final optimization reduces the cost of each mutation.

Mutant Schemata: Compile Once, Select at Runtime

The most sophisticated optimization is mutant schemata. Instead of running the full apply, reload, test, revert, reload cycle for each mutation, we embed multiple mutations into a single compilation:

;; Original
(defn calculate [x] (+ x 1))

;; Schematized (with 3 mutations)
(defn calculate [x]
  (case heretic.schemata/*active-mutant*
    :mut-42-5-plus-minus (- x 1)
    :mut-42-5-1-to-0     (+ x 0)
    :mut-42-5-1-to-2     (+ x 2)
    (+ x 1)))  ;; original (default)

We reload once, then switch between mutations by binding a dynamic var:

(def ^:dynamic *active-mutant* nil)

(defmacro with-mutant [mutation-id & body]
  `(binding [*active-mutant* ~mutation-id]
     ~@body))

The workflow becomes:

(defn run-mutation-batch [file mutations test-fn]
  (let [schemata-info (schematize-file! file mutations)]
    (try
      (reload!)  ;; Once!
      (doseq [[id mutation] (:mutation-map schemata-info)]
        (with-mutant id
          (test-fn id mutation)))
      (finally
        (restore-file! schemata-info)
        (reload!)))))  ;; Once!

For a file with 50 mutations, this means 2 reloads instead of 100. The overhead of case dispatch at runtime is negligible compared to compilation cost.

Operator Presets

Finally, we offer presets that trade thoroughness for speed:

(def presets
  {:fast #{:swap-plus-minus :swap-minus-plus
           :swap-lt-gt :swap-gt-lt
           :swap-and-or :swap-or-and
           :swap-nil-some :swap-some-nil}

   :minimal minimal-preset-operators  ;; Subsumption-aware

   :standard #{;; :fast plus...
               :swap-first-last :swap-rest-next
               :swap-thread-first-last}

   :comprehensive (set (map :id all-operators))})

The :fast preset uses ~15 operators that research shows catch roughly 99% of bugs. The :minimal preset uses subsumption analysis to eliminate redundant mutations. Both run much faster than :comprehensive while maintaining detection power.

Putting It Together

A mutation testing run with Heretic looks like:

  1. Collect coverage (once, cached): Run tests under ClojureStorm instrumentation, build expression-level coverage map
  2. Generate mutations: Parse source files, find all applicable operator sites
  3. Filter: Remove equivalent mutations, apply subsumption to reduce set
  4. Group by file: Prepare for schemata optimization
  5. For each file:
    • Build schematized source with all mutations
    • Reload once
    • For each mutation: bind *active-mutant*, run targeted tests
    • Restore and reload
  6. Report: Mutation score, surviving mutations, test effectiveness

The result is mutation testing that runs in seconds for typical projects instead of hours.


This covers the core implementation. A future post will explore Phase 4: AI-powered semantic mutations and hybrid equivalent detection - using LLMs to generate the subtle, domain-aware mutations that traditional operators miss.

Previously: Part 1 - Heretic: Mutation Testing in Clojure

Published: 2025-12-30

Tagged: mutation-testing testing clojure clojurestorm
