March 2026

Spring Infrastructure Development Update

Spring Check-In

It’s March in Taylorsville. The first parsley and pepper seedlings are up in the dome, tomato trays went in this week, and the milkweed for monarchs is germinating. Spring is the greenhouse’s argument — the thing either works or it doesn’t, and right now it’s working. There will be a separate update about the growing season as it unfolds.

This post is about the other half of the project — the infrastructure that supports the growing, and everything else. In January I wrote about switching to Claude and building the first server. That was twenty-two sessions, one machine, a proof of concept for a methodology. I was cautious: maybe this holds together. Now it’s five weeks later, and enough has changed that it’s worth taking stock. Not just what got built, but what building it revealed about how this kind of work actually goes.

There’s a long tradition of people thinking about how technology can serve human life rather than dominating it — and a lot of them don’t get cited together often enough. Thistlebridge draws on that tradition broadly, but two threads and one piece of speculative fiction have been especially useful as design specifications.

Ivan Illich wrote about convivial tools — technology that enhances human capability rather than replacing it, that remains under the user’s control rather than demanding specialized expertise to operate. E.F. Schumacher argued for appropriate technology — fitted to context, locally maintainable, skill-building rather than skill-replacing. Together they provide a philosophy of technology: what it should do, who it should serve, how to tell when it’s working.

Ursula K. Le Guin imagined what that philosophy might actually look like. In Always Coming Home, the Kesh people have access to a vast computer network called the City of Mind — essentially an AI system with access to all accumulated human knowledge, accessible through terminals in every village. They’re not anti-technology. They’re choosy. The network exists to serve human purposes without colonizing human life. They use it for specific things, when needed, and otherwise live without it.

Thistlebridge is a proof of concept at the scale of a single-family home: can emerging AI capabilities — frontier reasoning models, local inference, natural language interfaces — build appropriate technology in Illich and Schumacher’s sense? A small-scale City of Mind — not the vast network Le Guin imagined, but the same relationship: intelligence you can consult when useful, that develops your capability rather than substituting for it. Not just the AI itself, but the AI as an enabler of analog skills and experience. The servers matter because of what they make possible: a person learning to grow food with accumulated knowledge at their fingertips, a cook developing real understanding of fermentation rather than following instructions, a woodworker with maintenance histories and technique documentation embedded in their shop. The technology is appropriate when it produces people who are more capable in the physical world, not more dependent on the digital one.

The goal is embedded intelligence that gives ordinary households the kind of support that wealthy people get from assistants and concierges — someone who knows what’s in the pantry, when the last maintenance was done, what the schedule looks like, who can handle logistics so you can focus on the work you actually care about. That capability has never been available to most people. Now it could be, if the infrastructure is built right.

But “built right” matters enormously, because the dominant model points the other direction. The smartphone has become the default interface to everything. It’s also become the primary mechanism by which attention gets harvested and sold. Centralized services, algorithmic feeds, engagement metrics — the whole model produces dependency, not capability. People spend hours on devices designed to keep them on the device. The promise was connection; the result is a population staring at rectangles, increasingly unable to do things with their hands, increasingly disconnected from the physical communities they live in.

The appropriate technology tradition offers a different test: does this tool leave the person more capable, or less? A cooking app can develop your judgment about flavor and technique or it can reduce you to following instructions you don’t understand. A language model can help you reason through a problem or it can hand you an answer you can’t evaluate. The technology is the same; the relationship determines the outcome.

Thistlebridge is testing whether AI infrastructure can be built so that the answer is consistently more capable. Embedded in the environment — kitchen, greenhouse, workshop, front room — rather than concentrated in a phone. Meeting people where they are and how they work, not pulling them into screens. Running on local hardware, owned and controlled by the people it serves, without cloud dependencies or data extraction.

The strategy: use Claude, the most capable reasoning tool available right now, to bootstrap this infrastructure at one house. Document everything honestly — what works, what doesn’t, what we’re uncertain about. Then replicate the pattern in other communities through a planned nonprofit called Nymphaea — named after the water lily in the dome pond. A frontier model building the local systems that eventually reduce dependence on frontier models.


The Physical Site

Thistlebridge is a house in Taylorsville, Utah. The 26-foot geodesic dome greenhouse produces plant starts for the neighborhood — tomatoes, peppers, herbs, milkweed for monarchs. There’s a Nymphaea caerulea in the dome pond. Spring plant sales are the first revenue test.

In January, there was one server — a Dell R730, bought used, with two Tesla P40 GPUs. By March, there were two: a personal-core for creative and development work, and a site-core for property intelligence — knowledge storage, local model inference for domain-specific mentors, sensor ingestion as it comes online. Both bought used, like most of the hardware here. The two domains are separated at the VM and IP level, with VLAN enforcement being configured on the network hardware — not yet fully hardened, but the architecture is in place. Ninety-six gigabytes of VRAM across four Tesla P40 GPUs — 24GB each — deprecated enterprise cards that cost about $150 apiece because the AI hardware market moved past them. They run language models, voice transcription, image generation, and audio synthesis locally — no cloud, no subscriptions, no data leaving the property.

The personal-core is also becoming a complete personal computing platform. Knowledge base, structured documentation, project tracking — all in plain files I control. The vision is moving the rest — calendar, task management, the things currently scattered across cloud products — into the same local, portable, private infrastructure. When you can configure and produce your own tools via natural language, your whole computing life can come under your control. That’s Illich’s convivial tool at the scale of one person’s digital life — and it’s the proof of concept before we try to replicate it for communities.


Embedded Intelligence, Not More Screens

The I/O strategy is the appropriate technology principle applied to interface design. Different tasks and different people need different interaction styles. Hands in soil need voice. A workbench needs a foot pedal. A kitchen needs a wall display you can glance at. Someone who prefers reading needs text. Someone who prefers listening needs audio. None of them need a phone app. Le Guin’s Kesh access the City of Mind through terminals in every village — but the key is what they do with it: consult it when useful, then go back to living.

We’re building toward roughly half a dozen purpose-built interfaces around the property, each fitted to its context:

Kitchen and pantry management — a wall-mounted kiosk running a cooking mentor. This is the first mentor that’s actually running — a service on the site-core backed by a 14-billion-parameter local language model with five conversational modes: exploration (vivacious experimental cooking — what happens if we try this?), body maintenance (regular healthy eating — practical, nutritious daily meals, the kind of cooking that sustains a household without drama), preservation (fermentation, canning, drying), dinner party planning (coordinating a large meal for guests), and larder (pantry management, inventory, shopping). At session start, it loads whatever knowledge base files exist — cookbook references, pantry inventory, garden state — into its system prompt. The goal — in Illich’s terms — is a convivial tool for cooking: one that cultivates understanding rather than dispensing instructions. Help someone learn why a roux works, not just hand them a recipe. In the longer vision, this kind of mentor enables participation in community kitchens: people with different backgrounds and skill levels working together to prepare culturally appropriate food. That matters now; it will matter more as climate displacement accelerates and new kinds of communities form around what may be the largest human migration in history. People arriving in unfamiliar places need to eat, and the communities receiving them need ways to cook together across cultural boundaries. The mentor is a test of that idea running entirely on local hardware. It works, but it’s honestly limited — the pantry inventory is mostly empty schema, there’s no real retrieval yet (citations come from the model’s training data rather than actual scanned books), and responses sometimes truncate mid-thought at the token cap. A book scanner is in the mail; when it arrives, real retrieval becomes possible. That limitation is the next problem to solve.

Morning station and site administration — a front room kiosk that surfaces a daily briefing: schedule, priorities, system status, what needs attention on the property. The concierge function — the capability that wealthy households have always had and ordinary ones haven’t.

Electronics workbench — a kiosk at the bench where sensor nodes, mesh hardware, and automation components get assembled and tested. Documentation, test procedures, component inventory. The goal is to build as much of the site’s monitoring infrastructure as possible at this bench.

Shop and fabrication — the ground floor has woodworking, metal fabrication (both practical and artistic), equipment maintenance, and general repair. A kiosk there would surface documentation, maintenance histories, material inventory, and procedures. What was the torque spec? When was this last serviced? What alloy is that? Schumacher’s technology that’s understandable and maintainable, applied to the analog skills of keeping a working property running and making things with your hands.

Tissue culture bench — a kiosk for plant propagation work: sterile technique protocols, media recipes, contamination logs, culture tracking.

A rotary phone on a PBX — this one’s actually working. An analog rotary phone plugged into a telephone adapter, registered to an Asterisk PBX on the development server. Dial a code, leave a voice note, it’s in the inbox. From there it can surface on the morning dashboard as a to-do, get transcribed and ingested into the knowledge base, or just sit as a recording until it’s needed. Foot switches at workbenches are specced but not yet purchased — the idea is the same: step on a pedal, speak, release. No hands required, no screen, no phone. The whimsy of a rotary phone is intentional — it signals what kind of project this is.

All of these run on the same standardized pattern: minimal Linux machines — mostly used hardware from eBay — with auto-login, a bare window manager, and a browser in kiosk mode pointed at a station server. Same OS, same deployment method, different content for each room. Two are running now (front room and kitchen). The rest are being stood up as the stations they serve get built.
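To make the kitchen mentor’s session-start loading concrete, here’s a minimal sketch of the idea — the mode wording, file names, and layout are illustrative stand-ins, not the actual service code:

```python
from pathlib import Path

# Hypothetical mode prompts -- the running service's wording differs.
MODES = {
    "exploration": "Encourage experimentation; ask 'what happens if we try this?'",
    "body_maintenance": "Practical, nutritious daily meals that sustain a household.",
    "preservation": "Fermentation, canning, and drying guidance.",
    "dinner_party": "Coordinate timing and quantities for a large meal.",
    "larder": "Pantry inventory, shopping, and stock rotation.",
}

def build_system_prompt(mode: str, kb_dir: Path) -> str:
    """Assemble a system prompt from the selected mode plus whatever
    knowledge base files exist (pantry inventory, garden state, etc.)."""
    parts = [
        "You are a cooking mentor. Cultivate understanding; "
        "don't just hand out recipes.",
        f"Mode: {MODES[mode]}",
    ]
    # Load whatever knowledge files are present; missing files just mean
    # a thinner prompt, which matches the "mostly empty schema" reality.
    for f in sorted(kb_dir.glob("*.md")):
        parts.append(f"--- {f.name} ---\n{f.read_text()}")
    return "\n\n".join(parts)
```

The point of the sketch is the shape: the mentor’s behavior is plain files plus a prompt, which is what keeps it inspectable and maintainable.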


Toward a Digital Twin

A Wi-Fi HaLow mesh network at 908 MHz provides property-wide coverage — the radio layer is built and running. What’s not built yet is most of what rides on top of it. The sensor nodes for greenhouse monitoring — temperature, humidity, soil moisture — are designed but not deployed. Cameras and aerial surveys are planned, not operational. The time-series database on the site-core is provisioned but empty.

I’m describing this honestly because the vision is important even though the implementation is early. The goal is a digital twin of the site: a computational model that tracks what’s growing where, what conditions it’s experiencing, what maintenance has been done and what’s overdue.

This serves three functions when it arrives. First, food production: correlating plant observations with environmental data so the knowledge base is grounded in measurement, not just notes. When did the temperature dip? What was the humidity when that mold appeared? The grower still makes the judgment calls — the digital twin provides the memory and the measurement that human observation alone can’t sustain. Second, efficient maintenance: the property has a greenhouse, gardens, house systems, workshop equipment. A twin that tracks maintenance state can answer “how long can that weed infestation wait?” or “when was the ventilation fan last serviced?” — prioritization intelligence for a working property. Third, citizen science: structured environmental data from a home site contributes to understanding local growing conditions, microclimate patterns, pollinator activity. Data that’s useful beyond this property, gathered as a byproduct of the work rather than as a separate research effort. Right now, the property’s “digital twin” is a DXF survey rendered to SVG with a planning overlay — a static map, not a live model. The infrastructure to make it live exists. The sensors and data pipelines don’t yet.
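The kind of query the twin should answer once sensor data flows can be sketched now — the reading format and names here are hypothetical, not the actual time-series schema:

```python
from datetime import datetime, timedelta

# Hypothetical reading format: (timestamp, sensor_name, value).
Reading = tuple[datetime, str, float]

def dips_before(readings: list[Reading], sensor: str, threshold: float,
                event: datetime, window: timedelta) -> list[Reading]:
    """Return readings from `sensor` that fell below `threshold` in the
    window before an observed event -- e.g. "when did the temperature
    dip before that mold appeared?"."""
    start = event - window
    return [r for r in readings
            if r[1] == sensor and start <= r[0] <= event and r[2] < threshold]
```

A real deployment would run this against the provisioned time-series database rather than an in-memory list, but the human-facing question stays the same.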


Creative Infrastructure and the Centralization Problem

Digital media has become centralized in a way that’s paradoxical: everyone can publish, but the algorithm determines who gets heard. The platforms that promised to democratize creative expression have produced a de facto centralization where a handful of viral formats dominate and most voices drown. Independent creators compete for algorithmic attention on platforms they don’t control, optimizing for engagement metrics that have nothing to do with the quality or meaning of what they make. The result is a monoculture that looks like diversity — millions of pieces of content, all shaped by the same algorithmic pressures toward the same engagement patterns.

We can’t fully solve that. But we can make sure communities have lots of people making creative media — music, visual art, games, films, writing, documentation — using tools they own, published on infrastructure they control. The creative side of Thistlebridge exists for this reason: appropriate technology applied to creative production.

A recording studio built around a digital audio workstation (DAW) running Ableton Live 12 with a library of professional virtual instruments — synthesizers, orchestral samples, drum machines, bass. An audio workbench on the development server generates MIDI compositions and rendered audio using a disposition engine that maps musical influences to parameter spaces. Claude Code is being used experimentally to bridge the two — remote control of the DAW from the development terminal, with the longer-term goal of an AI-assisted administrative layer for recording sessions: managing takes, organizing tracks, handling the bookkeeping that eats up creative time in the studio. The tool supports the musician; it doesn’t replace the musicianship.

A media production workstation with a dedicated GPU and pen display, plus a craft studio pipeline that’s actually built and tested end-to-end: scan physical artwork, color correct, remove backgrounds via a local AI service on the GPU server, crop, tag, and export — including templates for Godot game engine. The camera equipment is largely used or second-hand. The capability is professional; the price point is accessible. The same appropriate technology principle: you don’t need a RED camera and a Creative Cloud subscription to make something real.
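The pipeline shape is simple enough to sketch. The stage names below mirror the description above, but the functions are placeholders — the real implementation calls imaging tools and the local background-removal service:

```python
from typing import Callable

# Each stage takes an artifact record (paths, metadata) and returns an
# updated one; the real stages wrap scanning, color correction, etc.
Stage = Callable[[dict], dict]

def run_pipeline(artifact: dict, stages: list[tuple[str, Stage]]) -> dict:
    """Apply stages in order, recording provenance so every exported
    asset can say exactly how it was produced."""
    for name, stage in stages:
        artifact = stage(artifact)
        artifact.setdefault("provenance", []).append(name)
    return artifact
```

Keeping provenance as plain data in the artifact itself is the same principle as the session logs: the history travels with the work.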

The documentation angle: all of this creative infrastructure is also for documenting the work itself in compelling ways. Plain-language case studies about what was tried and what worked are necessary, but so is the ability to produce the media — audio, visual, interactive — that makes documentation engaging rather than dry. Communities that want to share their work need the tools to make that sharing vivid, not just accurate.


How Claude Code Actually Works Here

Everything described in this post was built in Claude Code sessions — forty-four structured session logs and counting, across nine distinct domains. Not “Claude built it for me” — I made every architectural decision. But the scope and pace of what got built is directly a function of how Claude Code works as a development methodology.

I should qualify: I’ve been messing with computers since I was a kid, and I studied agriculture in college. I’m not starting from zero in either domain. What Claude Code gives me is the ability to work at the intersection of all the domains this project touches — networking, service deployment, web development, audio synthesis, agriculture, fabrication — without having to be a deep specialist in each one individually. I bring the domain knowledge and the architectural judgment. Claude handles the translation to working configurations, the syntax I don’t have memorized, the debugging when systems interact in unexpected ways.

The development environment is a headless Ubuntu VM. I SSH in from a laptop and work in the terminal. Claude Code has access to the project filesystem, can execute commands, deploy to hosting, and reach other machines on the local network. Natural language in, system configuration out.

The CLAUDE.md protocol. The project root contains a CLAUDE.md file — roughly 120 lines of instructions that load at the start of every session. It specifies project values (capability over dependency, circulation over accumulation, stay with uncertainty — the same Illich and Schumacher principles that inform the whole project), safety rules (never delete without asking, never overwrite, never reorganize), session protocol (read context first, write a session log at the end, commit), active priorities ranked by urgency, settled architectural decisions that shouldn’t be revisited, and the current infrastructure topology. This file is the handshake — every session starts with shared context about what this project is, what matters, and what not to touch.

Session logs as persistent memory. At the end of each work session, Claude writes a structured log: what the context was coming in, what happened, decisions made with rationale, open questions, what the next session should pick up. Forty-four of these logs exist across five weeks of work. When a new session starts, Claude reads the most recent log and picks up the thread. Continuity across weeks and months happens through an accreting archive of structured handoff documents. Not a single persistent conversation, but an accumulation of documented decisions and outcomes.

Auto-memory for cross-session knowledge. Claude Code maintains a persistent memory directory that survives across conversations. As patterns stabilize — infrastructure facts, deployment procedures, environment constraints, debugging insights — they get written to memory files organized by topic. This means Claude doesn’t re-derive environment constraints or rediscover deployment procedures each session. It knows, because it wrote it down last time.

What sessions actually look like:

Infrastructure cleanup (March 2026): One thing that surprised me — the system got simpler as it got more capable. In January the personal-core ran four VMs. By March I’d decommissioned two of them, rescued the useful code, and consolidated to two VMs with clearer purposes. The new site-core came online with three purpose-built VMs instead of the sprawl that had accumulated. Appropriate technology means knowing when to remove, not just when to add. The cleanup session felt more important than most of the build sessions.

Site-core standup (March 2026): Three VMs on new hardware — a knowledge hub with git repositories and shared storage, a sensor collector with time-series storage, and an inference server with GPU passthrough for local language models. The Proxmox install was hands-on (you can’t SSH into a machine that doesn’t have an OS yet), and the VLAN configuration was done through the UniFi controller dashboard. But once SSH was up, Claude Code handled VM provisioning, service deployment, firewall rules, and topology documentation. The session involved troubleshooting PCIe passthrough for the GPUs, diagnosing storage mount failures, configuring firewall rules between network segments, and writing service configurations for each component. Each problem was solved in conversation — describe the symptom, get a diagnostic command, run it, feed back the output, get the next step. The result is infrastructure I understand and can maintain — convivial technology, not a black box.

Cooking mentor deployment (March 2026): A service that needed to run on the inference server’s GPUs, serve five conversation modes, read from a knowledge base cloned via git, and be accessible from the kitchen kiosk. Claude Code wrote the service, the system configuration, the knowledge base sync mechanism, and the development access tunnel. When the first model choice (70B parameters) turned out too slow for conversation on the available hardware (~1 token/sec, limited by the bus between the two GPUs), Claude Code benchmarked alternatives and we settled on a 14B model at ~22 tokens/sec — fast enough for real-time kitchen conversation. The mentor is Schumacher’s criteria in action: understandable (I can read the service code and the prompts that shape its behavior, even if the model itself is opaque), maintainable (standard tools, local hardware), affordable (used GPUs, open-source model), skill-building (cultivates cooking judgment), context-fitted (mounted in the kitchen, not on a phone).

DAW integration (March 2026): An experimental session connecting the audio workbench to the Windows digital audio workstation across the network. Claude Code set up remote access, wrote helper scripts that translate between the development environment and Ableton’s command protocol, debugged platform-specific connection issues, and produced a command reference. Still early — this is the most experimental part of the infrastructure — but the signal chain works: compositions generated on the development server can be transferred and loaded into the DAW.

Kiosk fleet (March 2026): Defining the standardized pattern for embedded interfaces across the property — minimal OS, auto-login, window manager, kiosk browser, station server — then configuring the first two machines, writing the service configurations, and documenting the pattern so additional kiosks follow the template. Same appropriate technology principle: simple enough to understand, standard enough to maintain, fitted to the room it serves.

Website and essays (January 2026): Four long-form collaborative essays for thistlebridge.org, each 2,000-4,000 words with revision histories documenting every correction and iteration. A plant reservation system prototype. An arboretum knowledge garden on a subdomain. DNS migration that cut hosting costs from ~$100/year to ~$30/year. Claude Code handled the framework, the build tooling, the deployment pipeline. I handled every editorial and design decision.

Integrated pest management synthesis (March 2026): IPM — integrated pest management — is the practice of controlling greenhouse pests through biological and cultural methods rather than chemical pesticides: beneficial insects that prey on harmful ones, predatory mites, companion planting, environmental management. Claude Code ingested supplier catalogs for beneficial organisms, synthesized a comprehensive IPM strategy for the dome, then — at my direction — wrote a critique of its own plan (thirty beneficial organisms for a 490-square-foot greenhouse is probably overkill, no phasing, cost gaps in the analysis), then rebuilt the plan as a phased implementation with procurement calendars keyed to greenhouse temperature thresholds and pest pressure decision trees. Three documents: the plan, the critique, the implementation guide. The critique-then-rebuild pattern is something I use deliberately — generate a proposal, attack the proposal, then synthesize. It produces better results than trying to get it right the first time, and it mirrors how I’d work through a problem with a knowledgeable colleague.
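The model swap in the cooking-mentor session above has a back-of-envelope explanation worth recording. Single-stream decoding is memory-bandwidth-bound: every generated token streams roughly all of the weights through the GPU once. All the numbers below are assumptions — the P40’s ~346 GB/s is the spec-sheet figure, the 4-bit quantized weight sizes are rough, and the efficiency factor is a fudge:

```python
def decode_tokens_per_sec(weights_gb: float, bandwidth_gb_s: float,
                          efficiency: float = 0.5) -> float:
    """Rough ceiling for single-stream decoding: tokens/sec is about
    (usable memory bandwidth) / (bytes of weights streamed per token).
    `efficiency` is a fudge factor for kernel and KV-cache overhead."""
    return bandwidth_gb_s * efficiency / weights_gb

# Tesla P40 spec bandwidth ~346 GB/s; 4-bit weights are roughly
# 8 GB for a 14B model and ~40 GB for a 70B model.
fast = decode_tokens_per_sec(8.0, 346.0)    # 14B on one card
slow = decode_tokens_per_sec(40.0, 346.0)   # 70B, ignoring the inter-GPU bus
```

The estimate lands near the measured ~22 tokens/sec for the 14B model, and puts the 70B around 4 tokens/sec even before accounting for the inter-GPU link that dragged it down to ~1 in practice — which is why the smaller model was the right call for kitchen conversation.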


The Bootstrapping Logic

Using a frontier cloud model to build local infrastructure is a deliberate dependency with a planned exit. This is the core tension of the project, and naming it honestly is part of the appropriate technology commitment.

Claude is the most capable reasoning tool available to me right now. The local models on my hardware — useful for specific tasks like kitchen conversation or voice transcription — can’t hold the context of a complex infrastructure project, reason about architectural tradeoffs, or translate intentions into working configurations across multiple technology stacks.

But the things Claude helps me build are local. The servers, the services, the knowledge base, the kiosk stations, the sensor network — all of that runs without cloud connectivity. If Claude disappeared tomorrow, the infrastructure would keep running. I’d lose the development methodology, not the systems. The knowledge base would still hold its contents.

The plan is to keep pushing this asymmetry: use frontier reasoning to stand up progressively more local infrastructure, document everything, and build toward a point where the local systems are self-sustaining and the frontier model becomes optional rather than necessary. Not because I want to stop using Claude — I don’t — but because Illich’s test applies to my own tools too. If the infrastructure I’m building for others requires a $200/month cloud subscription to maintain, it fails the conviviality test. The local systems need to stand on their own.


Nymphaea Update

In the January posts I mentioned Nymphaea — named after the water lily in the dome pond — a planned nonprofit built around the question: if this pattern works at one house, could it work in other communities? Here’s where that stands.

The model: train people at Thistlebridge through apprenticeship, not curriculum. Document what confuses them, what takes longest, what can be simplified. Then barn-raising replication — trained operators help set up new sites. A network of independent sites connected by mutual aid, not central control. Each site its own small City of Mind — and the network between them something closer to what Le Guin actually imagined: distributed intelligence, shared knowledge, no central administration. The Kesh don’t have a technology department. The City of Mind is a shared resource, not a managed service. That’s the model — distributed capability, not franchised dependency.

Collaborators are in conversation — people with backgrounds in cybersecurity, game development, fashion and sustainable design, community gardening, interactive media. None of this is formalized yet. The plan is that each writes essays about their domain using the same self-documenting methodology: markdown in git, session protocols, accumulated knowledge. The essays become the organization’s first public artifact — proof that the network thinks, not just that it exists. That hasn’t happened yet. It’s next.

The collaborator stack starts simple: a cheap used machine (~$100-200, no GPU), Claude Code via Max subscription, markdown files in git. That is a personal-core — a complete personal computing platform and knowledge system from day one. The files don’t care who’s reading them. If someone later adds GPU hardware, local inference plugs into what already exists.

The thumb drive test: if someone can stand up a working personal-core from a thumb drive and documentation, without a network connection back to Thistlebridge, the system is genuinely replicable. That test hasn’t been run yet. It’s next.

This grassroots barn-raising is distinct from — but needs to interface with — the institutional world. According to a 2024 ICMA survey, 48% of local governments rated AI utilization as a low priority, only 6% rated it high priority, and 77% cited lack of awareness and understanding as the primary obstacle to adoption.[1] These aren’t large cities with dedicated innovation teams. These are small municipalities — the vast majority of local governments in the US — being told the world is changing while being sold tools designed for organizations ten times their size.

The Nymphaea approach and the municipal preparation problem aren’t the same work, but they’re complementary. The barn-raising builds community capability from the ground up — neighbors helping neighbors, skills transferring through direct relationship. The institutional interface helps local governments develop frameworks for the kind of community infrastructure we’re building — permitting, data governance, citizen engagement patterns. Both share an urgent timeline: the changes coming from AI are industrial-revolution scale. Small cities and grassroots communities both need preparation, and they need it from people who’ve actually built working systems at human scale, not from vendors selling dashboards.

[1] International City/County Management Association, “AI in Local Government” survey, 2024. icma.org


The Learning Architecture

A core design principle across everything at Thistlebridge: augment learning, don’t replace it. This is Illich’s conviviality test applied specifically to education — does the tool develop the learner’s capability, or does it substitute for it?

The knowledge ingestion pipeline — a book scanner is incoming — will eventually let the mentors cite actual sources. But citation isn’t the point. The point is a learning architecture that walks people through things rather than handing them answers. The plan is a vectorized library — books and reference materials broken into searchable passages stored as embeddings — where mentor conversations can reference specific sections, ask the person to look something up, have them read the relevant passage and write down what they found in a workbook journal. Not “here’s the answer” but “here’s where to look, and write down what you learn.” Understanding built through the kind of active engagement that produces durable knowledge.

This is the difference between asking a mentor “what temperature should I ferment at?” and getting “68°F” versus getting “let’s look at what’s happening during fermentation — here’s the relevant section from Sandor Katz, read the paragraph on temperature ranges and write down what you think applies to your situation.” The first produces a number. The second produces a person who understands fermentation.

The workbook journal pattern — physical or digital — is deliberate. Writing things down in your own words is how understanding consolidates. The mentor prompts the writing; the person does it. The knowledge ends up in the person, not in the system. Schumacher’s criterion: skill-building through use.

This extends to every domain: the cooking mentor cultivates cooks, not recipe-followers. The greenhouse mentor cultivates growers who understand their soil, not people who do what the computer says. The workshop mentor cultivates people who understand their tools and materials. Each mentor is designed to make itself progressively less necessary as the person develops capability — and progressively more capable for the next person who uses it. Every interaction that surfaces a confusing passage, identifies a gap in the knowledge base, or produces a better way of explaining a concept gets folded back in. The mentor learns to teach better from each person it teaches. The hundredth person to learn fermentation through this system gets a better mentor than the first person did, because the first ninety-nine people’s questions and confusions shaped it.

Or that’s the plan. We can only know if it works by building a prototype, putting it in front of real people, and documenting their experience honestly enough to test the theory. Right now it’s a design with one running instance and zero external users.

And I should be honest about something larger. This whole project — using AI to build convivial tools, citing Illich and Schumacher as design specifications — is probably deeply at odds with what those thinkers would actually say about the technology I’m using. The AI industry, taken as a whole, is not building appropriate technology. It’s building attention-harvesting systems, surveillance infrastructure, and tools whose most visible consumer application so far has been generating nonconsensual sexual images of real people, including teenagers, from their social media photos. That’s the industry I’m drawing from. Illich would be horrified. Schumacher would walk out of the room.

But we’re in it now. The technology exists, the capabilities are real, and the only way out is through. Someone — hopefully many someones — needs to grab this stuff and make it do something better for humanity than what it’s currently doing. That’s not a justification. It’s a bet. And I’m making it with my eyes open.


What’s in the Chute

Spring plant sales — first revenue test. Reservation system prototype exists, needs testing before the sales window.

Knowledge ingestion — book scanner incoming. Once physical books can be digitized and structured, the vectorized library comes online and the mentors can walk people through actual texts. This is the difference between a useful prototype and a trustworthy learning tool.

Sensor network and digital twin — the mesh radio layer is running but the sensors that ride on it aren’t built yet. When environmental data flows in, the digital twin — currently a static property map — becomes grounded in measurement.

More mentor domains — cooking is the test case. Greenhouse management, workshop skills, and emergency preparedness follow the same pattern: local model, domain-specific knowledge, accessible from the relevant room’s interface. Each one an instance of the same convivial tool design.

Nymphaea essays and first replication — collaborator essays need to be written. The thumb drive test needs to be run. A second site on leased land nearby is planned as the first proof that the model replicates.

Deeper documentation — everything built at Thistlebridge is being documented as plain-language case studies. Not tutorials for developers. Accounts of what was tried, what worked, what broke, written for people who might want to do something similar. The site at thistlebridge.org has five long-form posts. It needs more, and deeper. There’s more to say.
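The sensor network item above can be sketched in miniature. Nothing here exists yet; the node names, metrics, and classes are invented. What it shows is the grounding step: the twin keeps the latest measurement per zone and metric instead of a hand-entered estimate.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorReading:
    """One measurement as it might arrive over the mesh radio layer."""
    node_id: str        # e.g. "greenhouse-north-01" (invented naming)
    metric: str         # e.g. "soil_temp_c", "humidity_pct"
    value: float
    recorded_at: datetime

class DigitalTwin:
    """A static map becomes measurement-grounded: each zone holds the
    latest reading per metric rather than a hand-entered estimate."""

    def __init__(self, zones):
        self.state = {zone: {} for zone in zones}

    def ingest(self, zone: str, reading: SensorReading) -> None:
        # Keep only the newest reading for each metric in each zone.
        latest = self.state[zone].get(reading.metric)
        if latest is None or reading.recorded_at > latest.recorded_at:
            self.state[zone][reading.metric] = reading

twin = DigitalTwin(["greenhouse", "dome", "workshop"])
twin.ingest("greenhouse", SensorReading(
    node_id="greenhouse-north-01", metric="soil_temp_c",
    value=18.5, recorded_at=datetime.now(timezone.utc)))
print(twin.state["greenhouse"]["soil_temp_c"].value)
```

Everything downstream of `ingest` (the map display, the mentor’s awareness of conditions) reads from the same state, which is what “grounded in measurement” means in practice.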


What I Don’t Know

I don’t know if Nymphaea will work. The replication model is untested. The collaborator essays haven’t been written. The thumb drive test hasn’t been run.

I don’t know if Claude will keep working the way it works now. The development methodology depends on a cloud service I don’t control. The infrastructure and knowledge base should survive a change. But I’d be lying if I said I could route around the loss of the methodology today.

I don’t know what the right relationship is between human capability and AI assistance. I’ve been testing this since the end of 2025 and I still don’t have a clean answer. The dependency is real — I’m building toward local-first, human-scale technology while relying on a frontier model that is neither. I name the tension because it’s honest, not because I’ve resolved it. Illich would probably have something sharp to say about it. He’d probably be right.

What I do know: in January I had one server and a methodology that might hold together. By March — five weeks later — I had two servers across two network domains, a cooking mentor running on local hardware, two kiosk stations displaying useful information in rooms where people actually are instead of on their phones, a rotary phone taking voice notes, an audio workbench generating compositions, a comprehensive pest management plan for the greenhouse, a structured knowledge base with ninety entries across a dozen domains, and forty-four session logs accumulating institutional memory across months. The greenhouse is producing plant starts for the neighborhood. The pattern works — convivial tools, appropriate technology, a small City of Mind. The question is whether it transfers.

A greenhouse, some servers, and a dome off the freeway in Taylorsville. A proof of concept for using frontier AI to bootstrap the local systems that make communities more capable — and more human. Documented honestly, shared for whoever finds it useful.


How This Essay Was Written

This essay was developed across five Claude Code sessions and one claude.ai session over two days, using the same methodology it describes — iterative conversation, structured feedback, accumulated context.

Session 1: claude.ai (March 9, 2026)

The essay started in claude.ai, not Claude Code. Dixon was exploring the idea of writing something about Thistlebridge for the Claude Community Ambassadors program. The conversation covered what had been built, what mattered about it, how to frame it. The key output was conceptual: the essay shouldn’t read like a resume or application. It should stand on its own as an account of what’s being built and why.

The conversation also identified the philosophical grounding — Illich, Schumacher, Le Guin — as structural rather than decorative. These aren’t references dropped for credibility; they’re the actual design specifications the project uses. That distinction shaped everything that followed.

Session 2: Claude Code (March 9, 2026)

The essay moved to Claude Code for drafting, because Claude Code has access to the actual codebase — the session logs, the project files, the tools, the knowledge base, the CLAUDE.md protocol file. Claude.ai had Dixon’s verbal descriptions; Claude Code had the filesystem.

Five major revision passes:

  1. Draft 1: Claude Code surveyed the full workspace — 44 session logs, 87 knowledge files, all project and tool directories, the website content. Wrote a first draft. Too precious, too resume-like, mentioned the ambassador program explicitly.

  2. Draft 2: Restructured with the vision up front. Stripped security-sensitive details (IP addresses, port numbers, VM identifiers, authentication tokens). Removed the ambassador program mention — the essay should work without reference to any specific destination. Removed individual names.

  3. Draft 3: Major expansion. Added the heterogeneous I/O philosophy (smartphones as anti-pattern), the personal-core as computing platform concept, expanded the kiosk list to six planned stations, reframed the cooking mentor for community kitchens and climate migration, added the creative infrastructure section with the centralization paradox, corrected the timeline (Claude since end of 2025, not “about a year”), added ICMA survey statistics.

  4. Draft 4: Fixed cooking mentor mode descriptions (body maintenance = regular healthy eating, not something clinical; exploration = vivacious experimental cooking). Removed sonder.gallery reference. Lightened DAW details. Added the idea that mentors become more capable for the next person during interaction.

  5. Draft 5: Philosophical grounding pass. Opened with Illich, Schumacher, Le Guin as design specifications. Every body section now calls back to the framework: convivial tools, appropriate technology criteria, City of Mind. Reframed appropriate technology as local models enabling analog skills (not the models themselves as AT). Expanded the digital twin section. Spelled out IPM. Qualified Dixon’s background (computers since childhood, agriculture in college).

Session 3: Claude Code (March 9, 2026)

Dixon asked for a critique of the essay’s claims against what actually exists in the codebase. Claude Code went through every factual assertion and checked it against session logs, project files, tool directories, and the knowledge base. The critique identified nine claims that were overstated:

  • Network segmentation described as complete when VLAN enforcement wasn’t yet hardened
  • Personal computing platform claimed calendar and project management as built when only the knowledge base existed
  • Cooking mentor described as “reading from a knowledge base” when the pantry inventory was an empty schema
  • Digital twin section blurred vision and current state
  • Persistent cameras and drone scans presented as in progress when no hardware exists
  • Sensor nodes described as “being assembled” with no evidence of assembly
  • Canon C100 and Soviet lenses mentioned but unverifiable from the codebase
  • Photogrammetry pipelines claimed but not documented
  • Collaborator recruitment described as active when it’s conversational

The essay was then revised to address each issue — tightening claims to match evidence, marking aspirations clearly as aspirations, adding the January→March trajectory to show what actually changed over time rather than presenting the current state as static.

What the Process Shows

The essay went through the same generate→critique→rebuild pattern it describes in the IPM section. The first draft overclaimed. The critique caught it. The revision was more honest. This is the methodology working on itself.

The move from claude.ai to Claude Code mattered. The conceptual framing came from conversation; the factual grounding came from filesystem access. Neither alone would have produced the essay. Claude.ai doesn’t know what’s actually in the session logs. Claude Code doesn’t have the prior conversation about what the essay should be. The handoff between them — via Dixon carrying the intent and Claude Code surveying the evidence — is itself an instance of the methodology: gather from multiple sources, accumulate what’s useful.

Errors caught during revision:

  • Claimed “calendar, documents, notes, project management” as running systems → corrected to knowledge base and structured documentation, with the rest marked as vision
  • Cooking mentor described as reading from a populated knowledge base → corrected to note empty pantry schema and token cap truncation
  • Digital twin presented as actively being built → corrected to note that only the mesh radio layer and a static SVG map exist
  • Foot switches described as if deployed → corrected to “specced but not yet purchased”
  • Services VM decommissioning and infrastructure simplification were missing from the narrative → added as the “infrastructure cleanup” session example
  • Site-core Proxmox install described as if Claude Code did it → corrected to note hands-on install with Claude Code taking over after SSH was available
  • Collaborators described as “being recruited” → corrected to “in conversation, none of this is formalized”

Session 4: Claude Code (March 10, 2026)

Dixon’s feedback on the revised draft identified structural issues and factual corrections:

  • The opener was too abrupt — jumping from philosophical design specifications into describing the project from scratch. Reframed as a spring check-in that revisits the January posts and takes stock of what’s changed.
  • VRAM math was wrong: 4 × 24GB = 96GB total, not 48GB. (48GB was correct when there were only two cards in January.)
  • VLAN configuration was done by Dixon in the UniFi controller dashboard, not by Claude Code. Corrected.
  • “I can read the code” for the cooking mentor was misleading — the service code and prompts are readable, but the model itself is opaque. Clarified.
  • “DAW” written out as “digital audio workstation” for readers unfamiliar with the term.
  • Electronics bench description had unnecessary jargon (“pinouts”) and unverifiable drone claims. Trimmed.
  • Shop description needed to note ground floor location and metal fabrication capabilities.
  • Rotary phone needed to describe where voice notes go — inbox → dashboard → knowledge base.
  • Nymphaea section needed a transition acknowledging it was introduced in January, not appearing from nowhere.
  • “Carrier bag” references were overused (7 instances) without being properly introduced for new readers. Trimmed to 2 — the initial Le Guin reference and one in the process documentation.
  • The learning architecture section needed an honest reckoning: the design is untested with zero external users, and the AI industry it draws from is doing deeply harmful things. Added both.
  • City of Mind concept moved to the Nymphaea section where it works better — a distributed network of small Cities of Mind is the actual vision, not just one household.

Errors caught during this session:

  • 48GB VRAM → 96GB (arithmetic error persisting from January when only 2 GPUs existed)
  • Claude Code claimed credit for VLAN config done in UniFi dashboard
  • Model opacity acknowledged where previously glossed over
  • Carrier bag theory used repeatedly without proper introduction for new readers

Session 5: Claude Code (March 10, 2026)

Final refinements before publication:

  • Intro reframed: the philosophical grounding doesn’t come from “three thinkers” but from a wider tradition of people thinking about how technology can serve human life. Illich and Schumacher provide the philosophy of technology; Le Guin provides a concrete imagining of what that looks like.
  • City of Mind interface corrected: research confirmed the Kesh access the City of Mind through terminals (“exchanges”) in every village, not heterogeneous I/O. The essay’s claim that “the City of Mind doesn’t require everyone to sit at a terminal” was inaccurate. Rewritten to acknowledge the terminal-based access while emphasizing the Kesh’s relationship to the technology — consult it when useful, then go back to living.
  • Media workstation: “almost entirely used or second-hand” corrected to “camera equipment is largely used or second-hand” — the station itself isn’t all second-hand.
  • Table saw analogy removed from the appropriate technology section — the example didn’t land. The cooking app and language model examples carry the point on their own.

What’s still uncertain:

  • The essay is long. It may need trimming, or it may work at length — depends on whether the density justifies the word count.
  • The learning architecture section is entirely aspirational. Now marked as such, but still the longest stretch of pure vision.
  • Dixon’s claude.ai conversation history informed the essay’s direction but isn’t directly accessible from Claude Code. Insights from that conversation were carried by Dixon into the Claude Code sessions.

Taylorsville, Utah — March 2026