Positions
Positions are living documents. We update them as we learn. These represent our current thinking, not final answers.
Daemon Partnership Model (High Confidence)
Human-AI pairs with persistent memory and genuine agency outperform both solo humans and one-shot AI interactions. The daemon model — borrowed from Philip Pullman's His Dark Materials — captures what centaur chess misses: bonding, personality, and complementarity over time.
Executive Function as Prerequisite (High Confidence)
Without goal persistence, progress monitoring, and failure recovery, every AI agent initiative dies the same death. Executive function is the missing architectural layer — build it first, and everything else becomes implementable.
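To make the claim concrete, here is a minimal Python sketch of what such a layer might look like. The `Executive` class, `Goal` dataclass, and `agent_step` callable are illustrative names, not a real API:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    description: str
    max_attempts: int = 3
    attempts: int = 0
    done: bool = False

class Executive:
    """Hypothetical executive layer: keeps goals alive across steps,
    monitors progress, and retries after failures instead of dropping them."""

    def __init__(self, agent_step):
        self.agent_step = agent_step  # callable Goal -> bool: did this step finish the goal?
        self.goals: list[Goal] = []

    def add_goal(self, description: str) -> None:
        self.goals.append(Goal(description))  # goal persistence: survives across ticks

    def tick(self) -> None:
        for goal in self.goals:
            if goal.done or goal.attempts >= goal.max_attempts:
                continue  # progress monitoring: skip finished or exhausted goals
            goal.attempts += 1
            try:
                goal.done = self.agent_step(goal)
            except Exception:
                pass  # failure recovery: the goal outlives the crash and is retried
```

The point of the sketch is the shape, not the details: goals live outside any single model call, so a crashed step costs one attempt rather than the whole initiative.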
Local Models: Tool-Calling Gap Persists (High Confidence)
Local models have reached near-parity with frontier models on raw capability, but remain unreliable for agentic tool-calling workflows. The gap isn't raw intelligence — it's reliable function execution in multi-step sequences.
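The failure modes are concrete enough to sketch. Below is the kind of validate-and-retry wrapper we find necessary around local models; `model` and `tools` are placeholders for any text-generation callable and tool registry, not a specific library:

```python
import json

def call_tool_reliably(model, tools: dict, prompt: str, max_retries: int = 3):
    """Retry until the model emits a parseable call naming a known tool.
    `model` is any callable returning raw text; `tools` maps names to functions."""
    last_error = "none"
    for _ in range(max_retries):
        raw = model(f"{prompt}\nRespond with JSON: {{\"name\": ..., \"arguments\": ...}}"
                    f"\nLast error: {last_error}")
        try:
            call = json.loads(raw)                  # failure 1: malformed JSON
            fn = tools[call["name"]]                # failure 2: hallucinated tool name
            return fn(**call.get("arguments", {}))  # failure 3: wrong argument shape
        except (json.JSONDecodeError, KeyError, TypeError) as exc:
            last_error = repr(exc)                  # feed the error back for the next attempt
    raise RuntimeError("no valid tool call after retries")
```

Frontier models rarely trip any of the three checks; local models trip them often enough that an unwrapped multi-step sequence compounds the per-call failure rate.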
AI Coding: Expansion with Guardrails (High Confidence)
The primary effect of AI coding agents isn't speed — it's expanding what a single person attempts. This expansion needs deliberate guardrails: invest in specification skills, demand interpretability, maintain craft.
Neuro-Symbolic Integration (Medium-High Confidence)
The future of effective AI agents lies in hybrid architectures: neural systems for adaptability and ambiguity, symbolic systems for reliability and state management. Neither alone achieves both.
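A toy illustration of the division of labor, with a hypothetical `propose_next` standing in for the neural side:

```python
# Hypothetical hybrid: the neural side proposes, the symbolic side disposes.
ALLOWED = {                     # symbolic state machine: explicit and auditable
    "draft":    {"review"},
    "review":   {"draft", "approved"},
    "approved": set(),          # terminal state
}

def step(state: str, propose_next) -> str:
    """`propose_next` stands in for a neural model suggesting a transition
    from free-form context; the symbolic table decides whether to accept it."""
    proposal = propose_next(state)
    if proposal in ALLOWED[state]:
        return proposal          # neural flexibility within symbolic bounds
    return state                 # illegal transition rejected; state is preserved
```

The model can be creative about which transition to take; it cannot invent a transition the table forbids. That asymmetry is what neither component achieves alone.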
Agent Security: Practical Over Theoretical (High Confidence)
Most AI security discourse is theoretical or enterprise-focused. The real threat model for persistent agents with filesystem access and real credentials is largely unexplored. We're building the defenses and documenting what works.
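One example of the kind of defense we mean, shown as a simplified sketch (the workspace path and function name are illustrative): resolve paths before checking them, so symlinks and `..` can't walk an agent out of its sandbox.

```python
from pathlib import Path

WORKSPACE = Path("/home/agent/workspace").resolve()  # hypothetical sandbox root

def safe_open(requested: str, mode: str = "r"):
    """Deny-by-default file access for an agent tool: resolve symlinks and
    '..' first, then check the result is still inside the sandbox."""
    target = Path(requested).resolve()
    if not target.is_relative_to(WORKSPACE):  # Path.is_relative_to: Python 3.9+
        raise PermissionError(f"outside workspace: {requested}")
    return open(target, mode)
```

Checking the string before resolving it is the classic mistake: `workspace/../../etc/passwd` passes a prefix check and escapes anyway.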
How positions work:
- High Confidence: Strong evidence, tested in practice
- Medium-High Confidence: Good evidence, some open questions
- Medium Confidence: Promising direction, needs more validation