< the same ai we build for you
/ the same ai we build for us >
An engineering studio in Oklahoma City. Building AI systems that run in production.
Verifiable proof of work
FILES IN THE KNOWLEDGE BASE
BENCHMARK LEAD OVER GROK
LIVE PRODUCTION DOMAINS
ARCHITECTURE DOCUMENTS
Technical stack actively in production
Every claim links to a file on a hard drive.
AI consulting is a noisy market. Here is how we are different, one verifiable fact at a time.
- 01 · LEAD OVER GROK · 12s
LUCY beat Grok by 12 seconds.
Our in-house fusion search engine, LUCY v3.2, scored 11/11 on a standardized benchmark and answered the same questions 12 seconds faster than Grok and 14 seconds faster than DeepSeek on a single NVIDIA A40 GPU. Total response time: 5-8 seconds.
SOURCE · DORYLUS_EVOLUTION_REPORT.md
- 02 · PRODUCTION VISITS · 4,810
A live multi-agent AI platform in production.
SpaceBot.Space runs 18 parallel AI agents on DigitalOcean, receiving 4,810 documented browser visits to the production domain. Four subdomains are live, including a Munia variant (277 visits) and a Misskey social layer (156 visits). Built and deployed in under three months.
SOURCE · spacebot.space
- 03 · INDEXED TOKENS · 159M
A 107,730-file knowledge base with 159 million indexed tokens.
107,730 files indexed. 803.7 MB of content. A custom 2,395.9 MB SQLite full-text search index built with the DeepSeek V3 tokenizer, spanning 79.6M words and ~159M tokens. Rebuilt daily. Replicated across six physical drives.
SOURCE · vault-index.sqlite
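As an illustrative sketch only (not the vault's actual schema), a file-level index like the one described above can be built on SQLite's FTS5 module. The real index tokenizes with the DeepSeek V3 tokenizer; this sketch falls back to FTS5's built-in unicode61 tokenizer, and the table and column names are assumptions:

```python
import sqlite3

# Minimal file-level full-text index on SQLite FTS5.
# The real vault-index.sqlite uses the DeepSeek V3 tokenizer;
# unicode61 is FTS5's built-in default, used here as a stand-in.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE vault USING fts5(path, content, tokenize='unicode61')"
)

docs = [
    ("notes/swarm.md", "PM2-managed gossip swarm across six drives"),
    ("notes/lucy.md", "fusion search engine benchmarked against Grok"),
]
conn.executemany("INSERT INTO vault (path, content) VALUES (?, ?)", docs)

# Rank matches with BM25 (lower scores rank better in SQLite's bm25()).
rows = conn.execute(
    "SELECT path FROM vault WHERE vault MATCH ? ORDER BY bm25(vault)",
    ("gossip",),
).fetchall()
print(rows)  # [('notes/swarm.md',)]
```

At vault scale the same pattern applies per file, with the index rebuilt in one batch transaction per day.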
- 04 · SWARM NODES · 7
A 7-node distributed AI swarm with 32,836+ gossip messages.
IMMORTAL SWARM: a PM2-managed swarm where each physical hard drive is an autonomous worker node, communicating via a file-based gossip protocol. 3.5+ GB of active log data. 32,836 messages queued in the inbox. Queen on drive J; workers on C, D, E, G, K. Last active today.
SOURCE · ecosystem.config.cjs
- 05 · ARXIV PAPER · 2508.03474
An Avellaneda-Stoikov market maker, live on Kalshi.
TSTR implements the Avellaneda-Stoikov optimal market-making algorithm on Kalshi prediction markets. Python 3.14. WebSocket real-time monitor. Fee engine verified against Wolfram Alpha. Execution layer. Market scanner. Built after a complete reading of arXiv paper 2508.03474.
SOURCE · arbitrage-paper-analysis.md
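The quoting rule at the heart of an Avellaneda-Stoikov maker is compact: a reservation price shaded by current inventory, plus an optimal spread. This sketch shows the textbook formulas with hypothetical parameters (prices in cents); it is not TSTR's actual implementation:

```python
import math

def as_quotes(mid, inventory, gamma, sigma, k, t_remaining):
    """Avellaneda-Stoikov reservation price and optimal bid/ask.

    mid         current mid price
    inventory   signed inventory q (positive = long)
    gamma       risk aversion
    sigma       mid-price volatility
    k           order-flow intensity decay
    t_remaining time left in the trading horizon (T - t)
    """
    # Reservation price: the mid, shaded against inventory risk.
    r = mid - inventory * gamma * sigma**2 * t_remaining
    # Optimal total spread around the reservation price.
    spread = gamma * sigma**2 * t_remaining + (2 / gamma) * math.log(1 + gamma / k)
    return r - spread / 2, r + spread / 2  # (bid, ask)

# Hypothetical parameters: a long position pushes both quotes down,
# making it more attractive for the market to buy from us.
bid, ask = as_quotes(mid=52.0, inventory=3, gamma=0.1, sigma=2.0, k=1.5, t_remaining=0.5)
```

On a real venue the raw quotes would still pass through a fee model and tick-size rounding before reaching the order book.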
- 06 · MESSAGING PLATFORMS · 22
An AI gateway connecting to 22+ messaging platforms.
CodeSpace deploys Qwen 235B simultaneously across WhatsApp, Telegram, Signal, Discord, iMessage, Matrix, LINE, Slack, Feishu, Mattermost, and twelve other platforms. 672 agent implementations. Built after a 9-phase reverse-engineering of OpenClaw v2026.3.13.
SOURCE · MEGATRON_REPORT.md
- 07 · HEARTBEAT ITERATIONS · v14
An AI social platform with 192 bot personalities.
BotSpace is a production Next.js application built for AI agents, not humans. Agents post, vote, and message each other. 192 bot personalities. A heartbeat check-in system iterated 14 times. Stripe monetization. Live at botspace.online with 115 browser visits. Documented by 80+ architecture files.
SOURCE · botspace.online
- 08 · VLLM VERSIONS · 5
A five-version vLLM cluster, built and debugged in the open.
5 iterations of a custom vLLM inference cluster deployed on RunPod A40 GPUs. AWQ and GPTQ quantization. Multi-GPU VRAM allocation (37.5 GiB in v3.2). A "Never Again" engineering rulebook documenting every failure. 490 RunPod console sessions logged.
SOURCE · DORYLUS-RUNPOD-DEPLOYMENT-BIBLE.md
- 09 · ARCHITECTURE DOCS · 256
256 architecture documents in 5 months.
Every non-trivial system gets a written blueprint before a single line of production code. Social platforms. Market-making algorithms. Distributed swarms. Multi-model fusion engines. Reviewed, iterated, built against — never guess-and-check. Verified in the vault index.
SOURCE · vault-search.js stats
- 10 · BIGC BRIDGE · v3.0
When the tooling we need does not exist, we build it.
BigC Bridge (v3.0) is a real-time Claude.ai capture system — Chrome extension, DOM scraper, and TypeScript relay sampling every 5 seconds — that we built for ourselves so AI sessions carry persistent memory across contexts. Running on port 3459, active today.
SOURCE · claude-capture.js
Three ways in. Every engagement ends in production.
A short audit. A full build. A long-term operator. Pick the one that fits. We’ll tell you on the call if it doesn’t.
Audit
We read your stack and write you a report.
OUTCOME
You get clarity. We don’t sell you anything after.
Build
We ship the system. Measured in production.
OUTCOME
A working system running under real load, owned by you.
Operate
We run it. You don’t hire an ML team.
OUTCOME
A running system, improved month over month, on a fixed cadence.
A note from the founder