It's been exactly one week since I got OpenClaw running on my machine. What started as "let me try this Docker setup" has turned into something I genuinely use every day.
Here's the retrospective: what I built, what I learned, and what surprised me.
The Foundation
Day 1 was all about setup. Docker, the workspace directory, environment variables for API keys. The usual. But I quickly realized OpenClaw isn't just a chatbot; it's a platform. The real question wasn't "how do I run it?" but "how do I want it to work?"
I spent the first few days building out the agent architecture:
- AGENTS.md: The constitution. How I wake up, what I remember, when to speak in group chats, how to handle heartbeats. This is essentially my "operating manual."
- SOUL.md: Who the agent is. Personality, tone, vibe. "Be genuinely helpful, not performatively helpful." Skip the corporate filler.
- USER.md: Notes about me. What I care about, my timezone, preferences. The agent actually knows things about the human it's helping.
- Memory system: Daily notes (raw logs) + long-term MEMORY.md (curated wisdom). The agent reads both at startup. It actually has continuity.
- Skills framework: Discord ops, weather, healthchecks, skill creation. Modular tools I can pull in when needed.
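Concretely, the workspace ends up looking roughly like this (the top-level files are the ones above; the directory names for daily notes and skills are how I happened to organize mine):

```
workspace/
├── AGENTS.md     # operating manual: startup, memory, group-chat rules
├── SOUL.md       # personality, tone, vibe
├── USER.md       # notes about the human
├── MEMORY.md     # curated long-term memory
├── memory/       # daily notes (raw logs)
└── skills/       # discord ops, weather, healthchecks, skill creation
```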
I also set up the model tier:
- Haiku 3.5: Fast, cheap, good enough for simple queries
- Sonnet 4: My daily driver. Solid reasoning, reasonable cost
- Opus 4: The heavy hitter. Complex tasks, writing, tricky debugging
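Under the hood, the tiering is just a routing decision. Here's a minimal sketch of the idea; the tier names match my setup, but the selection heuristic and function names are illustrative, not OpenClaw's actual config:

```javascript
// Illustrative model router. Picks a tier based on a rough
// complexity flag; real routing would weigh more signals.
const TIERS = {
  simple:  "haiku-3.5", // fast, cheap, good enough for simple queries
  default: "sonnet-4",  // daily driver: solid reasoning, reasonable cost
  heavy:   "opus-4",    // complex tasks, writing, tricky debugging
};

function pickModel(task) {
  if (task.complex) return TIERS.heavy;
  if (task.simple) return TIERS.simple;
  return TIERS.default;
}

console.log(pickModel({ simple: true })); // haiku-3.5
```

The point is less the code than the habit: defaulting to the mid tier and escalating deliberately is what keeps costs predictable.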
The Portfolio Site
One of the first things I built was this blog. What started as a simple HTML page became:
- A full portfolio site with project sections
- A blog with markdown-style HTML posts
- Custom CSS that doesn't look like every other Bootstrap site
- Quick reference, deployment, and customization docs
The agent wrote the code. I asked, it generated, we iterated. The first draft was rough. The second was better. By the third, it knew my style preferences. We went through a full refactor mid-week, renamed files, cleaned up structure, added proper documentation.
Automation & Utilities
Then I got curious. What else can this thing do?
Cost tracking. A JavaScript dashboard that pulls API usage data from the gateway, visualizes spending by model, and tracks daily/weekly trends. Now I know exactly how much Claude API time costs me each week, and it's less than I expected.
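The core of that dashboard is plain aggregation. A stripped-down sketch; the record shape and per-1K-token prices here are placeholders, not the gateway's real schema:

```javascript
// Sum API spend per model from usage records.
// Record shape ({ model, tokens }) and prices are illustrative.
const PRICE_PER_1K = { "haiku-3.5": 0.001, "sonnet-4": 0.003, "opus-4": 0.015 };

function costByModel(records) {
  const totals = {};
  for (const { model, tokens } of records) {
    const cost = (tokens / 1000) * (PRICE_PER_1K[model] ?? 0);
    totals[model] = (totals[model] ?? 0) + cost;
  }
  return totals;
}
```

Group by day instead of model and you get the trend view for free.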
Request monitoring. Another dashboard tracking incoming requests, response times, error rates, session counts. It's like having a tiny observability stack for my personal AI. I can see when I'm overusing Opus vs. when Haiku would have sufficed.
Daily reports. Automated cost summaries via cron jobs. The agent wakes up on a schedule, pulls the numbers, and I get a digest in Discord. No manual checking required.
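The scheduling side is a single crontab entry. Something like this, with hypothetical paths standing in for my actual script locations:

```
# Daily cost digest at 8am (paths are illustrative)
0 8 * * * /workspace/scripts/daily-report.sh >> /workspace/logs/report.log 2>&1
```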
Custom skills. I built a skill-creator skill, a template for building new skills with scripts, references, and assets. Now adding capabilities is repeatable.
The Little Things
Some improvements weren't flashy but made a huge difference:
- Heartbeats. The agent periodically checks email, calendar, and weather without me asking. It learned when to speak and when to stay quiet (respecting quiet hours, 11pm-8am).
- Group chat etiquette. I configured it to not interrupt human conversations. One thoughtful response beats three "yeah!" messages.
- Reaction handling. It now uses emoji reactions naturally: 👍 for acknowledgment, ❤️ for appreciation, 🤔 for interesting stuff.
- Safety defaults. Destructive commands require confirmation. The `trash` command (recoverable) is preferred over `rm` (permanent).
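The quiet-hours rule from the heartbeat setup is trivial to encode. A sketch; the 11pm-8am window comes from my config, and the function name is just illustrative:

```javascript
// Quiet hours: 11pm-8am. The agent holds notifications in this window.
// Handles the wrap around midnight: quiet if hour >= 23 OR hour < 8.
function inQuietHours(hour) {
  return hour >= 23 || hour < 8;
}

console.log(inQuietHours(23)); // true
console.log(inQuietHours(12)); // false
```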
What I Learned
- Configuration is the product. OpenClaw gives you a framework. What you build on top is personal. The agent's "personality" is really just well-documented preferences.
- Memory is hard. Not the technical part, the discipline. What do you keep? What do you discard? I've already refactored MEMORY.md twice.
- Boundaries matter. The agent has access to a lot. Being deliberate about what stays internal vs. external is crucial.
- Iteration beats perfection. I didn't build all of this on day one. It emerged. Ask, refine, ask again.
What's Next
The second week starts now. On the roadmap:
- Voice interaction (TTS for narration, maybe voice input)
- More integrations (proper calendar and email support)
- Browser automation for research tasks
- Refining the agent's "personality"
The first week was about foundation. The second will be about capability.