Note

After the OpenClaw Anthropipocalypse, I have been struggling to find a suitable alternative. I started with OpenAI Codex, and while it matches Opus 4.6’s 1M token context window, it is just not well suited to my use case of orchestration plus friendly assistant. It has a tendency to hallucinate, and its projected demeanor is… weird. It’s like concentrated Mark Zuckerberg from a personality perspective. Decently good at technical tasks, tho.

I am currently using z.ai with their “Coding” plan, and I’m impressed. GLM-5.1 is remarkably similar to Opus 4.6 in my experience thus far. The 200k token context window is tiny, unfortunately, but with some creative use of subagents, it’s manageable. I’ve also kept Codex around for now, modifying my standard operating procedures to encourage the use of Codex subagents for grunt work that requires a large context window.

Jonathan's location at time of posting:

LaCour Stationary -3.6 km/h 25%

Comments (2)

Khürt Williams via brid.gy

@jonathan Really appreciate you sharing this — it’s clear you’ve put a lot of thought and experimentation into making the workflow work for you. The idea of separating “thinking” from “heavy lifting” makes a lot of sense given the current model landscape.

Khürt Williams via brid.gy

@jonathan At the same time, it does feel like there’s a fair bit of overhead creeping in. Moving tasks between models, keeping context aligned, and managing slightly different behaviours can start to add friction, and sometimes the system becomes the thing you’re spending the most energy on.