2026-03-08

treat your ai like a human

There's a gold rush happening around AI tooling. New protocols, new frameworks, new standards, every week. MCP, LangChain, CrewAI, AutoGen, dozens of agent harnesses. Everyone wants to be the one who defines how AI agents talk to tools, to each other, to the world.

Most of it is unnecessary.

Not because the harness doesn't matter. It does. Your agent needs to call tools, manage context, handle errors. That's real engineering. But the ecosystem around it has a chronic case of inventing new abstractions for things that already exist. New serialization formats for function calls. New protocols for tool discovery. New frameworks that wrap other frameworks.

The thing these efforts keep missing is that LLMs are, for practical purposes, just people. They read docs. They understand APIs. They figure things out from examples. You don't need a special protocol for an LLM to use a tool any more than you need a special protocol for a new hire to use your internal dashboard. You give them the docs and they figure it out.

This is why I think skills are one of the few genuinely good ideas to come out of this space. A skill is just a markdown file, a SKILL.md, describing a capability. The agent reads it, understands what it can do, and starts using it. No schema negotiation, no capability discovery protocol, no handshake. Just a document written for a reader who can think.
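To make that concrete, here's a sketch of what a skill file can look like. The name, frontmatter fields, and contents are illustrative, not a spec:

```markdown
---
name: changelog-writer
description: Drafts changelog entries from merged pull requests.
---

# Changelog Writer

When asked for a changelog entry:

1. Read the PR title and description.
2. Classify the change as Added, Changed, or Fixed.
3. Write one past-tense sentence that references the PR number.

Keep entries under 25 words. See style.md for the full house style.
```

That's the whole integration. If this document would onboard a contractor, it onboards the agent too.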

The reason this works is progressive disclosure. The skill file gives the agent exactly what it needs to get started, and more detail is available if it needs to dig deeper. This is the same principle that makes good documentation work for humans. You don't dump the entire API reference on someone's first day. You give them a getting started guide and let them explore from there.
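In file terms, that usually means a short SKILL.md up front with heavier material sitting alongside it, read only on demand. The layout below is a hypothetical example, not a required structure:

```
skills/pdf-processing/
├── SKILL.md          # one-page overview: what the skill does, when to use it
├── reference.md      # full API details, read only when the agent digs deeper
└── scripts/
    └── fill_form.py  # helper the agent can run instead of re-deriving the logic
```

The agent's context only pays for SKILL.md up front; everything else costs nothing until it's actually needed.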

Progressive disclosure is not an AI-specific insight. It's a fundamental principle of how you communicate with any intelligent entity. Menus in software work this way. Good textbooks work this way. Conversations work this way. You start with what matters and reveal complexity as needed.

And that's the broader point. Almost every time I've seen someone build something genuinely useful for LLMs, the design principle was the same one that works for humans. Write clear docs. Give good examples. Structure information so the important stuff comes first. Don't dump everything at once. Make errors informative.

The people building elaborate machine-to-machine protocols are optimizing for a world where AI is fundamentally different from us. But the more capable these models get, the more they just act like people. They read. They reason. They get confused by the same things that confuse humans and succeed with the same things that help humans.

So the recipe is simple. Before you reach for a framework or a protocol, ask yourself: if I were handing this task to a smart person, what would I give them? Give your agent that. It'll probably work.

What better sign that we've achieved real AI than that?
