8 out of 18. That’s the ratio.
I checked GitHub Trending this week. Of the top 18 repos, 8 are coding agents or agent frameworks. Not developer tools. Not libraries. Not databases. Agents that write code for you.
A year ago, this list was dominated by AI model repos, training frameworks, and the occasional viral toy project. Now it’s wall-to-wall agents: OpenAI, Google, Anthropic, GitHub, ByteDance, and a handful of indie developers, all shipping the same category of tool at the same time. That doesn’t happen by accident.
The lineup: who shipped what
Let’s go through them. Every one of these repos is trending right now, the first week of March 2026.
opencode (116K stars) is currently the most-starred: an open-source coding agent in TypeScript. It bills itself as “the open source coding agent,” which is a bold claim when three other repos on the same trending page are making the same one.
Gemini CLI (96K stars) is Google’s answer. An open-source terminal agent built on Gemini 3, with a free tier of 60 requests per minute and a 1-million-token context window. It comes with built-in Google Search grounding, file operations, shell commands, web fetching, and MCP support. It also has GitHub Actions integration for automated PR reviews and issue triage. Apache 2.0 license. Google is giving this away.
Anthropic Skills (84K stars) takes a different angle. Instead of building another agent, Anthropic open-sourced the skill system that makes Claude Code effective. Skills are folders of instructions, scripts, and resources that Claude loads dynamically. The repo includes the document creation skills (docx, pdf, pptx, xlsx) that power Claude’s built-in file capabilities. It also defines the Agent Skills specification, which is starting to look like an industry standard.
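To make the format concrete: a skill is just a folder whose SKILL.md file pairs YAML frontmatter (used to decide when to load it) with plain markdown instructions. This sketch follows the pattern described above; the skill name and steps are illustrative, not taken from the repo:

```markdown
---
name: pdf-forms
description: Fill and flatten PDF forms. Load when the user asks to edit a PDF.
---

# PDF form filling

1. List the form fields before writing any code.
2. Fill fields with the bundled script rather than editing the raw PDF.
3. Flatten the result so the filled values can't be altered.
```

The frontmatter is what the agent reads up front; the body is only pulled into context when the description matches the task, which is what keeps the system prompt small.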
GitHub Spec Kit (74K stars) addresses the elephant in the room: “vibe coding” produces code that doesn’t work. Spec Kit is a toolkit for Spec-Driven Development, a four-phase workflow (Specify, Plan, Tasks, Implement) that treats specifications as executable, living artifacts. It’s agent-agnostic: it works with Copilot, Claude Code, Gemini CLI, Cursor, and Windsurf. GitHub is essentially saying: the agents are great, but please stop letting them freestyle.
Superpowers (71K stars) is the most opinionated entry. Built by Jesse Vincent, it’s a complete software development workflow that turns your coding agent into something resembling a disciplined junior engineer. It enforces red-green-refactor TDD, spec-first design, subagent-driven development, and multi-stage code review. The philosophy: your agent should brainstorm with you, write a plan you approve, then execute tasks in isolated git worktrees with automated review gates. If the code doesn’t pass the spec, it doesn’t ship.
OpenAI Codex (63K stars) is the one backed by the biggest revenue engine. Written in Rust, it runs locally and ties into your ChatGPT subscription. 1 million weekly active users, tripled from February. Token processing up 5x. Cisco, Nvidia, and Rakuten are running it across their developer teams. OpenAI is positioning Codex as a “standard agent” for enterprise use beyond just coding.
Context7 (47K stars) is the plumbing. An MCP server by Upstash that feeds up-to-date, version-specific code documentation directly into your agent’s context. It solves the “LLM hallucinates an API that doesn’t exist” problem by injecting live docs instead of relying on training data from 2024. Just add “use context7” to your prompt.
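Wiring an MCP server like Context7 into an agent typically amounts to one JSON entry in the client’s MCP configuration. A sketch of the usual shape (the exact config file location varies by client, and you should confirm the package name against the repo’s README):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

Once registered, the agent can call the server’s documentation-lookup tools on demand, which is the whole trick: docs arrive at query time instead of being frozen into the model’s weights.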
DeerFlow 2.0 (24K stars) is ByteDance’s “super agent harness.” A ground-up rewrite that orchestrates sub-agents, sandboxed Docker execution, file system access, long-term memory, and progressive skill loading. Built on LangGraph and LangChain. It was #1 on GitHub Trending when it launched last week.
The pattern nobody is talking about
Here’s what’s interesting. These repos don’t just share a category. They share an architecture.
Every single one of them has converged on the same design: a language model connected to a set of tools (file system, shell, web, search) controlled by a set of skills (structured instructions loaded dynamically based on context). The skills approach, whether called “skills” (Anthropic, Superpowers, DeerFlow), “specs” (GitHub), or “GEMINI.md files” (Google), is the same idea: feed the model the right context at the right time instead of stuffing everything into the system prompt.
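The shared pattern is easy to sketch. This is a toy illustration in Python, not any specific agent’s implementation (the Skill and Agent names are hypothetical): instead of one giant system prompt, the agent keeps a library of skills and injects only the ones whose triggers match the current task.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """Structured instructions loaded only when relevant."""
    name: str
    triggers: list[str]   # keywords that make this skill relevant
    instructions: str     # text injected into the model's context

@dataclass
class Agent:
    skills: list[Skill] = field(default_factory=list)

    def build_context(self, task: str) -> str:
        """Context engineering in miniature: select skills by trigger
        match instead of stuffing every instruction into the prompt."""
        relevant = [s for s in self.skills
                    if any(t in task.lower() for t in s.triggers)]
        return "\n\n".join(s.instructions for s in relevant)

agent = Agent(skills=[
    Skill("pdf", ["pdf"], "Use the pdf library to fill forms."),
    Skill("tests", ["test", "tdd"], "Write a failing test first."),
])

# Only the 'tests' skill matches this task, so only it is loaded.
print(agent.build_context("add tests for the parser"))
```

Whether the skill store is markdown files on disk (Anthropic, Superpowers), spec documents (GitHub), or GEMINI.md files (Google), the selection step is the same: match the task, load the slice, leave the rest out.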
There’s even a name for this now. Context engineering. Martin Fowler wrote about it. The core insight: AI coding tools fail not because they lack intelligence, but because they don’t have adequate understanding of the codebase context. The solution isn’t a smarter model. It’s smarter context delivery.
If you’re building with AI agents in 2026, understanding context engineering matters more than which model you’re using. The model is becoming a commodity. The context is the moat.
The number that should make you uncomfortable
OpenAI’s Codex hit 1 million weekly users in its first week as a desktop app. That’s not a developer tool adoption curve. That’s a consumer product launch.
Meanwhile, a post on r/programming titled “AI Isn’t Replacing SREs. It’s Deskilling Them” pulled 858 upvotes and 165 comments this week. The argument draws on a 1983 cognitive psychology paper called “Ironies of Automation”: when automation handles 95% of routine work, humans become worse at handling the 5% of complex, high-stakes situations that actually require expertise.
The parallel to coding agents is obvious. If an agent writes most of your code, debugs most of your errors, and handles most of your refactoring, what happens to the skills you stop practicing? The agent doesn’t get tired. The agent doesn’t forget. But you do.
The companies shipping these agents aren’t worried about this. OpenAI is targeting “non-technical users” as the next Codex audience. GitHub’s Spec Kit exists precisely because agents left unsupervised produce unreliable code. Superpowers exists because someone realized agents need a human-in-the-loop workflow or the output is garbage.
The tools are getting better at coding. The question is whether the people using them are getting worse.
What the non-agent repos tell you
It’s worth noting what else is trending alongside the agent repos:
- OpenCut (46K stars): an open-source CapCut alternative. Video editing is the next category AI agents will eat.
- nanochat by Karpathy (44K stars): a ChatGPT-style app you can run for about $100. The “run it yourself” movement isn’t slowing down.
- Anthropic’s prompt engineering tutorial (32K stars): an interactive course on how to talk to LLMs effectively. Even Anthropic thinks most people are bad at prompting.
The signal is clear. The infrastructure layer of AI coding is settling into place. Agents are the interface. Skills are the knowledge. Context engineering is the discipline. MCP is the protocol. What’s left is figuring out whether this makes software better or just makes it faster.
Where this goes
Three predictions, all of which I’m willing to be wrong about:
1. The “agent skills” format will standardize. Anthropic’s agentskills.io spec, GitHub’s Spec Kit, and the Superpowers skill format are all converging on the same pattern: markdown files with frontmatter, dynamic loading, and tool references. Within six months, there’ll be one format that works across all major agents.
2. Context engineering will become a job title. Right now it’s a blog post topic. By the end of 2026, companies will hire for it. The person who decides what an agent sees is more important than the person who writes the prompts.
3. We’ll have our first high-profile “agent-caused outage.” A production system will go down because an agent deployed untested code, or because the human who was supposed to catch the bug hadn’t manually reviewed code in months and missed something they would have caught a year ago. It’s not a question of if, but when.
The coding agent era isn’t coming. It’s here. Eight out of eighteen trending repos this week prove it. The tools are impressive. The adoption is real. But so is the risk that we’re building a generation of developers who are great at supervising agents and terrible at the craft those agents are replacing.
Keep your skills sharp. The agent won’t always be there to save you.
