Resources
- agentic-dev-workflow on GitHub — the full source, fork it and make it your own
Series
AI models are trained on billions of lines of everyone's code, good and bad: generic best practices, Stack Overflow answers, textbook approaches. But your project probably isn't that generic; otherwise, why would you have created it?
We've all been there. You open your repository on Monday morning and there are a dozen dependency update PRs waiting. Some are patch updates, some are minor, one is a major version bump buried in the middle. CI is green on all of them. You merge them. What could go wrong?
Everyone has opinions about how tests should look: naming conventions, structure patterns, which mocking library to use. The problem with AI coding assistants is that they've absorbed opinions from their training data too, and they're usually not yours.
Most code quality checks live in three places: a CI pipeline that runs after you push, a mental checklist you may or may not remember, and pre-commit hooks that hit you with a wall of failures right when you thought you were done. The agentic workflow collapses all of these into a single command that runs before the PR, covers both .NET and Angular, and pairs automated scanning with reasoning about what the results actually mean.
Most teams write documentation after the feature ships. By then the context is stale, the pressure to move on is high, and the ADR nobody wrote is already forgotten. The agentic dev workflow treats docs as something you generate alongside the code, not something you backfill when someone complains the wiki is out of date.
Manual usability testing is slow, expensive, and easy to skip when a deadline looms. The /tool-ux-study skill spawns a coordinated team of AI tester agents that each log in as a different persona, test the application under different themes and viewports, and report back — while a lead agent acts as UX research facilitator, observing sessions, probing for clarity, and synthesizing findings into a research-grade report.
Skills are the most powerful part of any agentic workflow, and they're also the easiest to get wrong. This post covers the full lifecycle: writing a skill from scratch, finding and adopting skills from the community, and closing the loop so your skills improve over time.
You've discussed the feature with your AI agent, and it wrote the code. But then what? You start the app, open the browser, click around, look for bugs, find one, and describe it back to the agent. You're doing all the boring manual labor of verifying that what was built actually works. With Aspire's CLI overhaul in 13.2 and its new skills, combined with the Playwright CLI and skills, the agent can manage and monitor your distributed app, open the browser, test the feature, and debug it. The tedious verify-and-fix loop becomes the agent's job, not yours.