How to turn your AI coding tool into a real development partner. A workflow system with persistent memory, session rituals, and stack-aware skills for .NET and Angular projects.
8 parts · 94 min total read
  1. Part 1

    Turning your AI tool into your pair programming companion

    AI models are trained on billions of lines of everyone's code: good and bad code, generic best practices, the Stack Overflow answer, the textbook approach. But your project probably isn't that generic; why else would you have created it?

    Mar 23, 2026
  2. Part 2

    Dependency updates that understand your code

    We've all been there. You open your repository on Monday morning and there are a dozen dependency update PRs waiting. Some are patch updates, some are minor, one is a major version bump buried in the middle. CI is green on all of them. You merge them. What could go wrong?

    Mar 23, 2026
  3. Part 3

    Teaching your AI how to write tests with you

    Everyone has opinions about how tests should look. Naming conventions, structure patterns, which mocking library to use. The problem with AI coding assistants is that they have picked up opinions of their own from their training data, and those opinions are usually not yours.

    Mar 23, 2026
  4. Part 4

    Quality gates that actually run: verification and security in the agentic workflow

    Most code quality checks exist in three places: a CI pipeline that runs after you push, a mental checklist you may or may not remember, and post-commit hooks that hit you with a wall of failures right when you thought you were done. The agentic workflow collapses all of these into a single command that runs before the PR, covers both .NET and Angular, and pairs automated scanning with reasoning about what the results mean.

    Mar 23, 2026
  5. Part 5

    Documentation as a first-class concern in your agentic workflow

    Most teams write documentation after the feature ships. By then the context is stale, the pressure to move on is high, and the ADR nobody wrote is already forgotten. The agentic dev workflow treats docs as something you generate alongside the code, not something you backfill when someone complains the wiki is out of date.

    Mar 23, 2026
  6. Part 6

    AI-driven usability testing: a think-aloud study with a team of AI testers

    Manual usability testing is slow, expensive, and easy to skip when a deadline looms. The /tool-ux-study skill spawns a coordinated team of AI tester agents that each log in as a different persona, test the application under different themes and viewports, and report back — while a lead agent acts as UX research facilitator, observing sessions, probing for clarity, and synthesizing findings into a research-grade report.

    Mar 23, 2026
  7. Part 7

    Building and evolving your own AI development skills

    Skills are the most powerful part of any agentic workflow, and they're also the easiest to get wrong. This post covers the full lifecycle: writing a skill from scratch, finding and adopting skills from the community, and closing the loop so your skills improve over time.

    Mar 23, 2026
  8. Part 8

    Don't let your AI agent delegate the debug work to you: manage, monitor, and test your app with Aspire 13.2's CLI overhaul and new agent skills

    You've discussed the feature with your AI agent, and it wrote the code, but then what? You start the app, open the browser, click around, look for bugs, find one, and describe it back to the agent. You're doing all the boring manual labor of verifying that what was built actually works. With Aspire's CLI overhaul in 13.2 and its new skills, combined with the Playwright CLI and skills, the agent can manage and monitor your distributed app, open the browser, test the feature, and debug it. The tedious verify-and-fix loop becomes the agent's job, not yours.

    Mar 23, 2026

Resources