AI & Tooling
How AI-assisted engineering work is structured and evaluated.
AI-Assisted Development
How I use AI tools: where they help, where I don't trust them.
At a glance
Claude Code as primary tool · GSD phased workflow · Multi-agent orchestration · Tool-agnostic methodology · 6 projects enhanced · 100% of AI-generated code manually reviewed
AI-Assisted Development is part of how I build everything, from enterprise systems (SIRMM, ECSTP, MAC) to personal infrastructure (HomeLab) to this portfolio. I use GSD's phased planning with parallel sub-agents, strict verify loops, and atomic commits. Everyone uses AI tools now. What matters is knowing when to trust the output and when to stop and verify, especially around security, legacy systems, and integration decisions.
By the Numbers
5
AI Tools
3
Workflow Patterns
6
Projects Enhanced
100%
Code Reviewed
4
Boundary Categories
3
Commit Patterns
Key Decisions & Practices
Practice — Tools & Environment
Claude Code as the primary development environment for architectural planning, code generation, refactoring, and multi-file operations (not autocomplete)
Line of thinking
I describe what I want, it generates a plan, I review it. CLAUDE.md gives it project context between sessions. It works better than autocomplete because it sees the whole picture: component boundaries, data flow, dependency order. The refactor-verify plugin keeps it honest: generate, review, test, commit.
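A hypothetical CLAUDE.md fragment showing the kind of context that persists between sessions. The project names, commands, and paths here are illustrative, not from an actual project:

```markdown
# CLAUDE.md — example project context (illustrative)

## Architecture
- Spring Boot service layer; JPA entities live in `domain/`, never in `web/`.
- All writes go through the audit service — do not call repositories
  directly from controllers.

## Workflow
1. Propose a plan before editing more than one file.
2. After each change: run `mvn test`, then commit one logical change
   per commit.

## Off-limits
- Anything under `security/crypto/` — review only, never generate.
```

The point of the file is that the boundaries and workflow travel with the repository, so each session starts with the constraints already stated instead of re-derived.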
Engineering Boundaries
Where I don't use AI.
Security & Cryptography
I don't trust AI with crypto. It doesn't know your threat model, and if it gets AES-GCM wrong you won't catch it in code review either.
Project evidence
TheraSite: PGP + AES-256-GCM + PBKDF2 (100K iterations) zero-knowledge encryption designed entirely by hand. AI reviews for logic errors only.
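The stack named above can be sketched with the JDK's own primitives. This is an illustrative sketch of the stated design (PBKDF2-HMAC-SHA256 at 100K iterations deriving an AES-256 key, sealing with GCM), not TheraSite's actual code; class and method names are mine:

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class ZeroKnowledgeBox {
    private static final int ITERATIONS = 100_000; // PBKDF2 work factor
    private static final int KEY_BITS = 256;       // AES-256
    private static final int GCM_TAG_BITS = 128;   // GCM auth tag length
    private static final int IV_BYTES = 12;        // recommended GCM nonce size

    /** Derive an AES key from a passphrase; the server never sees the passphrase. */
    static SecretKey deriveKey(char[] passphrase, byte[] salt) {
        try {
            SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            byte[] raw = f.generateSecret(
                    new PBEKeySpec(passphrase, salt, ITERATIONS, KEY_BITS)).getEncoded();
            return new SecretKeySpec(raw, "AES");
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Returns IV || ciphertext so the decryptor can recover the nonce. */
    static byte[] encrypt(SecretKey key, byte[] plaintext) {
        try {
            byte[] iv = new byte[IV_BYTES];
            new SecureRandom().nextBytes(iv); // fresh nonce per message
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
            byte[] ct = c.doFinal(plaintext);
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return out;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key,
                    new GCMParameterSpec(GCM_TAG_BITS, ivAndCiphertext, 0, IV_BYTES));
            return c.doFinal(ivAndCiphertext, IV_BYTES, ivAndCiphertext.length - IV_BYTES);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The details that make this "human only" are exactly the ones a model glosses over: nonce uniqueness per key, tag length, iteration count versus threat model, and where the salt lives.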
Legacy System Context
Fifteen years of undocumented business rules and workarounds don't fit in a prompt. The AI has no way to know what it doesn't know here.
Project evidence
SIRMM: 308-entity legacy schema with 15 years of accumulated business logic. The EJB integration between SIRMM and BCMM/ERMM exists because of CRA procurement constraints from 2014 — not technical preference.
Architectural Trade-offs
AI can list the trade-offs, but it can't weigh them. It doesn't know your team, your deploy pipeline, or what you'll regret in three years.
Project evidence
ECSTP: JOINED inheritance for 21 event subtypes required understanding CRA’s query patterns. The ICEFaces lock-in lesson — "if this dependency dies in 3 years, how painful is the migration?" — is a question AI cannot answer.
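The JOINED mapping mentioned above looks small on the page, which is part of the trap. A minimal sketch (entity, table, and column names are illustrative, using the javax.persistence API of that JBoss era):

```java
import javax.persistence.*;

// JOINED strategy: the parent table holds shared columns, and each of the
// subtype tables joins back to it by primary key. Every read that touches
// subtype fields pays a JOIN — the trade-off that only makes sense once
// you know the actual query patterns.
@Entity
@Table(name = "EVENT")
@Inheritance(strategy = InheritanceType.JOINED)
public abstract class Event {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "OCCURRED_AT", nullable = false)
    private java.time.LocalDateTime occurredAt;
}

@Entity
@Table(name = "AUDIT_EVENT") // one of many subtype tables
class AuditEvent extends Event {
    @Column(name = "AUDITOR_ID")
    private Long auditorId;
}
```

An AI will happily suggest JOINED, SINGLE_TABLE, or TABLE_PER_CLASS with a tidy pros-and-cons list; only someone who has seen the production queries knows which column of that list actually matters.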
Cross-System Integration
Integration decisions depend on SLAs, data ownership, failure modes, and office politics. None of that is in the code.
Project evidence
MAC: Choosing Flowable BPM over a custom workflow engine required evaluating 7 REST API clients against CRA’s deployment constraints, JBoss classloading quirks, and Kerberos token propagation across service boundaries.
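One of the mechanics evaluated there, forwarding a caller's negotiated token across a service boundary, can be sketched with the JDK HTTP client. The URL and token below are placeholders, and real SPNEGO negotiation (JAAS/GSS-API, ticket acquisition) is deliberately not shown:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class TokenPropagation {
    /**
     * Hypothetical helper: attach the caller's already-negotiated token
     * to a downstream REST call so the downstream service can authorize
     * the original user, not the middle tier.
     */
    static HttpRequest forward(String downstreamUrl, String negotiateToken, String body) {
        return HttpRequest.newBuilder(URI.create(downstreamUrl))
                .timeout(Duration.ofSeconds(5))
                // SPNEGO-style header; the token came from the inbound request
                .header("Authorization", "Negotiate " + negotiateToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }
}
```

The code is trivial; the decision is not. Whether the token may legally cross that boundary, what happens when it expires mid-workflow, and which side owns the retry are the questions the code cannot answer.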
Trust Spectrum
High Trust (AI leads)
✓ Boilerplate generation (entities, DTOs, controllers)
✓ Test scaffolding and regression tests
✓ Docker / CI/CD configuration
✓ API exploration and documentation
✓ Component scaffolding from patterns
Verify Carefully (AI assists)
~ Database migration scripts
~ Spring Security configuration
~ Architecture decisions
~ Performance-critical code
~ State machine transitions
Human Only (AI reviews at most)
⚠ Cryptographic implementations
⚠ Government compliance decisions
⚠ Legacy system institutional context
⚠ Framework selection (long-term risk)
⚠ Cross-system integration architecture
I use AI across all my projects: enterprise systems, personal infrastructure, and this portfolio. The workflow is structured: GSD phased planning, refactor-verify loops, atomic commits. The tools don't matter much. Claude Code, Cursor, Augment, whatever works. What matters is knowing where AI helps and where it doesn't: security, legacy context, architectural trade-offs, and integration decisions stay human.