I’ve discovered that the real power of LLMs in software development isn’t just generating code snippets, but serving as a structured partner through the entire development lifecycle. My experience building a custom authentication UI with AI assistance revealed a methodology that transformed not just what I built, but how I approach complex development challenges.
The Moment My Development Workflow Changed
Last month, I faced a common yet complex challenge: replacing a standard redirect-based authentication flow with a custom in-app UI that would integrate with our OIDC provider for the “Spell Coach” application. The goal was to give users a fully in-app authentication experience rather than redirecting them to the provider’s standard login page. What would typically involve days of research, planning, and careful security analysis became an opportunity to fundamentally rethink my development approach.
Rather than jumping straight to code, I engaged Claude as a planning partner, starting with a high-level question: “How can I improve the authentication experience without compromising security?” What happened next changed my perspective on AI-assisted development.
The LLM didn’t just offer code snippets or generic advice. Instead, it analyzed our project structure, evaluated our provider’s capabilities through targeted web searches, and proposed a structured, documentation-first approach. This interaction highlighted a fundamental shift in how AI tools can participate in the development process - not just as code generators, but as thinking partners that help establish clarity before implementation begins.
Building a Foundation: The Documentation-First Approach
My breakthrough realisation was that documentation should precede code - not just as a best practice but as a fundamental part of the development conversation with the AI. I asked Claude to create a detailed authentication specification (AUTH.md) that would serve as our shared understanding.
The resulting document included:
- Comprehensive user flows (registration, login, password reset)
- Frontend component specifications with clear MVC boundaries
- Detailed API endpoints with request/response formats
- Critical security considerations, including PKCE implementation details
- Integration patterns with our provider’s APIs
This document became our “source of truth” - a contract between me and the AI assistant that guided all subsequent development.
For example, when defining the user registration flow, the specification explicitly addressed security concerns that might have been overlooked until implementation:
```
## User Registration Flow
1. User enters registration details in custom UI form
2. Frontend validates format (email, password complexity) client-side
3. Frontend calls `POST /auth/register` endpoint
4. Backend:
   - Validates input
   - Creates user via provider's Management API
   - Initiates PKCE flow
   - Returns authorization URL with state parameter
5. Frontend redirects to authorization URL
6. After authorization, provider redirects to callback URL with auth code
7. Backend exchanges code for tokens, validates state param for CSRF protection
```
The level of detail in this specification fundamentally changed how the AI could assist me. Every subsequent conversation had clear context and parameters, reducing ambiguity and preventing the common issue of generating plausible but incorrect code.
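To give a sense of what step 4’s “Validates input” line translates into in code, here’s a minimal Go sketch of the request struct and server-side checks. The field names and password rules below are illustrative assumptions on my part, not the actual requirements from AUTH.md Section 5.2:

```go
package auth

import (
	"encoding/json"
	"errors"
	"net/http"
	"net/mail"
	"unicode"
)

// registerRequest mirrors the payload the custom UI submits to POST /auth/register.
// Field names here are illustrative, not taken from the actual spec.
type registerRequest struct {
	Email    string `json:"email"`
	Password string `json:"password"`
	Name     string `json:"name"`
}

// validate applies server-side checks of the kind step 4 of the flow calls for.
func (r registerRequest) validate() error {
	if _, err := mail.ParseAddress(r.Email); err != nil {
		return errors.New("invalid email address")
	}
	if len(r.Password) < 12 {
		return errors.New("password must be at least 12 characters")
	}
	var hasUpper, hasDigit bool
	for _, c := range r.Password {
		switch {
		case unicode.IsUpper(c):
			hasUpper = true
		case unicode.IsDigit(c):
			hasDigit = true
		}
	}
	if !hasUpper || !hasDigit {
		return errors.New("password must contain an uppercase letter and a digit")
	}
	if r.Name == "" {
		return errors.New("name is required")
	}
	return nil
}

// decodeRegister parses and validates an incoming registration request.
func decodeRegister(r *http.Request) (registerRequest, error) {
	var req registerRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		return registerRequest{}, err
	}
	return req, req.validate()
}
```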
From Specification to Executable Plan
With our shared understanding established, I needed to break this complex feature into manageable, sequenced tasks. I asked Claude to create a detailed TODO.md based on our specification. The resulting document transformed a potentially overwhelming project into a clear roadmap of phased, actionable steps.
What made this approach particularly effective was how the TODO items explicitly referenced the specification. For instance:
```
## Backend Development
1. [ ] Implement user registration endpoint (POST /auth/register)
   - Reference: AUTH.md Section 3.1
   - Validate input parameters according to AUTH.md Section 5.2
   - Integrate with provider Management API as specified in AUTH.md Section 7.1
   - Implement PKCE flow initialization as described in AUTH.md Section 6.3
   - Return appropriate response format per AUTH.md Section 3.1.2
```
This explicit linkage between tasks and specifications created a powerful feedback loop. When working on a specific task, both the AI and I could reference the exact requirements, reducing drift between intention and implementation.
I found that this structure made the AI far more effective as a coding partner. Rather than asking for “help implementing authentication,” I could request assistance with “implementing the user registration endpoint as specified in AUTH.md Section 3.1” - providing crucial context that improved code quality and security.
The Meta-Prompt: Instructing the AI How to Assist
A pivotal moment came when I realized I needed a consistent way to work through our task list. I asked Claude to create a meta-prompt template that would guide our interactions for each implementation task. This prompt became our shared protocol for effective collaboration:
```
Task Declaration: I'm working on [current task from TODO.md].

Context Review:
1. From AUTH.md: [relevant specification details]
2. From CLAUDE.md: [relevant project structure/standards]

Implementation Plan:
1. [Step 1]
2. [Step 2]
3. [Step 3]

For this specific step, I need to:
[Current specific implementation need]

When complete, I'll update TODO.md to reflect progress.
```
This structured approach transformed what could have been a series of disconnected coding requests into a coherent, progressive implementation journey. By consistently referencing our documentation and working through tasks systematically, we maintained context across multiple sessions and kept security considerations at the forefront.
Architectural Decisions That Matter
Building a custom authentication UI involves critical architectural decisions with significant security implications. The documentation-first approach allowed me to carefully consider these choices before committing to implementation.
One of the most important decisions was how to handle the OIDC flow. I had two main options:
- Frontend-driven OIDC: The React frontend would handle the entire OIDC flow directly with our provider
- Backend orchestration: The Go backend would prepare OIDC parameters and guide the flow
After careful consideration of security implications, I chose backend orchestration for several reasons:
- It allowed for proper validation and sanitization of all parameters
- It enabled server-side state management to prevent CSRF attacks (see the sketch after this list)
- It provided centralized logging of authentication attempts for security monitoring
- It simplified the frontend implementation while maintaining security controls
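To make the server-side state management point concrete, here’s a minimal in-memory sketch of how a backend could issue and consume state values tied to a PKCE verifier. The names and the ten-minute expiry are my own assumptions; a production system would more likely back this with Redis or a database:

```go
package auth

import (
	"crypto/rand"
	"encoding/base64"
	"errors"
	"sync"
	"time"
)

// stateStore keeps pending OIDC state values (and their PKCE verifiers)
// server-side so the callback can verify them exactly once.
type stateStore struct {
	mu      sync.Mutex
	entries map[string]stateEntry
}

type stateEntry struct {
	codeVerifier string
	expiresAt    time.Time
}

func newStateStore() *stateStore {
	return &stateStore{entries: make(map[string]stateEntry)}
}

// Issue generates a random state value and remembers the PKCE code verifier
// associated with this login attempt.
func (s *stateStore) Issue(codeVerifier string) (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	state := base64.RawURLEncoding.EncodeToString(b)

	s.mu.Lock()
	defer s.mu.Unlock()
	s.entries[state] = stateEntry{
		codeVerifier: codeVerifier,
		expiresAt:    time.Now().Add(10 * time.Minute),
	}
	return state, nil
}

// Consume validates a returned state exactly once and hands back the
// code verifier needed for the token exchange.
func (s *stateStore) Consume(state string) (string, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	entry, ok := s.entries[state]
	if !ok || time.Now().After(entry.expiresAt) {
		return "", errors.New("unknown or expired state")
	}
	delete(s.entries, state)
	return entry.codeVerifier, nil
}
```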
This architecture meant our endpoints needed careful design:
```
POST /auth/login
- Request: { username, password }
- Response: { authorizationUrl, state }

POST /auth/callback
- Request: { code, state }
- Response: { tokens, userProfile }
```
The backend handles all sensitive OIDC operations, including code exchange, token validation, and secure storage of refresh tokens, whilst the frontend focuses solely on user experience.
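As a rough illustration of the code-exchange step with the standard net/http package, here’s a hedged sketch of a helper the callback handler might call after the state has been validated. The tokenURL, clientID, and redirectURI values are placeholders for provider-specific configuration, and validating the returned ID token is deliberately left out:

```go
package auth

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// tokenResponse holds the fields expected back from the provider's token
// endpoint (the standard OAuth 2.0 response shape).
type tokenResponse struct {
	AccessToken  string `json:"access_token"`
	IDToken      string `json:"id_token"`
	RefreshToken string `json:"refresh_token"`
	ExpiresIn    int    `json:"expires_in"`
}

// exchangeCode swaps the authorization code for tokens, sending the PKCE
// code verifier that was stored when the flow was initiated.
func exchangeCode(tokenURL, clientID, redirectURI, code, codeVerifier string) (*tokenResponse, error) {
	resp, err := http.PostForm(tokenURL, url.Values{
		"grant_type":    {"authorization_code"},
		"client_id":     {clientID},
		"code":          {code},
		"redirect_uri":  {redirectURI},
		"code_verifier": {codeVerifier},
	})
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("token endpoint returned %s", resp.Status)
	}

	var tokens tokenResponse
	if err := json.NewDecoder(resp.Body).Decode(&tokens); err != nil {
		return nil, err
	}
	return &tokens, nil
}
```

After this call, the backend would still need to verify the ID token’s signature and claims and decide how to persist the refresh token, both of which remain backend responsibilities in this architecture.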
The careful documentation of this decision in AUTH.md became invaluable during implementation. When the AI suggested alternative approaches, I could reference our architectural decisions to maintain consistency and security.
Implementing with Precision: The AI as a Pair Programmer
With clear specifications and tasks in place, implementation became remarkably focused. For each task, I followed our meta-prompt structure to guide the AI’s contributions.
For example, when implementing the user registration endpoint, I provided this context:
```
Task Declaration: I'm implementing the user registration endpoint (POST /auth/register) as outlined in TODO.md item 1 under Backend Development.

Context Review:
1. From AUTH.md Section 3.1: The endpoint should accept email, password, name and validate according to our security requirements.
2. From AUTH.md Section 7.1: We need to use the provider's Management API to create the user.
3. From AUTH.md Section 6.3: We need to initialize a PKCE flow with a code_verifier and code_challenge.
4. From CLAUDE.md: Our backend uses Go with standard net/http package and follows MVC patterns.

Implementation Plan:
1. Define request/response structs
2. Implement input validation
3. Create PKCE parameters
4. Call provider Management API
5. Generate and store state parameter
6. Return authorization URL with state
```
With this structure, the AI could propose precise, security-focused code that aligned with our architecture and project standards. For instance, here’s a simplified excerpt of the code it proposed for PKCE parameter generation:
```go
import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
)

// generateCodeVerifier returns a high-entropy PKCE code verifier:
// 32 cryptographically random bytes, base64url-encoded without padding.
func generateCodeVerifier() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

// generateCodeChallenge derives the S256 code challenge:
// base64url(SHA-256(verifier)), as required by PKCE (RFC 7636).
func generateCodeChallenge(verifier string) string {
	h := sha256.New()
	h.Write([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(h.Sum(nil))
}
```
This code not only implemented the required functionality but did so with proper security practices, including using cryptographically secure random values and the correct SHA-256 transformation required by PKCE.
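To show where these two values end up, here’s a small sketch of how the backend might assemble the authorization URL it returns to the frontend. The query parameters are the standard OIDC/PKCE set, while authorizeURL, clientID, and redirectURI again stand in for provider-specific configuration:

```go
package auth

import "net/url"

// buildAuthorizationURL assembles the URL the backend hands back to the
// frontend, carrying the PKCE challenge and the CSRF state parameter.
func buildAuthorizationURL(authorizeURL, clientID, redirectURI, state, codeChallenge string) string {
	q := url.Values{
		"response_type":         {"code"},
		"client_id":             {clientID},
		"redirect_uri":          {redirectURI},
		"scope":                 {"openid profile email"},
		"state":                 {state},
		"code_challenge":        {codeChallenge},
		"code_challenge_method": {"S256"},
	}
	return authorizeURL + "?" + q.Encode()
}
```

In the flow described above, the backend would generate the verifier, persist it alongside the state for the callback, derive the challenge, and return this URL in the /auth/register or /auth/login response.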
What was most valuable wasn’t just the code itself, but how the AI explained security considerations and tradeoffs at each step, helping me understand not just what to implement but why certain approaches were more secure than alternatives.
What Made This Approach Different
What distinguished this development process from my previous experiences was the structured, documentation-driven methodology. By establishing clear specifications and tasks before implementation, I created a framework that maximized the AI’s effectiveness while maintaining human control over architecture and security.
The key elements that made this approach successful were:
- Documentation as a contract: AUTH.md provided a clear specification that both the AI and I could reference, reducing ambiguity and ensuring security requirements were met
- Explicit task management: TODO.md broke the complex project into manageable chunks with clear references back to specifications
- Structured interactions: The meta-prompt template created a consistent protocol for AI assistance that maintained context across tasks
- Security-first mindset: By documenting security requirements explicitly in AUTH.md, security became a first-class concern throughout implementation
This approach transformed the AI from a reactive code generator into a proactive development partner. Rather than asking “write code to do X,” I could engage in a higher-level conversation about how best to implement a specific component within our architectural constraints.
Challenges and Guardrails
This approach wasn’t without challenges. I encountered several issues that required careful consideration:
LLM Hallucinations and API Assumptions
The AI occasionally “hallucinated” details about our provider’s API, assuming endpoints or parameters that didn’t exist. I addressed this by:
- Always verifying API details against the official documentation
- Providing explicit corrections when the AI made incorrect assumptions
- Adding validated API details to our AUTH.md document for future reference
Context Window Limitations
Managing multiple documentation files (AUTH.md, TODO.md, code files) within the AI’s context window became challenging. I mitigated this by:
- Focusing on one specific task at a time
- Explicitly referencing only the relevant sections of our documentation
- Using the meta-prompt to maintain structural consistency across interactions
Security Review Requirements
Whilst the AI was remarkably good at implementing security best practices, I maintained a strict policy of human review for all security-sensitive code. This wasn’t just about catching errors, but about ensuring I fully understood the security implications of each implementation decision.
Lessons for Effective AI-Assisted Development
Through this process, I discovered several patterns that significantly improve the effectiveness of AI coding assistants:
1. Documentation Before Implementation
Creating detailed specifications before writing any code provides crucial context that improves AI assistance. This isn’t just about documentation as a deliverable, but about creating a shared understanding that guides implementation.
2. Structured Task Management
Breaking complex projects into well-defined tasks with explicit references to specifications creates clarity and focus. This approach is particularly valuable when working with AI assistants that may struggle with very open-ended requests.
3. Consistent Interaction Patterns
Developing a consistent protocol for AI interactions (like our meta-prompt) helps maintain context and ensures critical information isn’t lost between sessions. This consistency is especially important for security-sensitive features.
4. Human-in-the-Loop Architecture
The most effective approach maintains human control over architecture and security decisions while leveraging AI for implementation assistance. This creates a balance that maximises productivity without compromising on quality or security.
5. Iterative Refinement
Each implementation cycle provides opportunities to refine both the code and the documentation. Updating specifications and tasks based on what you learn during implementation creates a virtuous cycle of improvement.
Transforming Development Through AI Partnership
My experience building this authentication system demonstrated that the most powerful application of AI in software development isn’t just generating code, but transforming the entire development process. By establishing clear specifications, breaking down complex tasks, and maintaining consistent interaction patterns, I turned what could have been a challenging security-sensitive project into a structured, manageable process.
The documentation-first approach didn’t just improve the quality of the AI’s assistance—it improved my own thinking about the problem. By forcing myself to articulate requirements, security considerations, and architectural decisions before implementation, I caught potential issues earlier and created a more coherent design.
This experience has fundamentally changed how I approach complex development tasks. Rather than viewing AI as just a code generation tool, I now see it as a development partner that works best within a structured, documentation-driven methodology. This shift in perspective has not only improved the quality of my code but has made the development process itself more systematic, secure, and manageable.
As these tools continue to evolve, I believe this structured, collaborative approach will become increasingly valuable. The developers who gain the most from AI assistance won’t be those who simply ask for code snippets, but those who learn to create the context and structure that allows these tools to function as true development partners.