
QA Review

You are tasked with performing a structured post-implementation quality assurance review. This skill guides you through bug hunting, security analysis, and cross-app dependency verification to catch issues before a pull request is created.

Initial Setup

When this command is invoked, respond with:

I'm ready to perform a QA review. I'll analyze your recent changes for bugs, security issues, and cross-app impact.

What would you like me to review? I can:
1. Auto-detect changes from your current branch (recommended)
2. Review specific files or features you describe

Which approach would you prefer?

If the user chooses auto-detect (or doesn't specify), run git diff origin/master --name-only to identify changed files.

Steps to follow after receiving context:

Step 1: Scope Discovery

  1. Identify all changed files:

    • Run git diff origin/master --name-only to list changed files
    • Run git diff origin/master --stat for a change size overview
  2. Categorize changes into risk tiers:

    • High risk: libs/tellent/, libs/*/ (shared libs that affect multiple apps), domains/recruitee/ (core ATS app)
    • Medium risk (app-specific): domains/tellent/, other domain apps
    • Low risk: config files, tests, documentation
  3. Identify affected apps:

    • If shared libs changed, flag both recruitee and tellent-admin as affected
    • Use the NX MCP tools to trace project dependencies and confirm affected consumers
  4. Present scope summary to the user before proceeding.
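The risk-tier rules above can be sketched as a small classifier. This is a hedged illustration only: the path prefixes come from this document, and real workspaces may keep tests or docs inside libs/, which this simplification does not account for.

```typescript
type Risk = 'high' | 'medium' | 'low';

// Map a changed file path to the risk tiers described in step 1.2.
function riskTier(path: string): Risk {
  // Shared libraries and the core ATS app are high risk.
  if (path.startsWith('libs/') || path.startsWith('domains/recruitee/')) return 'high';
  // Other domain apps are medium risk.
  if (path.startsWith('domains/')) return 'medium';
  // Everything else (config, docs) falls through to low risk.
  return 'low';
}

const tiers = [
  riskTier('libs/tellent/src/button.ts'),   // 'high'
  riskTier('domains/tellent/src/page.ts'),  // 'medium'
  riskTier('README.md'),                    // 'low'
];
```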

Step 2: Bug Hunt

Spawn parallel sub-agent tasks to check for the following categories across all changed files:

  1. Runtime errors:

    • Unhandled null/undefined access (missing null checks, optional chaining where needed)
    • Signal access patterns — signals must be called with () in templates and code
    • Missing async/await on Promise-returning calls
  2. Logic bugs:

    • Off-by-one errors in conditions, loops, array indexing
    • Missing else branches or default cases in switches
    • Async race conditions (concurrent subscriptions, missing takeUntilDestroyed)
    • Incorrect boolean logic (De Morgan violations, inverted conditions)
  3. Angular-specific issues:

    • Template binding mismatches — input() defined but not bound by parent, or bound but not defined
    • Event handler signatures not matching template output() emissions
    • Note: Missing component imports, missing OnPush, and missing DI providers are caught by TypeScript/ESLint — skip these
  4. State management issues:

    • NgRx actions dispatched but no corresponding effect to handle them
    • Selectors returning stale data (missing state updates in reducer)
    • Effects missing error handling (catchError not returning fallback action)

Read each changed file fully before analyzing. Report findings with file paths and line numbers.
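Three of the bug classes above can be shown in a minimal, self-contained sketch. Note these are stand-in types, not the real Angular or NgRx APIs: the Signal alias, the Action shape, and runLoadEffect are hypothetical stand-ins that only model the pattern (a synchronous try/catch stands in for catchError).

```typescript
// 1. Signal access: an Angular signal is read by calling it as a function.
//    Forgetting the call operator uses the function object instead of its value.
type Signal<T> = () => T;
const count: Signal<number> = () => 5;
const doubled = count() * 2;   // correct: 10
// const broken = count * 2;   // bug: NaN — multiplies the function itself

// 2. Boolean logic: a De Morgan violation inverts the condition's meaning.
const isAdmin = false;
const isOwner = true;
const denyWrong = !isAdmin && !isOwner;  // wrong: negates each operand
const denyRight = !(isAdmin && isOwner); // right: !(a && b) === !a || !b

// 3. Effect error handling: on failure, emit a fallback action instead of
//    letting the error escape and kill the stream.
type Action = { type: string; payload?: unknown };
function runLoadEffect(load: () => string[]): Action {
  try {
    return { type: '[Users] Load Success', payload: load() };
  } catch {
    return { type: '[Users] Load Failure' }; // stream survives, UI can react
  }
}
```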

Step 3: Security Review

Check all changed files for security concerns:

  1. DOM sanitization bypasses:

    • Flag any new usage of bypassSecurityTrustHtml, bypassSecurityTrustUrl, bypassSecurityTrustResourceUrl
    • For each usage, verify the data source is server-controlled and not user-editable
    • Check that existing bypasses in changed files still have valid justification
  2. Unsafe DOM bindings:

    • [innerHTML] — verify bound value comes from translation pipe (translate), a sanitized source, or server-controlled content only
    • [src] / [href] with dynamic values — verify URL sanitization
  3. User input flows:

    • Trace any new input() signal, form control, or query parameter to its usage
    • Ensure user-provided values don't reach the DOM unsanitized
    • Reference the codebase's sanitization utilities:
      • encodeHTML in libs/utils/legacy/lib/helpers.ts
      • sanitizeUrl in libs/prosemirror/legacy/lib/helpers/utils.ts
  4. HTTP/API security:

    • New API calls use proper authentication (interceptors handle this, but verify no manual fetch bypasses)
    • Sensitive data not exposed in URL query parameters
    • No hardcoded API keys, tokens, or secrets
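The escaping and query-parameter checks above can be sketched as follows. Both functions are hypothetical stand-ins: the real encodeHTML and sanitizeUrl live in the libs/ paths listed in item 3, and the sensitive-key list here is illustrative, not exhaustive.

```typescript
// Stand-in for an HTML-escaping helper: neutralize the characters that
// would let user text become markup when bound to the DOM.
function encodeHTMLSketch(value: string): string {
  return value
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// Stand-in check: flag sensitive keys that leak into URL query strings.
const SENSITIVE_KEYS = ['token', 'password', 'api_key', 'secret'];
function exposedQueryKeys(url: string): string[] {
  const params = new URL(url).searchParams;
  return SENSITIVE_KEYS.filter((key) => params.has(key));
}

const safe = encodeHTMLSketch('<img src=x onerror=alert(1)>');
// safe === '&lt;img src=x onerror=alert(1)&gt;'
const leaked = exposedQueryKeys('https://api.example.com/users?token=abc&page=2');
// leaked === ['token']
```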

Step 4: Cross-App Dependency Check

Skip this step if no shared libraries were modified (only domains/ changes).

When changes touch libs/tellent/ or other shared libs/:

  1. Provider consistency:

    • DI tokens do NOT need to be provided in both apps — only verify they are provided where actually consumed
    • Providing the same token in multiple injectors can create duplicate instances and subtle bugs; prefer single-source providers (e.g., providedIn: 'root')
  2. AppKind-dependent behavior:

    • If new logic branches on AppKind, verify ALL variants are handled
    • Check that default/fallback behavior is sensible for each app
  3. Input binding coverage:

    • New inputs should have sensible defaults inside the component
    • Only search consumer templates when an input is required and has no default value
  4. Build verification:

    • Run type checking for affected apps:
      npx nx build tellent-admin -c ci
      npx nx build recruitee -c ci
    • Report any build failures with full error output
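The AppKind exhaustiveness check in item 2 can be enforced by the compiler. The union members and routes below are hypothetical; the pattern is the standard TypeScript never-assignment in the default branch, which turns an unhandled variant into a type error when a new AppKind is added.

```typescript
// Hypothetical AppKind union; the real one lives in the shared libs.
type AppKind = 'recruitee' | 'tellent-admin';

function homeRoute(kind: AppKind): string {
  switch (kind) {
    case 'recruitee':
      return '/dashboard';
    case 'tellent-admin':
      return '/admin';
    default: {
      // If a variant is missing above, `kind` is not `never` here and
      // this assignment fails to compile.
      const unhandled: never = kind;
      throw new Error(`Unhandled AppKind: ${unhandled}`);
    }
  }
}
```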

Step 5: Automated Checks

Run automated verification in parallel where possible:

  1. Affected tests:

    npx nx affected -t test --base origin/master
  2. Affected lint:

    npx nx affected -t lint --base origin/master
  3. Report results clearly with pass/fail status. If failures occur, include relevant error output.

Step 6: QA Report

Generate a structured report and print it to the console:

```markdown
## QA Report: [Feature/Change Name]

### Scope
- Files changed: N
- Shared libs modified: [list or "none"]
- Apps affected: [recruitee / tellent-admin / both / specific app]

### Bug Review
- [Finding with file:line reference]
- [Or "No issues found"]

### Security Review
- [Finding with file:line reference]
- [Or "No issues found"]

### Cross-App Impact
- [Finding with details]
- [Or "N/A — no shared lib changes"]

### Automated Checks
- Tests: PASS/FAIL (N passed, M failed)
- Lint: PASS/FAIL (N warnings, M errors)

### Verdict
[PASS / PASS WITH NOTES / FAIL — with summary of critical findings]
```

Important Guidelines

  1. Read files fully before analyzing — don't rely on grep snippets alone for bug detection
  2. Use parallel agents for the bug hunt step to maximize efficiency
  3. Be specific — always include file paths and line numbers in findings
  4. Minimize false positives — only flag issues you're confident about, note uncertain findings separately
  5. Prioritize findings — security issues and runtime errors above style concerns
  6. Don't fix issues — this is a review skill, not an implementation skill. Report findings for the user to address