
Review: Published Solution Guides

Reviewed: May 7, 2026

Scorecard

| Rank | Solution Guide | Score | Verdict |
|------|----------------|-------|---------|
| 1 | AIOps with Splunk and EDA | 8.9/10 | Deepest multi-use-case guide; three integration patterns with strong validation and troubleshooting |
| 2 | Automated Incident Remediation with IBM Instana | 8.9/10 | Dual-path architecture (EDA vs native); per-use-case operational impact and unusually complete validation |
| 3 | High-Availability AAP with EDB PostgreSQL DR | 8.7/10 | Reference-grade DR architecture with diagrams, runbooks, and failback procedures |
| 4 | Unlock AIOps with ServiceNow LEAP and Ansible MCP server | 8.7/10 | Strong LEAP/MCP governance story with MTTR focus, customer evidence, multi-agent visibility, and full framework alignment |
| 5 | AIOps automation with Ansible | 8.5/10 | Strong foundational reference architecture; best systems narrative, observability catalog, and playbook source mapping |
| 6 | AI Infrastructure automation with Ansible | 7.3/10 | Clear two-collection story (infra.ai + redhat.ai); needs framework alignment and deeper validation |
| 7 | Intelligent Assistant with Red Hat AI Inference Server | 6.9/10 | Strong hands-on RHAIIS + Lightspeed hookup; weakest framework alignment of published guides |

How This Was Scored

Each guide was evaluated against the quality scoring model from the Best Practices for Writing Solution Guides:

| Category | Weight |
|----------|--------|
| Outcome Clarity | 20% |
| Architecture Clarity | 20% |
| Technical Executability | 25% |
| Validation/Testability | 15% |
| Production Readiness Info | 10% |
| Business Framing | 10% |

Each category is scored 1-5 and multiplied by its weight; doubling the weighted sum (which has a maximum of 5) gives the final score out of 10. Any category below 3 means revise before publishing.
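As a sanity check, the weighted-sum arithmetic can be sketched in a few lines of Python. The category names and sample scores below are taken from this review (guide 1); the doubling to a 10-point scale is inferred from the published scores, which it reproduces for most guides:

```python
# Weights from the scoring model above (must sum to 1.0).
WEIGHTS = {
    "Outcome Clarity": 0.20,
    "Architecture Clarity": 0.20,
    "Technical Executability": 0.25,
    "Validation/Testability": 0.15,
    "Production Readiness Info": 0.10,
    "Business Framing": 0.10,
}

def guide_score(scores: dict) -> float:
    """Weighted sum of 1-5 category scores, doubled to a 10-point scale."""
    if any(s < 3 for s in scores.values()):
        print("Warning: a category is below 3 -- revise before publishing.")
    weighted = sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)
    return round(weighted * 2, 1)

# Guide 1 (AIOps with Splunk and EDA) category scores from this review:
splunk = {
    "Outcome Clarity": 5,
    "Architecture Clarity": 4,
    "Technical Executability": 4,
    "Validation/Testability": 5,
    "Production Readiness Info": 4,
    "Business Framing": 5,
}
print(guide_score(splunk))  # 8.9
```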


Guide Reviews


1. AIOps with Splunk and Event-Driven Ansible

File: README-AIOps-Splunk-ITSI.md
Score: 8.9 / 10

| Category | Score |
|----------|-------|
| Outcome Clarity (20%) | 5 |
| Architecture Clarity (20%) | 4 |
| Technical Executability (25%) | 4 |
| Validation/Testability (15%) | 5 |
| Production Readiness (10%) | 4 |
| Business Framing (10%) | 5 |

Stats: ~6,800 words · 15 YAML blocks · 1 hero image + 1 architecture image · 17 walkthrough subsections

Strengths:

Weaknesses:

Suggestions:

  1. Fix or remove the dead “Incident Response Timeline” TOC entry
  2. Provide a distinct diagram (or clear caption) separating predictive ITSI topology from generic webhook flow
  3. Unify Prerequisites with every module used in excerpts (add f5networks.f5_modules, amazon.aws)
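Suggestion 3 lends itself to a quick mechanical check. A minimal sketch, assuming the guide's task excerpts use fully qualified collection names (the excerpt below is hypothetical, standing in for the guide's playbook snippets):

```python
import re

# FQCN modules look like namespace.collection.module; the first two
# segments identify the collection that must appear in Prerequisites.
FQCN = re.compile(r"^\s*([a-z0-9_]+\.[a-z0-9_]+)\.[a-z0-9_]+:", re.MULTILINE)

def missing_collections(yaml_excerpts: str, prerequisites: set) -> set:
    """Collections used in task excerpts but absent from the guide's Prerequisites."""
    used = set(FQCN.findall(yaml_excerpts))
    return used - prerequisites

# Hypothetical excerpt standing in for the guide's playbook snippets:
excerpt = """
- name: Disable pool member
  f5networks.f5_modules.bigip_pool_member:
    state: forced_offline
- name: Scale out instances
  amazon.aws.ec2_instance:
    state: running
"""
print(missing_collections(excerpt, {"ansible.eda"}))
# {'f5networks.f5_modules', 'amazon.aws'} (set order may vary)
```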

2. Automated Incident Remediation with IBM Instana

File: README-Instana-AIOps.md
Score: 8.9 / 10

| Category | Score |
|----------|-------|
| Outcome Clarity (20%) | 4 |
| Architecture Clarity (20%) | 5 |
| Technical Executability (25%) | 4 |
| Validation/Testability (15%) | 5 |
| Production Readiness (10%) | 4 |
| Business Framing (10%) | 5 |

Stats: ~4,300 words · 8 YAML blocks · 1 hero image + 2 architecture diagrams · 8 numbered steps + 3 use cases + optional AI section

Strengths:

Weaknesses:

Suggestions:

  1. Fix the “an governed” typo in the Overview
  2. Add a subtitle or alias so “Integration Architecture” maps to the framework’s Workflow section for reviewers
  3. Consider adding a Mermaid version of the dual-path topology for inline rendering on GitHub Pages

3. High-Availability AAP with EDB PostgreSQL DR

File: README-EDB.md
Score: 8.7 / 10

| Category | Score |
|----------|-------|
| Outcome Clarity (20%) | 4.5 |
| Architecture Clarity (20%) | 5 |
| Technical Executability (25%) | 4 |
| Validation/Testability (15%) | 4 |
| Production Readiness (10%) | 4 |
| Business Framing (10%) | 4.5 |

Stats: ~7,130 words · 1 YAML block · 3 Mermaid diagrams (architecture, data flow, failover sequence) · 6 phases, 23 titled substeps

Strengths:

Weaknesses:

Suggestions:

  1. Validate and fix the unified inventory INI grouping so DC2 host/variable assignments are unambiguous
  2. Add a short “Support and boundaries” note clarifying Red Hat vs EDB support scope
  3. For 3-5 key commands (podman ps, efm cluster-status, pg_is_in_recovery), show verbatim expected output
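For suggestion 1, the ambiguity is detectable programmatically. A minimal sketch that maps INI inventory groups to hosts and flags hosts assigned to both datacenters (the group and host names below are illustrative, not taken from the guide):

```python
from collections import defaultdict

def hosts_by_group(inventory: str) -> dict:
    """Map INI inventory group -> set of hosts (skips :vars/:children sections)."""
    groups, current = defaultdict(set), None
    for raw in inventory.splitlines():
        line = raw.strip()
        if not line or line.startswith(("#", ";")):
            continue
        if line.startswith("[") and line.endswith("]"):
            name = line[1:-1]
            current = None if ":" in name else name
        elif current:
            groups[current].add(line.split()[0])  # drop inline host vars
    return dict(groups)

# Hypothetical two-DC inventory (names are illustrative):
inv = """
[dc1_db]
pg-dc1-1 ansible_host=10.0.1.10

[dc2_db]
pg-dc2-1 ansible_host=10.0.2.10
pg-dc1-1 ansible_host=10.0.1.10

[all:vars]
ansible_user=admin
"""
overlap = hosts_by_group(inv)["dc1_db"] & hosts_by_group(inv)["dc2_db"]
print(overlap)  # {'pg-dc1-1'} -- the ambiguous assignment to flag
```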

4. Unlock AIOps with ServiceNow LEAP and Ansible MCP server

File: README-AIOps-ServiceNow.md
Score: 8.7 / 10 (updated May 2026 – MTTR-focused card, customer reference, multi-agent rationale, servicenow.itsm recommended)

| Category | Score |
|----------|-------|
| Outcome Clarity (20%) | 5 |
| Architecture Clarity (20%) | 4.5 |
| Technical Executability (25%) | 4 |
| Validation/Testability (15%) | 4 |
| Production Readiness (10%) | 5 |
| Business Framing (10%) | 5 |

Stats: ~3,800 words · 2 YAML blocks · 1 hero image + 1 SVG architecture diagram + 1 Mermaid diagram · 4 walkthrough steps + 4 verification artifacts

Strengths:

Weaknesses:

Suggestions:

  1. Add a short API-based alternative for Step 2 (connector setup) so readers without LEAP UI access can validate programmatically
  2. Consider adding a concrete multi-agent scenario (e.g., Cursor + LEAP both updating the same incident) to illustrate the feedback loop

5. AIOps automation with Ansible

File: README-AIOps.md
Score: 8.5 / 10 (updated May 2026 after adding Red Hat Lightspeed content and terminology updates)

| Category | Score |
|----------|-------|
| Outcome Clarity (20%) | 4 |
| Architecture Clarity (20%) | 5 |
| Technical Executability (25%) | 4 |
| Validation/Testability (15%) | 3 |
| Production Readiness (10%) | 4.5 |
| Business Framing (10%) | 4.5 |

Stats: ~7,500 words · 8 YAML blocks · 6+ substantive workflow/concept images · 4 pipeline phases with 16 numbered substeps

Strengths:

Weaknesses:

Suggestions:

  1. Add verbatim expected output for each pipeline stage (event body, AI response structure, code assistant JSON)
  2. Sanitize YAML examples so template names are plain strings suitable for copy-paste
  3. Insert the KB blockquote under the H1 per publishing standards
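Suggestion 2 can be enforced with a small sanitizer. A deliberately blunt sketch (stripping HTML tags and all non-ASCII decoration from a template name; real guides with legitimate non-ASCII names would need a narrower rule, and the sample name is hypothetical):

```python
import re

TAG = re.compile(r"<[^>]+>")             # HTML tags embedded in YAML values
NON_ASCII = re.compile(r"[^\x00-\x7F]")  # emoji and other decorative symbols

def sanitize_value(value: str) -> str:
    """Reduce a decorated template name to a plain, copy-paste-safe string."""
    value = TAG.sub("", value)
    value = NON_ASCII.sub("", value)
    return re.sub(r"\s+", " ", value).strip()

# Hypothetical decorated name standing in for a guide example:
print(sanitize_value("<b>🔥 AIOps Remediate Disk Alert</b>"))
# AIOps Remediate Disk Alert
```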

6. AI Infrastructure automation with Ansible

File: README-IA.md
KB Article: access.redhat.com/articles/7118390
Score: 7.3 / 10

| Category | Score |
|----------|-------|
| Outcome Clarity (20%) | 4 |
| Architecture Clarity (20%) | 4 |
| Technical Executability (25%) | 4 |
| Validation/Testability (15%) | 3 |
| Production Readiness (10%) | 3 |
| Business Framing (10%) | 3 |

Stats: ~1,927 words · 3 YAML blocks · 1 workflow screenshot · ~8-10 major phases

Strengths:

Weaknesses:

Suggestions:

  1. Add an Overview section with a problem statement and a consolidated Prerequisites table
  2. Show provision.yml execution alongside ilab.yml (or the AAP job settings equivalent)
  3. Expand Validation with sample success output and a 3-5 row troubleshooting table

7. Intelligent Assistant with Red Hat AI Inference Server

File: README-Intelligent-Assistant-RHAIIS.md
KB Article: access.redhat.com/articles/7130595
Score: 6.9 / 10

| Category | Score |
|----------|-------|
| Outcome Clarity (20%) | 4 |
| Architecture Clarity (20%) | 3 |
| Technical Executability (25%) | 3 |
| Validation/Testability (15%) | 4 |
| Production Readiness (10%) | 3 |
| Business Framing (10%) | 4 |

Stats: ~1,625 words · 0 YAML blocks (2 YAML snippets in ~~~ fences) · Arcade demo + 1 GPU screenshot · ~12-14 major operations

Strengths:

Weaknesses:

Suggestions:

  1. Add an Overview with problem statement and a Workflow section with a simple diagram
  2. Fix podman run shell formatting for unambiguous line continuation; convert ~~~ to ` ```yaml `
  3. Add a Validation heading with a troubleshooting table (common failures: connectivity, 401/403, OOM/GPU, wrong URL path)
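The fence conversion in suggestion 2 is mechanical enough to script. A minimal sketch that swaps paired ~~~ fences for three-backtick fences tagged with a language (it assumes fences always come in open/close pairs and ignores indented fences):

```python
import re

TICKS = "`" * 3  # a three-backtick fence

# Matches an opening or closing ~~~ fence on its own line.
TILDE_FENCE = re.compile(r"^~~~\s*$")

def convert_fences(markdown: str, lang: str = "yaml") -> str:
    """Rewrite ~~~ fence pairs as backtick fences tagged with a language."""
    out, opening = [], True
    for line in markdown.splitlines():
        if TILDE_FENCE.match(line):
            out.append(TICKS + lang if opening else TICKS)
            opening = not opening
        else:
            out.append(line)
    return "\n".join(out) + "\n"

print(convert_fences("~~~\nkey: value\n~~~\n"))
```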

Cross-Cutting Observations

Patterns that work well across guides:

Recurring gaps across most guides:

  1. KB blockquote under the title – Most guides do not place the access.redhat.com link in a blockquote directly under H1 per repo convention
  2. Verbatim validation output – Most guides describe success indicators but do not show literal expected output
  3. Framework section naming – Several guides rename or omit canonical section names (Workflow, Solution Walkthrough, Prerequisites)
  4. YAML copy-paste fidelity – Some guides embed HTML, emoji, or formatting in YAML that breaks literal reuse
  5. Architecture diagrams – Published guides now use Mermaid or image diagrams; remaining WIP guides (SQS, Azure) still have ASCII flows to convert
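Gap 1 is the easiest to lint for across the repo. A minimal sketch, assuming the convention is an H1 followed (blank lines permitted) by a blockquote containing the access.redhat.com link; the sample README text is illustrative:

```python
def has_kb_blockquote(readme: str) -> bool:
    """Check that a KB link sits in a blockquote directly under the H1.

    Assumed convention: first '# ' heading, then (skipping blanks) a '>'
    blockquote line containing an access.redhat.com link.
    """
    lines = [l.rstrip() for l in readme.splitlines()]
    for i, line in enumerate(lines):
        if line.startswith("# "):
            for nxt in lines[i + 1:]:
                if not nxt.strip():
                    continue  # allow blank lines between H1 and blockquote
                return nxt.startswith(">") and "access.redhat.com" in nxt
            return False
    return False

# Illustrative README head using guide 6's KB article number:
good = (
    "# AI Infrastructure automation with Ansible\n\n"
    "> Also published as a Red Hat KB article: "
    "https://access.redhat.com/articles/7118390\n\nOverview..."
)
print(has_kb_blockquote(good))  # True
```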

Ranking rationale: