The PMAC Method
Why sequential. Why native. Why methodology beats horsepower.
PM Agent Chain is engineered around three architectural commitments. Here is the reasoning behind each.
01 · Floor
PMBOK is not a checkbox. It is the institutional grammar of project execution.
The Project Management Body of Knowledge, in its 7th edition, codifies decades of practice across thousands of program implementations. It defines the artifacts a serious PMO produces, the dependencies between them, and the standards each must meet. PM Agent Chain treats PMBOK as the floor, the minimum bar an artifact must clear before it leaves the platform. Every Charter we produce reads as if a PMP-certified senior PM authored it, because the platform’s prompts are the codified output of that role.
02 · Sequential
Project artifacts depend on each other. The PMAC Method must respect that.
A Charter establishes the authorizing context. A Risk Register depends on the Charter’s scope and constraints. A Schedule depends on the Risk Register’s mitigations. A Status Report references the Schedule’s actual versus planned. This dependency chain is not a coincidence. It is the institutional logic of project management.
Sequential agents preserve that logic. Each agent reads the prior phase’s complete output before producing its own. Parallel chats cannot do this. One-prompt chatbots cannot do this. The result, when an institutional reviewer audits the output, is artifacts that are internally consistent: risks reflected in mitigations, mitigations reflected in schedule, schedule reflected in resource plan, all the way through.
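The sequential-chain idea above can be sketched in a few lines. This is an illustrative sketch, not PMAC's actual implementation: the phase names mirror the artifacts described in the text, and the toy lambda agents stand in for real LLM calls.

```python
from typing import Callable

# An agent maps the accumulated context (all prior artifacts) to a new artifact.
Agent = Callable[[dict], str]

def run_chain(brief: str, phases: list[tuple[str, Agent]]) -> dict:
    """Run phases in order; each agent reads the complete prior output."""
    context = {"brief": brief}
    for name, agent in phases:
        # Pass a copy so each agent sees everything produced before it.
        context[name] = agent(dict(context))
    return context

# Toy agents standing in for real LLM calls:
phases = [
    ("charter", lambda ctx: f"Charter for: {ctx['brief']}"),
    ("risk_register", lambda ctx: f"Risks given {ctx['charter']}"),
    ("schedule", lambda ctx: f"Schedule mitigating {ctx['risk_register']}"),
]

artifacts = run_chain("ERP migration", phases)
```

The key property, as the text notes, is that downstream artifacts cannot be produced without upstream context, so internal consistency is structural rather than accidental.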
03 · Native
A PMO does not review a chat window.
A PMO reviews artifacts that go into a SharePoint folder, a Confluence page, a Drive deliverable bundle. Native .docx, .pptx, and .xlsx output is not a “feature”. It is the only output format that fits how PMOs actually work. The Microsoft Copilot tier (chat output inside Office, no native files) and the per-user task-tool tier (ClickUp, Monday, Asana) both miss this. PM Agent Chain starts here.
04 · Durable
We do not promise smarter AI. We promise methodology that survives audit.
Foundation models are commoditizing. Faster GPT, smarter Claude, cheaper Gemini. All of these are inevitable improvements that any platform inherits. We do not compete on AI horsepower because horsepower will not differentiate any platform 24 months from now.
What does differentiate is the prompt architecture, the agent sequencing, and the artifact structures. A customer who switches AI providers next year still has PM Agent Chain’s methodology working for them. That is the durable thing. And it is what we build for.
05 · Internal practice
Triangulated Deep Research: the practice we recommend to every PMO researching AI providers.
When researching market, competitive, or methodology questions internally, we run identical prompts across Claude, ChatGPT, and Gemini in parallel. Each engine has distinct biases: Claude favors established players, ChatGPT favors smaller and more active ones, Gemini favors regional and recent sources.
The union of three independent searches consistently outperforms any single engine. Two-engine consensus is high-confidence. Three-engine consensus is decisive.
When we needed to verify there is no competitor producing a complete native-file PMBOK lifecycle from a single brief, we ran the question through all three engines. The union returned 571 distinct sources. The independent confirmation across all three was decisive. And it is the methodology we recommend to any PMO running their own AI vendor evaluation.
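The union-and-consensus logic described above reduces to simple set arithmetic. The sketch below is illustrative: the engine names match the text, but the sample sources are placeholders, and the two-engine/three-engine thresholds follow the confidence tiers stated above.

```python
def triangulate(results: dict[str, set[str]]) -> dict[str, list[str]]:
    """Classify sources by how many engines independently found them."""
    all_sources = set().union(*results.values())
    def hits(src: str) -> int:
        return sum(src in found for found in results.values())
    return {
        "union": sorted(all_sources),                                  # everything found
        "high_confidence": sorted(s for s in all_sources if hits(s) >= 2),  # two-engine consensus
        "decisive": sorted(s for s in all_sources if hits(s) == len(results)),  # all three agree
    }

# Placeholder results; real runs return hundreds of sources per engine.
found = {
    "claude": {"a", "b", "c"},
    "chatgpt": {"b", "c", "d"},
    "gemini": {"c", "d", "e"},
}
out = triangulate(found)
```

The union captures each engine's blind-spot coverage, while the intersection tiers give the confidence grading the text describes.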
06 · Rubric
PMBOK 7 Performance Domains. The exact rubric.
The PMAC Method’s quality gate operationalizes PMBOK’s eight Performance Domains as the rubric for evaluating every artifact produced. Each domain decomposes into observable sub-criteria. Weights derive from PMI Pulse of the Profession 2024 failure-cause analysis.
- Stakeholders (16%): alignment, engagement, communication
- Uncertainty (16%): risk identification, response, monitoring
- Planning (16%): WBS, schedule, dependencies, estimates
- Project Work (12%): resource management, knowledge transfer
- Delivery (10%): scope, quality, value realization
- Measurement (10%): KPIs, earned value, performance reporting
- Team (10%): composition, leadership, governance
- Development Approach (10%): lifecycle fit, methodology selection
Composite grade thresholds: A (90% and above), B (80 to 89.9%), C (70 to 79.9%), D (below 70%, triggers internal quality fallback).
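The weighted composite and its grade thresholds can be expressed directly. This is a minimal sketch using the weights and cutoffs stated above; the per-domain scores in the example are sample values, not real platform output.

```python
# Domain weights from the rubric above (sum to 1.0).
WEIGHTS = {
    "Stakeholders": 0.16, "Uncertainty": 0.16, "Planning": 0.16,
    "Project Work": 0.12, "Delivery": 0.10, "Measurement": 0.10,
    "Team": 0.10, "Development Approach": 0.10,
}

def composite(scores: dict[str, float]) -> float:
    """Weighted average of per-domain scores (each 0-100)."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

def grade(pct: float) -> str:
    """Map a composite percentage to the stated letter thresholds."""
    if pct >= 90: return "A"
    if pct >= 80: return "B"
    if pct >= 70: return "C"
    return "D"  # triggers internal quality fallback

# Sample: a uniform 85 across all domains composites to 85 -> grade B.
sample = {d: 85.0 for d in WEIGHTS}
```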
07 · Maturity
Kerzner PMMM. Five levels. Three signals per run.
Alongside the composite grade, every run produces a maturity inference signal benchmarked against Kerzner’s five-level model. The grade tells you whether the artifact is right. The maturity signal tells you whether the organization producing it is ready to execute.
- Level 1, Common Language: basic PM vocabulary in use
- Level 2, Common Processes: replicable approaches across projects
- Level 3, Singular Methodology: integrated organizational PM framework
- Level 4, Benchmarking: systematic comparison against external standards
- Level 5, Continuous Improvement: lessons learned drive sustained evolution
Each run returns three values: the level the project demands to succeed, the level the brief reveals about the organization’s actual practice, and the gap with bridging recommendations.
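The three returned values can be modeled as a simple structure. The level names come from Kerzner's model as listed above; the gap logic and the recommendation wording are illustrative assumptions, not the platform's actual output format.

```python
LEVELS = {
    1: "Common Language", 2: "Common Processes", 3: "Singular Methodology",
    4: "Benchmarking", 5: "Continuous Improvement",
}

def maturity_signal(required: int, observed: int) -> dict:
    """Return the three per-run signals: demanded level, revealed level, gap."""
    gap = max(required - observed, 0)
    recommendation = (
        "organization is ready to execute" if gap == 0
        else f"bridge {gap} level(s) to reach '{LEVELS[required]}'"
    )
    return {
        "required": LEVELS[required],   # level the project demands
        "observed": LEVELS[observed],   # level the brief reveals
        "gap": gap,
        "recommendation": recommendation,
    }

signal = maturity_signal(required=3, observed=2)
```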
08 · Disclosure
Not certified. Benchmark-against. Why that matters.
PMAC is not certified by the Project Management Institute (PMI) or licensed by the International Institute for Learning (IIL). Methodology fidelity is maintained through internal audit and external peer review with practicing senior project managers. That distinction is deliberate.
Methodological honesty. Certification implies an institutional pathway PMAC does not follow. Benchmark-against is accurate.
Procurement defensibility. Enterprise buyers running formal procurement review prefer honest disclosure over claimed certification that cannot be verified.
Methodology evolution. PMBOK 7 and Kerzner PMMM are reference frameworks, not version-locked specifications. Benchmark-against allows methodology refinement without breaking certification claims.
09 · Verification
How methodology fidelity is verified.
Three mechanisms keep the rubric honest.
- Internal canonical run audit. Identical project briefs run periodically to verify scoring stability and rubric alignment across releases.
- External peer review. Senior practicing PMs (PMP-credentialed, 15+ years) review rubric changes and grade outputs before deployment.
- Procurement-grade documentation. Every claim on this page traces to a specific reference in PMBOK 7 or Kerzner published works.
For procurement, security, and methodology review queries, contact guy@pmagentchain.com.