# Safe Passage to AI
Before approving LLM use in any project, run it through four questions, one per architectural level. All four must pass; a single failure means the LLM does not belong there.
The four levels come directly from the Iron Code Labs (ICL) Enterprise Taxonomy: Conceptual, Logical, Physical, Implementation. The taxonomy defines what each level is responsible for; this model applies that structure to the LLM adoption decision.

## 1. Conceptual: Does the problem actually need language reasoning?

An LLM fits when the capability requires genuine semantic work: interpreting unstructured natural language, resolving ambiguity, or matching meaning rather than keywords.

An LLM does not fit when the capability is deterministic (validation, calculation, lookup, routing), that is, anything whose rules can be enumerated.

Decision rule: If you can write the business rules down exhaustively, do not use an LLM.
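The decision rule is mechanical: if every case fits in plain code, write plain code. A minimal sketch, using a hypothetical refund-eligibility policy as the enumerable rule set:

```python
from datetime import date, timedelta

def refund_eligible(purchase_date: date, opened: bool, today: date) -> bool:
    """Exhaustively enumerable business rules: plain code, no LLM.
    The 30-day unopened-item policy here is hypothetical."""
    return (not opened) and (today - purchase_date <= timedelta(days=30))

# Contrast: "does this support ticket *describe* a refund request?" has no
# finite rule set -- that is language reasoning, where an LLM may fit.
```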
## 2. Logical: Is the LLM a contained service, or is it taking over?

Acceptable: the LLM sits behind a narrow service boundary, handling one capability with defined inputs and validated outputs.

Not acceptable: the LLM orchestrates workflow, makes routing or business decisions, or calls other services on its own authority.

Decision rule: The LLM is a specialist, not a manager.
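The specialist-not-manager rule can be sketched as a typed contract around the model. Everything here is illustrative: the `TicketSummary` shape, the stub behavior, and the `llm` callable (a stand-in for any completion client) are assumptions, not a prescribed API:

```python
from dataclasses import dataclass

@dataclass
class TicketSummary:
    sentiment: str  # "positive" | "negative" | "neutral"
    summary: str

def summarize_ticket(text: str, llm=None) -> TicketSummary:
    # The LLM is a specialist behind a typed contract: free text in, a
    # validated TicketSummary out. What happens next (routing, storage,
    # escalation) is decided by ordinary code outside this function.
    if llm is None:
        raw = {"sentiment": "neutral", "summary": text[:80]}  # deterministic stub
    else:
        raw = llm(text)  # hypothetical client returning a dict
    if raw.get("sentiment") not in ("positive", "negative", "neutral"):
        raise ValueError("model output failed contract validation")
    return TicketSummary(sentiment=raw["sentiment"], summary=str(raw["summary"]))
```

The validation step is the boundary in action: a malformed model answer is rejected at the contract, never propagated into business logic.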
## 3. Physical: Can you run this in production without surprises?

You must have answers to: what a request costs and where the monthly ceiling sits; what latency the SLA promises versus what the model actually delivers; and what happens when the model is slow, wrong, or unavailable.

Decision rule: No LLM in production without a defined cost ceiling, SLA, and fallback path.
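Cost ceiling and fallback path can be made concrete in a few lines. The flat per-call cost and the keyword fallback are illustrative assumptions; real metering and degradation strategies vary:

```python
def route_ticket(text: str, llm_call, spent_cents: int, ceiling_cents: int) -> str:
    # Physical-level guardrails: a hard cost ceiling and a defined fallback.
    COST_PER_CALL_CENTS = 1  # hypothetical flat cost
    if spent_cents + COST_PER_CALL_CENTS > ceiling_cents:
        return keyword_fallback(text)  # budget exhausted: degrade, don't die
    try:
        return llm_call(text)
    except Exception:  # timeout, rate limit, outage: same fallback path
        return keyword_fallback(text)

def keyword_fallback(text: str) -> str:
    """Deterministic routing: worse than the model, but always available."""
    return "billing" if "invoice" in text.lower() else "general"
```

The point is not the specific numbers but that both failure modes, budget exhaustion and model outage, land on the same pre-defined path instead of an exception in production.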
## 4. Implementation: Can you remove or replace the LLM without rewriting the system?

Required: a clean interface boundary, no business logic hidden in prompts, and the ability to swap in a deterministic implementation behind the same contract.

Decision rule: If pulling out the LLM would collapse the architecture, it was overengineered.
An LLM is the right choice only when all four conditions are met:
| Taxonomy Level | Decision Level | Condition |
|---|---|---|
| Conceptual | Business Capability | Semantic reasoning is genuinely required |
| Logical | Service Design | LLM is a bounded, replaceable service |
| Physical | Operations | Cost, SLA, and failure paths are defined |
| Implementation | Implementation | Clean boundary, no logic hidden in prompts |
One “no” anywhere and you are building Fred Brooks’ second system: the overengineered successor he warned about in The Mythical Man-Month.
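The whole model collapses to a conjunction, one boolean per taxonomy level (the parameter names below are my shorthand for the four conditions in the table):

```python
def llm_gate(semantic_need: bool, bounded_service: bool,
             ops_defined: bool, clean_boundary: bool) -> bool:
    """One question per taxonomy level; a single 'no' vetoes the LLM."""
    return all((semantic_need, bounded_service, ops_defined, clean_boundary))
```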
Iron Code Labs Enterprise Taxonomy
© dbj@dbj.org, CC BY-SA 4.0