Enforce architecture decisions in CI
Document what your team decided, link it to a PR, and let the gate check every code change against the decision, blocking what doesn't fit.
$ npx decern-gate
Policy: decision required - YES
Matched: package.json, terraform/main.tf
Ref: ADR-001 - Approved
Judge: analyzing scope alignment...
Blocked - changes are out of decision scope.
Works with your existing CI
The problem worth solving
Decisions get documented.
Code still drifts.
The gap isn't documentation; it's enforcement at merge time.
Without Decern
A PR adds a service layer nobody agreed on. Review misses it. It ships.
You deprecate a pattern. Next sprint, AI brings it back.
Three teams, three interpretations of the same decision.
With Decern
Every high-impact PR is checked against the approved decision scope.
Tech leads set guardrails once; no more repeating rules in review.
Teams keep velocity and architecture stays consistent.
How it works
Four steps.
Two minutes to set up.
Record the decision, link it to a PR, and let the gate do the rest.
Record the decision
Document context, options, and the decision. Generate from notes with AI or write manually.
Link it to the PR
Reference the decision in the PR. The pipeline knows which rules apply.
Gate checks in CI
The gate runs in CI. It checks that the linked decision exists and is approved.
Judge checks alignment
The LLM Judge compares the diff to the decision. Out of scope? Blocked with a clear reason.
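The four steps above can be sketched as a single gate pass: a policy check that the linked decision exists and is approved, then a Judge verdict on the diff. This is a minimal illustrative sketch with hypothetical type and function names, not the actual Decern API; the real Judge is an LLM call, stubbed here with a trivial check.

```typescript
// Hypothetical types — not the actual Decern API.
type Decision = { id: string; status: "draft" | "approved"; scope: string };

type GateResult =
  | { verdict: "pass" }
  | { verdict: "blocked"; reason: string };

// Step 3: policy check — a linked, approved decision must exist.
function policyCheck(decision: Decision | undefined): GateResult {
  if (!decision) return { verdict: "blocked", reason: "no decision linked to PR" };
  if (decision.status !== "approved")
    return { verdict: "blocked", reason: `${decision.id} is not approved` };
  return { verdict: "pass" };
}

// Step 4: the LLM Judge compares the diff to the decision scope.
// Stubbed with a naive substring check purely for illustration.
function judge(decision: Decision, diff: string): GateResult {
  return diff.includes(decision.scope)
    ? { verdict: "pass" }
    : { verdict: "blocked", reason: "changes are out of decision scope" };
}
```

In practice the two checks run in sequence: a failed policy check short-circuits before the Judge is ever invoked, so no LLM call is spent on a PR with no approved decision behind it.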
Who it's for
One tool, four outcomes
Engineering Manager
Wants speed across squads without architecture going sideways.
Guardrails that scale without manual policing.
Tech Lead
Tired of repeating the same architectural rules in every review.
Decisions enforced in CI, not stuck in someone's head.
Platform / DevEx
Needs enforcement that doesn't add process friction.
One CI step. Same workflow. Measurable compliance.
Compliance / Audit
Needs traceability between decisions and production changes.
Clear audit trail: what was decided, what was merged.
Trust and control
Your model. Your keys. Your rules.
Decern runs on the LLM provider you choose. Keys are used per-request, never stored. You control enforcement mode and confidence threshold per workspace.
Bring your own LLM provider
OpenAI, Claude, or any OpenAI-compatible endpoint. Your API key, your budget, your latency profile.
Observation first, blocking when ready
Start in report-only mode on the Free plan. Enable blocking on Team or Business when the team is aligned.
Auditable, deterministic outputs
Every gate run produces a decision reference, scope verdict, and reason code. Ready for review and compliance.
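The record a gate run produces could look like the following. The field names here are illustrative assumptions, not the documented Decern output format; what the text above guarantees is the presence of a decision reference, a scope verdict, and a reason code.

```typescript
// Hypothetical shape of a gate-run record (illustrative only).
interface GateRun {
  decisionRef: string;                        // e.g. "ADR-001"
  scopeVerdict: "in_scope" | "out_of_scope";  // the Judge's verdict
  confidence: number;                         // alignment score, 0–100
  reasonCode: string;                         // stable code for audit filtering
  reason: string;                             // human-readable explanation
}

const run: GateRun = {
  decisionRef: "ADR-001",
  scopeVerdict: "out_of_scope",
  confidence: 92,
  reasonCode: "SCOPE_EXCEEDED",
  reason: "PR introduces a service layer not covered by ADR-001",
};
```

A machine-readable reason code alongside the free-text reason is what makes the trail auditable: compliance tooling can filter and aggregate on the code while reviewers read the explanation.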
Core capabilities
Three things the gate does
LLM Judge in your CI pipeline
The Judge reads the decision and the PR diff. If the change goes beyond scope, the gate blocks with a clear, actionable reason.
Confidence scoring and thresholds
The Judge returns a score (0–100%). Set a threshold per workspace. Know exactly how well each change aligns before it merges.
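Combining the score, the per-workspace threshold, and the enforcement mode gives a small decision rule. A minimal sketch, assuming hypothetical config names (the real settings live in the Decern workspace): report-only mode surfaces a low score without failing the pipeline, while blocking mode fails it.

```typescript
// Hypothetical workspace policy — names are illustrative.
type Mode = "report" | "block";

interface WorkspacePolicy {
  mode: Mode;        // "report" on Free; "block" available on Team/Business
  threshold: number; // minimum alignment score to pass, 0–100
}

// The Judge returns an alignment score; the policy decides whether a
// low score blocks the merge or is only reported.
function enforce(
  score: number,
  policy: WorkspacePolicy
): "pass" | "blocked" | "reported" {
  if (score >= policy.threshold) return "pass";
  return policy.mode === "block" ? "blocked" : "reported";
}
```

Raising the threshold makes the gate stricter without touching the Judge itself, which is why tuning it per workspace is enough to adapt one gate to teams with different risk tolerance.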
Observation mode on Free
On Free, the gate reports but never blocks. Try it on real PRs with zero risk to your pipeline. No time limit.
Frequently asked questions
How is this different from Confluence, Notion, or ADRs in Markdown?
Does the AI write the decisions for us?
Won't this add bureaucracy or block us too often?
Do we need to change our GitHub or GitLab workflow?
Decisions only matter if they’re enforced.
Start in two minutes.
One decision log. One gate in CI.
Every change checked against what you decided.
Free forever on observation mode.
No credit card required.