Chapter 02: Architecture Decision Records
Documenting Key Technical Decisions
This chapter documents the major architectural decisions for the POS Platform using Architecture Decision Records (ADRs). Each ADR captures the context, decision, and consequences of a significant technical choice.
What is an ADR?
Architecture Decision Records provide a structured way to document important technical decisions:
ADR Structure
=============
+------------------------------------------------------------------+
| ADR-XXX: [Title]                                                 |
+------------------------------------------------------------------+
| Status: [proposed | accepted | deprecated | superseded]          |
| Date: YYYY-MM-DD                                                 |
| Deciders: [who made the decision]                                |
+------------------------------------------------------------------+
|                                                                  |
| CONTEXT                                                          |
| - What is the issue?                                             |
| - What forces are at play?                                       |
| - What constraints exist?                                        |
|                                                                  |
| DECISION                                                         |
| - What is the change?                                            |
| - What did we choose?                                            |
|                                                                  |
| CONSEQUENCES                                                     |
| - What are the positive outcomes?                                |
| - What are the negative outcomes?                                |
| - What risks are introduced?                                     |
|                                                                  |
+------------------------------------------------------------------+
ADR-001: Schema-Per-Tenant Multi-Tenancy
Note: This ADR was superseded by the Row-Level Isolation with PostgreSQL RLS decision documented in Chapter 04, Section L.10A.4. The original decision is preserved here for historical context.
+==================================================================+
| ADR-001: Schema-Per-Tenant Multi-Tenancy                         |
+==================================================================+
| Status: SUPERSEDED (by Row-Level Isolation with RLS, Ch 04       |
|         Section L.10A.4)                                         |
| Date: 2025-12-29                                                 |
| Deciders: Architecture Team                                      |
+==================================================================+
CONTEXT
-------
We are building a multi-tenant POS platform that will serve multiple
independent retail businesses. Each tenant needs:
1. Strong data isolation for security and compliance
2. Easy backup and restore of individual tenant data
3. Ability to scale individual tenants independently
4. Simple data model without tenant_id on every table
5. Compliance with SOC 2 and potential HIPAA requirements
We evaluated three multi-tenancy strategies:
Strategy A: Shared Tables (Row-Level)
- All tenants share tables
- tenant_id column on every table
- WHERE tenant_id = ? on every query
Strategy B: Separate Databases
- Each tenant gets own database
- Complete isolation
- High connection overhead
Strategy C: Schema-Per-Tenant
- Single database, separate schemas
- SET search_path per request
- Logical isolation, shared infrastructure
DECISION
--------
We will use SCHEMA-PER-TENANT multi-tenancy (Strategy C).
Each tenant gets a dedicated PostgreSQL schema:
- shared schema: Platform-wide data (tenants, plans, features)
- tenant_xxx schema: All tenant-specific tables
The tenant is resolved from the subdomain (e.g., nexus.pos-platform.com)
and the database search_path is set accordingly.
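The subdomain-to-schema resolution can be sketched as follows. This is an illustrative Python sketch, not the platform's actual middleware; the function names and the convention of keeping `shared` on the search_path are assumptions:

```python
import re

# Illustrative sketch of ADR-001's per-request tenant resolution.
# search_path cannot be bound as a SQL parameter, so the schema name
# must be strictly validated before interpolation.
SCHEMA_PATTERN = re.compile(r"^[a-z][a-z0-9_]{0,62}$")

def resolve_schema(host: str, known_tenants: set[str]) -> str:
    """Map a request host like 'nexus.pos-platform.com' to a schema name."""
    subdomain = host.split(".")[0].lower()
    if subdomain not in known_tenants:
        raise LookupError(f"unknown tenant: {subdomain}")
    schema = f"tenant_{subdomain}"
    if not SCHEMA_PATTERN.match(schema):
        raise ValueError(f"invalid schema name: {schema}")
    return schema

def search_path_sql(schema: str) -> str:
    # 'shared' stays on the path so platform-wide tables remain visible.
    return f'SET search_path TO "{schema}", shared'
```

The allow-list lookup plus pattern check is what makes the "search_path is ALWAYS set correctly" mitigation enforceable at one choke point.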
CONSEQUENCES
------------
Positive:
+ Strong logical isolation between tenants
+ No tenant_id needed on every table (cleaner data model)
+ Easy per-tenant backup: pg_dump -n tenant_xxx
+ Easy per-tenant restore without affecting other tenants
+ Single connection pool serves all tenants
+ Simpler queries (no WHERE tenant_id = ?)
+ Compliance-friendly for audits and data requests
Negative:
- Migrations must be applied to all tenant schemas
- Cross-tenant queries require explicit schema references
- PostgreSQL has soft limit (~10,000 schemas per database)
- Slight complexity in tenant provisioning
Risks:
- Must ensure search_path is ALWAYS set correctly
- Schema migration failures could leave tenants inconsistent
- Need robust tenant provisioning automation
Mitigations:
- Middleware validates and sets search_path on every request
- Migration runner applies changes atomically per tenant
- Tenant provisioning is scripted and tested
ADR-002: Offline-First POS Architecture
+==================================================================+
| ADR-002: Offline-First POS Architecture                          |
+==================================================================+
| Status: ACCEPTED                                                 |
| Date: 2025-12-29                                                 |
| Deciders: Architecture Team                                      |
+==================================================================+
CONTEXT
-------
POS terminals operate in retail environments where network
connectivity is unreliable:
1. Internet outages occur (ISP issues, weather, accidents)
2. WiFi can be congested during peak shopping hours
3. Store networks may have maintenance windows
4. Rural locations may have poor connectivity
A traditional online-required POS would:
- Block sales during outages (lost revenue)
- Show errors during slow connections (poor UX)
- Require manual workarounds (paper receipts)
Business requirements:
- Sales must NEVER be blocked by network issues
- Receipts must print immediately
- Data must eventually sync to central system
- Inventory should be reasonably accurate
DECISION
--------
We will implement OFFLINE-FIRST architecture for POS clients.
Key design elements:
1. Local SQLite database on each POS terminal
2. All operations work against local database first
3. Event queue for pending changes
4. Background sync when connectivity available
5. Conflict resolution for concurrent changes
Data flow:
User Action -> Local DB -> Event Queue -> [Background] -> Central API
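The local-first write path and event queue can be sketched with SQLite. Table, column, and function names here are illustrative, not the terminal's actual schema:

```python
import json
import sqlite3
import uuid

# Sketch of ADR-002: writes land in the local DB first; a background
# loop later pushes unsynced rows to the central API.

def open_local_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")  # a real terminal uses a file on disk
    conn.execute("""
        CREATE TABLE IF NOT EXISTS event_queue (
            event_id   TEXT PRIMARY KEY,
            event_type TEXT NOT NULL,
            payload    TEXT NOT NULL,
            synced     INTEGER NOT NULL DEFAULT 0
        )""")
    return conn

def record_event(conn, event_type: str, payload: dict) -> str:
    """Write locally first; the sale succeeds even with no network."""
    event_id = str(uuid.uuid4())
    conn.execute(
        "INSERT INTO event_queue (event_id, event_type, payload) VALUES (?, ?, ?)",
        (event_id, event_type, json.dumps(payload)),
    )
    conn.commit()
    return event_id

def pending_events(conn) -> list:
    """What the background sync loop would push to the central API."""
    rows = conn.execute(
        "SELECT event_id, event_type, payload FROM event_queue WHERE synced = 0"
    ).fetchall()
    return [(r[0], r[1], json.loads(r[2])) for r in rows]

def mark_synced(conn, event_ids) -> None:
    conn.executemany(
        "UPDATE event_queue SET synced = 1 WHERE event_id = ?",
        [(e,) for e in event_ids],
    )
    conn.commit()
```

Keeping synced rows around (rather than deleting them) also gives the terminal a local audit trail until storage pressure forces pruning.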
CONSEQUENCES
------------
Positive:
+ Sales never blocked by network issues
+ Instant response time (local operations)
+ Resilient to any connectivity problem
+ Business continues regardless of server status
+ Better user experience for cashiers
Negative:
- Data is eventually consistent (not immediate)
- Inventory counts may drift until sync
- More complex architecture
- Conflict resolution logic required
- Local storage management needed
Risks:
- Data loss if local device fails before sync
- Inventory overselling possible during outages
- Conflict resolution edge cases
Mitigations:
- Aggressive sync when online (every 30 seconds)
- Local database backup to secondary storage
- Conservative inventory thresholds
- Clear offline indicator in UI
- Deterministic conflict resolution rules
ADR-003: Event Sourcing for Sales Domain
+==================================================================+
| ADR-003: Event Sourcing for Sales Domain                         |
+==================================================================+
| Status: ACCEPTED                                                 |
| Date: 2025-12-29                                                 |
| Deciders: Architecture Team                                      |
+==================================================================+
CONTEXT
-------
The Sales domain has specific requirements that traditional CRUD
does not adequately address:
1. Complete audit trail required (PCI-DSS compliance)
2. Need to answer "what happened?" not just "what is?"
3. Offline clients need conflict-free merge capability
4. Historical analysis (sales trends, patterns)
5. Debugging production issues by replaying events
Traditional CRUD limitations:
- Only stores current state
- Updates overwrite history
- Hard to reconstruct past states
- Audit logs separate from data model
DECISION
--------
We will use EVENT SOURCING for the Sales aggregate.
Implementation:
1. Append-only event store in PostgreSQL
2. Events are the source of truth
3. Read models (projections) for queries
4. Snapshots for performance on long streams
Events captured:
- SaleCreated, SaleLineItemAdded, PaymentReceived, SaleCompleted
- SaleVoided, RefundProcessed
- All inventory changes (InventorySold, InventoryAdjusted)
NOT event-sourced (traditional CRUD):
- Products (read-heavy, infrequent changes)
- Employees (HR data, simple lifecycle)
- Locations (configuration data)
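A minimal sketch of the append-only store and a read model rebuilt by replay. Event names follow the ADR; the store and projection shapes are illustrative, not the platform's implementation:

```python
# Sketch of ADR-003: events are the source of truth; current state is a
# fold (projection) over the stream and can always be rebuilt by replay.

class EventStore:
    def __init__(self):
        self._events = []  # append-only; rows are never updated in place

    def append(self, stream_id, event_type, data):
        self._events.append({"stream": stream_id, "type": event_type, "data": data})

    def stream(self, stream_id):
        return [e for e in self._events if e["stream"] == stream_id]

def project_sale(events):
    """Fold a sale's events into current state (the read model)."""
    state = {"lines": [], "paid": 0.0, "status": "open"}
    for e in events:
        if e["type"] == "SaleLineItemAdded":
            state["lines"].append(e["data"])
        elif e["type"] == "PaymentReceived":
            state["paid"] += e["data"]["amount"]
        elif e["type"] == "SaleCompleted":
            state["status"] = "completed"
    return state
```

Because the fold is deterministic, fixing a projection bug means fixing the fold and replaying, never editing stored events.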
CONSEQUENCES
------------
Positive:
+ Complete audit trail built into data model
+ Temporal queries ("inventory on Dec 15 at 3pm")
+ Offline sync via event merge (append-only = no conflicts)
+ Debugging by event replay
+ Analytics on event streams
+ Natural fit for CQRS pattern
Negative:
- More complex than CRUD
- Requires event versioning strategy
- Projections must be rebuilt if logic changes
- Storage grows over time (mitigated by snapshots)
- Learning curve for developers
Risks:
- Event schema evolution complexity
- Projection bugs cause stale read models
- Performance without proper snapshotting
Mitigations:
- Event versioning from day one
- Automated projection rebuild process
- Snapshot every 100 events
- Clear documentation and training
ADR-004: JWT + PIN Authentication
+==================================================================+
| ADR-004: JWT + PIN Authentication                                |
+==================================================================+
| Status: ACCEPTED                                                 |
| Date: 2025-12-29                                                 |
| Deciders: Architecture Team, Security Team                       |
+==================================================================+
CONTEXT
-------
POS systems have unique authentication requirements:
1. API access needs secure, stateless authentication
2. Cashiers need quick clock-in at physical terminals
3. Sensitive actions need additional verification
4. Multiple employees may share a terminal
5. Terminals may be offline
Requirements:
- Strong authentication for API/Admin access
- Fast authentication for cashiers (< 2 seconds)
- Manager override capability
- Works offline for cashier PIN
Industry standards:
- JWT is standard for API authentication
- PINs are standard for POS quick access
- Password + MFA for admin portal access
DECISION
--------
We will implement a HYBRID authentication system:
1. JWT for API Authentication
- Admin portal uses email + password + optional MFA
- Issues JWT token (15 min access, 7 day refresh)
- Standard Bearer token in Authorization header
2. PIN for POS Terminal Access
- 4-6 digit PIN per employee
- Stored as bcrypt hash in database
- Used for: clock-in, sale attribution, drawer access
3. Manager Override
- Sensitive actions require manager PIN
- Void, large discount, price override
- Manager enters their PIN to authorize
4. Offline PIN Validation
- Employee records with PIN hashes cached locally
- Validated against local cache when offline
- Sync employee changes when online
CONSEQUENCES
------------
Positive:
+ Secure API access with industry-standard JWT
+ Fast cashier workflow with PIN
+ Manager oversight on sensitive operations
+ Works offline for POS operations
+ Clear audit trail (who did what)
Negative:
- Two authentication systems to maintain
- PIN is less secure than password (brute force)
- Local PIN cache could be extracted
- Token refresh complexity
Risks:
- PIN guessing attacks
- Stolen JWT tokens
- Stale employee cache (terminated employee)
Mitigations:
- Rate limiting on PIN attempts (3 failures = lockout)
- Short JWT expiry (15 minutes)
- Aggressive employee sync (every 5 minutes)
- PIN attempt logging and alerting
- Secure local storage encryption
ADR-005: PostgreSQL as Primary Database
+==================================================================+
| ADR-005: PostgreSQL as Primary Database                          |
+==================================================================+
| Status: ACCEPTED                                                 |
| Date: 2025-12-29                                                 |
| Deciders: Architecture Team                                      |
+==================================================================+
CONTEXT
-------
We need a database that supports:
1. Schema-per-tenant multi-tenancy
2. JSONB for flexible event storage
3. Strong ACID guarantees for financial data
4. Good performance at scale
5. Mature ecosystem and tooling
Options considered:
- PostgreSQL: Schema support, JSONB, mature
- MySQL: Popular, but weaker schema support
- SQL Server: Good, but licensing costs
- MongoDB: Document store, limited multi-document ACID, no SQL schemas
- CockroachDB: Distributed, but complexity
DECISION
--------
We will use POSTGRESQL 16 as the primary database.
Justifications:
1. Native Row-Level Security (RLS) for multi-tenancy isolation
(Originally: schema support; updated per ADR-001 supersession)
2. Excellent JSONB for event storage
3. Strong ACID for financial transactions
4. Proven at scale (Instagram, Uber, etc.)
5. Rich extension ecosystem (PostGIS, etc.)
6. Open source, no licensing costs
7. Excellent tooling (pgAdmin, pg_dump)
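The RLS approach referenced in justification 1 can be sketched as below. The policy DDL, table names, and the `app.tenant_id` session-variable convention are illustrative assumptions, not the platform's actual migrations:

```python
# Sketch of PostgreSQL Row-Level Security for tenant isolation: a policy
# compares each row's tenant_id to a per-session setting the API sets
# before running any tenant query.

CREATE_POLICY = """
ALTER TABLE sales ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON sales
    USING (tenant_id = current_setting('app.tenant_id')::uuid);
"""

def set_tenant_sql(tenant_id: str) -> str:
    """Per-request statement issued before any tenant-scoped query."""
    # tenant_id comes from our own tenant registry, but validate anyway
    # since SET cannot take a bound parameter.
    if not tenant_id.replace("-", "").isalnum():
        raise ValueError("suspicious tenant_id")
    return f"SET app.tenant_id = '{tenant_id}'"
```

With the policy in place, queries omit tenant filtering entirely; rows belonging to other tenants are simply invisible to the session.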
CONSEQUENCES
------------
Positive:
+ Native RLS for multi-tenant data isolation (see ADR-001 supersession)
+ JSONB enables flexible event data
+ Strong consistency guarantees
+ Mature, well-documented
+ No licensing costs
+ Excellent community support
Negative:
- Single point of failure without replication
- Requires PostgreSQL expertise
- Not as horizontally scalable as NoSQL
- Schema migrations need coordination
Mitigations:
- Streaming replication for HA
- Regular backups with pg_dump
- Team training on PostgreSQL
- Migration automation tooling
ADR-006: ASP.NET Core for Central API
+==================================================================+
| ADR-006: ASP.NET Core for Central API                            |
+==================================================================+
| Status: ACCEPTED                                                 |
| Date: 2025-12-29                                                 |
| Deciders: Architecture Team                                      |
+==================================================================+
CONTEXT
-------
We need a backend framework that supports:
1. High-performance API serving
2. Strong typing for complex domain
3. Entity Framework for database access
4. SignalR for real-time features
5. Docker deployment
6. Team expertise alignment
Options considered:
- ASP.NET Core (C#): Performance, typing, EF Core
- Node.js (Express): Fast dev, but weak typing
- Go (Gin): Performance, but less ecosystem
- Python (FastAPI): ML integration, but slower
- Java (Spring): Enterprise, but verbose
Team context:
- Existing .NET experience from Bridge project
- C# used for MAUI mobile app
- Entity Framework expertise available
DECISION
--------
We will use ASP.NET CORE 8.0 for the Central API.
Justifications:
1. Exceptional performance (near Go levels)
2. Strong typing catches bugs at compile time
3. Entity Framework Core for PostgreSQL
4. Built-in SignalR for real-time
5. Excellent Docker support
6. Team already proficient in C#
7. Same language as POS client and mobile app
CONSEQUENCES
------------
Positive:
+ High performance for API workloads
+ Strong typing reduces runtime errors
+ Seamless EF Core integration
+ Built-in dependency injection
+ Excellent tooling (Visual Studio, Rider)
+ C# across entire stack (API, Client, Mobile)
Negative:
- Larger runtime footprint than Go or Rust
- Historically Windows-centric tooling (though we deploy on Linux)
- C# developers typically cost more than Node.js developers
Mitigations:
- Alpine-based Docker images minimize size
- Use VS Code or Rider on Mac/Linux
- Leverage existing team expertise
ADR Index
| ADR | Title | Status | Date |
|---|---|---|---|
| ADR-001 | Schema-Per-Tenant Multi-Tenancy | Superseded (by Row-Level RLS, Ch 04 L.10A.4) | 2025-12-29 |
| ADR-002 | Offline-First POS Architecture | Accepted | 2025-12-29 |
| ADR-003 | Event Sourcing for Sales Domain | Accepted | 2025-12-29 |
| ADR-004 | JWT + PIN Authentication | Accepted | 2025-12-29 |
| ADR-005 | PostgreSQL as Primary Database | Accepted | 2025-12-29 |
| ADR-006 | ASP.NET Core for Central API | Accepted | 2025-12-29 |
| ADR-013 | RFID Configuration in Tenant Admin | Accepted | 2026-01-01 |
Future ADRs (Planned)
| ADR | Title | Status |
|---|---|---|
| ADR-007 | React for Admin Portal | Proposed |
| ADR-008 | Electron vs Tauri for POS Client | Proposed |
| ADR-009 | Redis for Session & Cache | Proposed |
| ADR-010 | Shopify Sync Strategy | Proposed |
| ADR-011 | Payment Gateway Integration | Proposed |
| ADR-012 | Logging and Monitoring Stack | Proposed |
ADR-013: RFID Configuration Embedded in Tenant Admin Portal
+==================================================================+
| ADR-013: RFID Configuration Embedded in Tenant Admin Portal      |
+==================================================================+
| Status: ACCEPTED                                                 |
| Date: 2026-01-01                                                 |
| Deciders: Architecture Team                                      |
+==================================================================+
CONTEXT
-------
RapOS includes RFID inventory capabilities via the Raptag mobile app.
The question arose: where should RFID configuration (device management,
printer setup, tag encoding settings, templates) be managed?
We evaluated three options:
Option A: Embed in Tenant Admin Portal (app.rapos.com)
- RFID settings as feature-flagged section in existing portal
- Uses existing authentication, permissions, navigation
- Shared context with products, locations, users
Option B: Separate RFID Portal (rfid.rapos.com)
- Dedicated portal just for RFID configuration
- 4th portal in the architecture
- Independent scaling and development
Option C: Hybrid Approach
- Basic settings in Tenant Admin
- Advanced configuration in separate portal
- Users navigate between portals
Research was conducted on major RFID vendors:
- SML Clarity: Single platform, modular components
- Checkpoint HALO/ItemOptix: Unified SaaS platform
- Avery Dennison atma.io: Role-based dashboards in one platform
- Impinj ItemSense: Single Management Console
Key finding: NO major RFID vendor uses separate portals for RFID
configuration. All embed RFID features within unified platforms.
DECISION
--------
We will EMBED RFID configuration in the Tenant Admin Portal (Option A).
Implementation:
- Settings > RFID section (feature-flagged)
- Devices tab: Claim codes, device list, release
- Printers tab: IP configuration, test connectivity
- Tag Configuration tab: EPC prefix (read-only), variance thresholds
- Templates tab: Label template library
Mobile app downloads configuration from central API on startup.
No RFID configuration in the mobile app itself.
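The startup flow for the mobile app can be sketched as a fetch-with-cache-fallback. `fetch_fn`, the cache shape, and the config fields are hypothetical:

```python
import json

# Sketch of ADR-013's startup behavior: the Raptag app pulls RFID
# configuration from the central API and falls back to the last cached
# copy when offline. No configuration is editable on the device itself.

def load_rfid_config(fetch_fn, cache: dict) -> dict:
    """fetch_fn() returns the config dict from the central API or raises OSError."""
    try:
        config = fetch_fn()
        cache["last_good"] = json.dumps(config)  # persist for offline starts
        return config
    except OSError:
        if "last_good" in cache:
            return json.loads(cache["last_good"])
        raise RuntimeError("no RFID configuration available: first start requires connectivity")
```

This keeps the portal as the single source of truth while still letting a configured device start offline.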
CONSEQUENCES
------------
Positive:
+ Matches industry pattern (SML, Checkpoint, Avery Dennison)
+ Single login/URL for all tenant management
+ Shared context with products, locations, users
+ Lower development cost (one portal, not two)
+ Progressive disclosure manages complexity
+ Same permissions system applies to RFID
Negative:
- Could become bloated if RFID features grow significantly
- Enterprise customers might want dedicated RFID admin
- Feature flags add slight complexity
Risks:
- Tenant Admin may feel "cluttered" with many features
- RFID power users may want more dedicated experience
Mitigations:
- Use progressive disclosure (collapse advanced settings)
- Role-based visibility (hide RFID from non-RFID users)
- Monitor feedback; re-evaluate if enterprise demand grows
- Feature-flagged sections can be extracted later if needed
Re-evaluation Triggers:
- Multiple enterprise customers (100+ stores) request separation
- RFID feature count exceeds 20+ configuration screens
- Evidence that RFID admins are different people than Tenant admins
How to Propose a New ADR
ADR Proposal Process
====================
1. Copy the ADR template
2. Fill in Context, Decision, Consequences
3. Set Status to "proposed"
4. Submit for architecture review
5. Discuss in architecture meeting
6. Update based on feedback
7. Set Status to "accepted" when approved
8. Add to ADR Index
MADR Template (Markdown Any Decision Records)
We use the MADR (Markdown Any Decision Records) format, which is more comprehensive than the basic ADR format and better suited for complex architectural decisions.
Full MADR Template
# ADR-XXX: [Short Title of Solved Problem and Solution]
## Status
[proposed | accepted | deprecated | superseded by ADR-YYY]
## Date
YYYY-MM-DD
## Decision-Makers
- [Name/Role 1]
- [Name/Role 2]
## Technical Story
[Link to ticket/issue: JIRA-123, GitHub Issue #456]
## Context and Problem Statement
[Describe the context and problem statement, e.g., in free form
using two to three sentences or in the form of an illustrative
story. You may want to articulate the problem in form of a question.]
## Decision Drivers
* [Driver 1, e.g., a force, facing concern, …]
* [Driver 2, e.g., a force, facing concern, …]
* [Driver 3, e.g., a force, facing concern, …]
## Considered Options
1. [Option 1]
2. [Option 2]
3. [Option 3]
4. [Option 4]
## Decision Outcome
**Chosen Option**: "[Option X]"
### Justification
[Justification for why this option was chosen. Reference the
decision drivers and explain how this option best addresses them.]
### Positive Consequences
* [e.g., improvement of quality attribute satisfaction, follow-up
decisions required, …]
* …
### Negative Consequences
* [e.g., compromising quality attribute, follow-up decisions required,
technical debt introduced, …]
* …
## Pros and Cons of the Options
### [Option 1]
[Example: Schema-per-tenant multi-tenancy]
**Pros:**
* Good, because [argument a]
* Good, because [argument b]
**Cons:**
* Bad, because [argument c]
* Bad, because [argument d]
### [Option 2]
[Example: Row-level multi-tenancy]
**Pros:**
* Good, because [argument a]
* Good, because [argument b]
**Cons:**
* Bad, because [argument c]
### [Option 3]
[Example: Database-per-tenant]
**Pros:**
* Good, because [argument a]
**Cons:**
* Bad, because [argument b]
* Bad, because [argument c]
## Links
* [Link type] [Link to ADR] <!-- example: Refined by ADR-007 -->
* [Link type] [Link to external resource]
* Supersedes ADR-XXX
* Related to ADR-YYY
## Notes
[Any additional notes, discussion points, or future considerations]
MADR Example: Kafka Selection
# ADR-014: Apache Kafka for Event Streaming
## Status
accepted
## Date
2026-01-15
## Decision-Makers
- Architecture Team
- Infrastructure Team
## Technical Story
ARCH-456: Select event streaming platform for POS event sourcing
## Context and Problem Statement
Our POS platform uses event sourcing for the Sales and Inventory
domains. We need an event streaming platform that supports:
- Event replay for new consumers
- Durable storage for audit compliance
- High throughput during peak retail periods (Black Friday)
- Multi-datacenter replication for disaster recovery
Which event streaming platform should we use?
## Decision Drivers
* Replayability - New analytics services must process historical events
* Durability - Events must survive broker failures (PCI compliance)
* Throughput - Handle 10,000+ events/second during peak
* Ecosystem - Good client libraries for .NET
* Operations - Team can manage without dedicated staff
## Considered Options
1. Apache Kafka
2. RabbitMQ with Shovel plugin
3. Amazon Kinesis
4. Redis Streams
5. PostgreSQL LISTEN/NOTIFY
## Decision Outcome
**Chosen Option**: "Apache Kafka (with KRaft mode)"
### Justification
Kafka is the only option that provides true event replayability with
configurable retention. New consumers can start from the beginning
of the log and process all historical events. This is critical for:
- Adding new analytics modules
- Rebuilding projections after bugs
- Audit investigations
KRaft mode eliminates ZooKeeper dependency, simplifying operations.
### Positive Consequences
* Complete replayability for compliance and analytics
* Proven at massive scale (LinkedIn, Uber)
* Strong .NET client (Confluent.Kafka)
* Schema Registry for event versioning
### Negative Consequences
* More complex than RabbitMQ
* Requires understanding of partitioning
* Higher resource usage than simpler queues
## Pros and Cons of the Options
### Apache Kafka
**Pros:**
* Good, because events are retained for configurable duration
* Good, because consumers can replay from any offset
* Good, because it handles 100K+ messages/second
* Good, because KRaft mode simplifies deployment
**Cons:**
* Bad, because it requires more operational knowledge
* Bad, because partition management adds complexity
### RabbitMQ with Shovel
**Pros:**
* Good, because it's simpler to operate
* Good, because team has existing experience
**Cons:**
* Bad, because messages are deleted after consumption
* Bad, because replay requires external archival
### Amazon Kinesis
**Pros:**
* Good, because it's fully managed
* Good, because it has replay capability
**Cons:**
* Bad, because of vendor lock-in
* Bad, because pricing is complex at scale
### Redis Streams
**Pros:**
* Good, because it's simple
* Good, because it's low latency
**Cons:**
* Bad, because durability is limited
* Bad, because it's not designed for long-term storage
### PostgreSQL LISTEN/NOTIFY
**Pros:**
* Good, because no additional infrastructure
**Cons:**
* Bad, because it doesn't scale
* Bad, because messages are ephemeral
## Links
* Refined by ADR-015 (Schema Registry Selection)
* Related to ADR-003 (Event Sourcing for Sales Domain)
* [Kafka Documentation](https://kafka.apache.org/documentation/)
## Notes
Evaluated during Q1 2026 architecture review. Confluent Cloud was
considered but rejected due to cost; self-hosted Kafka preferred.
**UPDATE (v3.0.0)**: Kafka is **deferred to v2.0**. Per the Architecture
Styles analysis (Chapter 04, Section L.4A.2),
v1.0 uses PostgreSQL event tables with LISTEN/NOTIFY for event notification
and Transactional Outbox for guaranteed delivery. This ADR remains valid
for v2.0 planning when scale justifies the Kafka operational overhead.
ADR Tooling & Automation
Recommended Tools
| Tool | Purpose | Installation |
|---|---|---|
| adr-tools | CLI for creating/managing ADRs | brew install adr-tools |
| Log4brains | ADR documentation site generator | npm install -g log4brains |
| adr-viewer | Web-based ADR viewer | Docker image available |
ADR Tools CLI
# Install adr-tools
brew install adr-tools   # macOS (Homebrew)
# On Linux, install from source: https://github.com/npryce/adr-tools
# Initialize ADR directory
adr init docs/adr
# Create new ADR
adr new "Use Kafka for Event Streaming"
# Creates: docs/adr/0014-use-kafka-for-event-streaming.md
# Supersede an ADR
adr new -s 3 "Replace Event Sourcing with Outbox Pattern"
# Creates new ADR that supersedes ADR-003
# List all ADRs
adr list
# Generate ADR index
adr generate toc > docs/adr/README.md
Log4brains Integration
Log4brains generates a searchable documentation website from ADRs:
# Install Log4brains
npm install -g log4brains
# Initialize in project
log4brains init
# Start preview server
log4brains preview
# Build static site
log4brains build
# Deploy to GitHub Pages
log4brains build --basePath /pos-platform-adr
# .github/workflows/adr-docs.yml
name: ADR Documentation
on:
  push:
    branches: [main]
    paths:
      - 'docs/adr/**'
jobs:
  build-adr-site:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for dates
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install Log4brains
        run: npm install -g log4brains
      - name: Build ADR site
        run: log4brains build --basePath /pos-platform-adr
      - name: Deploy to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: .log4brains/out
ADR Linting
# .github/workflows/adr-lint.yml
name: ADR Lint
on:
  pull_request:
    paths:
      - 'docs/adr/**'
jobs:
  lint-adr:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate ADR Format
        run: |
          for file in docs/adr/*.md; do
            # Check required sections
            if ! grep -q "## Status" "$file"; then
              echo "ERROR: $file missing Status section"
              exit 1
            fi
            if ! grep -q "## Context" "$file" && ! grep -q "## Context and Problem Statement" "$file"; then
              echo "ERROR: $file missing Context section"
              exit 1
            fi
            if ! grep -q "## Decision" "$file" && ! grep -q "## Decision Outcome" "$file"; then
              echo "ERROR: $file missing Decision section"
              exit 1
            fi
          done
          echo "All ADRs pass validation"
      - name: Check ADR Numbering
        run: |
          # Ensure sequential numbering (filenames are zero-padded, e.g. 0001-...)
          expected=1
          for file in docs/adr/[0-9]*.md; do
            # Force base-10 so leading zeros are not read as octal
            num=$((10#$(basename "$file" | grep -o '^[0-9]*')))
            if [ "$num" -ne "$expected" ]; then
              echo "WARNING: Expected ADR-$expected, found ADR-$num"
            fi
            expected=$((expected + 1))
          done
ADR Review Checklist
# ADR Review Checklist
Before accepting an ADR, verify:
## Structure
- [ ] Uses MADR template
- [ ] Has clear title
- [ ] Status is set correctly
- [ ] Date is current
- [ ] Decision-makers are listed
## Content Quality
- [ ] Context clearly explains the problem
- [ ] Decision drivers are explicit
- [ ] At least 3 options were considered
- [ ] Pros/cons are documented for each option
- [ ] Chosen option justification references drivers
## Completeness
- [ ] Positive consequences listed
- [ ] Negative consequences listed (be honest!)
- [ ] Risks identified
- [ ] Mitigations proposed for risks
- [ ] Links to related ADRs
## Traceability
- [ ] Linked to technical story/ticket
- [ ] References relevant documentation
- [ ] Supersedes/relates to other ADRs if applicable
## Approval
- [ ] Architecture team reviewed
- [ ] Security team reviewed (if applicable)
- [ ] Infrastructure team reviewed (if applicable)
Summary
These Architecture Decision Records capture the foundational technical decisions for the POS Platform:
| ADR | Key Decision | Primary Benefit |
|---|---|---|
| ADR-001 | Tenant isolation via tenant_id + PostgreSQL RLS policies | Strong isolation without per-tenant schema sprawl |
| ADR-002 | Offline-first | Sales never blocked by network |
| ADR-003 | Event sourcing | Complete audit trail and temporal queries |
| ADR-004 | JWT + PIN | Secure API + fast cashier workflow |
| ADR-005 | PostgreSQL | Native RLS support and JSONB flexibility |
| ADR-006 | ASP.NET Core | Performance and unified C# stack |
| ADR-013 | RFID in Tenant Admin | Industry-standard pattern, shared context |
These decisions form the architectural foundation upon which the rest of the system is built.
Document Information
| Attribute | Value |
|---|---|
| Version | 5.0.0 |
| Created | 2025-12-29 |
| Updated | 2026-02-25 |
| Author | Claude Code |
| Status | Active |
| Part | II - Architecture |
| Chapter | 02 of 32 |
Change Log
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2025-12-29 | Initial ADRs (001-006) |
| 2.0.0 | 2026-01-01 | Added ADR-013 (RFID Configuration), MADR template, tooling section |
| 3.0.0 | 2026-02-22 | ADR-001 marked SUPERSEDED (Schema-Per-Tenant replaced by Row-Level RLS per Ch 04 L.10A.4); added Kafka v2.0 deferral note to ADR-014 example (per Ch 04 L.4A.2); fixed Next Chapter link; renumbered chapter references for v3.0.0 |
Next Chapter: Chapter 03: Architecture Characteristics
This chapter is part of the POS Blueprint Book. All content is self-contained.