Quality Attributes Cheat Sheet

Quick reference card for software quality attributes, architectural tactics, and trade-off analysis.

Quality Attributes at a Glance

Attribute        | Definition          | Key Metric
-----------------|---------------------|-------------------------------
Availability     | System uptime       | % uptime (nines)
Performance      | Response speed      | Latency (ms), throughput (RPS)
Scalability      | Handle growth       | Max load, scaling factor
Security         | Resist attacks      | Vulnerabilities, compliance
Reliability      | Consistent behavior | MTBF, error rate
Maintainability  | Ease of change      | Change lead time, defect rate
Testability      | Ease of testing     | Code coverage, test time
Modifiability    | Adapt to change     | Coupling, cohesion
Usability        | User effectiveness  | Task completion, satisfaction
Interoperability | Work with others    | Standards compliance

Availability

THE NINES
─────────────────────────────────────────────
99%      = 3.65 days/year downtime
99.9%    = 8.76 hours/year downtime
99.95%   = 4.38 hours/year downtime
99.99%   = 52.6 minutes/year downtime
99.999%  = 5.26 minutes/year downtime

FORMULA
           MTBF
Avail = ─────────────
        MTBF + MTTR

SERIES:   A × B × C
PARALLEL: 1 - (1-A)(1-B)
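
The formulas above are easy to sanity-check in code. A minimal sketch (the MTBF/MTTR figures are illustrative, not from any real system):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Avail = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*components: float) -> float:
    """Components in series: all must be up (A x B x C)."""
    result = 1.0
    for a in components:
        result *= a
    return result

def parallel(a: float, b: float) -> float:
    """Two redundant components: down only if both fail (1 - (1-A)(1-B))."""
    return 1 - (1 - a) * (1 - b)

web = availability(mtbf_hours=720, mttr_hours=1)   # ~0.99861
db  = availability(mtbf_hours=2000, mttr_hours=4)  # ~0.99800

print(series(web, db))     # a chain is less available than its weakest link
print(parallel(web, web))  # redundancy pushes availability up
```

Note how series composition only ever lowers availability, which is why long synchronous call chains hurt the nines.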

TACTICS
├── Redundancy (active-active, active-passive)
├── Failover (automatic, manual)
├── Health checks (liveness, readiness)
├── Circuit breakers (fail fast)
└── Graceful degradation

Performance

KEY METRICS
─────────────────────────────────────────────
Latency:    Time to respond (ms)
Throughput: Requests per second (RPS)
Utilization: Resource usage (%)

PERCENTILES (Report these, not averages)
├── p50: Median experience
├── p95: Most users' worst case
└── p99: Edge case (matters at scale)
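
The percentile guidance can be illustrated with a simple nearest-rank calculation; the latency samples below are made up:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: value at rank ceil(p/100 * n)."""
    ranked = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(ranked)) - 1)
    return ranked[k]

latencies_ms = [12, 15, 14, 13, 250, 16, 14, 13, 15, 900]

print(percentile(latencies_ms, 50))        # → 14  (typical request)
print(percentile(latencies_ms, 99))        # → 900 (the tail users feel)
print(sum(latencies_ms) / len(latencies_ms))  # → 126.2, the average hides both
```

The average sits between the median and the tail and describes neither, which is why the card says to report percentiles.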

TACTICS
─────────────────────────────────────────────
REDUCE DEMAND
├── Caching (browser, CDN, app, DB)
├── Compression (gzip, Brotli)
└── Pagination (cursor-based)

MANAGE RESOURCES
├── Connection pooling
├── Thread pooling
└── Async processing

OPTIMIZE DATA ACCESS
├── Indexing (B-tree, hash)
├── Query optimization (EXPLAIN)
└── N+1 query elimination

SCALE OUT
├── Horizontal scaling
├── Load balancing
└── Sharding
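
The app-level caching tactic from "reduce demand" can be sketched as a minimal in-memory TTL cache; the class and names are hypothetical, not a real library:

```python
import time

class TTLCache:
    """Minimal time-to-live cache: entries expire, evicted lazily on read."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: drop it and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=30)
cache.set("user:42", {"name": "Ada"})
print(cache.get("user:42"))
```

A real deployment would add size bounds and an eviction policy (LRU), but the core trade-off is visible here: staleness (the TTL) bought in exchange for reduced demand on the backing store.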

Security

CIA TRIAD
─────────────────────────────────────────────
Confidentiality: Who can see it?
Integrity:       Is it accurate?
Availability:    Can I access it?

ZERO TRUST PRINCIPLES
├── Verify explicitly (never implicit trust)
├── Least privilege (minimum necessary)
└── Assume breach (design for compromise)

DEFENSE IN DEPTH
─────────────────────────────────────────────
┌─ Policies & Procedures
├─ Physical Security
├─ Perimeter (Firewall, WAF)
├─ Network (Segmentation, IDS)
├─ Host (Hardening, Patching)
├─ Application (Input validation)
└─ Data (Encryption)

OWASP TOP 3 DEFENSES
1. Validate all inputs (server-side)
2. Parameterized queries (prevent injection)
3. Encode output (prevent XSS)
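
Defense #2 can be demonstrated with the standard-library sqlite3 module; the table and payload below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# UNSAFE: interpolating input rewrites the query's logic
unsafe = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# SAFE: the placeholder keeps the payload as literal data
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [(1, 'alice')] -- injection matched every row
print(safe)    # []             -- no user is literally named the payload
```

The same qmark/placeholder discipline applies to every database driver; string concatenation into SQL is never safe to "sanitize" by hand.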

ENCRYPTION REQUIREMENTS
├── In Transit: TLS 1.2+
├── At Rest: AES-256
└── Secrets: Key Vault / Secrets Manager

Scalability

SCALING TYPES
─────────────────────────────────────────────
VERTICAL (Scale Up)
├── Add more CPU/RAM to existing instance
├── Simple, no code changes
├── Hard limits exist
└── Single point of failure

HORIZONTAL (Scale Out)
├── Add more instances
├── Requires stateless design
├── No hard limits
└── Better resilience

SCALING LAWS
─────────────────────────────────────────────
LINEAR:     2x resources = 2x throughput
SUBLINEAR:  Diminishing returns (contention)
AMDAHL'S:   Speedup limited by serial portion
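
Amdahl's law is worth seeing numerically; a sketch assuming an illustrative 10% serial fraction:

```python
def amdahl_speedup(serial_fraction: float, n_workers: int) -> float:
    """Speedup = 1 / (s + (1 - s)/n): the serial part never gets faster."""
    return 1 / (serial_fraction + (1 - serial_fraction) / n_workers)

# With 10% serial work, speedup is capped below 10x at any scale:
for n in (2, 10, 100, 10_000):
    print(n, round(amdahl_speedup(0.10, n), 2))
```

This is the "sublinear" row made concrete: past a point, adding instances mostly pays for the serial bottleneck, so the highest-leverage move is shrinking the serial fraction itself.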

TACTICS
├── Stateless services
├── External session storage
├── Load balancing
├── Database read replicas
├── Sharding/partitioning
└── Caching layers
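
The sharding tactic reduces to routing each key to a shard by a stable hash; a sketch with an illustrative shard count:

```python
import hashlib

N_SHARDS = 4  # illustrative; real systems pick this from capacity planning

def shard_for(key: str, n_shards: int = N_SHARDS) -> int:
    """Route a key to a shard deterministically."""
    # md5 is stable across processes; Python's built-in hash() is
    # randomized per run and would route the same key differently.
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_shards

print(shard_for("user:42"))  # same key always lands on the same shard
```

Plain modulo has a known cost: changing N_SHARDS remaps most keys, forcing a bulk data move. Consistent hashing is the usual refinement when the shard count must change online.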

Reliability

KEY METRICS
─────────────────────────────────────────────
MTBF: Mean Time Between Failures
MTTR: Mean Time To Repair
RTO:  Recovery Time Objective
RPO:  Recovery Point Objective

TACTICS
─────────────────────────────────────────────
FAULT PREVENTION
├── Input validation
├── Resource limits
└── Capacity planning

FAULT TOLERANCE
├── Retry with backoff
├── Circuit breaker
├── Bulkhead isolation
└── Timeout handling
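
The retry-with-backoff tactic can be sketched as follows; the limits are illustrative defaults, and `flaky_call` is a stand-in for any transient-failure-prone operation:

```python
import random
import time

def retry(operation, max_attempts=5, base_delay=0.1, max_delay=2.0):
    """Call operation(), retrying transient failures with jittered backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # budget exhausted: surface the failure
            # exponential backoff (0.1s, 0.2s, 0.4s, ...) capped at max_delay;
            # full jitter spreads out retries so clients don't stampede
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))

attempts = 0
def flaky_call():  # hypothetical operation: fails twice, then succeeds
    global attempts
    attempts += 1
    if attempts < 3:
        raise ConnectionError("transient")
    return "ok"

print(retry(flaky_call, base_delay=0.01))  # → ok
```

Retries amplify load on an already-struggling dependency, which is why this tactic is usually paired with the circuit breaker above: retry handles blips, the breaker handles outages.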

FAULT RECOVERY
├── Automated failover
├── Backup/restore
└── Rollback capability

Maintainability & Modifiability

COUPLING (Lower is better)
─────────────────────────────────────────────
TIGHT COUPLING
├── Direct dependencies
├── Shared databases
├── Synchronous calls
└── Breaks on change

LOOSE COUPLING
├── Interfaces/abstractions
├── Message queues
├── API contracts
└── Independent deployment
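
Loose coupling through an interface can be sketched with a typing.Protocol; every class name here is hypothetical:

```python
from typing import Protocol

class Notifier(Protocol):
    """The contract: callers depend on this, not on a concrete transport."""
    def send(self, user_id: int, message: str) -> None: ...

class EmailNotifier:
    def send(self, user_id: int, message: str) -> None:
        print(f"email to {user_id}: {message}")  # real transport elided

class OrderService:
    # Coupled to the Notifier contract only; any conforming object works.
    def __init__(self, notifier: Notifier):
        self.notifier = notifier

    def place_order(self, user_id: int) -> None:
        # ... persist order ...
        self.notifier.send(user_id, "order confirmed")

OrderService(EmailNotifier()).place_order(42)
```

The same shape buys testability for free: a fake Notifier can be injected in tests, which is exactly the dependency-injection tactic listed in the table below.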

COHESION (Higher is better)
─────────────────────────────────────────────
HIGH COHESION
├── Single responsibility
├── Related functions grouped
├── Clear purpose
└── Easy to understand

DESIGN PRINCIPLES
├── SOLID principles
├── DRY (Don't Repeat Yourself)
├── KISS (Keep It Simple)
├── YAGNI (You Aren't Gonna Need It)
└── Separation of concerns

Trade-offs Matrix

COMMON TRADE-OFFS
─────────────────────────────────────────────

Availability  ←→  Cost
              ←→  Consistency

Performance   ←→  Security
              ←→  Maintainability

Security      ←→  Usability
              ←→  Performance

Scalability   ←→  Simplicity
              ←→  Consistency (CAP)

CAP THEOREM (Distributed Systems)
─────────────────────────────────────────────
        Consistency
           /\
          /  \
         /    \
        / Pick \
       /   2    \
      /──────────\
Availability    Partition
                Tolerance

CP: Consistent, Partition-tolerant (sacrifice availability)
AP: Available, Partition-tolerant (sacrifice consistency)
CA: Consistent, Available (single node only)

Quality Attribute Scenarios

SCENARIO TEMPLATE
─────────────────────────────────────────────
SOURCE:      Who/what generates the stimulus?
STIMULUS:    What event occurs?
ARTIFACT:    What part of system is affected?
ENVIRONMENT: Under what conditions?
RESPONSE:    What should happen?
MEASURE:     How do we know it worked?

EXAMPLE: AVAILABILITY
─────────────────────────────────────────────
Source:      External user
Stimulus:    Server failure
Artifact:    Order service
Environment: Normal operation
Response:    Failover to backup
Measure:     < 30 seconds recovery

EXAMPLE: PERFORMANCE
─────────────────────────────────────────────
Source:      1000 concurrent users
Stimulus:    Product search request
Artifact:    Search service
Environment: Peak load
Response:    Return results
Measure:     p99 latency < 500ms

Quick Tactics Reference

Attribute       | Top 3 Tactics
----------------|------------------------------------------------------------
Availability    | Redundancy, Failover, Health checks
Performance     | Caching, Async processing, Indexing
Security        | Authentication, Encryption, Input validation
Scalability     | Horizontal scaling, Stateless design, Load balancing
Reliability     | Retry, Circuit breaker, Graceful degradation
Maintainability | Loose coupling, High cohesion, Modular design
Testability     | Dependency injection, Interface abstraction, Test isolation

Assessment Checklist

QUALITY ATTRIBUTE REVIEW
─────────────────────────────────────────────
□ Requirements defined with measurable targets?
□ Scenarios documented (stimulus → response)?
□ Tactics selected and documented?
□ Trade-offs explicitly acknowledged?
□ Monitoring in place to measure attributes?
□ Tests validate attribute requirements?
