Last updated: March 16, 2026

Request for Comments (RFC) documents serve as the backbone of asynchronous decision making in distributed engineering organizations. When implemented effectively, RFCs enable teams to make thoughtful, documented decisions without requiring real-time synchronization, which proves especially valuable across time zones.

Table of Contents

- What Makes RFCs Effective for Async Decision Making
- Structuring an RFC Document
- Implementing an RFC Workflow
- Practical Example: Database Migration Decision
- Managing RFC Review Effectively
- Common Pitfalls to Avoid
- RFC Tools and Workflow Integration
- Real RFC Examples
- Running Efficient RFC Review Periods
- Learning from Decisions
- Frequently Asked Questions

What Makes RFCs Effective for Async Decision Making

An RFC document captures a proposed change, its rationale, alternatives considered, and the expected impact. Unlike quick Slack messages or impromptu video calls, RFCs create a persistent, reviewable record that future team members can reference.

The async nature of RFCs provides several key benefits:

- Contributors in any time zone can weigh in without scheduling a meeting
- Writing a proposal down forces clearer thinking than a verbal pitch
- The document preserves context and rationale for future reference
- Reviewers can respond when they have focused time, not when a calendar slot opens

Structuring an RFC Document

A well-structured RFC contains specific sections that guide both the author and reviewers through the decision-making process.

# RFC: [Short Title]

## Problem Statement
Why is this change needed? What pain point does it address?

## Proposed Solution
Detailed description of the proposed approach.

## Alternatives Considered
What other approaches were evaluated, and why were they rejected?

## Implementation Plan
Phases or steps for implementing the decision.

## Open Questions
Issues still requiring discussion or clarification.

## Success Metrics
How will we know this decision was correct?

## Timeline
Key dates and milestones.

This structure ensures consistency across proposals and helps reviewers know exactly what to expect.
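Because the template is fixed, conformance can be checked automatically before a review starts. Below is a minimal sketch of such a check; the function name `missing_sections` and the exact required-section list are illustrative choices, not part of any established tool.

```python
import re

# Sections a well-formed RFC is expected to contain, per the template above.
REQUIRED_SECTIONS = [
    "Problem Statement",
    "Proposed Solution",
    "Alternatives Considered",
    "Implementation Plan",
    "Open Questions",
    "Success Metrics",
    "Timeline",
]

def missing_sections(rfc_text: str) -> list[str]:
    """Return the required '## ' headings absent from an RFC document."""
    found = {m.group(1).strip()
             for m in re.finditer(r"^##\s+(.+)$", rfc_text, re.MULTILINE)}
    return [s for s in REQUIRED_SECTIONS if s not in found]
```

Running a check like this in CI lets reviewers spend their time on substance rather than on pointing out that the alternatives section is missing.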

Implementing an RFC Workflow

Establishing a clear workflow prevents RFCs from languishing in review limbo. Define explicit time frames for each phase.

Phase 1: Draft and Initial Feedback

The author creates the RFC and requests initial feedback. Use labels to track status:

# Example GitHub label workflow
gh label create rfc --color "fbca04" --description "Request for Comments"
gh label create rfc-draft --color "d93f0b" --description "RFC in draft stage"
gh label create rfc-review --color "1d76db" --description "RFC under review"
gh label create rfc-approved --color "0e8a16" --description "RFC approved"
gh label create rfc-closed --color "6e6e6e" --description "RFC not approved"

Phase 2: Review Period

Set a minimum review period—72 hours works well for global teams to accommodate different time zones. During this phase, reviewers add comments, suggest modifications, or express concerns.
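The 72-hour minimum is easy to enforce mechanically. The sketch below assumes timestamps are timezone-aware; the helper names are hypothetical, not from any particular bot or library.

```python
from datetime import datetime, timedelta, timezone

REVIEW_PERIOD = timedelta(hours=72)  # minimum window before a decision

def review_closes_at(opened_at: datetime) -> datetime:
    """Earliest moment a decision may be rendered: 72h after opening, in UTC."""
    return opened_at.astimezone(timezone.utc) + REVIEW_PERIOD

def review_period_elapsed(opened_at: datetime, now: datetime) -> bool:
    """True once the minimum review window has passed."""
    return now.astimezone(timezone.utc) >= review_closes_at(opened_at)
```

Normalizing to UTC matters here: a review opened at 5 PM in San Francisco and checked from Berlin must agree on when the window closes.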

Encourage reviewers to use specific markers:

- blocking: a concern that must be resolved before approval
- non-blocking: a preference or suggestion, not a veto
- question: a request for clarification
- +1: explicit support for the proposal as written

Phase 3: Decision and Documentation

After the review period, a designated decision-maker (often a tech lead or architect) summarizes feedback and renders a decision. Document the outcome clearly:

## Decision

**Status**: Approved / Rejected / Deferred

**Summary of Key Feedback**:
- [Feedback point 1]
- [Feedback point 2]

**Action Items**:
- [ ] Item 1
- [ ] Item 2

Practical Example: Database Migration Decision

Consider a team deciding whether to migrate from PostgreSQL 13 to PostgreSQL 16. An RFC document captures this decision:

# RFC: Upgrade PostgreSQL 13 to 16

## Problem Statement
PostgreSQL 13 reaches end-of-life in November 2025. Running unsupported database versions introduces security risks and prevents access to performance improvements.

## Proposed Solution
Perform a blue-green deployment: stand up a PostgreSQL 16 instance alongside the current primary, replicate data to it, then promote it to primary.

## Alternatives Considered
1. **Stay on PostgreSQL 13 with extended support**: Costs $5,000/year, delays access to new features
2. **Migrate to managed database service**: Exceeds current budget by 40%

## Implementation Plan
1. Test upgrade in staging environment (Week 1)
2. Schedule maintenance window (Week 2)
3. Perform blue-green deployment (Week 2)
4. Monitor performance metrics (Week 3)

## Success Metrics
- Query performance improvement of 15% or greater
- Zero data loss during migration
- Rollback capability demonstrated in staging

This document gives every stakeholder—developers, operations, management—a clear understanding of what changes, why it matters, and what success looks like.

Managing RFC Review Effectively

Large RFCs overwhelm reviewers. Break complex decisions into smaller, focused documents. If an RFC exceeds 2,000 words, consider splitting it into multiple interconnected RFCs.

Set expectations for response times. A reasonable SLA:

- Acknowledge a new RFC within one business day
- Complete an initial review pass within the 72-hour review period
- Author responds to review comments within two business days

Use asynchronous voting for non-controversial decisions. When an RFC receives no significant concerns after the review period, the default assumption can be approval—explicitly state this in your team conventions.
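That default-approval convention (often called lazy consensus) reduces to a small decision rule. The sketch below assumes comments carry the review markers described earlier; the `marker` field and function name are illustrative.

```python
def lazy_consensus(comments: list[dict], review_period_elapsed: bool) -> str:
    """Default-approve an RFC after the review period unless a blocking
    concern was raised. Each comment dict is assumed to carry a 'marker'
    field such as 'blocking', 'non-blocking', or 'question'."""
    if not review_period_elapsed:
        return "in-review"
    if any(c.get("marker") == "blocking" for c in comments):
        return "needs-discussion"
    return "approved"  # silence plus the elapsed window counts as consent
```

The key property: only an explicit blocking marker stops approval, so reviewers must spend a veto deliberately rather than by simply staying silent.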

Common Pitfalls to Avoid

RFCs fail when they become performative exercises rather than genuine decision-making tools. Avoid these mistakes:

- Writing the RFC after the decision has already been made
- Treating silence as agreement without stating that convention explicitly
- Letting RFCs sit in review indefinitely with no decision deadline
- Skipping the alternatives section because the author is already convinced
- Using RFCs for trivial decisions that a quick message could settle

RFC Tools and Workflow Integration

Successful RFC processes use tools that make submission and review frictionless. The right choice depends on team size.

Tool Options for Different Team Sizes

Small teams (5-15): shared documents (Google Docs, Notion) with comment threads; lightweight process, minimal tooling

Mid-size (15-50): pull requests against a documentation repository, with labels and required reviewers for status tracking

Large teams (50+): a dedicated rfcs repository with numbered documents, ownership-based review routing, and archived decision records

GitHub-based RFC Workflow (Recommended for Technical Teams)

Use a dedicated rfcs repository with this structure:

rfcs/
├── text/
│   ├── 0001-new-database-migration.md
│   ├── 0002-microservice-architecture.md
│   └── 0003-auth-system-redesign.md
├── README.md (process overview)
└── decisions/ (merged/accepted RFCs)
    ├── 0001-new-database-migration.md
    └── 0002-microservice-architecture.md
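Assigning the next number in that `text/` directory is worth automating so two authors don't claim `0004` at once. A minimal sketch, assuming the `NNNN-title.md` naming convention shown above (the function name is hypothetical):

```python
import re
from pathlib import Path

def next_rfc_number(text_dir: Path) -> str:
    """Scan a directory of NNNN-title.md files and return the next
    zero-padded four-digit RFC number."""
    numbers = []
    for path in text_dir.glob("*.md"):
        m = re.match(r"(\d{4})-", path.name)
        if m:
            numbers.append(int(m.group(1)))
    return f"{max(numbers, default=0) + 1:04d}"
```

A pre-merge CI check using the same logic can reject a PR whose filename reuses an existing number.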

Create a pull request for the RFC. GitHub labels and the review process handle:

- Status tracking through the label workflow described earlier
- Threaded, line-level discussion directly on the proposal text
- A permanent, searchable approval record once the PR merges

Real RFC Examples

Example 1: Microservices Migration (Complex, Cross-Team Impact)

# RFC: Migrate from Monolith to Microservices

## Problem Statement
Our monolithic Rails app has reached 200K LOC. Deployments take 45 minutes.
Feature work in one domain blocks unrelated features. Database queries are
slow due to N+1 problems across modules.

## Proposed Solution
Migrate to microservices:
1. Extract payment domain into separate service (3 month timeline)
2. Extract user management domain (2 months)
3. Extract notification domain (1 month)
4. Keep core app for remaining features

## Alternatives Considered
1. Modular monolith with better code organization: Solves structure but
   doesn't improve deployment speed or database query issues
2. Full microservices immediately: Too risky, requires 12-month rewrite
3. Do nothing: Team velocity continues declining 15%/quarter

## Timeline
- Month 1: Design payment service API, set up infrastructure
- Month 2: Extract payment logic, parallel test with existing app
- Month 3: Cutover to payment microservice
- Month 4-6: Extract user management
- Month 7: Extract notifications
- Month 8: Evaluate results, decide next steps

## Risks & Mitigation
- Risk: Distributed systems complexity increases debugging difficulty
  Mitigation: Implement structured logging (ELK stack, $500/month) and
  distributed tracing (Jaeger)

- Risk: Network latency between services impacts performance
  Mitigation: Benchmark in staging, accept <100ms additional latency

## Success Metrics
- Deployment time reduced from 45 to 15 minutes
- New feature time to ship reduced by 30%
- Database query p99 latency reduced by 50%
- Measure at end of each service extraction

Example 2: Engineering Process Change (Moderate Impact, Cross-Team)

# RFC: Implement Code Review SLA

## Problem Statement
Code reviews currently take 24-72 hours. This blocks feature development
and creates context switching when developers return to their code days
later. Team members report frustration waiting for reviews.

## Proposed Solution
Implement 4-hour maximum code review SLA:
- PRs opened during work hours: reviewed within 4 hours
- PRs opened outside work hours: reviewed by 11 AM next business day
- Reviewers add self to PR queue via rotation schedule
- Small PRs (<200 lines) prioritized for faster turnaround

## Implementation Details
1. Create GitHub label: `waiting-review`
2. Implement bot (GitHub Actions, free) that escalates PRs waiting >4h
3. Assign reviewers via rotation (Alice week 1, Bob week 2, etc)
4. Track SLA compliance in weekly metrics

## Risks & Mitigation
- Risk: Forcing fast reviews reduces quality
  Mitigation: A fast review need not be a shallow review. Use draft PRs to
  request feedback earlier, and track quality metrics to confirm they stay
  unchanged.

- Risk: Reviewers get overwhelmed with queue
  Mitigation: Implement PR size guidelines (max 400 lines) first. Large PRs
  get assigned earlier in the week.

## Success Metrics
- 90% of PRs reviewed within 4 hours
- Average review time under 8 hours
- Quality metrics (bugs found, regression rate) unchanged
- Team satisfaction survey shows improvement

## Timeline
- Week 1: Communicate change, explain rationale
- Week 2: Deploy bot, establish rotation
- Week 3: Monitor and adjust expectations
- Week 4: Review first weekly metrics
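The escalation bot from the implementation details above boils down to a simple filter over open PRs. This is a sketch of that logic only; the PR dicts stand in for whatever a GitHub Actions job would actually fetch from the API, and the field names are assumptions.

```python
from datetime import datetime, timedelta, timezone

SLA = timedelta(hours=4)  # maximum wait before escalation

def prs_to_escalate(prs: list[dict], now: datetime) -> list[int]:
    """Return numbers of PRs labelled 'waiting-review' that have exceeded
    the 4-hour SLA. Each PR dict carries number, labels, and opened_at."""
    overdue = []
    for pr in prs:
        if "waiting-review" in pr["labels"] and now - pr["opened_at"] > SLA:
            overdue.append(pr["number"])
    return overdue
```

A scheduled job running this every 15 minutes and pinging the rotation's current reviewer is enough to make the SLA visible without nagging anyone manually.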

Running Efficient RFC Review Periods

A standard 72-hour review period works well for most teams. However, tune it along three dimensions:

Timing considerations:

- Avoid opening reviews on a Friday; the window should span business days in every contributor's time zone
- Announce the RFC in a shared channel when the review opens, not only via repository notifications

Participation targets:

- At least two reviewers from the affected teams, plus the designated decision-maker
- Explicit acknowledgment from anyone whose systems the proposal touches

Review efficiency checklist:

- Problem statement that answers "why now?"
- Alternatives considered, with reasons for rejection
- Success metrics that can actually be measured
- Timeline with concrete milestones

If an RFC is missing any of these, request revisions before starting the review period.

Learning from Decisions

Every approved RFC is a learning opportunity. Once a month, pick one approved RFC and:

  1. Review: Was implementation aligned with the RFC?
  2. Measure: Did we achieve success metrics?
  3. Reflect: What worked? What surprised us? What would we do differently?
  4. Document: Add a “Results” section to the RFC with learnings

This practice creates organizational learning that compounds over time. New team members can read old RFCs and understand not just decisions, but the outcomes of those decisions.

Frequently Asked Questions

Who is this article written for?

This article is written for developers, technical professionals, and power users who want practical guidance. Whether you are evaluating options or implementing a solution, the information here focuses on real-world applicability rather than theoretical overviews.

How current is the information in this article?

We update articles regularly to reflect the latest changes. However, tools and platforms evolve quickly. Always verify specific feature availability and pricing directly on the official website before making purchasing decisions.

Are there free alternatives available?

Free alternatives exist for most tool categories, though they typically come with limitations on features, usage volume, or support. Open-source options can fill some gaps if you are willing to handle setup and maintenance yourself. Evaluate whether the time savings from a paid tool justify the cost for your situation.

How do I get my team to adopt a new tool?

Start with a small pilot group of willing early adopters. Let them use it for 2-3 weeks, then gather their honest feedback. Address concerns before rolling out to the full team. Forced adoption without buy-in almost always fails.

What is the learning curve like?

Most tools discussed here can be used productively within a few hours. Mastering advanced features takes 1-2 weeks of regular use. Focus on the 20% of features that cover 80% of your needs first, then explore advanced capabilities as specific needs arise.

Built by theluckystrike — More at zovo.one