Architecture¶
How kubefwd works under the hood.
Overview¶
kubefwd establishes port forwards to Kubernetes services by:
- Discovering services in specified namespaces
- Allocating unique local IP addresses (127.x.x.x)
- Updating `/etc/hosts` with service hostnames
- Creating SPDY connections through the Kubernetes API
- Monitoring pod lifecycle for automatic reconnection
```mermaid
flowchart TB
    subgraph workstation["Your Workstation"]
        subgraph kubefwd["kubefwd"]
            sw[Service Watcher]
            ip[IP Allocator]
            hm[Hosts Manager]
            subgraph pfm["Port Forward Manager"]
                pf1["127.1.27.1:8080"]
                pf2["127.1.27.2:3306"]
                pf3["127.1.27.3:6379"]
            end
        end
    end
    subgraph cluster["Kubernetes Cluster"]
        api["api-pod :8080"]
        mysql["mysql-pod :3306"]
        redis["redis-pod :6379"]
    end
    pf1 -->|SPDY via K8s API| api
    pf2 -->|SPDY via K8s API| mysql
    pf3 -->|SPDY via K8s API| redis
```
Key Components¶
Service Discovery¶
kubefwd uses Kubernetes informers to watch for service events:
- Add: New service discovered → start port forward
- Update: Service modified → update port forward if needed
- Delete: Service removed → stop port forward, cleanup hosts
This event-driven approach means kubefwd reacts in real-time to cluster changes.
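The add/update/delete handling above can be sketched as a small reconcile step. This is a conceptual sketch, not kubefwd's code: the class and method names are illustrative, and the real implementation registers event handlers on Kubernetes informers.

```python
# Conceptual sketch of event-driven service handling. ForwardManager and
# handle_event are illustrative names, not kubefwd's internal API.

class ForwardManager:
    def __init__(self):
        self.forwards = {}  # service name -> forwarded ports

    def handle_event(self, event_type, service_name, ports):
        if event_type == "ADDED":
            self.forwards[service_name] = ports            # start port forward
        elif event_type == "MODIFIED":
            if self.forwards.get(service_name) != ports:   # restart only if needed
                self.forwards[service_name] = ports
        elif event_type == "DELETED":
            self.forwards.pop(service_name, None)          # stop forward, clean up hosts

mgr = ForwardManager()
mgr.handle_event("ADDED", "api-service", [8080])
mgr.handle_event("MODIFIED", "api-service", [8080, 9090])
mgr.handle_event("DELETED", "api-service", None)
print(mgr.forwards)  # {}
```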
IP Allocation¶
Each service receives a unique loopback IP address:
| Component | Base | Range |
|---|---|---|
| First octet | 127 | Fixed |
| Second octet | 1 | 1-255 (cluster index) |
| Third octet | 27 | 27-255 (namespace index) |
| Fourth octet | 1 | 1-255 (service count) |
Why unique IPs?
Unlike `kubectl port-forward`, which uses `localhost:port`, kubefwd assigns each service its own IP. This allows:
- Multiple services on the same port (e.g., several databases on 3306)
- Realistic service topology matching in-cluster behavior
- No port conflict management needed
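The octet scheme in the table above can be sketched as a small allocator. The function name and validation are illustrative, not kubefwd's internal API:

```python
def allocate_ip(cluster_idx, namespace_idx, service_idx):
    """Build the loopback IP 127.<cluster>.<namespace>.<service>.
    Ranges follow the table above (namespace indexing starts at 27)."""
    if not 1 <= cluster_idx <= 255:
        raise ValueError("cluster index must be 1-255")
    if not 27 <= namespace_idx <= 255:
        raise ValueError("namespace index must be 27-255")
    if not 1 <= service_idx <= 255:
        raise ValueError("service index must be 1-255")
    return f"127.{cluster_idx}.{namespace_idx}.{service_idx}"

print(allocate_ip(1, 27, 1))  # 127.1.27.1
```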
Hosts File Management¶
kubefwd modifies `/etc/hosts` to map service names to allocated IPs:

```
# Before kubefwd
127.0.0.1   localhost

# After kubefwd starts
127.0.0.1   localhost
127.1.27.1  api-service
127.1.27.2  database
127.1.27.3  cache
```
Safety measures:
- Original hosts file backed up to `~/hosts.original`
- Entries removed on clean shutdown
- Mutex locking prevents race conditions
- Stale entry purging available (`-p` flag)
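The backup/modify/restore cycle above can be sketched as follows. Paths and function names are illustrative; kubefwd itself edits `/etc/hosts` and backs it up to `~/hosts.original`:

```python
# Sketch of the hosts-file safety measures: back up first, serialize writers
# with a mutex, restore the original on clean shutdown.
import shutil
import threading

hosts_lock = threading.Lock()  # a single mutex serializes hosts-file writers

def add_entries(hosts_path, backup_path, entries):
    """Back up the original file, then append one line per forwarded service."""
    with hosts_lock:
        shutil.copyfile(hosts_path, backup_path)
        with open(hosts_path, "a") as f:
            for ip, name in entries:
                f.write(f"{ip}\t{name}\n")

def restore(hosts_path, backup_path):
    """On clean shutdown, put the original file back."""
    with hosts_lock:
        shutil.copyfile(backup_path, hosts_path)
```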
Port Forwarding¶
kubefwd creates SPDY connections through the Kubernetes API server:
- Pod Selection: Find pods backing the service
- Connection: Establish SPDY tunnel via API server
- Local Binding: Bind to allocated IP and service port(s)
- Data Transfer: Proxy traffic bidirectionally
```mermaid
flowchart LR
    app[Local App] --> ip["127.1.27.1:8080"] --> kf[kubefwd] --> api[K8s API] --> pod["Pod:8080"]
```
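The data-transfer step can be sketched as a plain bidirectional relay. kubefwd actually multiplexes this traffic over a SPDY stream through the API server, so the raw sockets here are only a stand-in for the tunnel:

```python
# Minimal sketch of bidirectional proxying: one copier thread per direction,
# each forwarding bytes until its side closes.
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src closes, then signal EOF to dst."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def proxy_pair(client, upstream):
    """Run both directions concurrently, as a port forward must."""
    threads = [
        threading.Thread(target=pipe, args=(client, upstream), daemon=True),
        threading.Thread(target=pipe, args=(upstream, client), daemon=True),
    ]
    for t in threads:
        t.start()
    return threads
```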
Pod Lifecycle Monitoring¶
kubefwd watches pod events to maintain forwarding:
- Pod deleted: Stop forward, trigger reconnection
- Pod recreated: Auto-reconnect (if enabled)
- Pod crash: Detect via connection loss, reconnect
The auto-reconnect feature uses exponential backoff:
- Initial delay: 1 second
- Maximum delay: 5 minutes
- Reset on successful connection
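The backoff schedule can be sketched as below. The doubling factor is an assumption; the text states only the 1-second initial delay, the 5-minute cap, and the reset on success:

```python
def reconnect_delay(attempt, initial=1.0, maximum=300.0):
    """Delay in seconds before reconnect attempt N (0-based).
    A doubling factor is assumed here; the attempt counter resets to 0
    after a successful connection."""
    return min(initial * (2 ** attempt), maximum)

print([reconnect_delay(a) for a in (0, 1, 2, 8, 9)])  # [1.0, 2.0, 4.0, 256.0, 300.0]
```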
Service Types¶
Normal Services (ClusterIP)¶
For services with selectors:
- Query pods matching the selector
- Select first available Running pod
- Forward to that single pod
- Hostname: `service-name`
Headless Services (ClusterIP: None)¶
For services without cluster IP:
- Query all pods matching the selector
- Forward to ALL pods
- Hostnames:
  - `service-name` → first pod
  - `pod-name.service-name` → each specific pod
This is essential for StatefulSets and databases requiring pod-specific addressing.
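The hostname mapping for a headless service can be sketched as a small helper (illustrative, not kubefwd's API):

```python
def headless_hostnames(service_name, pod_names):
    """Map the hostnames kubefwd publishes for a headless service:
    the bare service name resolves to the first pod, and every pod
    also gets <pod-name>.<service-name>."""
    hostnames = {}
    if pod_names:
        hostnames[service_name] = pod_names[0]
    for pod in pod_names:
        hostnames[f"{pod}.{service_name}"] = pod
    return hostnames

print(headless_hostnames("mysql", ["mysql-0", "mysql-1"]))
```

For a two-replica StatefulSet named `mysql`, this yields `mysql` pointing at `mysql-0`, plus `mysql-0.mysql` and `mysql-1.mysql` for pod-specific addressing.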
Services Without Selectors¶
Not supported. kubefwd requires pod selectors to discover backing pods. Services with manually managed Endpoints are skipped.
TUI Architecture¶
The Terminal User Interface uses the Bubble Tea framework:
```mermaid
flowchart LR
    subgraph core["Core Services"]
        eb[Event Bus]
        ss[State Store]
        mr[Metrics Registry]
    end
    subgraph views["View Models"]
        sm[Services]
        lm[Logs]
        dm[Detail]
    end
    core --> views
```
Event Bus¶
Decoupled communication between components:
- Service status changes
- Metrics updates
- Log messages
- User actions
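A buffered, channel-style event bus can be sketched as below. kubefwd's TUI uses Go channels; `queue.Queue` plays the same role here, and the class and method names are illustrative:

```python
# Sketch of a buffered event bus: each subscriber gets its own bounded
# queue, and a full queue drops events rather than blocking the publisher.
import queue

class EventBus:
    def __init__(self, buffer=256):
        self.subscribers = []
        self.buffer = buffer

    def subscribe(self):
        q = queue.Queue(maxsize=self.buffer)
        self.subscribers.append(q)
        return q

    def publish(self, event):
        for q in self.subscribers:
            try:
                q.put_nowait(event)  # drop rather than block the publisher
            except queue.Full:
                pass

bus = EventBus()
sub = bus.subscribe()
bus.publish({"type": "status", "service": "api-service", "state": "connected"})
print(sub.get_nowait())
```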
State Store¶
Centralized, thread-safe state management:
- Forward status for all services
- Metrics data (bytes, rates)
- Filter/sort state
Metrics Collection¶
Traffic monitoring with:
- Byte counters (atomic operations)
- Rate calculations (rolling window)
- HTTP request/response detection
- Sparkline visualization
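The rolling-window rate calculation can be sketched as below. This is a conceptual helper; the window size and sample format are assumptions, not kubefwd's actual values:

```python
# Estimate throughput from periodic samples of a cumulative byte counter:
# rate = (newest total - oldest total) / elapsed time over the window.
from collections import deque

class RateWindow:
    def __init__(self, max_samples=10):
        self.samples = deque(maxlen=max_samples)  # (timestamp, total_bytes)

    def record(self, timestamp, total_bytes):
        self.samples.append((timestamp, total_bytes))

    def rate(self):
        if len(self.samples) < 2:
            return 0.0
        (t0, b0), (t1, b1) = self.samples[0], self.samples[-1]
        return (b1 - b0) / (t1 - t0) if t1 > t0 else 0.0

w = RateWindow()
w.record(0.0, 0)
w.record(1.0, 4096)
print(w.rate())  # 4096.0 bytes/second
```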
Thread Safety¶
kubefwd handles concurrent operations carefully:
| Resource | Protection |
|---|---|
| Service registry | RWMutex |
| IP allocation | Per-namespace mutex |
| Hosts file | Global mutex with file locking |
| Metrics | Atomic operations + RWMutex |
| Event bus | Channel-based with buffer |
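The service-registry row above can be sketched as a lock-guarded map. Python's stdlib has no reader/writer lock, so a plain `Lock` stands in for Go's `sync.RWMutex`; the class is illustrative:

```python
# Thread-safe registry sketch: every read and write goes through one lock,
# and snapshot() returns a copy so callers never iterate shared state.
import threading

class ServiceRegistry:
    def __init__(self):
        self._lock = threading.Lock()
        self._services = {}

    def set(self, name, forward):
        with self._lock:
            self._services[name] = forward

    def get(self, name):
        with self._lock:
            return self._services.get(name)

    def snapshot(self):
        with self._lock:
            return dict(self._services)
```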
Shutdown Sequence¶
Clean shutdown ensures no leftover state:
- User signal (`q`, `Ctrl+C`) → stop listening
- Close service watchers
- Stop all port forwards
- Remove hosts file entries
- Remove network interface aliases
If shutdown is interrupted, use `-p` to purge stale entries on the next run.
Network Interface Management¶
kubefwd creates loopback aliases:
macOS (`lo0`): each allocated IP is added as an explicit alias, with a command equivalent to `ifconfig lo0 alias 127.1.27.1 up`.
Linux (`lo`): addresses are added with a command equivalent to `ip addr add 127.1.27.1/32 dev lo`.
These aliases allow binding to unique IPs while keeping traffic local.
Security Considerations¶
kubefwd requires elevated privileges for:
- Network interface modification: Adding IP aliases
- Hosts file modification: DNS resolution
- Low port binding: Ports below 1024
Recommendations:
- Run only in development environments
- Don't run on workstations holding sensitive production data
- Review services before forwarding from production clusters
- Use label selectors to limit scope
Performance Characteristics¶
| Operation | Complexity |
|---|---|
| Service discovery | O(services) per namespace |
| Port forward setup | O(1) per service |
| Metrics sampling | O(forwards) per second |
| Hosts file update | O(forwards) per change |
kubefwd is designed for development use with dozens to hundreds of services. For very large clusters, use selectors to limit scope.