Microservices Architecture: Design Patterns and Implementation

Oreoluwa · February 15, 2024 · 13 min read

Microservices · Architecture · Distributed Systems · DevOps

Microservices architecture has become the go-to approach for building scalable, maintainable applications. However, transitioning from a monolith to microservices requires careful planning and understanding of distributed systems challenges.

Core Microservices Principles

Single Responsibility Principle

Each service should have one reason to change:

// User Service - handles user management only
class UserService {
  constructor(userRepository, eventBus) {
    this.userRepository = userRepository;
    this.eventBus = eventBus;
  }

  async createUser(userData) {
    // Validate user data
    const user = await this.userRepository.create(userData);
    
    // Publish event for other services
    await this.eventBus.publish('user.created', {
      userId: user.id,
      email: user.email,
      createdAt: user.createdAt
    });
    
    return user;
  }

  async getUserById(userId) {
    return await this.userRepository.findById(userId);
  }
}

// Order Service - handles order management only
class OrderService {
  constructor(orderRepository, eventBus) {
    this.orderRepository = orderRepository;
    this.eventBus = eventBus;
  }

  async createOrder(orderData) {
    // Validate order data
    const order = await this.orderRepository.create(orderData);
    
    // Publish event
    await this.eventBus.publish('order.created', {
      orderId: order.id,
      userId: order.userId,
      total: order.total,
      createdAt: order.createdAt
    });
    
    return order;
  }
}

Database per Service

Each service owns its data:

# docker-compose.yml
version: '3.8'
services:
  user-service:
    build: ./user-service
    environment:
      - DB_HOST=user-db
    depends_on:
      - user-db
      
  user-db:
    image: postgres:15
    environment:
      POSTGRES_DB: users
      POSTGRES_USER: user_service
      POSTGRES_PASSWORD: password

  order-service:
    build: ./order-service
    environment:
      - DB_HOST=order-db
    depends_on:
      - order-db
      
  order-db:
    image: postgres:15
    environment:
      POSTGRES_DB: orders
      POSTGRES_USER: order_service
      POSTGRES_PASSWORD: password
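
Because the order service cannot join against the users table, any user details it needs must come through the user service's API. Here is a minimal sketch of that boundary, assuming a hypothetical orderRepository and an HTTP client like the UserServiceClient shown in the next section:

// Hypothetical read model: enrich an order via API calls, never via cross-database joins
class OrderDetailsService {
  constructor(orderRepository, userClient) {
    this.orderRepository = orderRepository; // order service's own database
    this.userClient = userClient;           // HTTP client for the user service
  }

  async getOrderWithUser(orderId) {
    const order = await this.orderRepository.findById(orderId);
    const user = await this.userClient.getUser(order.userId);

    return { ...order, user: { id: user.id, email: user.email } };
  }
}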

Communication Patterns

Synchronous Communication with Circuit Breaker

// Circuit breaker implementation
class CircuitBreaker {
  constructor(options = {}) {
    this.failureThreshold = options.failureThreshold || 5;
    this.recoveryTimeout = options.recoveryTimeout || 60000;
    this.monitoringPeriod = options.monitoringPeriod || 10000;
    
    this.state = 'CLOSED'; // CLOSED, OPEN, HALF_OPEN
    this.failureCount = 0;
    this.lastFailureTime = null;
    this.nextAttempt = null;
  }

  async execute(operation) {
    if (this.state === 'OPEN') {
      if (Date.now() < this.nextAttempt) {
        throw new Error('Circuit breaker is OPEN');
      }
      this.state = 'HALF_OPEN';
    }

    try {
      const result = await operation();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failureCount = 0;
    this.state = 'CLOSED';
  }

  onFailure() {
    this.failureCount++;
    this.lastFailureTime = Date.now();

    // A failed probe while HALF_OPEN, or too many failures while CLOSED, (re)opens the circuit
    if (this.state === 'HALF_OPEN' || this.failureCount >= this.failureThreshold) {
      this.state = 'OPEN';
      this.nextAttempt = Date.now() + this.recoveryTimeout;
    }
  }
}

// Service client with circuit breaker
class UserServiceClient {
  constructor() {
    this.circuitBreaker = new CircuitBreaker({
      failureThreshold: 3,
      recoveryTimeout: 30000
    });
    this.baseURL = process.env.USER_SERVICE_URL;
  }

  async getUser(userId) {
    return await this.circuitBreaker.execute(async () => {
      const response = await fetch(`${this.baseURL}/users/${userId}`, {
        // Native fetch has no timeout option; abort the request after 5s instead
        signal: AbortSignal.timeout(5000),
        headers: {
          // getToken() is assumed to return a valid service-to-service token
          'Authorization': `Bearer ${this.getToken()}`,
          'Content-Type': 'application/json'
        }
      });

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      return await response.json();
    });
  }
}
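
A caller can then treat downstream failures as expected outcomes: when the breaker is open, the call fails fast instead of stacking up timeouts, and the caller degrades gracefully. A small usage sketch (the fallback shape is illustrative):

const userClient = new UserServiceClient();

async function getOrderOwner(userId) {
  try {
    return await userClient.getUser(userId);
  } catch (error) {
    // Circuit open or downstream error: degrade instead of cascading the failure
    console.warn(`User lookup failed, returning placeholder: ${error.message}`);
    return { id: userId, email: null, placeholder: true };
  }
}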

Asynchronous Communication with Event Sourcing

// Event store implementation
class EventStore {
  constructor(database, eventBus) {
    this.db = database;
    this.eventBus = eventBus;
  }

  async appendEvents(streamId, events, expectedVersion) {
    const transaction = await this.db.beginTransaction();
    
    try {
      // Check current version
      let currentVersion = await this.getStreamVersion(streamId);
      if (currentVersion !== expectedVersion) {
        throw new Error('Concurrency conflict');
      }

      // Append events
      for (const event of events) {
        await this.db.query(`
          INSERT INTO events (stream_id, event_type, event_data, version, timestamp)
          VALUES (?, ?, ?, ?, ?)
        `, [
          streamId,
          event.type,
          JSON.stringify(event.data),
          currentVersion + 1,
          new Date()
        ]);
        currentVersion++;
      }

      await transaction.commit();
      
      // Publish events to message bus
      for (const event of events) {
        await this.eventBus.publish(event.type, {
          streamId,
          ...event.data
        });
      }
      
    } catch (error) {
      await transaction.rollback();
      throw error;
    }
  }

  async getEvents(streamId, fromVersion = 0) {
    const results = await this.db.query(`
      SELECT event_type, event_data, version, timestamp
      FROM events
      WHERE stream_id = ? AND version > ?
      ORDER BY version ASC
    `, [streamId, fromVersion]);

    return results.map(row => ({
      type: row.event_type,
      data: JSON.parse(row.event_data),
      version: row.version,
      timestamp: row.timestamp
    }));
  }
}

// Aggregate with event sourcing
class OrderAggregate {
  constructor(id) {
    this.id = id;
    this.version = 0;
    this.status = 'pending';
    this.items = [];
    this.total = 0;
    this.uncommittedEvents = [];
  }

  static async fromHistory(eventStore, orderId) {
    const events = await eventStore.getEvents(orderId);
    const order = new OrderAggregate(orderId);
    
    for (const event of events) {
      order.applyEvent(event);
    }
    
    return order;
  }

  addItem(productId, quantity, price) {
    const event = {
      type: 'ItemAdded',
      data: { productId, quantity, price }
    };
    
    this.applyEvent(event);
    this.uncommittedEvents.push(event);
  }

  confirmOrder() {
    if (this.status !== 'pending') {
      throw new Error('Order cannot be confirmed');
    }

    const event = {
      type: 'OrderConfirmed',
      data: { confirmedAt: new Date() }
    };
    
    this.applyEvent(event);
    this.uncommittedEvents.push(event);
  }

  applyEvent(event) {
    switch (event.type) {
      case 'ItemAdded':
        this.items.push(event.data);
        this.total += event.data.quantity * event.data.price;
        break;
      case 'OrderConfirmed':
        this.status = 'confirmed';
        break;
    }
    this.version++;
  }

  async save(eventStore) {
    if (this.uncommittedEvents.length === 0) return;
    
    await eventStore.appendEvents(
      this.id,
      this.uncommittedEvents,
      this.version - this.uncommittedEvents.length
    );
    
    this.uncommittedEvents = [];
  }
}
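
A command handler ties the two together: rehydrate the aggregate from its stream, apply the new behavior, then append only the uncommitted events using the optimistic concurrency check shown above. For example:

// Assumes an EventStore instance wired to a database and event bus
async function handleConfirmOrder(eventStore, orderId) {
  // Rebuild current state from past events
  const order = await OrderAggregate.fromHistory(eventStore, orderId);

  // Apply new behavior; this only records uncommitted events in memory
  order.confirmOrder();

  // Persist the new events (throws on concurrency conflict)
  await order.save(eventStore);

  return order;
}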

Service Discovery and Load Balancing

Service Registry Implementation

// Service registry
class ServiceRegistry {
  constructor() {
    this.services = new Map();
    this.healthCheckInterval = 30000;
    this.startHealthChecks();
  }

  registerService(name, instance) {
    if (!this.services.has(name)) {
      this.services.set(name, []);
    }
    
    const serviceInstance = {
      id: instance.id,
      host: instance.host,
      port: instance.port,
      healthCheckUrl: instance.healthCheckUrl,
      lastSeen: Date.now(),
      healthy: true
    };
    
    this.services.get(name).push(serviceInstance);
    console.log(`Service registered: ${name} (${instance.host}:${instance.port})`);
  }

  deregisterService(name, instanceId) {
    const instances = this.services.get(name);
    if (instances) {
      const index = instances.findIndex(i => i.id === instanceId);
      if (index !== -1) {
        instances.splice(index, 1);
        console.log(`Service deregistered: ${name} (${instanceId})`);
      }
    }
  }

  getHealthyInstances(serviceName) {
    const instances = this.services.get(serviceName) || [];
    return instances.filter(instance => instance.healthy);
  }

  async startHealthChecks() {
    setInterval(async () => {
      for (const [serviceName, instances] of this.services) {
        for (const instance of instances) {
          try {
            const response = await fetch(instance.healthCheckUrl, {
              // Native fetch has no timeout option; abort after 5s instead
              signal: AbortSignal.timeout(5000)
            });
            instance.healthy = response.ok;
            instance.lastSeen = Date.now();
          } catch (error) {
            instance.healthy = false;
            console.warn(`Health check failed for ${serviceName}: ${error.message}`);
          }
        }
      }
    }, this.healthCheckInterval);
  }
}

// Load balancer with different strategies
class LoadBalancer {
  constructor(serviceRegistry) {
    this.serviceRegistry = serviceRegistry;
    this.roundRobinCounters = new Map();
  }

  getNextInstance(serviceName, strategy = 'round-robin') {
    const instances = this.serviceRegistry.getHealthyInstances(serviceName);
    
    if (instances.length === 0) {
      throw new Error(`No healthy instances available for service: ${serviceName}`);
    }

    switch (strategy) {
      case 'round-robin':
        return this.roundRobin(serviceName, instances);
      case 'random':
        return this.random(instances);
      case 'least-connections':
        return this.leastConnections(instances);
      default:
        return this.roundRobin(serviceName, instances);
    }
  }

  roundRobin(serviceName, instances) {
    if (!this.roundRobinCounters.has(serviceName)) {
      this.roundRobinCounters.set(serviceName, 0);
    }
    
    const counter = this.roundRobinCounters.get(serviceName);
    const instance = instances[counter % instances.length];
    this.roundRobinCounters.set(serviceName, counter + 1);
    
    return instance;
  }

  random(instances) {
    const randomIndex = Math.floor(Math.random() * instances.length);
    return instances[randomIndex];
  }

  leastConnections(instances) {
    // Simplified - would need actual connection tracking
    return instances.reduce((min, instance) => 
      (instance.connections || 0) < (min.connections || 0) ? instance : min
    );
  }
}
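
Wiring the two together: instances register themselves on startup, and callers resolve an address per request. The hosts and ports below are illustrative:

const registry = new ServiceRegistry();
const balancer = new LoadBalancer(registry);

// In practice each instance registers itself when it boots
registry.registerService('user-service', {
  id: 'user-1',
  host: '10.0.0.11',
  port: 3000,
  healthCheckUrl: 'http://10.0.0.11:3000/health'
});
registry.registerService('user-service', {
  id: 'user-2',
  host: '10.0.0.12',
  port: 3000,
  healthCheckUrl: 'http://10.0.0.12:3000/health'
});

// Each call rotates across the healthy instances
const instance = balancer.getNextInstance('user-service', 'round-robin');
console.log(`Routing to ${instance.host}:${instance.port}`);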

API Gateway Pattern

// API Gateway implementation
class APIGateway {
  constructor(serviceRegistry) {
    this.routes = new Map();
    this.middleware = [];
    this.loadBalancer = new LoadBalancer(serviceRegistry);
  }

  addRoute(path, serviceName, options = {}) {
    this.routes.set(path, {
      serviceName,
      timeout: options.timeout || 30000,
      retries: options.retries || 3,
      rateLimit: options.rateLimit,
      authentication: options.authentication
    });
  }

  findRoute(path) {
    // First matching prefix wins; a real gateway would use a proper router
    for (const [prefix, route] of this.routes) {
      if (path.startsWith(prefix)) return route;
    }
    return null;
  }

  use(middleware) {
    this.middleware.push(middleware);
  }

  async handleRequest(req, res) {
    try {
      // Apply middleware
      for (const middleware of this.middleware) {
        await middleware(req, res);
      }

      const route = this.findRoute(req.path);
      if (!route) {
        return res.status(404).json({ error: 'Route not found' });
      }

      // Rate limiting (checkRateLimit, not shown, might be a Redis-backed counter)
      if (route.rateLimit) {
        const allowed = await this.checkRateLimit(req, route.rateLimit);
        if (!allowed) {
          return res.status(429).json({ error: 'Too many requests' });
        }
      }

      // Authentication (authenticate, not shown, would verify a JWT or API key)
      if (route.authentication) {
        const authenticated = await this.authenticate(req);
        if (!authenticated) {
          return res.status(401).json({ error: 'Unauthorized' });
        }
      }

      // Forward request to service
      const response = await this.forwardRequest(req, route);
      res.status(response.status).json(response.data);

    } catch (error) {
      console.error('Gateway error:', error);
      res.status(500).json({ error: 'Internal server error' });
    }
  }

  async forwardRequest(req, route) {
    const instance = this.loadBalancer.getNextInstance(route.serviceName);
    const url = `http://${instance.host}:${instance.port}${req.path}`;

    for (let attempt = 1; attempt <= route.retries; attempt++) {
      try {
        const response = await fetch(url, {
          method: req.method,
          headers: this.sanitizeHeaders(req.headers),
          body: req.method !== 'GET' ? JSON.stringify(req.body) : undefined,
          // Native fetch has no timeout option; abort via signal instead
          signal: AbortSignal.timeout(route.timeout)
        });

        return {
          status: response.status,
          data: await response.json()
        };

      } catch (error) {
        if (attempt === route.retries) throw error;
        await this.delay(1000 * 2 ** (attempt - 1)); // Exponential backoff
      }
    }
  }

  sanitizeHeaders(headers) {
    const sanitized = { ...headers };
    delete sanitized.host;
    delete sanitized['content-length'];
    return sanitized;
  }

  async delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
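
Mounted in front of an HTTP framework, the gateway becomes the single entry point. A sketch using Express, with route options kept to what the class above actually implements:

const express = require('express');

const app = express();
app.use(express.json());

const serviceRegistry = new ServiceRegistry();
const gateway = new APIGateway(serviceRegistry);

// Route prefixes map to backend services (values are illustrative)
gateway.addRoute('/users', 'user-service', { timeout: 5000, retries: 2 });
gateway.addRoute('/orders', 'order-service', { timeout: 10000 });

// Every request flows through the gateway
app.use((req, res) => gateway.handleRequest(req, res));
app.listen(8080);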

Data Management Patterns

Saga Pattern for Distributed Transactions

const { v4: uuidv4 } = require('uuid');

// Saga orchestrator
class OrderSaga {
  constructor(eventStore, userService, inventoryService, paymentService) {
    this.eventStore = eventStore;
    this.userService = userService;
    this.inventoryService = inventoryService;
    this.paymentService = paymentService;
  }

  async execute(orderData) {
    const sagaId = uuidv4();
    const state = {
      sagaId,
      orderId: orderData.orderId,
      userId: orderData.userId,
      items: orderData.items,
      amount: orderData.amount,
      status: 'started',
      completedSteps: [],
      compensations: []
    };

    try {
      // Step 1: Validate user
      await this.validateUser(state);
      
      // Step 2: Reserve inventory
      await this.reserveInventory(state);
      
      // Step 3: Process payment
      await this.processPayment(state);
      
      // Step 4: Confirm order (confirmOrder, not shown, would mark the order confirmed)
      await this.confirmOrder(state);
      
      state.status = 'completed';
      await this.saveSagaState(state);
      
    } catch (error) {
      console.error('Saga failed:', error);
      await this.compensate(state);
      state.status = 'failed';
      await this.saveSagaState(state);
      throw error;
    }
  }

  async validateUser(state) {
    try {
      const user = await this.userService.getUser(state.userId);
      if (!user.active) {
        throw new Error('User is not active');
      }
      
      state.completedSteps.push('validateUser');
      await this.saveSagaState(state);
      
    } catch (error) {
      throw new Error(`User validation failed: ${error.message}`);
    }
  }

  async reserveInventory(state) {
    try {
      const reservation = await this.inventoryService.reserve({
        orderId: state.orderId,
        items: state.items
      });
      
      state.reservationId = reservation.id;
      state.completedSteps.push('reserveInventory');
      state.compensations.push({
        step: 'reserveInventory',
        action: 'releaseReservation',
        data: { reservationId: reservation.id }
      });
      
      await this.saveSagaState(state);
      
    } catch (error) {
      throw new Error(`Inventory reservation failed: ${error.message}`);
    }
  }

  async processPayment(state) {
    try {
      const payment = await this.paymentService.charge({
        userId: state.userId,
        amount: state.amount,
        orderId: state.orderId
      });
      
      state.paymentId = payment.id;
      state.completedSteps.push('processPayment');
      state.compensations.push({
        step: 'processPayment',
        action: 'refundPayment',
        data: { paymentId: payment.id }
      });
      
      await this.saveSagaState(state);
      
    } catch (error) {
      throw new Error(`Payment processing failed: ${error.message}`);
    }
  }

  async compensate(state) {
    // Execute compensations in reverse order
    for (const compensation of state.compensations.reverse()) {
      try {
        switch (compensation.action) {
          case 'releaseReservation':
            await this.inventoryService.releaseReservation(
              compensation.data.reservationId
            );
            break;
          case 'refundPayment':
            await this.paymentService.refund(
              compensation.data.paymentId
            );
            break;
        }
      } catch (error) {
        console.error(`Compensation failed for ${compensation.step}:`, error);
        // Log for manual intervention
      }
    }
  }

  async saveSagaState(state) {
    // appendEvents expects the stream's current version for its concurrency check
    const currentVersion = await this.eventStore.getStreamVersion(state.sagaId);
    await this.eventStore.appendEvents(state.sagaId, [{
      type: 'SagaStateUpdated',
      data: state
    }], currentVersion);
  }
}
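
Kicking off the saga from the order service is then a single call; if any step throws, the compensations recorded so far run in reverse before the error is re-thrown. The dependencies and order data below are illustrative:

const saga = new OrderSaga(eventStore, userService, inventoryService, paymentService);

try {
  await saga.execute({
    orderId: 'order-123',
    userId: 'user-456',
    items: [{ productId: 'sku-1', quantity: 2, price: 25 }],
    amount: 50
  });
  console.log('Order saga completed');
} catch (error) {
  // Compensations have already run; surface the failure to the caller
  console.error('Order saga failed:', error.message);
}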

Observability and Monitoring

Distributed Tracing

const { v4: uuidv4 } = require('uuid');

// Trace context
class TraceContext {
  constructor(traceId, spanId, parentSpanId = null) {
    this.traceId = traceId;
    this.spanId = spanId;
    this.parentSpanId = parentSpanId;
    this.baggage = new Map();
  }

  static fromHeaders(headers) {
    const traceId = headers['x-trace-id'] || uuidv4();
    const spanId = headers['x-span-id'] || uuidv4();
    const parentSpanId = headers['x-parent-span-id'] || null;
    
    return new TraceContext(traceId, spanId, parentSpanId);
  }

  toHeaders() {
    const headers = {
      'x-trace-id': this.traceId,
      'x-span-id': this.spanId
    };
    // Only include the parent span header when one exists
    if (this.parentSpanId) {
      headers['x-parent-span-id'] = this.parentSpanId;
    }
    return headers;
  }

  createChildSpan() {
    return new TraceContext(
      this.traceId,
      uuidv4(),
      this.spanId
    );
  }
}

// Span for tracking operations
class Span {
  constructor(traceContext, operationName, serviceName) {
    this.traceId = traceContext.traceId;
    this.spanId = traceContext.spanId;
    this.parentSpanId = traceContext.parentSpanId;
    this.operationName = operationName;
    this.serviceName = serviceName;
    this.startTime = Date.now();
    this.endTime = null;
    this.tags = new Map();
    this.logs = [];
    this.status = 'ok';
  }

  setTag(key, value) {
    this.tags.set(key, value);
    return this;
  }

  log(message, data = {}) {
    this.logs.push({
      timestamp: Date.now(),
      message,
      data
    });
    return this;
  }

  setError(error) {
    this.status = 'error';
    this.setTag('error', true);
    this.log('error', {
      message: error.message,
      stack: error.stack
    });
    return this;
  }

  finish() {
    this.endTime = Date.now();
    // Send to tracing backend (Jaeger, Zipkin, etc.)
    this.sendToTracer();
  }

  sendToTracer() {
    const spanData = {
      traceId: this.traceId,
      spanId: this.spanId,
      parentSpanId: this.parentSpanId,
      operationName: this.operationName,
      serviceName: this.serviceName,
      startTime: this.startTime,
      endTime: this.endTime,
      duration: this.endTime - this.startTime,
      tags: Object.fromEntries(this.tags),
      logs: this.logs,
      status: this.status
    };

    // Send to tracing collector
    // Implementation depends on your tracing backend
  }
}

// Middleware for Express to add tracing
function tracingMiddleware(req, res, next) {
  const traceContext = TraceContext.fromHeaders(req.headers);
  const span = new Span(traceContext, `${req.method} ${req.path}`, process.env.SERVICE_NAME);
  
  req.traceContext = traceContext;
  req.span = span;
  
  // Add trace headers to the response (res.set is Express's API for setting headers)
  res.set(traceContext.toHeaders());
  
  // Finish span when response ends
  res.on('finish', () => {
    span.setTag('http.method', req.method);
    span.setTag('http.url', req.url);
    span.setTag('http.status_code', res.statusCode);
    
    if (res.statusCode >= 400) {
      span.setError(new Error(`HTTP ${res.statusCode}`));
    }
    
    span.finish();
  });
  
  next();
}
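
The other half of tracing is outbound propagation: when this service calls another one, it should create a child span and forward the trace headers so every hop shares the same trace ID. A sketch built on the classes above, with a hypothetical tracedFetch helper:

// Helper for outbound calls made while handling a traced request
async function tracedFetch(req, url, options = {}) {
  const childContext = req.traceContext.createChildSpan();
  const span = new Span(childContext, `${options.method || 'GET'} ${url}`, process.env.SERVICE_NAME);

  try {
    const response = await fetch(url, {
      ...options,
      headers: {
        ...(options.headers || {}),
        ...childContext.toHeaders() // Propagate x-trace-id / x-span-id downstream
      }
    });
    span.setTag('http.status_code', response.status);
    return response;
  } catch (error) {
    span.setError(error);
    throw error;
  } finally {
    span.finish();
  }
}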

Deployment and DevOps

Container Orchestration with Kubernetes

# user-service-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  labels:
    app: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-service
        image: user-service:v1.0.0
        ports:
        - containerPort: 3000
        env:
        - name: DB_HOST
          value: "user-db-service"
        - name: REDIS_HOST
          value: "redis-service"
        - name: SERVICE_NAME
          value: "user-service"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"

---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: ClusterIP

---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

Best Practices Summary

  1. Start with a modular monolith before breaking into microservices
  2. Design for failure - implement circuit breakers, timeouts, and retries
  3. Embrace eventual consistency where strong consistency isn't required
  4. Implement comprehensive monitoring and distributed tracing
  5. Use API gateways for cross-cutting concerns
  6. Automate everything - deployment, testing, monitoring
  7. Keep services loosely coupled through well-defined interfaces
  8. Plan for data consistency with sagas or event sourcing
  9. Implement proper security at every layer
  10. Document your architecture and service contracts

Conclusion

Microservices architecture offers powerful benefits for scalability and maintainability, but it comes with complexity. Success requires careful design, robust infrastructure, and disciplined implementation of patterns like circuit breakers, event sourcing, and distributed tracing.

The key is to evolve gradually, learn from failures, and continuously improve your architecture based on real-world requirements and constraints.