Dev/Prod Parity — for 12-factor Microservices

Santosh Pai
Published in ITNEXT
12 min read · Oct 23, 2023


Having dissected the nuances of Concurrency and Disposability (and the previous 7 factors), we arrive at a topic that’s not just a cornerstone but a catalyst in the modern software landscape — Dev/Prod Parity. This principle serves as the genetic code for a plethora of tools, platforms, and the burgeoning field of Platform Engineering.

The Tenth Factor — Dev/Prod Parity

Keep development, staging, and production as similar as possible.

The concept isn’t new! At its core, Dev/Prod Parity is about minimizing the differences between development and production environments. It’s about creating a seamless continuum where code can flow from a developer’s machine to a production cluster with minimal friction and maximal confidence. It’s not just about consistency in code but also in configuration, data stores, networking, and even the monitoring tools in use.

As we have evolved towards more automated, containerized, and orchestrated solutions, maintaining Dev/Prod Parity has become both easier and more complex. Easier, because tools like Kubernetes, Docker and Argo offer unprecedented control and consistency. Complex, because the diverse range of services and components involved can create subtle inconsistencies that are hard to detect but have significant impacts.

Parity Matters

The objective of Dev/Prod Parity isn’t just a technical aspiration; it’s a strategic approach that deeply intertwines with the architecture of your application. One effective strategy to reach this pinnacle of parity is by adhering to Domain-Driven Design (DDD) principles for encapsulating business logic, while rigorously externalizing every other aspect of your application — namely configurations, backing services, and other environment-specific dependencies.

Domain-Driven Design: The Core of Your Application

Domain Oriented. Extensible. Maintainable.

In a well-architected system, the business logic should be the nucleus around which everything else orbits. By keeping the domain-driven business logic within the source code, you’re ensuring that the most fundamental part of your application is environment-agnostic. This logic remains consistent whether the code is running on a developer’s local setup, a QA environment, or a high-availability production cluster.

Externalization of Dependencies: The Key to Flexibility

Clean Architecture

While the core logic remains static, every other aspect of your application — configurations, databases, caching systems, messaging queues, etc. — should be treated as an external dependency. The goal here is to abstract these elements out of your codebase and manage them through environment variables, configuration files, or service discovery systems. Tools like Kubernetes ConfigMaps and Secrets can be incredibly useful here.

Create them,

kubectl create secret generic refresh-secret --from-literal=REFRESH_TOKEN_SECRET='aha!' -n development

reference them in deployments,

apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: xxx/xxx/auth/auth:latest
          imagePullPolicy: Always
          env:
            - name: API_VERSION
              value: 'v1'
            - name: MONGO_URI
              valueFrom:
                secretKeyRef:
                  name: mongo-dev
                  key: MONGO_URI
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
            - name: REFRESH_TOKEN_SECRET
              valueFrom:
                secretKeyRef:
                  name: refresh-secret
                  key: REFRESH_TOKEN_SECRET
            ....

and then validate!

const start = async () => {
  console.log('starting up ...')
  if (!process.env.JWT_KEY) {
    throw new Error('JWT_KEY must be defined')
  }
  if (!process.env.MONGO_URI) {
    throw new Error('MONGO_URI must be defined')
  }
  if (!process.env.REFRESH_TOKEN_SECRET) {
    throw new Error('REFRESH_TOKEN_SECRET must be defined')
  }

  ...

The Symbiosis: Where DDD Meets Externalization

When you merge Domain-Driven Design with the externalization of dependencies, you create a harmonious symbiosis that is inherently conducive to Dev/Prod Parity. Your domain logic remains isolated from volatile elements, which are managed externally. This decoupling allows your application to be both flexible and consistent, capable of adapting to different environments without suffering from the inconsistencies that so often plague less disciplined approaches.

The Payoff: Seamless Transitions and Reduced Friction

In our design, we have been meticulous from the outset, setting ambitious yet attainable goals rooted in the principles of clean architecture. By judiciously selecting tools that inherently uphold these principles, we’ve effectively laid a strong foundation for our design. Kubernetes provides the orchestration muscle, Kustomize offers the customization finesse, and Argo brings in the workflow automation — altogether forming an exemplary framework for platform engineering.

For example, to create an app (in an Argo Project), declare the application,

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: pipelines
spec:
  destination:
    namespace: development
    server: https://kubernetes.default.svc
  source:
    repoURL: https://github.com/xxx/manifests-index.git
    targetRevision: develop
    path: infra/k8s/pipelines/overlays/development
  project: default
  syncPolicy:
    automated: {}

and automate!

kustomize build infra/k8s/pipelines/argo/overlays/development | argocd app create --file -

For images, build a custom overlay,

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

namespace: development

images:
  - name: xxx-docker.pkg.dev/xxx/pipelines/pipelines:latest
    newName: xxx-docker.pkg.dev/xxx/pipelines/pipelines
    newTag: development

patches:
  - path: build-version-patch.yaml
  - path: memphis-dev-patch.yaml

and include in automation workflows!

kustomize build infra/k8s/pipelines/overlays/development

But we didn’t stop there; our relentless pursuit to enhance the developer experience has led us to innovate beyond this robust foundation. The result is a suite of complementary tools and features, such as Service Catalogs, Feature Flag toggles and promotion mechanisms, specifically designed to streamline the development lifecycle. These aren’t mere add-ons; they are strategic offshoots aimed at empowering developers, thereby enriching the overall quality and efficiency of our platform.

By adhering to these principles, you not only make your application more maintainable but also reduce the friction involved in promoting code from development to production. This leads to shorter lead times, fewer rollback scenarios, and, ultimately, a quicker and more reliable software delivery lifecycle.

I will do a post on relentless automation for Release Posturing if there are enough votes!

Git: Not just Version Control

In our inaugural episode focused on Codebase, we dug deep into the transformative power of Git as a version control system. We touched upon its multifaceted roles, from orchestrating GitOps and streamlining branching strategies to enabling efficient workflows and laying the groundwork for Infrastructure as Code (IaC). We also introduced Kustomize, a potent tool for managing Kubernetes configurations.

Let’s look at the commonly employed CI/CD strategies that leverage the synergies between Git, GitOps, IaC, and Kustomize — strategies designed to elevate your development and operational workflows to new heights.

Matrix Strategy: Optimizing CI/CD Pipelines

The Matrix Strategy serves as a powerhouse in CI/CD pipelines, allowing you to execute a multitude of jobs concurrently through the creation of a variable-based configuration matrix. This approach is particularly advantageous in deployment scenarios, as it enables comprehensive testing across multiple environments, thus ensuring a robust and reliable rollout.

Conditional Jobs

With conditional jobs, you can specify under what conditions a job should be executed. This can be based on branch names, tags, or even the results of a previous job. It allows you to create more dynamic and responsive pipelines.

In this example, we use the Matrix Strategy and conditional jobs to deploy three data-producer applications from a single codebase and image, and then attach three different configuration objects to them.

~/manifests-index develop > tree -L 4 infra/k8s/data-producer/
infra/k8s/data-producer/
├── argo
│   ├── base
│   │   ├── application.yaml
│   │   └── kustomization.yaml
│   └── overlays
│       └── development
│           ├── producer-nyct1
│           ├── producer-nyct2
│           └── producer-nyct3
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    └── development
        ├── producer-nyct1
        │   ├── build-version-patch.yaml
        │   ├── collector-url-patch.yaml
        │   ├── data-url-patch.yaml
        │   ├── kustomization.yaml
        │   └── memphis-dev-patch.yaml
        ├── producer-nyct2
        │   ├── build-version-patch.yaml
        │   ├── collector-url-patch.yaml
        │   ├── data-url-patch.yaml
        │   ├── kustomization.yaml
        │   └── memphis-dev-patch.yaml
        └── producer-nyct3
            ├── build-version-patch.yaml
            ├── collector-url-patch.yaml
            ├── data-url-patch.yaml
            ├── kustomization.yaml
            └── memphis-dev-patch.yaml
The GitHub Actions workflow (trimmed here) builds and pushes the image once, then uses the matrix to commit a new build version into each producer's overlay:

jobs:
  build-and-push:
    name: Build and Push Docker image
    permissions:
      contents: read
      id-token: write
    runs-on: ubuntu-latest
    steps:
      - name: Build and push Docker
        id: build-image
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.IMAGE_NAME }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}

  commit:
    strategy:
      matrix:
        # fan out: one commit job per producer overlay
        producer: [producer-nyct1, producer-nyct2, producer-nyct3]
    name: Commit to Reconciliation Repo
    needs: [build-and-push]
    runs-on: ubuntu-latest
    if: ${{ needs.build-and-push.result == 'success' }}
    steps:
      - name: Echo
        id: echo-sha
        run: |
          echo "github_sha: " $GITHUB_SHA
          echo "::set-output name=sha_short::${GITHUB_SHA::7}"
      - name: Clone
        run: |
          git config --global user.email ${{ secrets.GH_USER_EMAIL }}
          git config --global user.name ${{ secrets.GH_USER_NAME }}
          git clone -b develop https://.:${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}@github.com/xxx/manifests-index.git
          echo "clone repo success"
      - name: Substitute
        run: |
          # point the producer's build-version patch at the freshly built image SHA
          sed -i "s/value.*$/value\: $GITHUB_SHA/" manifests-index/infra/k8s/$IMAGE_NAME/overlays/development/${{ matrix.producer }}/build-version-patch.yaml
      - name: Commit
        run: |
          cd manifests-index && git add . && git commit -m "$IMAGE_NAME updated with ${{ steps.echo-sha.outputs.sha_short }} on branch $IMAGE_TAG" && git push origin develop

Fan-Out/Fan-In and Build Matrix Expansion are other strategies that deserve special mention and are invaluable when defining CI/CD pipelines.
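As a hedged sketch (the job names, service names, and make targets are hypothetical), fan-out/fan-in in GitHub Actions is essentially a matrix job followed by a job that needs every leg of that matrix:

jobs:
  test:
    # fan out: one job per service via the matrix
    strategy:
      matrix:
        service: [auth, pipelines, data-producer]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run unit tests for a single service
        run: make test SERVICE=${{ matrix.service }}
  integrate:
    # fan in: runs once, only after every matrix leg has succeeded
    needs: [test]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run the cross-service integration suite
        run: make integration-test

Build Matrix Expansion follows the same idea, with the matrix generated dynamically (for example, from the JSON output of an earlier job) rather than hard-coded.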

Argo — Rollout Strategies

We have covered Argo CD, and yet Argo Rollouts deserves a diversion!

Argo Rollouts can be configured to use metrics for automated decision-making during Canary and Blue/Green deployments, enabling a data-driven approach that can be consistently applied across environments.
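For instance, a Prometheus-backed AnalysisTemplate can encode the success criteria that a Rollout consults during promotion. This is a minimal sketch; the metric name, Prometheus address, and query are assumptions:

apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  args:
    - name: service-name
  metrics:
    - name: success-rate
      interval: 1m
      failureLimit: 3
      # promote only while at least 95% of requests succeed
      successCondition: result[0] >= 0.95
      provider:
        prometheus:
          address: http://prometheus.monitoring.svc.cluster.local:9090
          query: |
            sum(rate(http_requests_total{service="{{args.service-name}}",status!~"5.."}[5m]))
            /
            sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))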

Blue/Green Deployments

In a Blue/Green deployment, two separate environments are maintained — Blue for the currently live version and Green for the new one. Upon successful testing of the Green environment, traffic is shifted almost instantaneously.

  1. Zero Downtime: Switching between Blue and Green happens instantly, ensuring no service interruptions.
  2. Immediate Rollback: If issues occur, reverting to the Blue environment is swift and straightforward.

Argo Rollouts manages Blue/Green deployments by provisioning separate active and preview ReplicaSets and automating the traffic switch through Kubernetes Service objects.
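As a minimal sketch (the Rollout and Service names are hypothetical), a Rollout using the blueGreen strategy declares an active and a preview Service and, with auto-promotion disabled, holds the switch until the Green side has been validated:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: auth-rollout
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: xxx/xxx/auth/auth:latest
  strategy:
    blueGreen:
      activeService: auth-active       # Blue: the Service receiving live traffic
      previewService: auth-preview     # Green: the Service pointing at the new ReplicaSet
      autoPromotionEnabled: false      # wait for manual or metric-driven promotion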

Canary Deployments

In Canary deployments, the new version is gradually rolled out to a small subset of users before making it available to everyone.

  1. Risk Mitigation: By exposing the new version to a limited user base initially, you can catch issues early.
  2. Fine-Grained Control: The rollout can be adjusted based on real-time metrics and user feedback.

Argo Rollouts can manage Canary deployments by gradually adjusting the weight between old and new versions based on predefined metrics and analysis.
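A canary strategy slots into the same Rollout sketch in place of the blueGreen block; in this fragment (weights and pause durations are illustrative) the rollout pauses, runs the success-rate analysis shown earlier, and only then increases traffic to the new version:

  strategy:
    canary:
      steps:
        - setWeight: 20                # route 20% of traffic to the new version
        - pause: { duration: 5m }      # observe before proceeding
        - analysis:                    # gate further promotion on live metrics
            templates:
              - templateName: success-rate
            args:
              - name: service-name
                value: auth
        - setWeight: 50
        - pause: { duration: 5m }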

Rolling Deployments

In Rolling (default) deployments, the new version replaces the old version instance-by-instance, ensuring that at least one instance of the application is always available.

  1. Resource Efficiency: Does not require duplicating the entire environment.
  2. Continuous Availability: At least one instance of the application remains live throughout the process.

With Rolling deployments, Argo CD simply syncs the standard Kubernetes Deployment, and the Deployment controller replaces pods incrementally, ensuring high availability during the transition.
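For completeness, a rolling update needs nothing beyond the standard strategy fields of a Deployment spec; in this fragment (the values are illustrative) surge is capped at one pod and no pod may become unavailable during the transition:

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra pod above the desired count
      maxUnavailable: 0      # never drop below the desired replica count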

By leveraging Argo’s rich arsenal of deployment strategies, teams can not only expedite their development process but also make it more resilient and aligned with production requirements.

Kubernetes: The Great Equalizer

Kubernetes Pre-Requisites for Parity

Kubernetes serves as a remarkable tool in achieving Parity between environments.

  • Declare IaC Intent: The declarative manifests used in Kubernetes ensure that the same configurations are applied across different environments.
  • ConfigMaps and Secrets: Recall our discussion on Config in Kubernetes. Using ConfigMaps and Secrets, you can externalize the configurations, ensuring that the application code remains environment-agnostic.
  • Persistent Storage, the Double-Edged Sword: As we discussed in our last episode, persistent storage plays a crucial role in stateful applications, but it also introduces a layer of complexity in maintaining Dev/Prod Parity. Kubernetes’ StatefulSets and PersistentVolumeClaims (PVCs) can be orchestrated so that development and production have similar data persistence characteristics, albeit scaled differently (see the sketch after this list).
  • The Advantage of Being Stateless: In line with our previous discussion on Disposability, maintaining stateless services has eased the burden of managing Dev/Prod Parity. This statelessness, achieved via externalizing configurations and persistent storage, has streamlined our deployment and scaling operations.
  • Observability: Prometheus, Grafana and Jaeger with OpenTelemetry complete the loop!
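Referring back to the persistent-storage point above, here is a minimal sketch (the names, image, and sizes are hypothetical) of a StatefulSet whose volumeClaimTemplates give every environment the same persistence shape, while an overlay patch scales the storage request and replica count for production:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 1                          # bumped up by the production overlay
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:6
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi               # e.g. 1Gi in development, patched larger in production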

Platform Engineering: Achieving Parity

The quest for Dev/Prod Parity has led to the rise of Platform Engineering, a discipline focused on creating a seamless developer experience by abstracting the complexities of underlying infrastructure and services.

Deployment Scheme

The ability to “shift left” unit and integration tests and “shift down” the intricacies of GitOps, Git / Argo Workflows, and IaC shields developers from cognitive overload and helps them focus on what they do best: delivering features, faster and more securely!

Elevating Developer Experience Through Platform Design

When designing Developer Platforms, the Principle of Orthogonality is of vital importance. Orthogonality is a software design principle for writing components in such a way that changing one component doesn’t affect the others. It combines two other principles: strong cohesion and loose coupling.

  • Strong Cohesion: Strong Cohesion refers to the practice of encapsulating related functionalities within a single component or module. A strongly cohesive component is one that performs a specific set of tasks that are closely related to each other. In a Kubernetes world, think of a single microservice managing user authentication; it encapsulates all the operations related to this function — validating credentials, generating tokens, and maintaining session states.
  • Loose Coupling: Loose coupling is the practice of minimizing the dependencies between different components. Each component should have little to no knowledge of the inner workings of others. In a Kubernetes ecosystem, this might translate to using well-defined APIs for inter-service communication or employing a message queue for asynchronous tasks.

Orthogonality isn’t confined merely to the realm of code; its tenets are critically applicable across the architectural landscape of any software ecosystem.

Strong cohesion and loose coupling are great organizational traits too, allowing teams to work together yet remain autonomous!

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

— Melvin E. Conway

Whether you’re crafting Infrastructure as Code (IaC) templates, configuring Argo projects for CI/CD, or setting up service meshes in Kubernetes, the dual pillars of Strong Cohesion and Loose Coupling should guide your design choices.

Cloud Native Sherpas — Paving the Golden Path

Just as Sherpas guide trekkers along the most efficient and safest routes up a mountain, the Golden Path serves a similar purpose for developers. It outlines the optimum routes — technologies, frameworks, and practices — to reach the summit of a project successfully. Both aim to remove obstacles and uncertainties, providing a tested and reliable path to the desired destination, whether it’s the peak of a mountain or the completion of an MVP.

Internal Developer Platforms (IDPs) and Platform Engineering have become the fulcrums for implementing Paved Paths effectively. These platforms provide the necessary abstractions and automation, allowing developers to shift their focus back to coding while still benefiting from optimized workflows.

  1. Dev-First Kubernetes: The evolution toward a centralized developer platform signifies a shift from an ops-first to a dev-first approach in Kubernetes ecosystems. This aligns perfectly with the concept of Paved Paths, serving both operational efficiency and developer experience.
  2. Abstractions, Not Distractions: The key to a successful Paved Path strategy lies in offering the right level of abstraction. Developers should be enabled to ‘shift left’ in responsibilities without being sidetracked by operational complexities.

“Freedom is not so much the absence of restrictions as finding the right ones, the liberating restrictions” — Timothy Keller

Lean and Mean: The Minimalist Approach to Tooling

In our quest for 12-factor compliant microservices, we’ve deliberately opted for a lean toolset, avoiding external tools and platforms (pack only what you need when trekking). This minimalist approach not only reduces the cognitive load for our developers but also minimizes the potential points of failure.

The Future is Paved

As the developer experience becomes an increasingly business-critical requirement, Paved Paths, facilitated by robust Platform Engineering practices, have become the norm rather than the exception. These paths not only optimize individual developer workflows but also catalyze organizational transformation, making cloud-native development more accessible and manageable across diverse industries.

Tangible Outcomes

  • Secure and automated Argo application creation with Argo CLI.
  • GitOps driven Organization wide GitHub Workflows with the flexibility to adapt per repository.
  • Single-Click preview environment setup.
  • Observability deep links per microservice (with leading and lagging traces per function!) with telemetry instrumentation.
  • Seamless feature rollout with Feature Flags per environment; Feature Flags Platform as an off-shoot!
  • Self-Service Developer Platform with Service Catalogs.
  • Release Management automation, with rollbacks, and Observability!

more …

Next Up

Next we conclude our 12-factor series by discussing Logging and Admin Processes. Again, our Cloud Native toolset shines in the coming episode!

Reactive!

On display is our commitment to the Reactive Manifesto. It is not merely rhetorical; it’s deeply ingrained in our technological ethos, shaping not just our conversations but our codebases, architectures, and long-term strategies.

Here is an EP worth the vinyl from the Hessle Audio stable, doused in microtonality: Toumba — Petals
