Microservice configuration made easy
Microservice architecture implies a lot of services with similar configuration.
Each service has an application config, a deployment config, a log template and so on.
Multiply that by 50 services with 5 different environments and you have a mess: with just those three config types, that is 50 × 5 × 3 = 750 files to keep consistent.
How Microconfig can help
Just to show the problem with microservice configuration more explicitly, below are typical configs that we have to support nowadays: two different microservices with their dev and prod configs.
app config:

payment-backend (prod)

  name: payment-backend
  server:
    port: 80
    context: /api
  payment-gateway: https://payment-gateway.com
  database:
    type: Postgres
    min-pool-size: 10
    max-pool-size: 50
    url: jdbc:postgres://20.20.20.20:5432/payments
  monitoring:
    base-path: /monitoring
    endpoints: info, health, ready, prometheus

payment-frontend (prod)

  name: payment-frontend
  server:
    port: 80
    minThreads: 10
    maxThreads: 100
  payment-backend:
    host: http://payment-backend.com
    path: /api
    timeoutMs: 180000
  monitoring:
    base-path: /monitoring
    endpoints: info, health, ready, prometheus

payment-backend (dev)

  name: payment-backend
  server:
    port: 80
    context: /api
  payment-gateway: http://gateway-mock.local
  database:
    type: Postgres
    max-pool-size: 10
    url: jdbc:postgres://10.10.10.10:5432/payments
  monitoring:
    base-path: /monitoring
    endpoints: info, health, ready, prometheus, threaddump
    secure: false

payment-frontend (dev)

  name: payment-frontend
  server:
    port: 80
    hotReload: true
    maxThreads: 50
  payment-backend:
    host: https://payment-backend.local
    path: /api
    timeoutMs: 90000
  monitoring:
    base-path: /monitoring
    endpoints: info, health, ready, prometheus, threaddump
    secure: false
deploy config:

payment-backend (prod)

  image: "payment-backend:1.5"
  replicas: 2
  ingress:
    host: http://payment-backend.local
  probes:
    health: /monitoring/health
    ready: /monitoring/ready

payment-frontend (prod)

  image: "payment-frontend:2.1"
  replicas: 3
  ingress:
    host: https://payments.example.com
  probes:
    health: /monitoring/health
    ready: /monitoring/ready

payment-backend (dev)

  image: "payment-backend:latest"
  replicas: 1
  ingress:
    host: http://payment-backend.local
  probes:
    health: /monitoring/health
    ready: /monitoring/ready

payment-frontend (dev)

  image: "payment-frontend:latest"
  replicas: 1
  ingress:
    host: http://payments.local
  probes:
    health: /monitoring/health
    ready: /monitoring/ready
log config:

payment-backend (prod)

  <configuration>
    <appender class="LogstashTcpSocketAppender">
      <destination>30.30.30.30:9600</destination>
      <encoder class="LogstashEncoder">
        <customFields>{"servicename":"payment-backend"}</customFields>
      </encoder>
    </appender>
  </configuration>

payment-frontend (prod)

  <configuration>
    <appender class="LogstashTcpSocketAppender">
      <destination>30.30.30.30:9600</destination>
      <encoder class="LogstashEncoder">
        <customFields>{"servicename":"payment-frontend"}</customFields>
      </encoder>
    </appender>
  </configuration>

payment-frontend (dev)

  <configuration>
    <appender class="FileAppender">
      <file>logs/payment-frontend.log</file>
      <encoder>
        <pattern>%d{HH:mm:ss} %-5level %logger %msg %n</pattern>
      </encoder>
    </appender>
  </configuration>

payment-backend (dev)

  <configuration>
    <appender class="FileAppender">
      <file>logs/payment-backend.log</file>
      <encoder>
        <pattern>%d{HH:mm:ss} %-5level %logger %msg %n</pattern>
      </encoder>
    </appender>
  </configuration>
This might not look scary for only 2 services, but what if you have to manage 50 of them with additional test and staging environments? Check out Features to see how Microconfig can help your project scale up, or head over to Quickstart to see Microconfig applied to this example.
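
To give a taste of the idea: most of the duplication above can be collapsed by declaring shared fragments once and overriding only the values that actually differ per environment. The sketch below is an illustration of that approach, not a verified config. It assumes Microconfig-style #include directives and per-environment override files; the real syntax and file layout are covered in the Quickstart.

components/monitoring/application.yaml

  # the monitoring block every service repeats, declared once
  monitoring:
    base-path: /monitoring
    endpoints: info, health, ready, prometheus

components/payment-backend/application.yaml

  #include monitoring
  name: payment-backend
  server:
    port: 80
    context: /api
  payment-gateway: https://payment-gateway.com
  database:
    type: Postgres
    url: jdbc:postgres://20.20.20.20:5432/payments

components/payment-backend/application.dev.yaml

  # dev overrides only; everything else is inherited from the base file
  payment-gateway: http://gateway-mock.local
  database:
    url: jdbc:postgres://10.10.10.10:5432/payments
  monitoring:
    endpoints: info, health, ready, prometheus, threaddump
    secure: false

With the shared pieces factored out, adding a new service or a new environment means writing only the handful of values that are unique to it.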