Services
Register, discover, and manage microservices in the service catalog.
Service Catalog
The service catalog is the central registry for all microservices in your organization. It uses Kubernetes labels for automatic discovery, meaning any service deployed with the correct labels is automatically indexed, searchable, and monitored.
The catalog provides a unified view of service metadata, API definitions, deployment status, health checks, and ownership information — all in one place.
Services are discovered automatically from Kubernetes. You do not need to manually register services that are already deployed with the correct labels.
Registering a Service
To register a new service, deploy it using one of the base Helm charts with the required Riven labels. The Helm chart handles setting up the Kubernetes Deployment, Service, Ingress, and ServiceMonitor resources.
TypeScript Service
```yaml
# Helm values for a new TypeScript service
nameOverride: my-service

image:
  repository: <account>.dkr.ecr.us-east-1.amazonaws.com/my-service
  tag: latest

labels:
  riven.dev/service-type: backend
  riven.dev/base-chart: js-service-base
  riven.dev/team: platform

resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```
Python Service
```yaml
# Helm values for a new Python/FastAPI service
nameOverride: my-python-service

image:
  repository: <account>.dkr.ecr.us-east-1.amazonaws.com/my-python-service
  tag: latest

labels:
  riven.dev/service-type: backend
  riven.dev/base-chart: python-service-base
  riven.dev/team: ml-platform

resources:
  requests:
    cpu: 200m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1Gi

env:
  - name: PYTHON_ENV
    value: production
  - name: WORKERS
    value: "4"
```
Once deployed, the service will appear in the Platform catalog within seconds as the Kubernetes watcher picks up the new resources.
Service Configuration
Each service's API contract is defined using Protocol Buffer definitions. These proto files serve as the single source of truth for all RPC endpoints, request/response types, and documentation.
```protobuf
syntax = "proto3";

package myservice.v1;

service MyService {
  // GetStatus returns the current status of the service.
  rpc GetStatus(GetStatusRequest) returns (GetStatusResponse);
}

message GetStatusRequest {}

message GetStatusResponse {
  string status = 1;
  string version = 2;
}
```
After updating proto definitions, run `yarn generate` (TypeScript) or `riven proto generate` to regenerate types and Connect RPC client/server stubs.
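After regeneration, a handler for the `GetStatus` RPC can return the message shape defined above. The sketch below inlines the response interface so the snippet stands alone; in a real service the generated types would be imported and the handler registered on a Connect RPC router, and reading the version from a `SERVICE_VERSION` environment variable is an assumption for illustration.

```typescript
// Response shape from GetStatusResponse in the proto above; in practice this
// interface comes from the generated code, not handwritten.
interface GetStatusResponse {
  status: string;
  version: string;
}

// Handler body for the GetStatus RPC. The version source is an assumption;
// wire this into your Connect RPC router in the real service.
function getStatus(): GetStatusResponse {
  return {
    status: "ok",
    version: process.env.SERVICE_VERSION ?? "dev",
  };
}
```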
Service Labels
Riven uses a set of standard Kubernetes labels to classify and discover services. These labels are required for automatic catalog registration:
| Label | Description | Examples |
|---|---|---|
| `riven.dev/service-type` | The type of service | `backend`, `frontend`, `worker`, `cronjob` |
| `riven.dev/base-chart` | The Helm base chart used | `js-service-base`, `python-service-base` |
| `riven.dev/team` | The owning team | `platform`, `ml-platform`, `infra` |
| `riven.dev/env` | The deployment environment | `development`, `staging`, `production` |
Services missing the `riven.dev/service-type` or `riven.dev/base-chart` labels will not be automatically discovered by the catalog.
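The discovery rule above can be expressed as a small predicate. This is an illustrative sketch of the rule, not the watcher's actual implementation; note that `riven.dev/team` and `riven.dev/env` are deliberately excluded, since only the first two labels gate discovery.

```typescript
// The catalog discovers a service only when both of these labels are present.
const REQUIRED_LABELS = ["riven.dev/service-type", "riven.dev/base-chart"];

// Returns true if a deployment's label map qualifies for automatic discovery.
function isDiscoverable(labels: Record<string, string>): boolean {
  return REQUIRED_LABELS.every((key) => Boolean(labels[key]));
}
```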
Health Checks
Every service registered in the catalog is expected to expose health check endpoints. The base Helm charts automatically configure liveness and readiness probes that the catalog uses to report service health.
```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: http
  initialDelaySeconds: 10
  periodSeconds: 15

readinessProbe:
  httpGet:
    path: /ready
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10
```
The catalog dashboard displays real-time health status for each service, including uptime history, error rates, and the last successful health check timestamp.
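On the service side, the two probe endpoints amount to simple HTTP routes. The sketch below models that routing as a pure function (request path plus current readiness in, HTTP status out); wiring it into an actual HTTP server is omitted, and tracking readiness as a single boolean flag is an assumption about how a service monitors its dependencies.

```typescript
// Status code each probe sees: /healthz answers 200 while the process is
// alive; /ready answers 200 only once dependencies are up, 503 otherwise.
function probeStatus(path: string, ready: boolean): number {
  if (path === "/healthz") return 200;
  if (path === "/ready") return ready ? 200 : 503;
  return 404; // any other path is not a probe endpoint
}
```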
Troubleshooting
Service Not Appearing in Catalog
If your service is deployed but does not appear in the catalog, check the following:
- **Verify labels are set** — Run `kubectl get deployment <name> -n <ns> --show-labels` and confirm all required `riven.dev/*` labels are present.
- **Check pod status** — Run `kubectl get pods -n <ns> -l app=<name>` and confirm pods are `Running`.
- **Inspect watcher logs** — The catalog watcher logs will show discovery events: `kubectl logs -n dev-center -l app=catalog-watcher --tail=50`.
- **Restart the watcher** — If labels were added after deployment, the watcher may need a restart: `kubectl rollout restart deployment/catalog-watcher -n dev-center`.
Health Check Failures
If a service shows as unhealthy in the catalog:
- Confirm the `/healthz` and `/ready` endpoints return `200 OK`.
- Check that the container port matches the probe configuration.
- Review pod events with `kubectl describe pod <pod-name> -n <ns>` for probe failure details.