Services

Register, discover, and manage microservices in the service catalog.

Service Catalog

The service catalog is the central registry for all microservices in your organization. It uses Kubernetes labels for automatic discovery, meaning any service deployed with the correct labels is automatically indexed, searchable, and monitored.

The catalog provides a unified view of service metadata, API definitions, deployment status, health checks, and ownership information — all in one place.

Services are discovered automatically from Kubernetes. You do not need to manually register services that are already deployed with the correct labels.
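Conceptually, the watcher's discovery step is a label filter: index every deployment that carries the required riven.dev labels. A minimal sketch of that logic (the deployment dicts here are illustrative, not the real watcher API):

```python
REQUIRED_LABELS = {"riven.dev/service-type", "riven.dev/base-chart"}

def discover(deployments: list[dict]) -> dict[str, dict]:
    """Index deployments that carry the required riven.dev labels."""
    catalog = {}
    for dep in deployments:
        labels = dep.get("labels", {})
        if REQUIRED_LABELS <= labels.keys():
            catalog[dep["name"]] = {
                "type": labels["riven.dev/service-type"],
                "chart": labels["riven.dev/base-chart"],
                "team": labels.get("riven.dev/team", "unknown"),
            }
    return catalog
```

Deployments without the required labels simply never enter the index, which is why label hygiene matters more than any registration step.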

Registering a Service

To register a new service, deploy it using one of the base Helm charts with the required Riven labels. The Helm chart handles setting up the Kubernetes Deployment, Service, Ingress, and ServiceMonitor resources.

TypeScript Service

values.yaml

```yaml
# Helm values for a new TypeScript service
nameOverride: my-service
image:
  repository: <account>.dkr.ecr.us-east-1.amazonaws.com/my-service
  tag: latest

labels:
  riven.dev/service-type: backend
  riven.dev/base-chart: js-service-base
  riven.dev/team: platform

resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```

Python Service

values-python.yaml

```yaml
# Helm values for a new Python/FastAPI service
nameOverride: my-python-service
image:
  repository: <account>.dkr.ecr.us-east-1.amazonaws.com/my-python-service
  tag: latest

labels:
  riven.dev/service-type: backend
  riven.dev/base-chart: python-service-base
  riven.dev/team: ml-platform

resources:
  requests:
    cpu: 200m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1Gi

env:
  - name: PYTHON_ENV
    value: production
  - name: WORKERS
    value: "4"
```

Once deployed, the service appears in the service catalog within seconds, as the Kubernetes watcher picks up the new resources.

Service Configuration

Each service's API contract is defined using Protocol Buffer definitions. These proto files serve as the single source of truth for all RPC endpoints, request/response types, and documentation.

service.proto

```protobuf
syntax = "proto3";

package myservice.v1;

service MyService {
  // GetStatus returns the current status of the service.
  rpc GetStatus(GetStatusRequest) returns (GetStatusResponse);
}

message GetStatusRequest {}

message GetStatusResponse {
  string status = 1;
  string version = 2;
}
```

After updating proto definitions, run `yarn generate` (TypeScript) or `riven proto generate` to regenerate types and Connect RPC client/server stubs.
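Because Connect's unary protocol is plain HTTP POST with a JSON body at `/<package>.<Service>/<Method>`, you can exercise an endpoint like GetStatus without the generated stubs. A stdlib-only sketch, assuming the service speaks Connect's JSON encoding at `base_url` (a hypothetical address):

```python
import json
import urllib.request

def connect_url(base: str, service: str, method: str) -> str:
    """Connect routes unary RPCs to POST /<package>.<Service>/<Method>."""
    return f"{base}/{service}/{method}"

def get_status(base_url: str) -> dict:
    """Call MyService.GetStatus over Connect's JSON protocol."""
    req = urllib.request.Request(
        connect_url(base_url, "myservice.v1.MyService", "GetStatus"),
        data=json.dumps({}).encode(),  # empty GetStatusRequest
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In application code, prefer the generated stubs; this is only a quick smoke test for a freshly regenerated contract.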

Service Labels

Riven uses a set of standard Kubernetes labels to classify and discover services. These labels are required for automatic catalog registration:

| Label | Description | Examples |
| --- | --- | --- |
| `riven.dev/service-type` | The type of service | `backend`, `frontend`, `worker`, `cronjob` |
| `riven.dev/base-chart` | The Helm base chart used | `js-service-base`, `python-service-base` |
| `riven.dev/team` | The owning team | `platform`, `ml-platform`, `infra` |
| `riven.dev/env` | The deployment environment | `development`, `staging`, `production` |

Services missing the `riven.dev/service-type` or `riven.dev/base-chart` labels will not be automatically discovered by the catalog.
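A quick way to pre-flight a values file before deploying is to diff its labels against the required set. A small hypothetical helper (not part of the Riven CLI):

```python
# The four standard labels the catalog expects, per the table above.
REQUIRED = (
    "riven.dev/service-type",
    "riven.dev/base-chart",
    "riven.dev/team",
    "riven.dev/env",
)

def missing_labels(labels: dict[str, str]) -> list[str]:
    """Return the required riven.dev labels absent from a label set."""
    return [key for key in REQUIRED if key not in labels]
```

An empty result means the service will be picked up; anything else names exactly the labels to add.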

Health Checks

Every service registered in the catalog is expected to expose health check endpoints. The base Helm charts automatically configure liveness and readiness probes that the catalog uses to report service health.

health-probes.yaml

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: http
  initialDelaySeconds: 10
  periodSeconds: 15

readinessProbe:
  httpGet:
    path: /ready
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10
```

The catalog dashboard displays real-time health status for each service, including uptime history, error rates, and the last successful health check timestamp.

Troubleshooting

Service Not Appearing in Catalog

If your service is deployed but does not appear in the catalog, check the following:

  1. Verify labels are set — Run `kubectl get deployment <name> -n <ns> --show-labels` and confirm all required `riven.dev/*` labels are present.
  2. Check pod status — Run `kubectl get pods -n <ns> -l app=<name>` and confirm pods are Running.
  3. Inspect watcher logs — The catalog watcher logs show discovery events: `kubectl logs -n dev-center -l app=catalog-watcher --tail=50`.
  4. Restart the watcher — If labels were added after deployment, the watcher may need a restart: `kubectl rollout restart deployment/catalog-watcher -n dev-center`.

Health Check Failures

If a service shows as unhealthy in the catalog:

  1. Confirm the `/healthz` and `/ready` endpoints return `200 OK`.
  2. Check that the container port matches the probe configuration.
  3. Review pod events with `kubectl describe pod <pod-name> -n <ns>` for probe failure details.