
TimescaleDB Deployment Script – Admin Guide

This document explains the TimescaleDB deployer script: how to run it, what it installs, and how your centrally deployed tx-server (a Kubernetes Deployment with two containers, tx-server and redis) should point at the database it sets up.


What the script does

  • Uses Helm to install TimescaleDB in either:
    • single mode → timescale/timescaledb-single
    • multinode mode → timescale/timescaledb-multinode
  • Generates a temporary values.yaml with your chosen options (credentials, storage, resources, PgBouncer, metrics, S3 backups/tuning).
  • Applies helm upgrade --install, waits for readiness, and prints connection tips.
  • Provides status, uninstall, and port-forward utilities.

Prereqs: kubectl, helm, a Kubernetes StorageClass, and a namespace (defaults to timescale).
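
A quick preflight sketch (the script may add the Helm repo itself; adding it manually is harmless):

kubectl version --client && helm version
kubectl get storageclass
helm repo add timescale https://charts.timescale.com
helm repo update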


Quick start

# Single-node TimescaleDB: Postgres on ClusterIP, PgBouncer on its default
# NodePort, basic resources/persistence
./deploy_timescaledb.sh \
  --action install \
  --mode single \
  --namespace tx \
  --release tsdb \
  --service-type ClusterIP \
  --with-pgbouncer true \
  --data-pvc-size 100Gi \
  --wal-pvc-size 50Gi \
  --cpu-request 500m --mem-request 1Gi \
  --cpu-limit 2 --mem-limit 4Gi

After it finishes:

  • Service name (single): tsdb in namespace tx. DSN for your tx-server: postgresql://postgres:postgres@tsdb.tx.svc.cluster.local:5432/postgres
  • If you enabled PgBouncer, prefer connecting via PgBouncer service (e.g., tsdb-pgbouncer.tx.svc.cluster.local:6432).
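
To sanity-check the DSN from inside the cluster, a throwaway psql Pod works (a sketch; the image tag and credentials are the defaults above):

kubectl -n tx run psql-check --rm -it --image=postgres:16 -- \
  psql postgresql://postgres:postgres@tsdb.tx.svc.cluster.local:5432/postgres -c 'SELECT version();'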

How your tx-server Deployment uses this

Your tx-server Kubernetes YAML will have two containers: the tx-server image and an in-pod redis container. Point the app to:

  • Redis (sidecar): REDIS_URL=redis://localhost:6379/0
  • Timescale (created by this script):
    • Single: PG_DSN=postgresql://postgres:postgres@tsdb.tx.svc.cluster.local:5432/postgres
    • Multinode (access node): PG_DSN=postgresql://postgres:postgres@tsdb-accessnode.tx.svc.cluster.local:5432/postgres

That's all your tx-server needs to start ingesting and serving.
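
If you prefer to wire these values imperatively, a sketch (assumes the Deployment and app container are both named tx-server):

kubectl -n tx set env deployment/tx-server -c tx-server \
  REDIS_URL=redis://localhost:6379/0 \
  PG_DSN=postgresql://postgres:postgres@tsdb.tx.svc.cluster.local:5432/postgres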


Script actions

--action install|uninstall|status|port-forward
  • install → Helm install/upgrade and wait.
  • uninstall → Helm uninstall the release (PVCs/namespace left intact).
  • status → Show pods, services, and Helm releases.
  • port-forward → For local testing, forwards the Postgres access node on :5432.
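
Local testing typically pairs port-forward with psql, for example (assumes the defaults above and psql installed locally):

./deploy_timescaledb.sh --action port-forward --namespace tx --mode single &
psql postgresql://postgres:postgres@localhost:5432/postgres -c 'SELECT 1;'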

Modes and what gets installed

--mode single | multinode
  • single:
    • Chart: timescale/timescaledb-single
    • One stateful primary (optional standbys if --standby-replicas > 0)
    • Optional PgBouncer service
  • multinode:
    • Chart: timescale/timescaledb-multinode
    • One access node + N data nodes
    • Optional PgBouncer service targeting the access node
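
After a multinode install you should see one access-node Pod plus N data-node Pods; exact Pod and Service names depend on the chart:

kubectl -n tx get pods,svc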

Key flags (by category)

Core & bootstrap

| Flag | Default | Meaning |
|------|---------|---------|
| --namespace | timescale | Kubernetes namespace |
| --release | timescaledb | Helm release name |
| --mode | single | single or multinode |
| --superuser | postgres | DB superuser |
| --superuser-password | postgres | DB superuser password |
| --app-db | appdb | App DB created at bootstrap |
| --app-user | app | App user created |
| --app-password | app | App user password |

Exposure

| Flag | Default | Meaning |
|------|---------|---------|
| --service-type | NodePort | Service type for Postgres/PgBouncer (ClusterIP/NodePort/LoadBalancer) |
| --pg-nodeport | 31050 | NodePort for Postgres if NodePort |

PgBouncer

| Flag | Default | Meaning |
|------|---------|---------|
| --with-pgbouncer | true | Enable PgBouncer |
| --pgbouncer-service-type | NodePort | PgBouncer service type |
| --pgbouncer-nodeport | 31051 | PgBouncer NodePort |

Storage

| Flag | Default | Meaning |
|------|---------|---------|
| --storage-class | "" | PVC storageClass (optional) |
| --data-pvc-size | 100Gi | Data volume size |
| --wal-pvc-size | 50Gi | WAL volume size |

Resources (access/single node)

| Flag | Default | Meaning |
|------|---------|---------|
| --cpu-request | 500m | CPU request |
| --mem-request | 1Gi | Memory request |
| --cpu-limit | 2 | CPU limit |
| --mem-limit | 4Gi | Memory limit |

Single-node HA

| Flag | Default | Meaning |
|------|---------|---------|
| --standby-replicas | 0 | Async standby count (if the chart supports it) |

Multinode topology

| Flag | Default | Meaning |
|------|---------|---------|
| --data-nodes | 2 | Number of data nodes |
| --data-node-data-size | 200Gi | Data node data PVC |
| --data-node-wal-size | 50Gi | Data node WAL PVC |
| --data-node-cpu-request | 500m | Data node CPU request |
| --data-node-mem-request | 1Gi | Data node memory request |
| --data-node-cpu-limit | 2 | Data node CPU limit |
| --data-node-mem-limit | 4Gi | Data node memory limit |

PostgreSQL/Timescale tuning

| Flag | Default |
|------|---------|
| --max-connections | 200 |
| --shared-buffers | 2GB |
| --work-mem | 16MB |
| --maintenance-work-mem | 256MB |
| --effective-cache-size | 6GB |
| --wal-level | replica |
| --max-wal-size | 4GB |
| --checkpoint-timeout | 15min |
| --timescaledb-telemetry | off |
| --timescaledb-tune | true |

Metrics & backups

| Flag | Default | Meaning |
|------|---------|---------|
| --with-metrics | true | Enable exporter if the chart exposes it |
| --backup-enabled | false | Enable S3 backups/archiving |
| --s3-bucket | | Bucket name |
| --s3-endpoint | | S3/MinIO endpoint (host:port) |
| --s3-region | us-east-1 | Region |
| --s3-access-key | | Access key |
| --s3-secret-key | | Secret key |
| --s3-insecure | false | true for HTTP/self-signed |
| --s3-prefix | "" | Optional path prefix |
| --backup-retention-days | 7 | Retention policy |

If --backup-enabled true, the script validates S3 parameters before install.
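
If you want to verify the bucket yourself before installing, one option is the AWS CLI pointed at your endpoint (assumes the CLI is available; values match the MinIO example below):

AWS_ACCESS_KEY_ID=minioadmin AWS_SECRET_ACCESS_KEY=minioadmin \
  aws --endpoint-url http://minio.minio.svc.cluster.local:9000 s3 ls s3://ts-backups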


What the script generates under the hood

  • values.yaml builders in a temp dir:
    • make_single_values() for timescaledb-single
    • make_multinode_values() for timescaledb-multinode
  • Shared blocks:
    • make_common_pg_block() for PostgreSQL parameters and Timescale knobs
    • make_backup_block() for S3 backups (pgBackRest/WAL-G selectors exposed by the chart)
  • Temporary files are cleaned up automatically when the script exits.
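
To inspect exactly which values the script applied, Helm can echo them back after install:

helm -n tx get values tsdb          # values supplied by the script
helm -n tx get values tsdb --all    # merged with chart defaults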

Example runs

1) Single-node with PgBouncer and persistence

./deploy_timescaledb.sh \
  --action install \
  --mode single \
  --namespace tx \
  --release tsdb \
  --service-type ClusterIP \
  --with-pgbouncer true \
  --data-pvc-size 200Gi --wal-pvc-size 100Gi \
  --storage-class rook-ceph-block \
  --max-connections 400 --shared-buffers 4GB

Connect your tx-server with PG_DSN=postgresql://postgres:postgres@tsdb.tx.svc.cluster.local:5432/postgres, or via PgBouncer at tsdb-pgbouncer.tx.svc.cluster.local:6432.

2) Single-node + S3 backups (MinIO)

./deploy_timescaledb.sh \
  --action install \
  --mode single \
  --namespace tx \
  --release tsdb \
  --backup-enabled true \
  --s3-bucket ts-backups \
  --s3-endpoint minio.minio.svc.cluster.local:9000 \
  --s3-access-key minioadmin \
  --s3-secret-key minioadmin \
  --s3-insecure true
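
To confirm archiving started, you can query the backup tool inside the primary Pod; this sketch assumes the chart runs pgBackRest and labels the primary with role=master, which varies by chart version:

kubectl -n tx exec -it "$(kubectl -n tx get pod -l release=tsdb,role=master -o name | head -n1)" -- pgbackrest info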

3) Multinode, 3 data nodes, LoadBalancer services

./deploy_timescaledb.sh \
  --action install \
  --mode multinode \
  --namespace tx \
  --release tsdb \
  --data-nodes 3 \
  --service-type LoadBalancer \
  --with-pgbouncer true \
  --data-node-data-size 300Gi --data-node-wal-size 100Gi

Connect your tx-server with PG_DSN=postgresql://postgres:postgres@tsdb-accessnode.tx.svc.cluster.local:5432/postgres.
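
Once running, you can confirm the data nodes are attached via the standard multinode catalog view (using a throwaway psql Pod, as in the quick start):

kubectl -n tx run psql-check --rm -it --image=postgres:16 -- \
  psql postgresql://postgres:postgres@tsdb-accessnode.tx.svc.cluster.local:5432/postgres \
  -c 'SELECT node_name FROM timescaledb_information.data_nodes;'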


Operational commands

# Status
./deploy_timescaledb.sh --action status --namespace tx

# Port-forward Postgres locally
./deploy_timescaledb.sh --action port-forward --namespace tx --mode single
# or for multinode: forwards the access node

# Uninstall (keeps namespace and PVCs)
./deploy_timescaledb.sh --action uninstall --namespace tx --release tsdb

How tx-server + in-pod Redis fit together

Your tx-server Deployment runs two containers:

  • tx-server (listens on 8080, runs the REST API and the Redis consumer worker)
  • redis (sidecar queue; simplifies networking and reduces latency)

Typical env for the tx-server container:

REDIS_URL=redis://localhost:6379/0
PG_DSN=postgresql://postgres:postgres@tsdb.tx.svc.cluster.local:5432/postgres
TX_STREAM=tx_queue
TX_GROUP=tx_workers
TX_BATCH_SIZE=200
TX_MAX_WAIT_S=1.5

The Service for this Deployment exposes only the tx-server container port (e.g., 8080). Agents submit transactions via the SDK/REST; the server enqueues them to the local Redis sidecar, and the worker flushes batches to the TimescaleDB installed by this script.
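
A quick way to watch the queue, assuming the Deployment and sidecar container names above:

kubectl -n tx exec deploy/tx-server -c redis -- redis-cli XLEN tx_queue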


Troubleshooting

  • Pods not ready: kubectl -n tx describe pod <pod> and check PVC binding and image pulls.
  • No storage class: set --storage-class to a valid class or create a default StorageClass.
  • Connection issues from tx-server:
    • Verify the PG_DSN host uses the <service>.<namespace>.svc.cluster.local form.
    • If using PgBouncer, ensure its service name/port and credentials are correct.
  • Backups fail to start: confirm the S3 endpoint, credentials, and --s3-insecure true when using HTTP MinIO.
  • Performance:
    • Increase --shared-buffers, --max-connections, and PVC IOPS according to workload.
    • Use PgBouncer for connection pooling; point the app at the PgBouncer service.

Deploying tx-queue (Redis Streams)

This doc explains how to deploy and operate the tx-queue, the Redis Streams backbone that buffers writes for the Transactions Server. We assume you already have a Kubernetes manifest file for the queue (referred to below as tx-queue.yaml).


Overview

  • Purpose: absorb write bursts from agents and decouple the tx-server from the database.
  • Technology: Redis Streams (XADD, consumer groups).
  • Ownership: centrally operated alongside the tx-server; the tx-server reads/writes the queue.

Two common topologies:

  1. Sidecar queue (in the tx-server Pod): simplest; Redis runs in the same Pod as the server.
  2. Shared queue Service (separate): one Redis instance backing multiple tx-server Pods or namespaces.

This guide focuses on the separate/shared queue deployment using tx-queue.yaml.


Prerequisites

  • Kubernetes cluster and kubectl access
  • A namespace for the stack (examples use tx)
  • StorageClass for persistence (recommended for Redis durability)
  • Your tx-queue.yaml manifest

Create the namespace if needed:

kubectl create namespace tx

What's inside tx-queue.yaml

Typical contents (may vary by your file):

  • A StatefulSet (recommended) or Deployment running redis with:
    • a command that loads a redis.conf (optional)
    • env or args to set a password (require auth)
    • volumeClaimTemplates (StatefulSet) or PVCs for persistence
  • A Service exposing Redis on port 6379 (ClusterIP by default)
  • Optionally:
    • a ConfigMap with redis.conf
    • a Secret for REDIS_PASSWORD
    • a NetworkPolicy restricting access to tx-server Pods

If your file uses a Deployment instead of a StatefulSet, ensure a PVC is still mounted at /data and the Pod keeps a stable identity (e.g., do not use emptyDir in production).
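
For orientation, a minimal sketch of what such a manifest might contain (names, labels, and sizes are illustrative assumptions; your tx-queue.yaml is authoritative):

cat <<'EOF' > tx-queue.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tx-queue
spec:
  serviceName: tx-queue
  replicas: 1
  selector:
    matchLabels: {app: tx-queue}
  template:
    metadata:
      labels: {app: tx-queue}
    spec:
      containers:
        - name: redis
          image: redis:7
          # $(REDIS_PASSWORD) is expanded by Kubernetes from the env var below
          args: ["--appendonly", "yes", "--requirepass", "$(REDIS_PASSWORD)"]
          env:
            - name: REDIS_PASSWORD
              valueFrom: {secretKeyRef: {name: tx-queue-redis, key: REDIS_PASSWORD}}
          ports: [{containerPort: 6379}]
          volumeMounts: [{name: data, mountPath: /data}]
  volumeClaimTemplates:
    - metadata: {name: data}
      spec:
        accessModes: [ReadWriteOnce]
        resources: {requests: {storage: 10Gi}}
---
apiVersion: v1
kind: Service
metadata:
  name: tx-queue
spec:
  selector: {app: tx-queue}
  ports: [{port: 6379, targetPort: 6379}]
EOF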


Key configuration knobs

Environment variables for tx-server (so it can reach this queue):

  • REDIS_URL=redis://:<password>@tx-queue.tx.svc.cluster.local:6379/0
  • Optional:
    • TX_STREAM (default: tx_queue)
    • TX_GROUP (default: tx_workers)
    • TX_STREAM_MAXLEN (default: 100000) – approximate stream trimming
    • consumer/flush tuning (TX_BATCH_SIZE, TX_MAX_WAIT_S, TX_BLOCK_MS…)

Recommended Redis settings (via ConfigMap or args), especially for Streams (see the sketch after this list):

  • maxmemory sized to your workload and persistence strategy
  • maxmemory-policy: prefer noeviction for a queue; allkeys-lru or volatile-lru only if you can tolerate evicted entries
  • appendonly yes for durability (AOF)
  • auto-aof-rewrite-percentage and auto-aof-rewrite-min-size tuned for disk usage
  • requirepass <password> (or ACLs via an aclfile)
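
A sketch of shipping these settings as a ConfigMap (the file and ConfigMap names are assumptions; mount it and point redis at it in your manifest):

cat <<'EOF' > redis.conf
maxmemory 1gb
maxmemory-policy noeviction
appendonly yes
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
requirepass strong-redis-pass
EOF
kubectl -n tx create configmap tx-queue-redis-conf --from-file=redis.conf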

Deploy

Apply the manifest:

kubectl -n tx apply -f tx-queue.yaml

Confirm pods and service:

kubectl -n tx get pods -l app=tx-queue
kubectl -n tx get svc tx-queue

If you use a Secret for the Redis password, ensure it's created before applying tx-queue.yaml (or embedded in the file):

kubectl -n tx create secret generic tx-queue-redis \
  --from-literal=REDIS_PASSWORD='strong-redis-pass'

Point the tx-server to this queue (in the tx-server Deployment):

REDIS_URL=redis://:strong-redis-pass@tx-queue.tx.svc.cluster.local:6379/0

Validate

Smoke test from a temp Pod:

kubectl -n tx run redis-client --rm -it --image=redis:7 -- bash
redis-cli -h tx-queue -a strong-redis-pass ping
redis-cli -h tx-queue -a strong-redis-pass XADD tx_queue MAXLEN ~ 100000 * data '{"op":"upsert","record":{"tx_id":"tx_test","timestamp":1725960000}}'
redis-cli -h tx-queue -a strong-redis-pass XREAD COUNT 1 STREAMS tx_queue 0-0

Check tx-server logs to confirm the worker flushes the batch to DB.
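
To go a step beyond XREAD, inspect the consumer group the worker uses (group name from TX_GROUP):

redis-cli -h tx-queue -a strong-redis-pass XINFO GROUPS tx_queue
redis-cli -h tx-queue -a strong-redis-pass XPENDING tx_queue tx_workers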