OpenSearch is the default log storage and search engine used in the SambaStack monitoring reference architecture. It stores logs and audit information forwarded by Fluent Bit and makes them available for visualization and querying via Grafana or direct API access.
Reference Architecture Note: This setup uses third-party components. Versions, defaults, and command syntax may change over time. Address any issues not specific to SambaStack to the vendor or project that owns that component.

Prerequisites

Before you begin, ensure you have the following:
  • kubectl — Configured with access to your target Kubernetes cluster
  • Helm (latest version) — For deploying the OpenSearch chart
  • jq — For parsing JSON output during verification
  • Storage class — A valid storage class available in your Kubernetes environment for persistent volume provisioning
Deployment Order: Deploy OpenSearch before Fluent Bit, and before Prometheus/Grafana if you want log visualization in Grafana.

Resource requirements

The following are minimum resource recommendations for OpenSearch:
Deployment Type           CPU                Memory          Storage
Single-node (dev/test)    2 cores            4 GB            50 GB
Multi-node (production)   4 cores per node   8 GB per node   100 GB per node
OpenSearch is memory-intensive. For production workloads, allocate roughly half of available memory to the JVM heap, up to a maximum of 32 GB. Set these values in the opensearch-values.yaml file using the esJavaOpts parameter, for example: esJavaOpts: "-Xmx16g -Xms16g".
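The sizing rule above can be expressed as min(total memory / 2, 32) GB. The snippet below is an illustrative helper, not part of the Helm chart; total_mem_gb is an assumed input you would replace with your node's memory.

```shell
# Sketch: derive a heap size of min(total memory / 2, 32) GB and print the
# matching esJavaOpts line for opensearch-values.yaml.
total_mem_gb=64   # assumed node memory for this example
heap_gb=$(( total_mem_gb / 2 ))
if [ "$heap_gb" -gt 32 ]; then heap_gb=32; fi
echo "esJavaOpts: \"-Xmx${heap_gb}g -Xms${heap_gb}g\""
# prints: esJavaOpts: "-Xmx32g -Xms32g"
```

Setting -Xmx and -Xms to the same value avoids heap resizing at runtime.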

Architecture overview

In the SambaStack monitoring stack, OpenSearch serves as the central log repository:
  • Receives logs from Fluent Bit via HTTPS (port 9200)
  • Stores logs in time-based indices (default: logs-7d)
  • Provides search APIs for Grafana dashboards and direct queries
  • Runs as a StatefulSet with persistent storage

Deployment steps

Step 1: Create the monitoring namespace

Start by creating a dedicated namespace for all monitoring components. This namespace is shared by OpenSearch, Fluent Bit, Prometheus, and Grafana. Create a file named monitoring-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
Apply the namespace:
kubectl apply -f monitoring-namespace.yaml

Step 2: Create the admin password secret

OpenSearch requires an initial admin password at install time, which is injected via a Kubernetes secret. First, generate a base64-encoded password:
echo -n 'your-secure-password-here' | base64
Password requirements:
  • At least 8 characters
  • Must include a mix of: uppercase, lowercase, number, and special character
  • Allowed special characters: # @ % _ - + = .
  • Avoid ! and $ as they may cause issues
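The rules above can be checked before encoding the password. The function below is a rough sketch of those rules (length, character classes, the ! and $ exclusion), not the exact validator OpenSearch applies.

```shell
# Sketch of the password rules listed above; illustration only.
check_password() {
  pw=$1
  [ "${#pw}" -ge 8 ] || return 1                    # at least 8 characters
  case $pw in *[A-Z]*) ;; *) return 1 ;; esac       # needs an uppercase letter
  case $pw in *[a-z]*) ;; *) return 1 ;; esac       # needs a lowercase letter
  case $pw in *[0-9]*) ;; *) return 1 ;; esac       # needs a number
  case $pw in *[#@%_+=.-]*) ;; *) return 1 ;; esac  # needs an allowed special char
  case $pw in *'!'*|*'$'*) return 1 ;; esac         # reject ! and $
  return 0
}
check_password 'Str0ng#Pass' && echo "password ok"
# prints: password ok
```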
Save your plaintext password securely. You will need it when configuring Fluent Bit and Grafana in subsequent steps.
Create a file named opensearch-initial-admin-password-secret.yaml, replacing the placeholder with your encoded password:
apiVersion: v1
data:
  OPENSEARCH_INITIAL_ADMIN_PASSWORD: <base64-encoded-password>
kind: Secret
metadata:
  name: opensearch-initial-admin-password
  namespace: monitoring
type: Opaque
Apply the secret:
kubectl apply -f opensearch-initial-admin-password-secret.yaml

Step 3: Configure Helm values

Create a file named opensearch-values.yaml to customize the deployment. Replace <your-storage-class> with your cluster’s storage class name:
replicas: 1

extraEnvs:
- name: OPENSEARCH_INITIAL_ADMIN_PASSWORD
  valueFrom:
    secretKeyRef:
      name: opensearch-initial-admin-password
      key: OPENSEARCH_INITIAL_ADMIN_PASSWORD

persistence:
  enabled: true
  storageClass: <your-storage-class>
To find available storage classes in your cluster, run: kubectl get storageclass
The replicas: 1 setting deploys a single-node cluster suitable for development and testing. For production high-availability deployments, increase the replica count to at least 3 and ensure your storage class supports multi-AZ provisioning.
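For a production-style deployment, the same values file might look like the sketch below. The field names follow the opensearch Helm chart, but the sizes are illustrative (taken from the resource table above); adjust them to your nodes.

```yaml
replicas: 3

extraEnvs:
- name: OPENSEARCH_INITIAL_ADMIN_PASSWORD
  valueFrom:
    secretKeyRef:
      name: opensearch-initial-admin-password
      key: OPENSEARCH_INITIAL_ADMIN_PASSWORD

# Illustrative sizing; see the resource requirements table
esJavaOpts: "-Xmx4g -Xms4g"
resources:
  requests:
    cpu: "4"
    memory: 8Gi

persistence:
  enabled: true
  storageClass: <your-storage-class>
  size: 100Gi
```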

Step 4: Install OpenSearch

Add the official OpenSearch Helm repository and install the chart:
helm repo add opensearch https://opensearch-project.github.io/helm-charts/
helm repo update
helm upgrade --install opensearch opensearch/opensearch \
  -n monitoring \
  -f opensearch-values.yaml

Verification

Once the installation completes, verify that OpenSearch is running correctly.

Check pod status

kubectl -n monitoring get pod opensearch-cluster-master-0
Expected output:
NAME                            READY   STATUS    RESTARTS   AGE
opensearch-cluster-master-0     1/1     Running   0          2m

Test API connectivity

Set up port forwarding to access the OpenSearch API locally:
kubectl -n monitoring port-forward svc/opensearch-cluster-master 9200:9200 &
Retrieve the admin password:
kubectl -n monitoring get secret opensearch-initial-admin-password -o json \
  | jq -r '.data | to_entries[] | "\(.key): \(.value | @base64d)"'
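The jq filter decodes each value under .data with @base64d. If jq is not available, a single value can be decoded with base64 directly; a sample encoded value is used below, and with a live cluster you would pipe in the secret field instead (for example via kubectl's -o jsonpath='{.data.OPENSEARCH_INITIAL_ADMIN_PASSWORD}').

```shell
# Decode a base64-encoded secret value; 'bXktcGFzc3dvcmQ=' is a sample value
# standing in for the field extracted from the Kubernetes secret.
echo 'bXktcGFzc3dvcmQ=' | base64 -d
# prints: my-password
```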
Test the API endpoint:
curl -u admin:<password> -k https://localhost:9200
A successful response looks like this:
{
  "name": "opensearch-cluster-master-0",
  "cluster_name": "opensearch-cluster",
  "cluster_uuid": "eJI1tDhhQbO3fQb4LeZcyw",
  "version": {
    "distribution": "opensearch",
    "number": "3.3.2",
    "build_type": "tar",
    "build_hash": "6564992150e26aaa62d4522a220dfff5188aeb88",
    "build_date": "2025-10-29T22:24:07.450919802Z",
    "build_snapshot": false,
    "lucene_version": "10.3.1",
    "minimum_wire_compatibility_version": "2.19.0",
    "minimum_index_compatibility_version": "2.0.0"
  },
  "tagline": "The OpenSearch Project: OpenSearch"
}
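Beyond the root endpoint, the standard /_cluster/health API reports an overall status field. The helper below is a sketch that extracts that field from a health response and accepts green or yellow; yellow is normal on a single-node cluster, where replica shards cannot be assigned. A sample response is checked here; against a live cluster you would use: curl -u admin:<password> -k https://localhost:9200/_cluster/health

```shell
# Sketch: accept a cluster-health JSON response if its status is green or yellow.
check_health() {
  status=$(printf '%s' "$1" | sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p')
  [ "$status" = "green" ] || [ "$status" = "yellow" ]
}
check_health '{"cluster_name":"opensearch-cluster","status":"yellow","number_of_nodes":1}' \
  && echo "cluster healthy"
# prints: cluster healthy
```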
To stop the port-forward process when finished:
pkill -f "port-forward.*opensearch"

Success criteria

Your OpenSearch installation is complete when:
  • The opensearch-cluster-master-0 pod shows Running status with 1/1 ready
  • The OpenSearch API returns valid cluster information via curl
  • The opensearch-initial-admin-password secret exists in the monitoring namespace

Configuration reference

Parameter                  Default   Description
replicas                   1         Number of OpenSearch nodes. Use 3+ for production HA
persistence.enabled        true      Enable persistent storage for indices
persistence.storageClass   —         Kubernetes storage class for PVCs
Index name                 logs-7d   Default index created by Fluent Bit (configured in Fluent Bit, not OpenSearch)

Troubleshooting

Pod stuck in Pending state

Symptom: opensearch-cluster-master-0 remains in Pending status.
Cause: Usually indicates a PersistentVolumeClaim (PVC) cannot be fulfilled.
Solution:
# Check PVC status
kubectl -n monitoring get pvc

# Check events for details
kubectl -n monitoring describe pvc opensearch-cluster-master-opensearch-cluster-master-0
Verify your storage class exists and has available capacity.

Pod in CrashLoopBackOff

Symptom: Pod repeatedly crashes and restarts.
Cause: Often caused by insufficient memory or a missing password secret.
Solution:
# Check pod logs
kubectl -n monitoring logs opensearch-cluster-master-0

# Verify secret exists
kubectl -n monitoring get secret opensearch-initial-admin-password

Connection refused on port 9200

Symptom: curl returns “Connection refused” even with port-forward active.
Solution:
# Verify port-forward is running
ps aux | grep port-forward

# Check service endpoints
kubectl -n monitoring get endpoints opensearch-cluster-master

Next steps

After OpenSearch is running:
  1. Deploy Fluent Bit — Set up log forwarding to populate OpenSearch with cluster logs. See Log Forwarding with Fluent Bit.
  2. Deploy Prometheus and Grafana — Add metrics collection and visualization. The Grafana deployment includes an OpenSearch datasource for log exploration. See Monitoring with Prometheus and Grafana.

Cleanup

To remove OpenSearch and all associated resources from your cluster, run the following commands. Uninstall the Helm release:
helm uninstall opensearch -n monitoring
Delete the admin password secret:
kubectl delete secret opensearch-initial-admin-password -n monitoring
Delete the PersistentVolumeClaim to free storage:
kubectl -n monitoring delete pvc opensearch-cluster-master-opensearch-cluster-master-0
Deleting the PVC permanently removes all stored log data. Ensure you have backed up any important logs before proceeding.
If you’re removing the entire monitoring stack and no other components remain, delete the namespace:
kubectl delete namespace monitoring
Deleting the namespace removes all resources within it, including Fluent Bit, Prometheus, and Grafana if deployed. Only do this if you intend to remove the entire monitoring stack.