SambaStack v0.5.17 Release
Release Date: April 8, 2026

This release introduces support for the SambaRack SN40L-16 configuration with 4TB of DDR memory.

New features and enhancements
Cluster-Level Memory Management

Adds support for declaring cluster-wide DDR memory limits for SambaRack SN40L-16 via an environment variable in sambastack.yaml, enforced by the bundle validation tool.
- The default memory limit is 12TB. Update this value to support SambaRack SN40L-16 with 4TB of DDR.
- Set the DDR_PER_RDU_GB environment variable: the default is 768 (12TB per node); set it to 256 for 4TB per node. For more details, see the sambastack.yaml reference.
- The memory limit applies to all SambaRack nodes in the cluster.
- The bundle validation tool enforces the limit at runtime. Configurations exceeding the limit fail with an informative error and must be refactored by removing model configurations.
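The setting above can be sketched as a sambastack.yaml fragment. The env key nesting shown here is an assumption; only the DDR_PER_RDU_GB variable and its values come from these notes:

```yaml
# Hypothetical sambastack.yaml fragment -- the env: nesting is assumed.
env:
  # Default is 768 (12TB of DDR per node); use 256 for SN40L-16 nodes
  # with 4TB of DDR. The bundle validation tool enforces this limit.
  DDR_PER_RDU_GB: "256"
```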
Known issues
Inventory Check Shows Degraded Status for 4TB Memory Configurations

Running snfadm inventory shows "degraded" status for RDUs in nodes with 4TB memory configurations.
- Impact: Cosmetic only. Does not affect operation.
- Resolution: Expected to be resolved in a future release.
SambaStack v0.5.14 Release
Release Date: April 1, 2026

This release introduces simplified checkpoint discovery, new model support (MiniMax-M2.5, Agentic RAG bundle), enhanced installation verification tools, and multiple API enhancements for improved OpenAI compatibility.

For full deployment details, bundle configurations, and context length options for all models and bundles mentioned below, see Supported models and bundles.
New features and enhancements
Checkpoint Path Discovery via Model CRs

Checkpoint paths are now discoverable through the Model Custom Resource (CR), eliminating the need for customers to manually locate checkpoint paths in configuration files. Use kubectl to view Model CRs, which now contain checkpoint paths.
- Model CRs now include checkpoint path information for all supported models.
- Supports multiple checkpoints for different model configurations.
- Backwards compatible with existing bundle configurations - checkpoint paths in bundle CRs override Model CR paths if specified.
- Works with on-prem and air-gapped deployments.
- Models can be discovered using kubectl -n <namespace> get models.
- Bundles can be discovered using kubectl -n <namespace> get bundles.
Agentic RAG Bundle

Added the us-agentic-rag-1-1 bundle, a multi-model bundle optimized for retrieval-augmented generation (RAG) workflows. It contains the following model configs:
gpt-oss-120b
- Seq Length: 32K, BS: 4
- Seq Length: 64K, BS: 2
- Seq Length: 128K, BS: 2
Llama-4-Maverick-17B-128E-Instruct
- Seq Length: 8K, BS: 1
- Seq Length: 16K, BS: 1
Meta-Llama-3.3-70B (Target) / Meta-Llama-3.2-1B (Draft)
- Seq Length: 4K, BS: 1, 4, 8, 16, 32
- Seq Length: 8K, BS: 1, 4, 8
- Seq Length: 16K, BS: 1, 4
- Seq Length: 32K, BS: 1, 4
- Seq Length: 64K, BS: 1
- Seq Length: 128K, BS: 1
Meta-Llama-3.1-8B-Instruct
- Seq Length: 4K, BS: 1, 4, 16, 32
- Seq Length: 8K, BS: 1, 4, 16, 32
- Seq Length: 16K, BS: 1, 4, 8
E5-Mistral-7B-Instruct
- Seq Length: 4K, BS: 1, 4, 8, 16, 32
MiniMax-M2.5 Model Support

- Checkpoint accessible via your artifact reader service account.
- Customers can include MiniMax-M2.5 in bundles that pass bundle validation and deploy successfully.
- Includes reasoning support.
Installation Verification Tools

- Pre-install script: Validates all hardware, connectivity, and software prerequisites.
- Post-install script: Confirms all SambaStack components are installed and running correctly.
- Clear pass/fail reporting with actionable guidance on failures.
- Scripts maintained and validated against the current SambaStack release.
- Distributed via the sambastack-tools public GitHub repository with README instructions.
Per-Context-Length Queue Depths

- Queue depths can now be configured per context length group using contextGroups in the Service Tier configuration.
- Queue depth controls how many concurrent requests can be queued for a model configuration.
- Lower queue depths for higher context lengths help prevent memory exhaustion and improve overall service stability.
- SambaStack now validates queue depth configuration at request time. Misconfigured models with missing queue depth definitions surface a clear error instead of failing silently.
- The empty string "" in contextLengths matches requests to the base model name without a context length suffix (e.g., DeepSeek-R1-0528). Requests with explicit suffixes like -8k or -128k match their corresponding contextLengths values.
- Context length suffixes (8k, 16k, 32k, etc.) are case-sensitive. Use lowercase k in all configurations. This applies to all models supported by SambaNova.
- The contextGroups field is a sub-component of a model grouping within a service tier.
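A hypothetical Service Tier fragment is sketched below; the field names surrounding contextGroups are illustrative assumptions, and only the contextGroups, contextLengths, and queue depth semantics come from these notes:

```yaml
# Hypothetical service tier fragment -- keys other than
# contextGroups/contextLengths are assumed, not documented here.
models:
  - name: DeepSeek-R1-0528
    contextGroups:
      - contextLengths: [""]           # bare model name, no suffix
        queueDepth: 32
      - contextLengths: ["8k", "16k"]  # lowercase k is required
        queueDepth: 16
      - contextLengths: ["128k"]       # lower depth for long contexts
        queueDepth: 4
```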
Breaking Change: substitutions Field Moved

- The substitutions field has moved from bundles to global in the SambaStack Helm chart.
- This is a breaking change that affects air-gapped and NFS customers. Update your Helm values file before upgrading.
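The move can be illustrated with a before/after Helm values sketch; the keys and values under substitutions are hypothetical:

```yaml
# Before (substitutions under bundles):
bundles:
  substitutions:
    imageRegistry: registry.example.internal   # hypothetical key/value

# After (v0.5.14 and later, substitutions under global):
global:
  substitutions:
    imageRegistry: registry.example.internal
```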
API improvements and fixes
Enhancements to improve OpenAI compatibility across the chat/completions endpoint and a new, non-standard feature to track usage in streaming chunks.
Text Object Support in User Message Content
- Expanded support for text objects in content arrays, matching the OpenAI ChatCompletionsContentPartText specification.
- Enabled for: DeepSeek-R1-0528, Llama-3.3-Swallow-70B-Instruct-v0.4, and MiniMax-M2.5.
Logprobs Support

- Added logprobs field that, when set to true, returns the log probabilities for each generated token.
- Added top_logprobs field that, when set to an integer n, returns the top n log probabilities for each generated token.
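As a sketch, a chat/completions request body using both fields might look like the following. The model name is one listed elsewhere in these notes as logprobs-capable; the base URL, authentication, and HTTP client are deployment-specific and omitted:

```python
import json

# Build a chat/completions request body that asks for log probabilities.
payload = {
    "model": "DeepSeek-R1-0528",  # illustrative; use a model deployed in your bundle
    "messages": [{"role": "user", "content": "Name one prime number."}],
    "logprobs": True,     # return log probabilities for each generated token
    "top_logprobs": 5,    # also return the top 5 alternatives per token
}
body = json.dumps(payload)
```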
Usage Statistics in Streaming Chunks

- Added a non-standard feature to allow users to obtain partial usage statistics in chunks returned in streaming responses.
- This feature is enabled by setting STREAM_USAGE_IN_CHUNKS: true in the replica group section of your custom bundle deployment.
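A hypothetical replica group fragment of a custom bundle deployment is sketched below; only the STREAM_USAGE_IN_CHUNKS setting comes from these notes, and the surrounding keys are assumptions:

```yaml
replicaGroups:
  - name: rg-0                        # hypothetical replica group name
    env:
      STREAM_USAGE_IN_CHUNKS: "true"  # emit partial usage stats in streaming chunks
```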
- tool_choice: none ensures that the model will not see available tools.
- The chat/completions endpoint now rejects invalid message roles. Only user, assistant, system, and tool are accepted.
- The Whisper transcription endpoint now returns descriptive error messages when audio file processing fails, instead of a bare HTTP 400 status code.
Bug fixes
Function Calling Routing Fix

- Fixed an issue where function calling routing did not apply the model name prefix check correctly, causing some models to skip tool routing.
- Fixed the air-gap inventory to include the correct cloudnative-pg image configuration, preventing missing image errors during offline installation.
Known issues
SambaRack Manager Does Not Support 2 PDU Configurations

SambaRack Manager does not currently support configurations with 2 PDUs. Customers using 2 PDU setups should contact SambaNova Support for guidance on alternative configurations.

SambaStack v0.4.8 Release
Release Date: March 10, 2026

This release introduces air-gapped deployment support, custom checkpoint management with NFS storage, swappable model configurations, and multiple API enhancements for improved OpenAI compatibility.

New features and enhancements
SambaStack Air-gapped Support

Added support for air-gapped mode of operation, enabling secure, isolated deployments.
- Install, upgrade, and setup for air-gapped configurations are performed in conjunction with SambaNova support.
- Ongoing administration (Auth, User Management, Custom DB) is designed for self-service and follows the same workflows as on-prem deployments.
Install, setup, port forwarding to access Keycloak UI, and upgrade steps are not documented for air-gapped deployments due to varying customer network configurations. Please work with SambaNova support for these workflows.
Swappable Model Configurations

- By default, all models in bundles can be swapped out of HBM and replaced with other models in DDR memory.
- Use the swappable: <boolean> field in the bundle YAML definition to enable or disable this behavior.
- The default value is true. When set to false, the model remains in HBM and cannot be swapped out, ensuring zero switching time for requests to that model.
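A minimal bundle YAML sketch, assuming a models list at the top level (the model names and surrounding structure are illustrative; only the swappable field itself is documented above):

```yaml
models:
  - name: Meta-Llama-3.3-70B         # illustrative model names
    swappable: false   # pinned in HBM: zero switching time for this model
  - name: Meta-Llama-3.1-8B-Instruct
    swappable: true    # default: may be swapped out of HBM to make room for other models
```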
API improvements and fixes
Enhancements to improve OpenAI compatibility across the chat/completions endpoint.
Text Object Support in User Message Content
- Added support for text objects in content arrays, matching the OpenAI ChatCompletionsContentPartText specification.
- Enabled for: gpt-oss-120b, DeepSeek-V3.1, DeepSeek-V3.1-Terminus, DeepSeek-V3.2, DeepSeek-V3-0324, Qwen3-32B, Qwen3-235B.
- Fixed an issue where response_format=text would throw an error.
- The endpoint now supports all OpenAI formats: text, json_object, json_schema.
- Expanded the temperature range from 0.0–1.0 to 0.0–2.0, matching the OpenAI specification.
- Tools with number-type arguments were previously always returned as floats.
- Integers are now preserved as integers, matching the JSON Schema number specification.
- Added parallel_tool_calls parameter support.
- When set to false, the model will make at most one tool call per response, matching the OpenAI specification.
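A sketch of a request body using the new parameter; the tool definition and model name are illustrative, and transport details (base URL, auth) are omitted:

```python
import json

# Request body asking the model to make at most one tool call per response.
payload = {
    "model": "DeepSeek-V3.1",  # illustrative; use a model from your deployment
    "messages": [{"role": "user", "content": "What's the weather in Tokyo?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "parallel_tool_calls": False,  # at most one tool call per response
}
body = json.dumps(payload)
```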
- Added support for token usage reporting in each chunk of stream.
Known issues
- Parallel Tool Calls with Constrained Decoding.
- The following models return null for logprobs even when logprobs=true or top_logprobs is set. The parameters are accepted without error but have no effect:
  - Llama-4-Maverick-17B-128E-Instruct
  - Whisper-Large-v3
SambaStack initial release
Release Date: September 19, 2025

This release introduces the comprehensive SambaStack documentation suite.

New features and enhancements
SambaStack Guide

Added the SambaStack Guide, providing step-by-step instructions for deploying, configuring, and managing SambaStack.
- Setup, installation, and environment configuration.
- User and authentication management (Keycloak, OIDC).
- Monitoring, logging, and artifact management.
- Bundle and model deployment workflows.
- Common command reference.
Supported Models and Bundles Reference

- Lists all supported models (e.g., Llama 3.3, Llama 4 Maverick, DeepSeek).
- Shows context length, batch size options, and supported features.
- Instructions for using the Model list API to check availability in your environment.
