Open WebUI is a self-hosted, ChatGPT-style workspace with persistent conversation storage and native support for OpenAI-compatible APIs. This guide shows how to connect Open WebUI directly to SambaNova’s OpenAI-compatible endpoints (with an optional LiteLLM proxy) so you can deliver fast inference from a private, customizable UI.
This guide was tested with Open WebUI v0.6.x. If you’re using a different version, some UI paths or features may vary.

Prerequisites

  • SambaNova Cloud account and API key.
  • Python 3.11+ (Open WebUI recommends using the uv runtime manager, but pip works too).
  • Optional: Docker for containerized deployment.
  • Optional: LiteLLM proxy credentials if you plan to route SambaNova traffic through LiteLLM instead of connecting directly.

Setup

  1. Install Open WebUI.
    pip install uv
    uv pip install open-webui
    
  2. Run Open WebUI.
    open-webui serve --port=3000
    
    The server launches at http://localhost:3000. For persistent storage, set DATA_DIR:
    DATA_DIR=~/.open-webui open-webui serve --port=3000
    
  3. Create the admin account.
    1. Open your browser to http://localhost:3000.
    2. Click Get Started.
    3. Follow the onboarding flow to set the admin email, password, and optional team name.
    This account governs model visibility, team access, and connection settings.
    (Screenshot: Open WebUI admin registration panel.)
  4. Connect to SambaNova Cloud.
    1. Click your profile icon and go to Settings → Admin Settings → Connections.
    2. Select and edit the OpenAI-compatible option.
    3. Enter your SambaNova Cloud base URL (https://api.sambanova.ai/v1) and API key.
    4. Click Save.
    Open WebUI automatically fetches the available SambaNova models; to verify the endpoint outside the UI, see the curl sketch after this list.
    (Screenshot: Open WebUI provider registration panel.)
  5. (Optional) Connect through a LiteLLM proxy. If you already run LiteLLM as a unified gateway, add a connection pointing to your proxy host (for example, http://localhost:4000/v1) and use the proxy’s API key. SambaNova-backed models registered in LiteLLM appear alongside the direct connection; a minimal proxy setup sketch follows this list.
  6. Test the connection.
    1. Start a new chat and select a SambaNova model (for example, Llama-4-Maverick-17B-128E-Instruct).
    2. Send a test prompt.
    3. Refresh the page to verify conversation persistence.
    4. Test tagging and export options.
    5. View the Chat Overview diagram to confirm token counts and tool usage.
    (Screenshot: Open WebUI chat panel.)
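
To confirm the SambaNova endpoint and API key outside Open WebUI (steps 4 and 6), a quick curl check works. This is a minimal sketch that assumes your key is exported as SAMBANOVA_API_KEY and uses the example model named above.
    # List the models the key can access (roughly the request Open WebUI makes to populate its model list).
    curl -s https://api.sambanova.ai/v1/models \
      -H "Authorization: Bearer $SAMBANOVA_API_KEY"

    # Send a minimal chat completion.
    curl -s https://api.sambanova.ai/v1/chat/completions \
      -H "Authorization: Bearer $SAMBANOVA_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model": "Llama-4-Maverick-17B-128E-Instruct", "messages": [{"role": "user", "content": "Hello"}]}'

For step 5, the sketch below shows one way to run a LiteLLM proxy that forwards a single alias to SambaNova’s OpenAI-compatible endpoint. The file name, the alias, and the port are illustrative; adapt them to your own LiteLLM deployment.
    # Write an illustrative LiteLLM config that routes one alias to SambaNova.
    cat > config.yaml <<'EOF'
    model_list:
      - model_name: sambanova-maverick                        # alias Open WebUI will display
        litellm_params:
          model: openai/Llama-4-Maverick-17B-128E-Instruct    # generic OpenAI-compatible passthrough
          api_base: https://api.sambanova.ai/v1
          api_key: os.environ/SAMBANOVA_API_KEY
    EOF

    # Start the proxy, then add http://localhost:4000/v1 as the connection URL in Open WebUI.
    litellm --config config.yaml --port 4000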

Chat capabilities

Open WebUI provides several features that enhance your SambaNova experience:
  • Model switching: Switch between SambaNova models mid-conversation without leaving the chat.
  • Multimodal prompts: Send images to SambaNova’s Maverick models or attach documents for quick RAG experiments.
  • Conversation controls: Tag, edit, duplicate, or export (JSON/Markdown) any thread for auditing or sharing.
  • Audio: Enable audio input/output in Settings → Admin Settings → Audio → OpenAI. Enter your SambaNova Cloud base URL, API key, and Whisper model name (Whisper-Large-v3); to check the transcription endpoint directly, see the sketch after this list.
  • Code interpreter: Use the built-in code interpreter to prototype scripts with SambaNova models.
  • Document knowledge base: Upload PDFs or text files so Open WebUI’s built-in RAG features can ground SambaNova responses.
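
If audio features need debugging, the transcription endpoint can be exercised directly. This sketch assumes SambaNova exposes the OpenAI-compatible /v1/audio/transcriptions route for the Whisper-Large-v3 model named above and that sample.wav is a short local recording; adjust to your account if it differs.
    # Transcribe a short local audio file with the same credentials used in the Audio settings.
    curl -s https://api.sambanova.ai/v1/audio/transcriptions \
      -H "Authorization: Bearer $SAMBANOVA_API_KEY" \
      -F model=Whisper-Large-v3 \
      -F file=@sample.wav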

Administration

To access the admin panel, click the user icon in the bottom-left corner and select Admin Panel.

Users and groups

Create groups, assign members, and restrict which models, tools, or documents they can access.

Connections and permissions

Manage available models per team. Promote SambaNova models in model settings to “public” so every user sees the same curated list, or add models to specific user groups.

Feature toggles

Control access per group to plugins, image uploads, audio, and experimental features from Admin Settings.

Tools and community extensions

Browse the Open WebUI community catalog for tools, plugins, and automations that enhance SambaNova-powered workflows. To import remote tools (OpenAPI or MCP), add their hosted URL in Settings → Admin Settings → External Tools. You can make tools public or share them within specific teams.
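
Before registering a remote tool, it can help to confirm that its hosted spec is reachable from the machine running Open WebUI. The URL below is a placeholder for your own tool server’s OpenAPI spec.
    # Placeholder URL; replace with your tool server’s spec endpoint before adding it under External Tools.
    curl -fsS https://tools.example.com/openapi.json -o /dev/null -w 'HTTP %{http_code}\n'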

Troubleshooting

Common issues and their solutions:
  • Models not appearing after connection: Verify your API key is valid and the base URL is correct (https://api.sambanova.ai/v1), then click Save again to refresh the model list.
  • Authentication errors: Regenerate your API key in the SambaNova Cloud console and update the connection settings.
  • Audio features not working: Confirm you’ve set all three fields in Audio → OpenAI: base URL, API key, and the Whisper model name.
  • Conversations not persisting: Ensure you’ve set the DATA_DIR environment variable to a writable directory.

More information