Getting Started with Tyumi
Tyumi is an open source AI assistant platform that gives you access to multiple AI models from different providers through a unified interface, all while keeping your data private and stored locally on your device.
Use Philosophy: Not a Companion
Tyumi is a creative and technical tool — not a buddy, therapist, or romantic partner. If you’re looking for a “companion,” you’re in the wrong place and the maintainer frowns on that usage. Please use Tyumi to build, learn, and make things — not to simulate intimacy. Read the short policy here: Not a Companion.
Setting Up API Keys
To get the most out of Tyumi, you'll need to provide API keys from the AI providers you want to use. These keys allow Tyumi to communicate with the AI services on your behalf.
Why You Need API Keys
API keys are like digital passwords that give you access to AI services. Each provider requires their own key, and you'll only need keys for the services you intend to use. Your keys are stored securely in your browser's local storage and are never sent to any server other than the respective AI provider's servers.
How to Get API Keys
OpenAI API Key
- Visit the OpenAI API Keys page
- Create an account, or sign in if you already have one
- Click on "Create new secret key"
- Give your key a name (e.g., "Tyumi")
- Copy the API key and paste it into Tyumi's settings
Note: OpenAI offers free credit for new accounts, but after that, you'll need to add a payment method. Check their website for the latest pricing details.
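Tip: If you'd like to confirm the key works before adding it to Tyumi, you can list the models it can access from a terminal (an optional check that happens outside Tyumi; replace YOUR_OPENAI_API_KEY with your key):
curl https://api.openai.com/v1/models -H "Authorization: Bearer YOUR_OPENAI_API_KEY"
A JSON list of models means the key is valid; an authentication error usually means it was copied incorrectly or has been revoked.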
Anthropic (Claude) API Key
- Visit the Anthropic Console
- Create an account or sign in
- Navigate to the API Keys section
- Click "Create Key"
- Copy the API key and paste it into Tyumi's settings
Note: Anthropic provides free trial credits for new users. Claude models are priced based on input and output tokens, with prices varying by model capability. Check their website for current pricing details.
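Tip: To verify the key before using it in Tyumi (optional, done outside Tyumi), you can query Anthropic's models endpoint; the anthropic-version header below is a standard API version string:
curl https://api.anthropic.com/v1/models -H "x-api-key: YOUR_ANTHROPIC_API_KEY" -H "anthropic-version: 2023-06-01"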
Google (Gemini) API Key
- Visit the Google AI Studio
- Sign in with your Google account
- Create a new project if you don't have one
- Go to "API Keys" and create a new API key
- Copy the API key and paste it into Tyumi's settings
Note: Google offers free usage quotas for Gemini models, with paid usage starting after exceeding free limits. Visit their website for the most current pricing structure.
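Tip: An optional way to check the key from a terminal is to list the Gemini models it can access:
curl "https://generativelanguage.googleapis.com/v1beta/models?key=YOUR_GEMINI_API_KEY"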
Mistral API Key
- Visit the Mistral AI Console
- Create an account or sign in
- Go to the API Keys section
- Create a new API key
- Copy the API key and paste it into Tyumi's settings
Note: Mistral offers competitive pricing for their models with different tiers based on model capabilities. Check their website for current rates.
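Tip: Mistral's API uses the familiar bearer-token pattern, so you can optionally confirm the key works by listing models:
curl https://api.mistral.ai/v1/models -H "Authorization: Bearer YOUR_MISTRAL_API_KEY"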
xAI (Grok) API Key
- Visit the xAI Console
- Create an account or sign in
- Navigate to the API Keys section
- Generate a new API key
- Copy the API key and paste it into Tyumi's settings
Note: Access to xAI's Grok models may require different levels of subscription. Check their latest pricing details on their website.
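Tip: xAI's API is OpenAI-compatible, so a quick (optional) key check is to list the models available to your account:
curl https://api.x.ai/v1/models -H "Authorization: Bearer YOUR_XAI_API_KEY"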
Hugging Face API Key
- Visit the Hugging Face Tokens page
- Create an account or sign in
- Click "New token" to create a new access token
- Give your token a name (e.g., "Tyumi")
- Select "Read" permissions (sufficient for model inference)
- Copy the token and paste it into Tyumi's settings
Note: Hugging Face provides access to many open-source models through their Inference Endpoints. Some models are free to use while others may require payment. The service offers competitive pricing for accessing state-of-the-art models like DeepSeek-V3, DeepSeek-R1, and Llama-4 variants.
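Tip: To make sure the token is valid (optional), you can ask the Hugging Face Hub which account it belongs to:
curl https://huggingface.co/api/whoami-v2 -H "Authorization: Bearer YOUR_HF_TOKEN"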
GitHub Models API Key
- Visit the GitHub Personal Access Tokens page
- Sign in to your GitHub account
- Click "Generate new token" and select "Fine-grained personal access token"
- Give your token a name (e.g., "Tyumi GitHub Models")
- Set expiration and select repository access as needed
- Under "Permissions", ensure you have the necessary model access permissions
- Copy the token and paste it into Tyumi's settings
Note: GitHub Models provides access to a wide variety of AI models from different providers (OpenAI, Meta, Microsoft, Mistral, etc.) through a single endpoint. This includes advanced models like GPT-4.1, DeepSeek-R1, Llama-4, and Microsoft Phi models. Tool calling (function calling) is supported but may have some limitations compared to direct provider access. The free tier typically has strict rate limits (around 1 request per minute), which makes tool calling less effective since multiple API calls are often needed. Check GitHub's documentation for current pricing and model availability.
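Tip: You can confirm the token itself is valid with a standard GitHub API call (this checks the token, not model access specifically):
curl https://api.github.com/user -H "Authorization: Bearer YOUR_GITHUB_TOKEN"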
Ollama (Local Models)
- Download and install Ollama on your computer
- Run Ollama locally on your machine
- In Tyumi settings, enter your Ollama server URL (typically "http://localhost:11434" if running on the same device)
- Click "Save Ollama URL" and then "Refresh Models" to see available models
Note: Ollama runs models locally on your device. No API key is needed, and no data is sent to external servers. However, you'll need a computer with sufficient RAM and processing power for running these models.
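Tip: If Tyumi doesn't list any models after you click "Refresh Models", a quick way to confirm Ollama is running and reachable is to ask it for its installed models:
curl http://localhost:11434/api/tags
This returns a JSON list of the models you have pulled locally.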
Network Access and HTTPS Setup
To use Ollama with other network devices or with HTTPS:
- Expose Ollama to the network according to the instructions for your platform
- For HTTPS access, create an HTTPS proxy using a tool like local-ssl-proxy:
local-ssl-proxy --hostname <your-hostname> --source 11435 --target 11434 --key key.pem --cert cert.pem
This allows secure HTTPS connections to your local Ollama instance, which is especially useful when accessing from other devices on your network.
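The proxy command above assumes you already have a certificate and key (cert.pem and key.pem). If you don't, one common way to create a self-signed pair for local use is with openssl (browsers will warn about self-signed certificates until you explicitly trust them):
# Generate a self-signed certificate and key valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365 -subj "/CN=<your-hostname>"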
LM Studio (Local Models)
- Download and install LM Studio on your computer
- Run LM Studio locally on your machine
- In Tyumi settings, enter your LM Studio server URL (typically "http://localhost:1234" if running on the same device)
- Click "Save LM Studio URL" and then "Refresh Models" to see available models
Note: LM Studio runs models locally on your device. No API key is needed, and no data is sent to external servers. However, you'll need a computer with sufficient RAM and processing power for running these models.
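Tip: LM Studio's local server exposes an OpenAI-compatible API, so you can optionally confirm it is running and see which models are loaded with:
curl http://localhost:1234/v1/models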
Network Access and HTTPS Setup
To use LM Studio with other network devices or with HTTPS:
- Expose LM Studio to the network according to the instructions for your platform
- For HTTPS access, create an HTTPS proxy using a tool like local-ssl-proxy:
local-ssl-proxy --hostname <your-hostname> --source 1235 --target 1234 --key key.pem --cert cert.pem
This allows secure HTTPS connections to your local LM Studio instance, which is especially useful when accessing from other devices on your network.
Managing Your API Keys in Tyumi
- Click the Settings icon in the top right corner
- Navigate to the "API Keys" section
- Enter your API keys in the appropriate fields
- Click "Save API Keys" to store them securely in your browser
Security Note: Your API keys are stored only in your browser's local storage and are only sent to their respective services when making API calls. Tyumi does not store your keys on any servers.
Choosing and Using AI Models
Tyumi gives you access to a variety of AI models from different providers. Each model has its own strengths and capabilities.
Available AI Providers
- OpenAI: GPT models for general conversation and tasks
- Anthropic: Claude models known for safety and reasoning
- Google: Gemini models with free usage tiers
- Mistral: Fast, efficient European AI models
- xAI: Grok models with real-time information from X and the web
- Hugging Face: Access to open-source models like DeepSeek-V3, DeepSeek-R1, and Llama-4 variants
- GitHub Models: Unified access to models from multiple providers (OpenAI, Meta, Microsoft, Mistral, etc.)
- Ollama: Run open-source models locally
- LM Studio: Run open-source models locally
Model Settings
You can customize how models respond by adjusting the parameters below; a sample request using them follows the list:
- Temperature: Controls randomness in responses. Higher values (closer to 1) make responses more creative but potentially less accurate; lower values (closer to 0) make responses more deterministic and focused.
- Top P: Controls diversity via nucleus sampling, limiting token selection to the smallest set of most likely tokens whose cumulative probability reaches the top_p value.
- Frequency Penalty: Reduces repetition by penalizing tokens that have already appeared frequently.
- Presence Penalty: Encourages new topics by penalizing tokens that have appeared at all, regardless of frequency.
- Max Context: Controls how many previous messages to include in the conversation context.
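As a rough illustration, here is how these settings typically map onto an OpenAI-style chat completion request (Tyumi builds this request for you; field names can vary slightly by provider, and gpt-4o-mini is just an example model):
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
    "top_p": 0.9,
    "frequency_penalty": 0.2,
    "presence_penalty": 0.1
  }'
Max Context is not sent as a field of its own; it controls how many earlier messages Tyumi includes in the messages array.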
Using Advanced Features
Tools and Extensions
Tyumi supports various tools that extend the AI's capabilities:
- Web Search: Let the AI search for current information online
- Image Generation: Create images based on text descriptions
- News Search: Find recent news articles on various topics
- Weather Information: Get current weather data for locations
- Financial Data: Access stock and financial information
- Entertainment: Access information about movies, music, etc.
- Social Media: Search for social media content
- Utilities: Use various helpful tools like calculators, converters, etc.
Note: Some tools require specific API keys which you can configure in the Tools API Keys settings section.
Memory
Use the Memory feature to let the assistant remember or forget brief details you provide.
- Enable Memory: Open Settings → Memory tab and switch on “Enable Memory”.
- Limit: Choose how many items to retain (default 25). Oldest items are removed first when the limit is exceeded.
- Remember: When you say “remember …”, the assistant may call a special function to store a short summary of that detail.
- Forget: Ask to “forget …” to remove a matching memory. The assistant uses a case-insensitive keyword match.
- Context: When enabled, your saved memories are appended to the system prompt as a bullet list for better personalization.
- Local Only: All memories are stored in your browser (no server). You can clear or delete individual items at any time.
Text-to-Speech
Tyumi can read responses aloud using text-to-speech technology. Click the speaker icon next to a message to hear it spoken. Requires an OpenAI API key.
Theme Customization
Personalize your experience by choosing from various themes in the settings panel, including Dark, Light, Metal, Neon, and more.
Data Privacy and Storage
Tyumi is designed with privacy in mind:
- All your conversations and settings are stored locally in your browser
- No data is sent to any servers except the AI provider you choose to use
- You can export your conversation history or clear it at any time
- When using local Ollama or LM Studio models, no data leaves your device at all
Troubleshooting
Common Issues
- API Key Errors: Double-check that your API key is entered correctly and hasn't expired
- Model Not Available: Ensure you have the appropriate API key for the model you're trying to use
- Ollama Not Connecting: Verify that Ollama is running locally and that the server URL is correct
- Missing Tool Results: Check that you've provided any necessary API keys for specific tools
Getting Help
If you need further assistance:
- Visit the GitHub Repository for documentation and updates
- Report issues on the Issues page
- Join discussions with other users on the Discussions forum