Lorapok-Dynamic-Ollama-LLM-Chat-Interface

Lorapok Dynamic Ollama LLM Chat Interface - Usage Guide

Running the Server

  1. Start Ollama
    .\scripts\run_server.ps1
    
  2. Start Open WebUI (if installed)
    • If using Docker: The container should start automatically
    • If using pip/source: Run open-webui serve
    • Access it at http://localhost:3000 (Docker) or http://localhost:8080 (pip/source)
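Before starting a chat client, you can confirm the Ollama server is reachable. This is a minimal sketch using the requests package (already required by the client); the URL assumes Ollama's default port 11434, and the function name is illustrative.

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default API endpoint

def ollama_is_up(base_url: str = OLLAMA_URL, timeout: float = 2.0) -> bool:
    """Return True if the Ollama server answers on its root endpoint."""
    try:
        return requests.get(base_url, timeout=timeout).ok
    except requests.RequestException:
        # Connection refused, DNS failure, timeout, etc.
        return False
```

If this returns False, check that the server script from step 1 is running before troubleshooting anything else.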

Lorapok Dynamic Console Chat Interface (Like Gemini/Claude)

The enhanced console interface provides a rich, interactive experience similar to Gemini or Claude, powered by the Lorapok Dynamic Ollama LLM Chat Interface.

Starting the Chat

  1. On Windows (PowerShell)
    python src/ollama_client.py
    
  2. On Linux/Mac (Terminal)
    chmod +x chat.sh
    ./chat.sh <server_ip>
    

    Or directly:

    python3 src/ollama_client.py <server_ip>
    
  3. Cross-platform usage
    • Works on Windows, macOS, and Linux
    • Requires Python 3.8+ and the requests and rich packages
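At its core, the client sends prompts to the server's /api/generate endpoint. The sketch below shows that request shape; the function names (build_generate_request, chat_once) are illustrative and not the actual internals of src/ollama_client.py.

```python
import sys
import requests

OLLAMA_PORT = 11434  # Ollama's default API port

def build_generate_request(server_ip: str, model: str, prompt: str):
    """Build the URL and JSON body for a non-streaming /api/generate call."""
    url = f"http://{server_ip}:{OLLAMA_PORT}/api/generate"
    body = {"model": model, "prompt": prompt, "stream": False}
    return url, body

def chat_once(server_ip: str, model: str, prompt: str) -> str:
    """Send one prompt to the server and return the model's full reply."""
    url, body = build_generate_request(server_ip, model, prompt)
    resp = requests.post(url, json=body, timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]

# Example usage (requires a running server):
# print(chat_once("192.168.0.219", "qwen2.5-coder:7b-instruct", "Hello"))
```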

Available Commands

Command              Description
/help                Show help message with all commands
/model <name>        Switch to a different model
/models              List all available models with details
/pull <model>        Pull a new model from registry
/remove <model>      Remove an installed model
/history [limit]     Show conversation history (default: 10)
/search <query>      Search conversation history
/clear               Clear conversation history
/save [filename]     Save conversation to JSON file
/load <filename>     Load conversation from JSON file
/export <format>     Export conversation (markdown/text)
/stats               Show conversation and performance statistics
/bench [model]       Benchmark current or specified model
/sysinfo             Show system resource information
/config              Show current configuration settings
/set <key> <value>   Update configuration setting
/reset               Reset configuration to defaults
/exit                Exit the chat
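A line such as /model llama2:7b splits naturally into a command name and arguments before being dispatched. The parser below is a hypothetical sketch of that pattern, not the client's actual implementation.

```python
def parse_command(line: str):
    """Split a slash command into (name, args); return None for plain chat text."""
    if not line.startswith("/"):
        return None  # ordinary message, goes to the model
    parts = line[1:].split()
    if not parts:
        return None
    return parts[0], parts[1:]

def handle(line: str, handlers: dict) -> str:
    """Dispatch a parsed command to a handler table (illustrative)."""
    parsed = parse_command(line)
    if parsed is None:
        return "chat"  # caller sends the line to the model instead
    name, args = parsed
    fn = handlers.get(name)
    return fn(args) if fn else f"Unknown command: /{name}"
```

A table-driven dispatcher like this keeps each command's logic in its own function and makes /help easy to generate from the table's keys.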

Dynamic Features

The enhanced console interface includes several dynamic features:

πŸ”„ Dynamic Model Management

πŸ’¬ Advanced Conversation Management

πŸ“Š Performance Monitoring

βš™οΈ Dynamic Configuration

🎨 Rich Console Experience

Example Session

πŸ€– Local LLM Chat Interface
Connected to: 192.168.0.219
Current Model: qwen2.5-coder:7b-instruct

Type your message and press Enter.
Commands: /help, /model, /history, /clear, /exit

You: Hello, how are you?
Assistant (qwen2.5-coder:7b-instruct)
Hello! I'm doing well, thank you for asking. How can I help you today?

You: /model llama2:7b
βœ… Switched to: llama2:7b

You: Tell me a joke
Assistant (llama2:7b)
Why don't scientists trust atoms? Because they make up everything!

Connecting from Another PC

To use the server from another machine on the network, replace localhost with the server's IP address (for example, 192.168.0.219) in the commands and URLs below.

Using with VS Code

  1. Install Ollama extension
    • Open VS Code
    • Go to Extensions (Ctrl+Shift+X)
    • Search for β€œOllama” and install
  2. Configure the extension
    • Set the Ollama server URL to http://<server_ip>:11434
    • Use the chat panel for AI assistance

API Usage

The Ollama API is available at http://localhost:11434

List Models

curl http://localhost:11434/api/tags
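The same call from Python, as a sketch (model_names is a helper defined here for clarity, not part of Ollama):

```python
import requests

def model_names(tags_payload: dict) -> list:
    """Extract model names from the JSON returned by /api/tags."""
    return [m["name"] for m in tags_payload.get("models", [])]

def list_models(base_url: str = "http://localhost:11434") -> list:
    """Return the names of all locally installed models."""
    resp = requests.get(f"{base_url}/api/tags", timeout=10)
    resp.raise_for_status()
    return model_names(resp.json())
```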

Generate Response

curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-coder:7b-instruct", "prompt": "Hello"}'

Managing Models

Available Models

Popular models to try: