Start the Ollama server (Windows PowerShell):

```powershell
.\scripts\run_server.ps1
```
The enhanced console interface provides a rich, interactive experience similar to Gemini or Claude, powered by the Lorapok Dynamic Ollama LLM Chat Interface.
Launch the client:

```bash
python src/ollama_client.py
```

On Linux/macOS you can use the helper script:

```bash
chmod +x chat.sh
./chat.sh <server_ip>
```

Or run the client directly:

```bash
python3 src/ollama_client.py <server_ip>
```
Requires the `requests` and `rich` packages.

| Command | Description |
|---|---|
| `/help` | Show help message with all commands |
| `/model <name>` | Switch to a different model |
| `/models` | List all available models with details |
| `/pull <model>` | Pull a new model from the registry |
| `/remove <model>` | Remove an installed model |
| `/history [limit]` | Show conversation history (default: 10) |
| `/search <query>` | Search conversation history |
| `/clear` | Clear conversation history |
| `/save [filename]` | Save conversation to a JSON file |
| `/load <filename>` | Load conversation from a JSON file |
| `/export <format>` | Export conversation (markdown/text) |
| `/stats` | Show conversation and performance statistics |
| `/bench [model]` | Benchmark the current or a specified model |
| `/sysinfo` | Show system resource information |
| `/config` | Show current configuration settings |
| `/set <key> <value>` | Update a configuration setting |
| `/reset` | Reset configuration to defaults |
| `/exit` | Exit the chat |
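All of the commands above share a simple `/name arg1 arg2` shape. A minimal parser for that shape could look like the following (a hypothetical helper for illustration, not the project's actual code):

```python
def parse_command(line: str):
    """Split a '/command arg1 arg2' line into (name, args).

    Returns None for ordinary chat messages (no leading slash)
    or for a bare '/' with no command name.
    """
    line = line.strip()
    if not line.startswith("/"):
        return None
    parts = line[1:].split()
    if not parts:
        return None
    return parts[0], parts[1:]
```

For example, `parse_command("/model llama2:7b")` yields `("model", ["llama2:7b"])`, while a plain chat message yields `None` and is sent to the model instead.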
The enhanced console interface includes several dynamic features; configuration lives in `config.json`. An example session:

```text
🤖 Local LLM Chat Interface
Connected to: 192.168.0.219
Current Model: qwen2.5-coder:7b-instruct
Type your message and press Enter.
Commands: /help, /model, /history, /clear, /exit

You: Hello, how are you?

Assistant (qwen2.5-coder:7b-instruct)
Hello! I'm doing well, thank you for asking. How can I help you today?

You: /model llama2:7b
✓ Switched to: llama2:7b

You: Tell me a joke

Assistant (llama2:7b)
Why don't scientists trust atoms? Because they make up everything!
```
The Ollama API is available at `http://<server_ip>:11434` (or `http://localhost:11434` on the server itself).
List installed models:

```bash
curl http://localhost:11434/api/tags
```

Generate a completion:

```bash
curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-coder:7b-instruct", "prompt": "Hello"}'
```
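The same generate endpoint can be called from Python using the `requests` package the client already depends on. With `"stream": False`, Ollama returns a single JSON object whose `response` field holds the completion. A sketch (the server address and timeout are assumptions):

```python
import requests


def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for one JSON object instead of NDJSON chunks
    return {"model": model, "prompt": prompt, "stream": False}


def generate(server_ip: str, model: str, prompt: str) -> str:
    """POST to /api/generate and return the completion text."""
    resp = requests.post(
        f"http://{server_ip}:11434/api/generate",
        json=build_payload(model, prompt),
        timeout=120,  # generation can be slow on CPU-only hosts
    )
    resp.raise_for_status()
    return resp.json()["response"]
```

Usage would look like `generate("192.168.0.219", "qwen2.5-coder:7b-instruct", "Hello")`, mirroring the curl example above.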
Manage models on the server with the Ollama CLI:

```bash
ollama list
ollama pull <model_name>
ollama rm <model_name>
```

Popular models to try:

- `llama2:7b`
- `codellama:7b`
- `mistral:7b`
- `qwen2.5-coder:7b-instruct` (default)