AI Server Observability Dashboard
192.168.1.20 · NVIDIA L4 · Ollama · Live monitoring
GPU — NVIDIA L4 (24 GB VRAM)
GPU Utilization
—
VRAM Used
—
GPU Temperature
—
Power Draw
—
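The four GPU gauges above (utilization, VRAM used, temperature, power draw) can all be read from a single `nvidia-smi` query. A minimal sketch of the collection step, assuming `nvidia-smi` is on the server's PATH:

```python
import subprocess

# Fields matching the dashboard's four GPU gauges.
QUERY = "utilization.gpu,memory.used,temperature.gpu,power.draw"

def parse_gpu_csv(line):
    """Parse one 'csv,noheader,nounits' row from nvidia-smi."""
    util, mem, temp, power = (field.strip() for field in line.split(","))
    return {
        "gpu_util_pct": int(util),     # %
        "vram_used_mib": int(mem),     # MiB
        "temp_c": int(temp),           # degrees C
        "power_w": float(power),       # watts
    }

def read_gpu_metrics():
    """Query the local GPU(s) once; one dict per GPU."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [parse_gpu_csv(line) for line in out.strip().splitlines()]
```

In practice the dashboard would poll this (or scrape the same values from a Prometheus GPU exporter) on a short interval and fill in the gauges.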
Service Health
Ollama API
checking...
GPU Exporter
checking...
Node Exporter
checking...
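Each "checking..." badge above is just an HTTP probe against the service's endpoint. A sketch using only the standard library; Ollama's default port is 11434, while the exporter ports shown here (9835 for a GPU exporter, 9100 for node_exporter) are common defaults and are assumptions about this host:

```python
from urllib.request import urlopen
from urllib.error import URLError

# Assumed endpoints for the three health badges on this dashboard.
SERVICES = {
    "Ollama API":    "http://192.168.1.20:11434/api/version",
    "GPU Exporter":  "http://192.168.1.20:9835/metrics",
    "Node Exporter": "http://192.168.1.20:9100/metrics",
}

def check(url, timeout=2.0):
    """Return 'up' on any HTTP 2xx response, else 'down'."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return "up" if 200 <= resp.status < 300 else "down"
    except (URLError, OSError):
        return "down"

def health():
    return {name: check(url) for name, url in SERVICES.items()}
```

A short timeout matters here: a hung exporter should flip its badge to "down" quickly rather than stall the whole refresh cycle.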
Models Loaded in VRAM
—
—
VRAM Allocation
Models Loaded in GPU
Loading...
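The "Models Loaded in GPU" table and the VRAM allocation figure can both be derived from Ollama's `/api/ps` endpoint, which lists currently running models along with their `size_vram` in bytes. A minimal sketch (the host address is taken from the dashboard header):

```python
import json
from urllib.request import urlopen

def summarize_vram(ps_json):
    """Turn an Ollama /api/ps payload into (model name, GiB in VRAM) rows."""
    rows = []
    for m in ps_json.get("models", []):
        gib = m.get("size_vram", 0) / 2**30  # bytes -> GiB
        rows.append((m["name"], round(gib, 2)))
    return rows

def loaded_models(host="http://192.168.1.20:11434"):
    """Fetch the live list of models resident in GPU memory."""
    with urlopen(f"{host}/api/ps", timeout=5) as resp:
        return summarize_vram(json.load(resp))
```

Summing the GiB column against the L4's 24 GB total gives the VRAM allocation bar shown above.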
Model Response Tester
Loading models...
Hello! What are you?
▶ Send Test
Latency:
—
Tokens:
—
Tokens/sec:
—
Status:
—
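The tester's three numbers come straight from one non-streaming call to Ollama's `/api/generate`: wall-clock time gives latency, and the response's `eval_count` and `eval_duration` (nanoseconds) give the token count and throughput. A sketch, again assuming the host from the header:

```python
import json
import time
from urllib.request import Request, urlopen

def tokens_per_sec(eval_count, eval_duration_ns):
    """Ollama reports eval_duration in nanoseconds."""
    return eval_count / (eval_duration_ns / 1e9)

def run_test(model, prompt="Hello! What are you?",
             host="http://192.168.1.20:11434"):
    """Send one test prompt and report latency / tokens / tokens-per-sec."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    req = Request(f"{host}/api/generate", data=body,
                  headers={"Content-Type": "application/json"})
    start = time.monotonic()
    with urlopen(req, timeout=120) as resp:
        data = json.load(resp)
    return {
        "latency_s": round(time.monotonic() - start, 2),
        "tokens": data["eval_count"],
        "tokens_per_sec": round(
            tokens_per_sec(data["eval_count"], data["eval_duration"]), 1),
    }
```

Note that latency includes prompt evaluation and any model load time, while tokens/sec covers only the generation phase, so the two can tell different stories on a cold start.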
Last updated:
—