Manage your servers
with natural language.
Tell Claude what you need. ManageLM agents execute it locally — using a privately hosted LLM service, scoped only to the commands you allow. No SSH. No scripts. No risk.
You: Run a security audit on all production servers and fix any critical findings
Claude: I'll audit all production servers and remediate critical issues.
→ Running Production__*__run_security_audit
scope: "full" | 8 agents queried
✓ Audit complete — 3 critical findings across 2 servers
web-prod-02 CVE-2026-1234 — openssl 3.1.0 → 3.1.7 available
db-prod-01 2 exposed ports — 9090, 6379 open to 0.0.0.0
Patching openssl on web-prod-02…
→ Calling Production__web-prod-02__packages__upgrade
package: "openssl" | version: "3.1.7"
You: Restrict those ports to the private network only
→ Calling Production__db-prod-01__network__firewall_update
Built on trusted foundations
No more rigid dashboards and scripts. The next era of infrastructure is conversational, intelligent, and secure by design.
Talk to your infrastructure
The future is natural conversations with your IT systems — voice or prompts, no more hard-coded interfaces. Just talk to an AI that distributes tasks to a fleet of autonomous sysop agents, each with their own intelligence and specialized skills.
Local + Cloud AI, the right way
One of the few platforms that combines local and cloud AI models correctly. Local agents handle routine jobs with full confidentiality — your data never leaves the server. Cloud AI provides the reasoning power when you need it.
From natural language to server execution in seconds — with every command validated and constrained.
Talk to Claude in plain English
Use the Claude app — the same AI you already know — to describe what you need. "Restart the app", "Check logs", "Update packages on all staging servers".
Portal authenticates & routes
The ManageLM portal verifies your identity via OAuth 2.0, checks permissions, identifies the target agent, and dispatches the task over a secure WebSocket channel.
Agent executes locally with a local LLM
The lightweight agent uses Ollama (or any compatible LLM) in your infrastructure to interpret the task, generate commands, validate each one against the skill's allowlist, and execute. A single LLM server can serve all your agents. Sensitive data never leaves your network.
A clean, dark interface built for sysadmins who need clarity, speed, and full control.
It's the architecture.
Every layer prevents unauthorized actions — even if the LLM hallucinates or faces prompt injection.
Command Allowlisting
Skills define explicit permitted commands. Every AI-generated command is validated in code. Anything outside is blocked.
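In practice, "validated in code" can be as simple as parsing the generated command and matching its binary against the skill's allowlist before anything runs. A minimal sketch (the allowlist contents and function name here are illustrative, not ManageLM's actual API):

```python
import shlex

# Hypothetical allowlist for a package-management skill — illustrative only.
ALLOWED_COMMANDS = {"apt-get", "dpkg", "apt-cache"}

def is_allowed(command: str) -> bool:
    """Reject anything whose binary is not explicitly allowlisted."""
    # Commands run without a shell, so metacharacters are rejected outright.
    if any(ch in command for ch in ";|&$`\n"):
        return False
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False  # malformed quoting is rejected
    if not tokens:
        return False
    return tokens[0] in ALLOWED_COMMANDS

print(is_allowed("apt-get upgrade openssl"))  # True
print(is_allowed("rm -rf /"))                 # False
```

The key property: the check is deterministic code, so no amount of prompt injection can talk it into running a binary outside the set.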
Local LLM — Data Stays In Your Network
Task interpretation runs via a local Ollama instance shared across your infrastructure. Passwords, configs, logs — nothing leaves your network.
Read-Only by Default
Agents with no allowed_commands can only run read-only operations. Write access requires explicit config.
Zero Inbound Ports
Agents connect outward via WebSocket. Your servers never expose a port. No SSH, no VPN, no inbound attack surface.
Secrets Hidden from AI
Secrets are stored as environment variables. The LLM only ever sees $VAR_NAME — the actual values are injected at execution time.
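Placeholder substitution of this kind takes a few lines — Python's stdlib `string.Template` already implements $VAR expansion. A sketch (the variable name and secret value are hypothetical):

```python
import os
from string import Template

# What the LLM sees and generates — only the placeholder, never the value.
generated = "mysqldump -u admin -p$DB_PASSWORD appdb"

# Hypothetical secret set in the agent's environment — illustrative only.
os.environ["DB_PASSWORD"] = "s3cr3t"

# The real value is substituted in code, just before execution.
resolved = Template(generated).substitute(os.environ)
print(resolved)  # mysqldump -u admin -ps3cr3t appdb
```

Because substitution happens after the LLM's output is finalized, the secret never appears in any prompt, completion, or model log.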
Ed25519 Signed Messages
Every portal-to-agent message is cryptographically signed. Agents reject unsigned or tampered messages — no command can be injected via the WebSocket.
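The verify-then-execute pattern looks like this. Sketched here with stdlib HMAC as a stand-in, since Ed25519 requires a third-party package in Python; the real protocol uses Ed25519 public-key signatures as stated above, so the agent holds only a verification key, not a signing secret:

```python
import hmac
import hashlib
import json

SHARED_KEY = b"demo-key"  # stand-in; Ed25519 uses a private/public key pair

def sign(payload: dict) -> str:
    """Portal side: sign the canonical JSON encoding of the message."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def agent_accepts(payload: dict, signature: str) -> bool:
    """Agent side: recompute the signature and reject any mismatch."""
    return hmac.compare_digest(sign(payload), signature)

msg = {"task": "restart nginx", "agent": "web-prod-02"}
sig = sign(msg)
print(agent_accepts(msg, sig))   # True — authentic message
msg["task"] = "rm -rf /"         # tampered in transit
print(agent_accepts(msg, sig))   # False — rejected before execution
```

Any byte changed between portal and agent invalidates the signature, so a compromised network path cannot inject or alter commands.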
Change Tracking & Revert
Every mutating task is git-snapshotted before and after. See exactly what changed, get the full diff, and revert any task with one click within 30 days.
Kernel Sandbox (Landlock + seccomp)
Opt-in Linux kernel confinement. Landlock restricts filesystem writes to allowed paths. seccomp-bpf blocks dangerous syscalls like mount and reboot. Even if a command passes all other checks, the kernel stops it.
✓ LLM is untrusted by design
The AI generates commands, but every command is validated in code before execution. Commands produced by prompt injection or hallucination are blocked.
↻ Execution limits per task
Max 10 turns · 120s timeout · 8KB output cap. Every operation logged in a full audit trail.
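Timeouts and output caps of this kind are enforced at the process boundary. A minimal stdlib sketch mirroring the limits above (the helper name is illustrative; a production version would stream output and cap it in flight rather than truncate after capture):

```python
import subprocess

TIMEOUT_S = 120         # per-task wall-clock timeout
MAX_OUTPUT = 8 * 1024   # 8KB output cap

def run_limited(argv: list[str]) -> str:
    """Run a command with a hard timeout; truncate stdout at the cap.

    subprocess.run raises TimeoutExpired and kills the child if the
    timeout elapses, so no task can run past its budget.
    """
    result = subprocess.run(
        argv, capture_output=True, text=True, timeout=TIMEOUT_S
    )
    return result.stdout[:MAX_OUTPUT]

print(run_limited(["echo", "hello"]).strip())  # hello
```

The turn limit lives one level up, in the task loop: after 10 LLM round-trips the task is aborted and logged rather than allowed to spin.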
From systemd to Kubernetes, databases to VPNs — every skill is security-scoped with exact command allowlists.
Built in. One click.
Automated security scanning and full service discovery on every server — no skills required, no prompting. Just actionable results.
Security Audit
One-click security scan across 18 checks — SSH hardening, firewall rules, TLS ciphers, certificate expiry, SUID binaries, failed logins, Docker exposure, and more. AI-analyzed findings with severity ratings and actionable remediation.
- ● Public vs. private server context
- ● One-click automated remediation
- ● PDF security report export
System Inventory
Discover every running service, installed package, container, database, and user account on your servers. AI-structured into categorized entries with automatic version detection.
- ● 12 service categories
- ● Automatic package version extraction
- ● Fleet-wide PDF inventory export
The only platform combining AI automation with hard-enforced security.
| Capability | ManageLM | SSH + Scripts | Ansible / Puppet | Generic AI |
|---|---|---|---|---|
| Natural language interface | ✓ | ✗ | ✗ | ✓ |
| Command allowlisting (hard-enforced) | ✓ In code | ✗ | ~ Limited | ✗ |
| Private LLM (data in your network) | ✓ | N/A | N/A | ✗ Cloud only |
| Zero inbound ports | ✓ | ✗ Port 22 | ✗ SSH | ~ Varies |
| No learning curve | ✓ Just talk | ✗ Bash | ✗ YAML | ✓ |
| Skill-scoped security | ✓ | ✗ Full access | ~ Roles | ✗ |
| Kernel sandbox (Landlock/seccomp) | ✓ | ✗ | ✗ | ✗ |
| Full audit trail | ✓ | ~ Manual | ✓ | ✗ |
| Multi-tenant RBAC | ✓ | ✗ | ~ Limited | ✗ |
| Built-in security audits | ✓ + remediation | ✗ | ✗ | ✗ |
Every feature, every integration, every skill — completely free for up to 10 agents. No credit card. No time limit.
- All 31 built-in skills
- 230+ operations
- Multi-tenant teams & RBAC
- Server groups
- Scheduled tasks
- Webhooks & API keys
- Full audit trail
- Passkeys & MFA
- Trial LLM included
- Local LLM support
- Security audits & inventory
No credit card required · No feature gates · Full platform access
Need more agents?
Scale beyond 10 agents with flexible plans for growing teams and enterprises.
- Unlimited agents
- Priority support
- Custom onboarding
- Volume discounts
Start with our managed cloud in seconds, or self-host on your own infrastructure with Docker.
ManageLM Cloud
Managed SaaS — start in minutes
- Free for up to 10 agents
- Fully managed infrastructure
- Automatic updates
- Trial LLM included
ManageLM Self-Hosted
Run on your infrastructure
- Full data sovereignty
- Docker Compose deployment
- Proxied LLM — centralized API keys
- No external dependencies
Multi-Tenant Teams
Owner, admin, member roles with granular permissions. Invite teammates, scope access per server or group.
Server Groups
Organize agents into groups. Run operations across entire groups with a single request.
Scheduled Tasks
Cron-based schedules for backups, log rotation, health checks — all automated.
Webhooks & API Keys
Real-time notifications on events. Full REST API for integration into existing workflows.
Full Audit Trail
Every action logged with timestamps, IPs, and full context. Complete accountability.
Passkeys & MFA
WebAuthn/FIDO2 passwordless login. Multi-factor auth and IP allowlisting for MCP.
Ready to manage servers
the intelligent way?
100% free for up to 10 agents — every feature included. Self-host with Docker for full control. Deploy in under 5 minutes.
Questions, demos, or enterprise needs? We'd love to hear from you.
Documentation
Response Time
We typically respond within 24 hours on business days.