How we built a free, local-first alternative to AlgoSec in ~3,000 lines of Python

Network configuration auditors — AlgoSec, Tufin, FireMon, Nipper — share three
properties: they cost upwards of $50,000 per year per license, they take one to two
weeks of professional services to deploy, and they want your configuration data to
flow through their cloud.

For 90% of the value those tools deliver, the architecture is overkill. Most audits
flag the same handful of things every time: any/any firewall rules, missing logging,
default SNMP communities, telnet still enabled, operating systems years past
end-of-life. These are pattern-matchable from a static configuration file. They do
not need a SaaS backend or a $50,000 license.

So we wrote one. It’s MIT-licensed, runs 100% on the auditor’s machine, supports
45 adapters, ships 22 controls, and installs with pip install safecadence-netrisk.
This post is about the engineering decisions that made it small enough to maintain
and useful enough to recommend.

The core abstraction: vendor adapter + rule pack

The first design choice was that vendor-specific parsing should be the only place
vendor knowledge lives. Everything downstream — scoring, reporting, AI explanation,
the dashboard — operates on a normalized device representation.

A vendor adapter is a class that takes raw configuration text and emits a Device:

from dataclasses import dataclass

@dataclass
class Device:
    hostname: str
    vendor: str
    os_name: str
    os_version: str
    interfaces: list[Interface]
    acls: list[Acl]
    users: list[User]
    services: dict[str, ServiceConfig]
    raw_config: str

Adding a new vendor is two files: a parser (adapters/cisco_ios.py) and a YAML rule
pack (data/rules/cisco_ios.yaml). The parser is pure stdlib Python — no
heavyweight dependencies — because we want a cold install to be 5 megabytes, not 500.
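As a concrete illustration, here is a minimal sketch of what a pure-stdlib adapter might look like. The `CiscoIosAdapter` class name, its `parse` method, and the trimmed `Device` fields are assumptions for the sake of a self-contained example, not the project's actual API.

```python
import re
from dataclasses import dataclass

@dataclass
class Device:
    # Trimmed version of the Device dataclass above, for a self-contained sketch.
    hostname: str = ""
    vendor: str = ""
    os_name: str = ""
    os_version: str = ""
    raw_config: str = ""

class CiscoIosAdapter:
    """Hypothetical adapter: walk the raw config line by line with stdlib re."""
    vendor = "cisco"
    os_name = "ios"

    def parse(self, text: str) -> Device:
        hostname = ""
        version = ""
        for line in text.splitlines():
            if m := re.match(r"hostname\s+(\S+)", line):
                hostname = m.group(1)
            elif m := re.match(r"version\s+(\S+)", line):
                version = m.group(1)
        return Device(hostname=hostname, vendor=self.vendor,
                      os_name=self.os_name, os_version=version,
                      raw_config=text)
```

The payoff of the line-by-line approach is that a parser failure on one stanza degrades gracefully instead of aborting the whole device.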

A rule pack looks like this:

- id: ios-no-telnet
  title: "Telnet allowed on VTY lines"
  severity: critical
  match_regex: 'transport input.*telnet'
  fix_snippet: |
    line vty 0 4
     transport input ssh
  tags: [cis:1.5.4, nist:AC-17]

Three rule types are supported: match_regex, absent_regex, and a sandboxed
custom for cases regex cannot handle. We deliberately did not add a fourth;
the constraint forces the rule library to stay readable.

Scoring

A finding’s severity is one input. The other is business-criticality. A misconfigured
core router earns a higher risk weight than the same misconfiguration on a closet
switch. We model that with a per-device weight (1.0 default, 0.5 for non-critical,
1.5 for crown-jewel) applied to the per-rule severity.
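The arithmetic is deliberately simple: per-rule severity points times per-device weight. The numeric severity mapping below is an assumption (the post only specifies the weights), but it shows the shape of the calculation:

```python
# Assumed severity-to-points mapping; only the device weights are from the design.
SEVERITY_POINTS = {"info": 1, "low": 2, "medium": 4, "high": 7, "critical": 10}
DEVICE_WEIGHT = {"non-critical": 0.5, "default": 1.0, "crown-jewel": 1.5}

def finding_risk(severity: str, device_class: str = "default") -> float:
    """Risk contribution of one finding on one device."""
    return SEVERITY_POINTS[severity] * DEVICE_WEIGHT[device_class]
```

The same any/any rule on a crown-jewel core router contributes three times what it does on a closet switch.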

Health and risk are separate scales. Health is “how clean is the config” — a vector
of all rule outcomes. Risk is “how exploitable is what’s left” — KEV-aware,
EOL-aware, and weighted by exposure (any device with an internet-facing interface
gets a multiplier).

Both are 0–100 with explicit bands. We resisted a single “compliance score” because
two devices can have wildly different risk profiles at the same health score, and
collapsing them hides exactly the signal an auditor needs.
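The two-score separation can be sketched as two independent functions. All constants here (the exposure multiplier, the KEV bonus) are illustrative assumptions, but the structure shows why one number cannot carry both signals:

```python
def health_score(passed: int, total: int) -> float:
    """How clean is the config: fraction of rules passed, on a 0-100 scale."""
    return 100.0 * passed / total if total else 100.0

def risk_score(finding_points: list[float], internet_facing: bool,
               kev_hits: int) -> float:
    """How exploitable is what's left: weighted findings, an exposure
    multiplier for internet-facing devices, and a bonus per KEV-listed CVE.
    Constants are illustrative, not the shipped values."""
    base = sum(finding_points)
    if internet_facing:
        base *= 1.5
    base += 10 * kev_hits
    return min(100.0, base)
```

Two devices passing 18 of 20 rules have identical health, but if one is internet-facing with a KEV-listed CVE, its risk is more than double.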

Live data without a SaaS backend

The trick to running locally while still surfacing current threat intelligence is
opt-in pull. safecadence enrich --refresh fetches:

  • CISA’s Known Exploited Vulnerabilities catalog (a public JSON feed)
  • endoflife.date’s product feeds (multiple JSON endpoints)

Both are cached on disk. Findings that match a KEV-listed CVE bubble to the top of
the report; OS versions past end-of-life get a critical-severity finding regardless
of config quality.
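The cache-then-match pattern is a few lines of stdlib. The KEV feed URL and its `vulnerabilities`/`cveID` schema are CISA's published format; the `load_kev` function and its staleness window are a sketch of the opt-in pull, not the project's actual code:

```python
import json
import time
import urllib.request
from pathlib import Path

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def load_kev(cache: Path, max_age_s: int = 86_400) -> dict:
    """Return the KEV catalog, refreshing the on-disk cache only when stale.
    Nothing is fetched unless the user asked for a refresh or the cache aged out."""
    if not cache.exists() or time.time() - cache.stat().st_mtime > max_age_s:
        with urllib.request.urlopen(KEV_URL) as resp:
            cache.write_bytes(resp.read())
    return json.loads(cache.read_text())

def kev_cves(catalog: dict) -> set[str]:
    """Extract the set of CVE IDs to match findings against."""
    return {v["cveID"] for v in catalog.get("vulnerabilities", [])}
```

Matching a finding is then a set-membership test, which is why KEV enrichment adds effectively zero scan time.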

There is no telemetry — the binary doesn’t phone home. The user controls when (or
whether) to pull updates.

Bring-your-own-key AI

The hardest design choice was AI. The natural play would have been “send the
config to our LLM, get back a remediation plan, charge for it.” We did the
opposite: the user supplies their own OpenAI / Anthropic / Ollama API key, and
the call goes directly from the user’s machine to that LLM provider.

We never see the key. We never see the config. The LLM provider sees both,
because they have to — but that’s a relationship the user already has.
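Architecturally, bring-your-own-key means the tool only assembles a request and sends it straight from the user's machine. The sketch below illustrates that dispatch; the endpoint URLs and header shapes reflect the providers' public APIs at the time of writing, but treat them (and the environment-variable names) as assumptions to check against provider docs, and note that real calls need additional fields such as a model name:

```python
import os

# (endpoint, auth header name, auth header format) per provider.
# Ollama runs locally and needs no key at all.
PROVIDERS = {
    "openai":    ("https://api.openai.com/v1/chat/completions",
                  "Authorization", "Bearer {key}"),
    "anthropic": ("https://api.anthropic.com/v1/messages",
                  "x-api-key", "{key}"),
    "ollama":    ("http://localhost:11434/api/chat", None, None),
}

def build_request(provider: str, prompt: str) -> tuple[str, dict, dict]:
    """Assemble a direct provider call. The key is read from the user's own
    environment and never touches any middle tier."""
    url, header, fmt = PROVIDERS[provider]
    headers = {"content-type": "application/json"}
    if header:
        key = os.environ[f"{provider.upper()}_API_KEY"]  # user-supplied
        headers[header] = fmt.format(key=key)
    payload = {"messages": [{"role": "user", "content": prompt}]}
    return url, headers, payload
```

Because the request never transits our infrastructure, there is nothing for us to log even if we wanted to.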

The trade-off: we lose a recurring-revenue surface. The win: we earn the trust
of buyers who would never ship a network config to a vendor’s cloud, which is
most of our buyers.

Output formats

Seven of them: terminal table (Rich), Markdown, JSON, branded HTML, Word .docx
(pure stdlib via the OOXML format), PDF, and a single-file HTML SPA dashboard.

The dashboard was the most interesting build. It’s a fleet view — every device,
sortable, drill-down to per-device findings, full running config viewer, KPI
cards, vendor breakdown chart, search and filter. It’s all inline JavaScript
and inline SVG, no CDN dependencies, because customers in air-gapped
environments need it to render with no network access.

The whole thing is one HTML file. You can email it as an attachment.
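The single-file trick is to inline the data as a JSON literal inside a `<script>` tag and keep every style and script inline. A minimal sketch of the generator (function name and fields are illustrative):

```python
import html
import json

def render_dashboard(devices: list[dict]) -> str:
    """Emit one self-contained HTML file: data inlined as JSON, styles inline,
    no CDN links or external assets, so it renders in an air-gapped browser."""
    data = json.dumps(devices)
    rows = "".join(
        f"<tr><td>{html.escape(d['hostname'])}</td>"
        f"<td>{html.escape(d['vendor'])}</td>"
        f"<td>{d['risk']}</td></tr>"
        for d in devices
    )
    return (
        "<!doctype html><html><head><meta charset='utf-8'>"
        "<style>table{border-collapse:collapse}td{padding:4px 8px}</style>"
        "</head><body>"
        f"<table id='fleet'>{rows}</table>"
        f"<script>const FLEET = {data};</script>"  # inline data for sort/filter JS
        "</body></html>"
    )
```

Escaping hostnames and vendor strings matters here: a device name pulled from a config file is untrusted input being written into HTML.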

What we left out

  • No web SaaS. We have a FastAPI mode for teams who want a REST API behind their
    own auth, but the default is a CLI. Most network engineers prefer a CLI anyway.
  • No agent. Customers have agent fatigue. The CLI runs from the auditor’s laptop
    against config files they pull manually or via safecadence collect (SSH).
  • No telemetry. Not even anonymized. Everyone says this and most products lie;
    ours genuinely doesn’t, because there is no server to receive it.
  • No commercial license. MIT only. We monetize via consulting on the
    remediation side — fixed-scope engagements, not seat licenses.

What’s next

  • More vendor adapters (MikroTik, Ubiquiti, Meraki are next).
  • Better rules around Zero Trust posture (microsegmentation gaps, policy drift).
  • A community rule-pack repository so an organization can publish its house-style
    rules and others can fork.

If you’ve made it this far, the install and first scan are two lines:

pip install safecadence-netrisk
safecadence scan path/to/config.txt

Source: https://github.com/famousleads/safecadence-network-risk
PyPI: https://pypi.org/project/safecadence-netrisk/