SafeCadence — https://safecadence.com/ — Free cybersecurity tools: check links, emails, passwords, and websites

SafeCadence Network Risk v10.0.2 — local-first network + identity policy automation, free on PyPI
https://safecadence.com/safecadence-v10-production-milestone/
Fri, 08 May 2026

SafeCadence v10.0.2 ships local-first network + identity policy automation: 45 adapters, 22 controls, 16 multi-vendor translators, capability RBAC, attack-path graph, identity write-back. Free on PyPI, MIT licensed, BYO-AI keys never leave your machine.

The post SafeCadence Network Risk v10.0.2 — local-first network + identity policy automation, free on PyPI appeared first on SafeCadence.

SafeCadence Network Risk v10.0.2 is on PyPI:

pip install 'safecadence-netrisk[server]'
safecadence demo
safecadence ui

About a minute end-to-end and you have a 34-asset demo fleet, identity vault, NHIs, execution jobs, rollback plans, and compliance artifacts populated in a local web UI. Open http://127.0.0.1:8766/home and the first thing you see is the fleet Safe Score (0–100) and a Weak Link card that says “Fix edge-fw-01 and 7 attack paths collapse — fleet score climbs 64 → 78.” That’s the value proposition in one sentence.

What v10.x means

The v9.x line was a sustained audit-then-fix cycle across every customer-visible surface — Execute, Discover, Compliance, Identity write-back, Automation, AI assistant, the activity log. Each section got a deep audit doc, a punch list of honest gaps, and a dedicated release closing them out — most often followed by a .1 cleanup release for items the audit flagged but didn’t fix.

v10.0.0 declared the result: every load-bearing surface is capability-gated, rate-limited where it could be abused, audit-logged, and tested at the HTTP level. v10.0.1 added pre-ship validation (1,271 tests across the suite, 325 modules importing cleanly, every CLI command rendering, every UI page returning 200) and a complete rewrite of HOWTO.md. v10.0.2 fixes the broken project URLs in the PyPI package metadata.

The v10.x line is “trust posture” — the polish you ship after the feature work is done.

What ships in the box

Forty-five adapters across network gear, servers, identity, cloud, and backup. Twenty-two atomic security controls authored as policy. Sixteen multi-vendor translators that turn one declared intent into per-vendor configs. Attack-path graph, KEV+EPSS-prioritized CVEs, cross-system drift detection, posture scoring, full compliance suite, identity write-back with HMAC-bound confirm tokens, real per-vendor rollback plans, and Tier-3 SSH execution behind a triple-gate.

Three things SafeCadence does that no commercial alternative does well together:

  1. One declared policy → many vendors. “Block SMB inbound from anything outside the /24 mgmt subnet” becomes correct configuration for Cisco IOS, NX-OS, ASA, Arista EOS, Juniper Junos, Fortinet, Palo Alto PAN-OS, Aruba — and AWS IAM, Azure CA, GCP IAM, Okta, ISE, ClearPass when the same policy needs an identity equivalent.
  2. Attack-path-aware risk. The score on each asset reflects whether it sits on a path to a crown-jewel — not just whether it has a CVE in isolation. The Weak Link card finds the asset whose remediation collapses the most paths.
  3. Local-first / air-gap-ready. Pure-stdlib SNMP, file-backed JSON storage by default, optional Postgres for scale, optional BYO-AI for the LLM bits (OpenAI / Anthropic / local Ollama). Nothing phones home. The whole thing runs on a laptop you took into a customer SCIF.

Why local-first matters

The hardest problem with a tool that can push config to firewalls and identity systems is making sure it never does so by accident, never lies to the operator about what it’s about to do, and always leaves a way back. A SaaS backend that sees your data is also a SaaS backend that holds your data — every customer becomes a target through your vendor.

SafeCadence is built so those failure modes are structurally impossible. There’s no remote server to compromise, no credential vault we can read, no policy decisions made on a machine that isn’t yours. Your identity connector credentials are Fernet-encrypted under a master key that only ever exists at ~/.safecadence/.identity_vault.key. Your AI keys go directly from your local SafeCadence server to the LLM provider — they never touch us.

Trust posture

The audit-then-fix discipline shows up everywhere customers care about it:

  • Dry-run is the default at every layer. Identity write-back, Tier-3 SSH execution, and policy translation all return a diff first. The operator has to flip an explicit flag to commit.
  • HMAC-bound confirm tokens for identity policy commits — bound to the IR hash, scope, actor, a 600-second TTL, and the adapter version. A token minted for one IR cannot be replayed against a different one.
  • Real per-vendor rollback plans — ~45 inversion patterns covering Cisco IOS / NX-OS, Arista EOS, Junos (set ↔ delete), Palo Alto, FortiGate. Operators see the inverted commands per-vendor in the rollback slide-over before clicking. Patterns that can’t be safely auto-inverted are flagged for manual review.
  • Pre/post config snapshots — Tier-3 SSH execution captures running-config before and after each command. /per-device-diff renders a unified diff with vendor pill, dry-run badge, and +/- line counts.
  • Tier-3 triple-gate. Real SSH execution requires SC_TIER3_ENABLED=1, the EXECUTE_REAL capability on the role, an explicit acknowledge + i_mean_it payload, and TOTP MFA.
  • Capability-based RBAC — 26 fine-grained capabilities (read.*, write.*, execute.*, identity.apply.*, admin.*) layered over a six-tier role floor. Per-user explicit grants/denies persisted in YAML, every change audit-logged, every change fires a capability_changed event.
  • OIDC SSO with capability auto-grant — Auth Code + PKCE against any RFC-compliant IdP. capability_map field maps IdP group claims to capability lists; on every login reconcile_sso_grants() idempotently grants what’s needed and revokes what’s gone. Manual grants are tracked separately and never touched.
  • Activity log + tenant-scoped audit page. Every authenticated mutation lands as a JSONL line via an ASGI middleware. /audit filters by date / actor / method / path / tenant / arbitrary date range, exports CSV with the export itself audit-logged, and shows browser-local time on hover. Endpoint is rate-limited and tenant-scoped.
  • AI assistant hardened. /ask honors SC_AI_DISABLED for air-gap mode, is gated by read.asset + read.finding, question length capped at 2 KB, per-(user, IP) rate-limited, snapshot truncation reported to the LLM rather than silently sliced, citations cross-checked against real asset/finding IDs, audit row stores SHA-256 hash of the question (not plaintext). Write-intent screen flags destructive CLI patterns even when generated.
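The confirm-token mechanism above can be sketched with nothing but the standard library. This is an illustrative sketch, not SafeCadence’s implementation; the function names, secret handling, and exact payload shape are assumptions based on the description (binding to IR hash, scope, actor, TTL, and adapter version):

```python
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # hypothetical; the real key lives in server config
TTL_SECONDS = 600

def mint_confirm_token(ir_hash: str, scope: str, actor: str, adapter_version: str) -> str:
    """Bind a commit token to the IR hash, scope, actor, TTL, and adapter version."""
    payload = json.dumps({
        "ir": ir_hash, "scope": scope, "actor": actor,
        "adapter": adapter_version, "exp": int(time.time()) + TTL_SECONDS,
    }, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_confirm_token(token: str, ir_hash: str, scope: str,
                         actor: str, adapter_version: str) -> bool:
    """Reject tokens whose signature, bound fields, or TTL don't match."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    fields = json.loads(payload)
    return (fields["ir"] == ir_hash and fields["scope"] == scope
            and fields["actor"] == actor and fields["adapter"] == adapter_version
            and time.time() < fields["exp"])
```

Because the signature covers every bound field, a token minted for one IR fails verification against any other IR, scope, actor, or adapter version.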

How it compares

Tool category          | Examples                  | What they do                     | What they don’t do
Vulnerability scanning | Tenable, Qualys, Rapid7   | Find CVEs on hosts               | No multi-vendor config remediation, no identity, no air-gap
Network policy         | AlgoSec, Tufin, FireMon   | Manage firewall rules            | One vendor at a time, no identity, no air-gap
Compliance automation  | Drata, Vanta, Secureframe | Collect evidence for SOC 2 / ISO | SaaS-only, no firewall configs, no air-gap
SafeCadence            | this repo                 | All three, locally               |

How to try it

pip install 'safecadence-netrisk[server]'
safecadence demo
safecadence ui

That’s the whole demo. Open http://127.0.0.1:8766/home and click around.

The demo seed is intentionally three-tier: a “good” tenant with a connected Okta + healthy NHIs, a “medium” tenant with an unsynced ClearPass, and a “broken” tenant with an LDAP misconfig — so every connector state is visible without having to wire up real systems.

For real deployment, see docs/DEPLOY.md (laptop / small server / Docker / production paths) and docs/HOWTO.md (the five-minute quick start, killer features, real workflows, FAQ, CLI reference, REST API reference, env-var tunables).

Status

v10.0.2 — production milestone, on PyPI. Open source, MIT licensed, contributions welcome.

PyPI · GitHub · HOWTO · CHANGELOG

How we built a free, local-first alternative to AlgoSec in ~3,000 lines of Python
https://safecadence.com/built-free-network-audit-cli-python/
Sun, 03 May 2026

AlgoSec, Tufin, and FireMon all charge $50k+/year for what is mostly pattern-matching on configs. We open-sourced the same idea: 45 adapters, 22 controls, MIT-licensed, 100% local. Here’s how it’s built.

The post How we built a free, local-first alternative to AlgoSec in ~3,000 lines of Python appeared first on SafeCadence.

Network configuration auditors — AlgoSec, Tufin, FireMon, Nipper — share three
properties: they cost upwards of $50,000 per year per license, they take one to two
weeks of professional services to deploy, and they want your configuration data to
flow through their cloud.

For 90% of the value those tools deliver, the architecture is overkill. Most audits
flag the same handful of things every time: any/any firewall rules, missing logging,
default SNMP communities, telnet still enabled, operating systems years past
end-of-life. These are pattern-matchable from a static configuration file. They do
not need a SaaS backend or a $50,000 license.

So we wrote one. It’s MIT-licensed, runs 100% on the auditor’s machine, supports
45 adapters, ships 22 controls, and installs with pip install safecadence-netrisk.
This post is about the engineering decisions that made it small enough to maintain
and useful enough to recommend.

The core abstraction: vendor adapter + rule pack

The first design choice was that vendor-specific parsing should be the only place
vendor knowledge lives. Everything downstream — scoring, reporting, AI explanation,
the dashboard — operates on a normalized device representation.

A vendor adapter is a class that takes raw configuration text and emits a Device:

from dataclasses import dataclass

@dataclass
class Device:
    hostname: str
    vendor: str
    os_name: str
    os_version: str
    interfaces: list[Interface]   # Interface, Acl, User, and ServiceConfig
    acls: list[Acl]               # are companion dataclasses in the same module
    users: list[User]
    services: dict[str, ServiceConfig]
    raw_config: str

Adding a new vendor is two files: a parser (adapters/cisco_ios.py) and a YAML rule
pack (data/rules/cisco_ios.yaml). The parser is pure stdlib Python — no
heavyweight dependencies — because we want a cold install to be 5 megabytes, not 500.

A rule pack looks like this:

- id: ios-no-telnet
  title: "Telnet allowed on VTY lines"
  severity: critical
  match_regex: 'transport input.*telnet'
  fix_snippet: |
    line vty 0 4
     transport input ssh
  tags: [cis:1.5.4, nist:AC-17]

Three rule types are supported: match_regex, absent_regex, and a sandboxed
custom for cases regex cannot handle. We deliberately did not add a fourth;
the constraint forces the rule library to stay readable.
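The two regex rule types reduce to a few lines of evaluation logic. A minimal sketch (the function name and rule-dict shape are assumed from the YAML above; the sandboxed custom type is omitted):

```python
import re

def evaluate_rule(rule: dict, config_text: str) -> bool:
    """Return True if the rule fires (i.e. produces a finding) against a raw config."""
    if "match_regex" in rule:
        # match_regex: finding if the pattern appears anywhere in the config
        return re.search(rule["match_regex"], config_text) is not None
    if "absent_regex" in rule:
        # absent_regex: finding if a required pattern is missing
        return re.search(rule["absent_regex"], config_text) is None
    raise ValueError(f"unsupported rule type in rule {rule.get('id')!r}")
```

Run against the telnet rule above, a VTY stanza with `transport input telnet` fires the finding and one with `transport input ssh` does not.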

Scoring

A finding’s severity is one input. The other is business-criticality. A misconfigured
core router earns a higher risk weight than the same misconfiguration on a closet
switch. We model that with a per-device weight (1.0 default, 0.5 for non-critical,
1.5 for crown-jewel) applied to the per-rule severity.

Health and risk are separate scales. Health is “how clean is the config” — a vector
of all rule outcomes. Risk is “how exploitable is what’s left” — KEV-aware,
EOL-aware, and weighted by exposure (any device with an internet-facing interface
gets a multiplier).

Both are 0–100 with explicit bands. We resisted a single “compliance score” because
two devices can have wildly different risk profiles at the same health score, and
collapsing them hides exactly the signal an auditor needs.
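The weight-times-severity idea can be made concrete. The point values and the 0–100 rollup below are illustrative only, not the shipped formula:

```python
# Illustrative severity points -- not SafeCadence's actual weighting.
SEVERITY_POINTS = {"critical": 25, "high": 15, "medium": 8, "low": 3}

def health_score(findings: list[dict], device_weight: float = 1.0) -> float:
    """0-100 health: start clean, subtract weighted severity per finding.

    device_weight: 0.5 for non-critical gear, 1.0 default, 1.5 for crown-jewels.
    """
    penalty = sum(SEVERITY_POINTS[f["severity"]] for f in findings) * device_weight
    return max(0.0, 100.0 - penalty)
```

The same critical finding costs a crown-jewel router (weight 1.5) half again as many points as a default-weight device, which is exactly the separation the per-device weight is meant to produce.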

Live data without a SaaS backend

The trick to running locally while still surfacing current threat intelligence is
opt-in pull. safecadence enrich --refresh fetches:

  • CISA’s Known Exploited Vulnerabilities catalog (a public JSON feed)
  • endoflife.date’s product feeds (multiple JSON endpoints)

Both are cached on disk. Findings that match a KEV-listed CVE bubble to the top of
the report; OS versions past EOS get a critical-severity finding regardless of
config quality.
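The opt-in pull with a disk cache might look like the following. The KEV URL is CISA’s public feed; the cache location, freshness window, and function name are assumptions for illustration:

```python
import json
import time
import urllib.request
from pathlib import Path

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
DEFAULT_CACHE = Path.home() / ".safecadence" / "kev.json"  # hypothetical cache path
MAX_AGE = 24 * 3600  # re-fetch at most once a day

def load_kev(cache: Path = DEFAULT_CACHE, refresh: bool = False) -> set[str]:
    """Return the set of KEV-listed CVE IDs; the network is touched only on refresh."""
    stale = not cache.exists() or time.time() - cache.stat().st_mtime > MAX_AGE
    if refresh and stale:
        cache.parent.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
            cache.write_bytes(resp.read())
    if not cache.exists():
        return set()  # never fetched: KEV enrichment simply stays off
    data = json.loads(cache.read_text())
    return {v["cveID"] for v in data.get("vulnerabilities", [])}
```

With no cache and no refresh the function degrades to an empty set, so the tool stays fully usable offline and the user decides when the feed is pulled.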

There is no telemetry — the binary doesn’t phone home. The user controls when (or
whether) to pull updates.

Bring-your-own-key AI

The hardest design choice was AI. The natural play would have been “send the
config to our LLM, get back a remediation plan, charge for it.” We did the
opposite: the user supplies their own OpenAI / Anthropic / Ollama API key, and
the call goes directly from the user’s machine to that LLM provider.

We never see the key. We never see the config. The LLM provider sees both,
because they have to — but that’s a relationship the user already has.

The trade-off: we lose a recurring-revenue surface. The win: we earn the trust
of buyers who would never ship a network config to a vendor’s cloud, which is
most of our buyers.

Output formats

Seven of them: terminal table (Rich), Markdown, JSON, branded HTML, Word .docx
(pure stdlib via the OOXML format), PDF, and a single-file HTML SPA dashboard.

The dashboard was the most interesting build. It’s a fleet view — every device,
sortable, drill-down to per-device findings, full running config viewer, KPI
cards, vendor breakdown chart, search and filter. It’s all inline JavaScript
and inline SVG, no CDN dependencies, because customers in air-gapped
environments need it to render with no network access.

The whole thing is one HTML file. You can email it as an attachment.

What we left out

  • No web SaaS. We have a FastAPI mode for teams who want a REST API behind their
    own auth, but the default is a CLI. Most network engineers prefer a CLI anyway.
  • No agent. Customers have agent fatigue. The CLI runs from the auditor’s laptop
    against config files they pull manually or via safecadence collect (SSH).
  • No telemetry. Not even anonymized. Everyone says this and most products lie;
    ours genuinely doesn’t, because there is no server to receive it.
  • No commercial license. MIT only. We monetize via consulting on the
    remediation side — fixed-scope engagements, not seat licenses.

What’s next

  • More vendor adapters (MikroTik, Ubiquiti, Meraki are next).
  • Better rules around Zero Trust posture (microsegmentation gaps, policy drift).
  • A community rule-pack repository so an organization can publish its house-style
    rules and others can fork.

If you’ve made it this far, the install is one line:

pip install safecadence-netrisk
safecadence scan path/to/config.txt

Source: https://github.com/famousleads/safecadence-network-risk
PyPI: https://pypi.org/project/safecadence-netrisk/

HTTPS Is Not a Safety Badge: How Scammers Use It Too
https://safecadence.com/https-is-not-a-safety-badge/
Sat, 02 May 2026

The padlock next to the URL means the connection is encrypted — not that the site is safe. Here’s why scammers love HTTPS as much as legit sites do.

The post HTTPS Is Not a Safety Badge: How Scammers Use It Too appeared first on SafeCadence.

In the early 2010s, cybersecurity trainers told people to “look for the padlock” when browsing. That advice is now actively harmful. The padlock means exactly one thing: the connection between your browser and the server is encrypted. It does not mean the server is run by honest people, your data is handled responsibly, or the site is not a phishing page. In 2025, a majority of phishing sites use HTTPS — often with genuinely valid certificates.

Why phishing sites have valid certificates

Let’s Encrypt is a free, automated certificate authority that issues TLS certificates to any domain that can prove it controls that domain. That “any domain” is the problem. If a scammer registers paypa1-secure-login.top, they can get a valid Let’s Encrypt certificate for it in under a minute. Their phishing site now has a working padlock.

The certificate proves the scammer controls paypa1-secure-login.top. It does not prove anything about PayPal, about the scammer’s intent, or about what happens when you enter your credentials.

The padlock’s original purpose

HTTPS encrypts the network link so an attacker sitting between you and the server can’t read or tamper with the traffic. This is a real, valuable property: when you’re on public wifi, HTTPS prevents the person at the next table from reading your passwords.

It does not prevent the server on the other end from being a scammer’s credential-harvesting page. Once your data reaches the server, encryption stops protecting you — the server has the cleartext.

What actually matters

Ignore the padlock. Focus on these instead:

The domain name

Is the domain actually what it claims to be? Is it paypal.com or paypa1-secure.top? Run suspect URLs through our Phishing Link Checker.

How you got to the site

Did you type the URL yourself (safe) or click a link in an email/text/DM (risky)? 95% of credential-harvesting happens on sites the user reached via a link, not by typing.

Domain age

Was the domain registered last week? Use our Domain Age Lookup. Brand-new domains are a strong phishing signal — legitimate brands have domains that are years or decades old.

The certificate itself

Our SSL Certificate Checker pulls the actual cert and shows its issuer. Let’s Encrypt is fine for legitimate sites too, but on a site claiming to be a bank or major retailer, a 90-day Let’s Encrypt cert is unusual — big brands typically use organization- or extended-validation certificates from commercial CAs.
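Pulling a live certificate’s issuer takes a few lines of stdlib Python, roughly the kind of check such a tool performs (a sketch, not the checker’s implementation):

```python
import socket
import ssl

def flatten_name(name: tuple) -> dict:
    """Flatten getpeercert()'s tuple-of-RDN-tuples into a plain dict."""
    return {key: value for rdn in name for (key, value) in rdn}

def cert_issuer(hostname: str, port: int = 443) -> dict:
    """Connect over TLS and return the server certificate's issuer fields."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return flatten_name(cert["issuer"])
```

For a Let’s Encrypt-issued cert, the returned dict’s `organizationName` will read “Let’s Encrypt” — which, on a site claiming to be your bank, is worth a second look.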

The old advice that still works

  • Type URLs yourself rather than clicking links in messages.
  • Use a password manager — it won’t autofill on a spoofed domain, which tips you off.
  • Enable MFA — even if the scammer gets your password, MFA blocks the login.
  • Hover before clicking — see the actual destination URL.
  • Trust domain names, not page content — any designer can copy a bank’s login page in an afternoon.

The security industry’s quiet pivot

Major browsers stopped showing the padlock prominently in 2023-2024 for exactly this reason. Chrome replaced the padlock with a neutral icon; Firefox de-emphasized it. The padlock was misleading users into trusting sites that didn’t deserve it.

The industry is slowly moving toward browsing-safety-by-content (Safe Browsing, Microsoft SmartScreen, Google Safe Browsing’s phishing list). Those services check the URL against known phishing databases — a much more useful signal than “is this connection encrypted.”

FAQ

So should I trust sites without HTTPS?

No — you should treat any site without HTTPS as actively unsafe for typing anything sensitive. HTTPS is necessary but not sufficient. Modern browsers now warn on any plain-HTTP form, which is correct.

What’s an EV (extended validation) certificate?

EV certificates require the certificate authority to verify the business behind the domain — paperwork, legal registration, phone verification. They used to show up as a green bar with the company name. Browsers mostly stopped highlighting EV because users couldn’t tell the difference, so the value of paying for EV has declined. Banks still use them.

Why did Let’s Encrypt make this worse?

Let’s Encrypt democratized HTTPS — it’s now economically free to run an encrypted site. That’s a net good for the internet. The side effect is that the padlock stopped meaning anything about the site’s identity. The solution is better user education, not ending free certificates.

Crypto Recovery Scams: Why ‘Your $40K Is Stuck’ Is Always a Lie
https://safecadence.com/crypto-recovery-scams/
Fri, 01 May 2026

The scam targeting people who already lost money to crypto fraud. How it works, why it’s so effective on victims, and how to spot the second layer.

The post Crypto Recovery Scams: Why ‘Your $40K Is Stuck’ Is Always a Lie appeared first on SafeCadence.

Among the darkest corners of online fraud is a scam that targets people who have already been scammed. Crypto recovery fraud — the promise to get back your lost crypto for a fee — is one of the fastest-growing scam categories of 2024-2025. The FBI estimates it cost victims more than $1.5 billion in the last two years. This article explains exactly how it works, why victims fall for it a second time, and what to do instead.

The setup

You lost money to a crypto scam — pig butchering, rug pull, fake exchange, something. You posted about it in a forum, filed a complaint on IC3, or just searched Google for “how to recover lost crypto.” You’re now on a list.

Within days, you receive a message. It might be an email, Telegram DM, reply under your forum post, or a Facebook comment. The sender claims to be a “blockchain forensics firm,” “ethical hacker,” “recovery specialist,” or sometimes a “law enforcement unit.” They’ve seen your case. They can get your money back. They have done it for others.

The pitch

Their website is professional. Reviews on the site look genuine. They might even show you a “tracker” dashboard showing your stolen crypto’s movement through the blockchain. (Those dashboards are fake — just images.) They know technical terms. They’re patient. They understand.

After a few days of conversation, they make the offer. They can recover your crypto. They’ll take 15-30% of the recovered amount as a fee. Before they can begin, though, they need a small upfront payment to “unfreeze” or “release” the funds. Or to “bribe a prosecutor.” Or to “pay the exchange’s legal fees.” Or to “deploy the forensic software.”

The first fee is often $300-$800. You pay it. They make progress. They show you more evidence. The funds are “about to be released.” Just one more small fee.

And another. And another. Until you stop paying.

Why it works on victims

The psychology is brutal. You already lost money. You feel foolish, angry, and desperate. You’d do almost anything to get it back. A recovery scammer knows this and leans on it.

The initial loss creates a sunk-cost distortion: “I already lost $40,000, what’s another $500 to get it back?” The recovery “work” creates hope: they’re actually making progress, they have tracking data, they know things. Each additional fee feels like the last — we must be close now.

The truth about crypto recovery

No private service can recover stolen cryptocurrency. Not one.

Here’s why: cryptocurrency transactions are immutable by design. Once a transaction confirms, it’s in the blockchain forever. No one — not even the original exchange — can reverse it. The only way to recover stolen crypto is for the attacker to voluntarily return it or for law enforcement to seize their wallet. Neither happens because a paid recovery service asked.

Law enforcement (FBI’s IC3, international partners) does sometimes recover crypto from seized addresses. It happens rarely, takes years, and costs victims nothing. Blockchain analytics firms (Chainalysis, TRM Labs) work with law enforcement and large institutions — they don’t take individual victims as clients.

Every “recovery service” is a scam

If you search for crypto recovery, every single result for a paid service is a scam. Every testimonial is fake. Every case study is fabricated. Every “tracker” dashboard is a static image.

Run any suspect message through our Crypto Scam Detector — it flags the linguistic patterns these recovery pitches always share.

What to do instead

  1. File a report with FBI IC3 at IC3.gov. Include wallet addresses, transaction hashes, and any communications. This is the actual path to possible recovery if the attacker is ever caught.
  2. Report to the exchange you used (Coinbase, Binance, Kraken). They can sometimes freeze the destination wallet if the funds haven’t moved.
  3. Freeze your credit and enable MFA everywhere — see our Identity Theft Risk Score. If the attacker has your identity, they’ll use it.
  4. Accept the loss. This is the hardest step. Paying recovery scammers just doubles it.
  5. Block the recovery scammer and report their profile wherever you found them.

A warning about “law enforcement” imposters

A variant of this scam involves someone claiming to be from the FBI, Secret Service, or a foreign equivalent. They say your case is “under active investigation” and they need your help to complete it — for a fee, or a “cooperation deposit.” Law enforcement never charges victims. The FBI does not ask for money. Nor does any legitimate agency. Anyone claiming to be from one who asks for payment is a scammer.

FAQ

What about “white-hat hackers” who offer to recover?

Scammers sometimes adopt the “white-hat” branding. The fact pattern is identical: fee before service, vague methodology, pressure to stay quiet. All variants are scams.

Is there ever a legitimate paid service that can help?

A qualified attorney (one you find, not one who finds you) can sometimes pursue civil action if the attacker is identifiable in a friendly jurisdiction. The legal fees are usually more than the recovery. That’s the only legitimate paid path, and it’s vanishingly rare to be cost-effective.

How can I tell a legitimate news article about recovery from a planted one?

Legitimate news stories name specific law-enforcement agencies, specific prosecutors, and specific case numbers. Planted articles are vague (“authorities recovered funds from…”) and always link to a paid-service homepage. If the article is the only source, it’s promotional.

How to Read Email Headers: A Plain-English Guide
https://safecadence.com/how-to-read-email-headers/
Thu, 30 Apr 2026

Email headers are the receipt trail of every message. Learn to read the five fields that tell you whether a message is real in under a minute.

The post How to Read Email Headers: A Plain-English Guide appeared first on SafeCadence.

Every email carries a set of headers — metadata fields that trace the path the message took from sender to your inbox. Email headers are the closest thing email has to a forensic audit log. You don’t need to be a sysadmin to read them; you need to know five fields. Here they are.

Getting to the raw headers

First, find them. Your email client hides them by default:

  • Gmail: open the message, click the three-dot menu (top right), choose “Show original.”
  • Outlook (desktop): File → Properties → the “Internet headers” box at the bottom.
  • Outlook (web): open message → three-dot menu → View → View message details.
  • Apple Mail: View → Message → All Headers (or Command+Shift+H).

You’ll see 50-100 lines of text. Most of it is noise. Skim for the five fields below.

1. Authentication-Results

The most important header. Looks like:

Authentication-Results: mx.google.com;
  spf=pass smtp.mailfrom=yourdomain.com;
  dkim=pass header.d=yourdomain.com;
  dmarc=pass header.from=yourdomain.com

Three verdicts: SPF, DKIM, DMARC. Any “fail” is a strong signal something is wrong. “pass” across all three is necessary but not sufficient — it means the message was correctly authenticated by SOMEONE, not necessarily the claimed sender.
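Extracting the three verdicts from that header takes a couple of regexes. A sketch (real parsers handle comments, multiple results, and malformed headers more carefully):

```python
import re

def auth_verdicts(header: str) -> dict:
    """Pull the spf/dkim/dmarc verdicts out of an Authentication-Results header."""
    verdicts = {}
    for mech in ("spf", "dkim", "dmarc"):
        # Matches e.g. "spf=pass" or "dkim=fail" anywhere in the header block
        m = re.search(rf"\b{mech}=(\w+)", header)
        verdicts[mech] = m.group(1) if m else "missing"
    return verdicts
```

Any “fail” (or a wholesale “missing”) in the returned dict is the signal to slow down before trusting the message.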

2. Return-Path

The “envelope sender” — where bounce messages go if delivery fails. Should match or align with the visible From address for legitimate mail. Scam mail often has a Return-Path at a throwaway domain while the From says “ceo@yourcompany.com.”

3. From vs. Reply-To

The From address is what you see in your client. The Reply-To is where your response goes. In Business Email Compromise scams, these often diverge: From says ceo@yourcompany.com while Reply-To points to ceo@y0urcompany.com (the lookalike). Your reply goes to the attacker.
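Checking From/Reply-To divergence is mechanical with the stdlib email parser. A sketch (domain comparison only; spotting a lookalike domain like y0urcompany.com is a separate problem):

```python
from email import message_from_string
from email.utils import parseaddr

def reply_to_diverges(raw_message: str) -> bool:
    """True when Reply-To points at a different domain than From."""
    msg = message_from_string(raw_message)
    from_addr = parseaddr(msg.get("From", ""))[1]
    reply_addr = parseaddr(msg.get("Reply-To", ""))[1]
    if not reply_addr:
        return False  # no Reply-To: replies go back to From, nothing to compare
    from_domain = from_addr.rpartition("@")[2].lower()
    reply_domain = reply_addr.rpartition("@")[2].lower()
    return from_domain != reply_domain
```

A mismatch is not proof of fraud (mailing lists diverge legitimately), but on a message asking for money it is exactly the BEC pattern described above.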

4. Received chain

Each mail server that touched the message adds a “Received” line. The most recent server is at the top; the original sender at the bottom. Read from the bottom up.

Look for:

  • Unexpected countries. An email claiming to be from your US vendor, with the first Received line from Nigeria or Russia, is suspicious.
  • Suspicious hostnames. Mail from a legit company goes through their well-known mail servers (google.com, outlook.com, mimecast.com). Mail from a compromised mailbox or a scam shop goes through random or residential-ISP hostnames.
  • Time-zone jumps. Unusual if mail routes through a server in a country the sender never mentions.

5. Message-ID

A unique identifier for this specific email. Usually ends in the sender’s mail server domain: <abc123@mail-server.yourdomain.com>. If missing entirely, or if the domain doesn’t match the sender, that’s a red flag. Scammers sometimes forget to include it; sometimes include a generic one.

The 30-second triage

Paste the whole header block into our Email Header Analyzer. It parses all five fields, flags the red flags, and gives you a verdict: critical, high, medium, or low risk.

Do it for any email that asks for money, credentials, or urgent action. Before you reply. Before you click. Before you do anything.

What headers can’t tell you

Headers prove who signed the message and what path it took. They don’t prove what’s in the body is true. A legitimately authenticated email from a compromised mailbox will pass every header check — because it really did come from that mailbox. The attacker just has the password.

For any high-stakes request (wire transfer, password change, credential share), use out-of-band verification: call the sender on a known phone number. Headers are a necessary check, not a sufficient one.

FAQ

Can the attacker forge the headers?

They can forge some fields (From is trivially forgeable). They cannot forge Authentication-Results, because that header is added by your receiving server, not the sender. The SPF, DKIM, and DMARC verdicts in it are honest.

What if Authentication-Results is missing entirely?

That’s unusual for major email providers (Gmail, Outlook all add it). Missing Authentication-Results often means the message was routed through an unusual server or relayed in a way that bypasses standard checks. Treat with suspicion.

Should I forward suspicious emails to anyone?

For business: forward to your IT security team or security@yourcompany.com alias. For consumer phishing: report to reportphishing@apwg.org (industry anti-phishing group). For attacks impersonating a specific brand, report to that brand’s security team — they’ll issue takedowns.

Fake Job Offers on LinkedIn: The $40 Software-Fee Trick
https://safecadence.com/fake-job-offers-on-linkedin/
Wed, 29 Apr 2026

The 2024 scam that turned LinkedIn into a scammer’s playground. How it works, who’s targeted, and how to spot it in under a minute.

The post Fake Job Offers on LinkedIn: The $40 Software-Fee Trick appeared first on SafeCadence.

]]>
If you’ve applied for a remote job in the last year, you’ve probably received at least one fake offer. The most common variant — the “$40 software fee” scam — is costing Americans hundreds of millions a year, and hitting new graduates and mid-career workers hardest. Here’s how it works, how to spot it, and what to do.

The setup

A “recruiter” messages you on LinkedIn. Their profile has a corporate headshot, a reasonable work history, and maybe even connections you recognize. Their company name is real — often a large company that was genuinely hiring recently. They say they saw your profile and think you’d be a fit for a remote entry-level or data-entry role. The pay is great. The hours are flexible. No experience needed.

You reply. They schedule an interview — often over Microsoft Teams or, more suspiciously, over Telegram. The interview is short. They ask a few generic questions. A day later you get an “offer.” Base pay: $35/hour, up to 40 hours/week. You’re thrilled.

The sting

Before you start, the “recruiter” says you need to install the company’s “internal productivity software” — or more recently, buy “training materials” or a “secure laptop kit.” The cost is modest: $40 to $150. They’ll reimburse you on your first paycheck. All you need to do is send the payment by Zelle, CashApp, or bitcoin, or buy a gift card.

You pay. Sometimes there’s a second payment request a few days later for “additional software” or “hardware shipping.” Sometimes the recruiter simply disappears. Sometimes they ask for your bank details for “direct deposit setup” and drain your account.

The job was never real. The company name was stolen. The recruiter doesn’t work there.

The nine tells every fake offer shares

  1. Unusually high pay for an entry-level or no-experience role.
  2. “No experience required” — real companies want skills.
  3. Interview on Telegram, WhatsApp, or Signal instead of a standard video platform.
  4. Offer within hours of a brief conversation — real hiring has background checks and reference calls.
  5. Request to purchase software, training, or equipment before starting. Legitimate employers provide all of this. Always.
  6. Payment via Zelle, CashApp, bitcoin, or gift cards — never refunded, never recovered.
  7. Requests for banking details, SSN, or ID before a signed offer letter. Real onboarding requires these; real hiring doesn’t.
  8. Generic email domain (Gmail, Yahoo) when the “recruiter” claims to be at a large company.
  9. No LinkedIn connections at the target company even though they claim to work there.

Run any suspect recruiter message through our Fake Job Offer Detector — it scores the nine patterns in seconds.
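The same idea can be sketched as a keyword heuristic (the category names and patterns below are invented for illustration; a real detector weighs far more signals than regex hits):

```python
import re

# Hypothetical keyword patterns for a few of the nine tells above.
TELLS = {
    "pay-to-start":   r"\b(software fee|training materials?|equipment fee|starter kit)\b",
    "odd-payment":    r"\b(zelle|cashapp|cash app|bitcoin|gift cards?)\b",
    "chat-interview": r"\b(telegram|whatsapp|signal)\b",
    "free-domain":    r"@(gmail|yahoo|outlook|hotmail)\.com\b",
    "no-experience":  r"\bno experience (needed|required)\b",
}

def score(message: str) -> list[str]:
    """Return the tells whose pattern appears in the message."""
    text = message.lower()
    return [name for name, pat in TELLS.items() if re.search(pat, text)]

hits = score("We interview on Telegram. You'll need to cover the $40 software fee "
             "via Zelle. No experience required!")
print(hits)   # ['pay-to-start', 'odd-payment', 'chat-interview', 'no-experience']
```

Four hits from a two-sentence message is the point: scam recruiters cluster these tells, so even a crude scorer separates them from real outreach.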

How to verify a real recruiter

  1. Google the recruiter’s name + “LinkedIn.” A real recruiter has a history — multiple roles, endorsements, posts.
  2. Check the email domain. Is it @company.com or @company-recruitment.net? The second is a scam.
  3. Call the company’s main HR line (find the number on the company’s real website, not from the recruiter). Ask if the recruiter works there. Real HR departments confirm this in seconds.
  4. Look for the job on the company’s careers page. If it’s not there, it’s not real.
  5. Check the offer letter. Real offers are on letterhead, signed, with contractual terms. They are not Word docs with your name in bold.

If you paid the fee already

Most payment methods scammers prefer are non-recoverable. But try:

  • Zelle: call your bank within 24 hours — some reverse it, most don’t.
  • CashApp / Venmo: same as Zelle — fast call, low chance.
  • Credit card: dispute the charge, high chance of refund.
  • Gift cards: call the card issuer; if the card hasn’t been used yet it can sometimes be frozen.
  • Bitcoin / crypto: unrecoverable. Do not engage with “crypto recovery services” that contact you afterward — they are a second scam layered on the first. See our Crypto Scam Detector.

Report the fraud to the FBI IC3 at IC3.gov and to LinkedIn (Report the profile). Freeze your credit using our Identity Theft Risk Score checklist if you shared PII.

FAQ

Is every remote job offer a scam?

No — most remote jobs are real. What’s almost always a scam is a remote job that DMs you first, pays unusually well, doesn’t require experience, and asks for any money before starting.

Should I report the recruiter’s LinkedIn profile?

Yes — LinkedIn typically removes reported scam profiles within a day or two. Report, then block. If the profile impersonates a real company, also notify that company’s security team so they can send takedown notices.

What about Indeed and other job boards?

The scam exists there too, though LinkedIn is the current hotspot. Same rules apply — real companies don’t ask candidates to pay anything, ever.


]]>
https://safecadence.com/fake-job-offers-on-linkedin/feed/ 0
Business Email Compromise: The Wire-Fraud Script in Full https://safecadence.com/business-email-compromise-the-wire-fraud-script/ https://safecadence.com/business-email-compromise-the-wire-fraud-script/#respond Tue, 28 Apr 2026 17:38:35 +0000 https://safecadence.com/?p=154 BEC is the single costliest cybercrime of 2025. Here's the exact script attackers use — and the two controls that kill it.

The post Business Email Compromise: The Wire-Fraud Script in Full appeared first on SafeCadence.

]]>
Business Email Compromise (BEC) is what the FBI calls the $50-billion problem. In 2024 alone, U.S. companies lost more than $2.9 billion to BEC scams — more than ransomware and consumer fraud combined. Most attacks don’t even involve malware. They’re pure social engineering. Here’s the full script, step by step, and the two controls any business can implement this week to shut it down.

The attack, step by step

Step 1: Recon. The attacker identifies your CEO, CFO, or controller on LinkedIn. They note the org chart, the accounting software you mention in case studies, and any vendors you’ve publicly announced. Often they go further — a breached mailbox elsewhere gives them real email threads to mimic.

Step 2: Spoofed sender. They register a lookalike domain — y0urcompany.com with a zero instead of an “o”, or yourcompany-accounting.com. They send email from this domain that looks exactly like mail from your CEO. Or they compromise the CEO’s actual mailbox via phishing and send from there.

Step 3: The ask. A message arrives at your accounts-payable team or a junior finance staffer: “Hi, I need you to process an urgent wire transfer. It’s for a confidential acquisition — please don’t discuss with anyone. The details are below. I’m in meetings all day, please text me when it’s done.”

Step 4: The follow-up. Urgency and authority do the work. The staffer doesn’t want to bother the CEO. They process the wire. By the time anyone discovers the transfer was fraudulent — often days later when the CEO mentions it in passing — the money is gone through a chain of intermediary accounts.

The vendor-payment variant

Equally common: an attacker compromises your vendor’s email (or yours) and sends a “bank details have changed — please route future invoices to this new account” message. Nothing urgent, nothing dramatic. Your AP team updates the record. The next invoice payment goes to the attacker. This variant often takes 3-6 months to notice.

Run any suspect message through our Business Email Compromise Checker — it scores the common BEC linguistic patterns in seconds.

The two controls that kill BEC

1. Out-of-band verification for every wire or banking change

Rule: no wire transfer or change of banking details is processed without a phone call to a known number. Not the number in the email. Not a number provided in the email. The number you already have — from the vendor’s website, their contract, your contacts list.

This single rule defeats every variant of BEC. The attacker has control of the email thread; they don’t have the real person’s phone. One two-minute phone call breaks the attack.

Document this as company policy. Make it mandatory. Train AP staff that they will never be in trouble for insisting on a verification call — they will be in trouble for skipping it.

2. DMARC at p=reject + domain-lookalike monitoring

DMARC prevents attackers from sending email that appears to come from your domain. If you have p=reject published, the fake “CEO” email never reaches your staff’s inbox — it’s rejected at the gateway. Check your current DMARC with our DMARC Checker.

Separately, monitor for lookalike domains. Services like DMARC.org, EasyDMARC, or even regular manual checks against our Fake Brand URL Detector surface the “y0urcompany.com” registrations that attackers spin up.
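The character-swap half of that monitoring is simple enough to sketch. The swap table below is a small illustrative subset; real services also handle Unicode homoglyphs, added TLDs, and typo distance:

```python
# A minimal lookalike check, assuming a handful of common ASCII swaps.
SWAPS = {"0": "o", "1": "l", "3": "e", "5": "s", "rn": "m", "vv": "w"}

def normalize(domain: str) -> str:
    """Undo the common ASCII look-alike substitutions."""
    d = domain.lower()
    for fake, real in SWAPS.items():
        d = d.replace(fake, real)
    return d

def looks_like(candidate: str, real: str) -> bool:
    """Flag domains that imitate `real` but are not `real` itself."""
    cand, base = candidate.lower(), real.lower()
    if cand == base:
        return False                        # it IS the real domain
    brand = base.split(".")[0]              # e.g. "yourcompany"
    return normalize(cand) == base or brand in normalize(cand)

print(looks_like("y0urcompany.com", "yourcompany.com"))   # True
```

This catches both attack styles from Step 2 above: the zero-for-o swap and the brand-plus-extra-words registration like yourcompany-accounting.com.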

Train your team on the patterns

Every staff member who handles money or vendor data should recognize:

  • Urgency — “I need this done right away”
  • Secrecy — “Please don’t discuss with anyone”
  • Authority — email appears to be from CEO, CFO, general counsel
  • Banking change — “New account” in an otherwise-normal message
  • Unreachable sender — “I’m in meetings, text when done”
  • Slight domain difference — hover every From address and verify

If you were hit: the first hour matters

If a wire went out and you suspect BEC:

  1. Call your bank immediately. Ask for wire recall. If within 72 hours, there’s a meaningful chance of recovery.
  2. File a complaint with FBI IC3 at IC3.gov. The FBI has a “financial fraud kill chain” that works across international correspondent banks.
  3. Notify the other party (vendor or customer) — they may be compromised too.
  4. Preserve evidence — full email headers, the original message (run through our Email Header Analyzer to extract metadata), any attachments.
  5. Change all relevant passwords and force MFA on all finance and exec accounts.

FAQ

My company is small — are we a target?

Yes. BEC attacks scale down — small businesses are easier to compromise and rarely have verification controls. The median loss for small businesses is $15k-$50k per incident. Worth more to the attacker than it sounds.

Can insurance cover BEC losses?

Some cyber policies cover social-engineering losses explicitly, many don’t. Check yours — look for “social engineering fraud” or “fraudulent instruction” coverage with an explicit sublimit. Our Cyber Insurance Readiness Tool covers the controls underwriters look for.

What about AI deepfakes over video?

Growing concern. In 2024, a Hong Kong finance worker wired $25M after a deepfake Zoom call with “the CFO.” The out-of-band verification rule still applies: even a live Zoom request for a wire needs a callback to a known number before execution.


]]>
https://safecadence.com/business-email-compromise-the-wire-fraud-script/feed/ 0
Password Reuse: Why It Matters More Than Password Strength https://safecadence.com/password-reuse-why-it-matters/ https://safecadence.com/password-reuse-why-it-matters/#respond Mon, 27 Apr 2026 17:38:35 +0000 https://safecadence.com/?p=153 A 30-character random password is useless if you use it on two sites. Here's why password reuse is the single most important password decision.

The post Password Reuse: Why It Matters More Than Password Strength appeared first on SafeCadence.

]]>
Most people obsess over password strength and ignore password reuse. The math says they have it backward. A 12-character random password used uniquely across every account is more secure than a 30-character password reused on two sites. Here’s why — and what to do about it.

The math of password breaches

Every year, somewhere between 5 and 20 major services get breached. 2024 alone saw LinkedIn, Ticketmaster, AT&T, Snowflake, and dozens of smaller services. When a breach happens, the attacker gets a database of email addresses and password hashes. Within days those hashes are cracked (most are), and the email+password pairs are traded on criminal forums.

What happens next is called credential stuffing. Attackers take the leaked pairs and try them against every major service — Gmail, Amazon, your bank, Facebook, Netflix, Instacart. Any service where the same email+password combo works, the attacker gets in. The attack is fully automated, runs across millions of accounts, and costs the attacker almost nothing.

If you used the same password on the breached site and another site, you are now compromised on both. It doesn’t matter how strong the password was — it’s known now.

Why strength still doesn’t save you from reuse

Imagine you have one 30-character password: j#9K$mN2pQ!vX@8rT&5wL*3zH!7bF%cY. It would take centuries to brute-force. And then LinkedIn gets breached, and the cleartext of your password appears in a credential dump. It’s now worthless. The attacker doesn’t need to brute-force it — they have it.

If you used that password on Gmail, your bank, and Amazon — all three are now compromised. One breach, three accounts gone. Strength doesn’t matter once the password is leaked.

Unique passwords: the only real defense

Unique passwords turn every breach into a local event. LinkedIn gets breached, your LinkedIn password leaks, you change it on LinkedIn, nothing else is affected. The attacker’s credential-stuffing runs against Gmail and your bank fail because the password you used on LinkedIn never existed on those sites.

Unique passwords are the single most important thing you can do for your online security. Not strength, not complexity, not length — uniqueness. The only reason most people don’t have unique passwords everywhere is that remembering 200 unique passwords is impossible. Which is why password managers exist.

Password managers: the solution that actually works

1Password, Bitwarden, and Dashlane all solve this problem the same way: you memorize one strong master password, the manager generates and stores unique passwords for every site you visit. It autofills on login, so you never type (or even see) the actual password. The password is unique per site and long enough that it’s effectively uncrackable by brute force.

Bitwarden has a free tier that is enough for most individuals. 1Password costs about $3/month per user. Both are better than any spreadsheet or text file, and immensely better than password reuse.
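The generation step itself is a few lines in any language. A sketch of the same idea in Python, using the OS's cryptographic randomness (the alphabet here is one illustrative choice):

```python
import secrets
import string

# Same idea as a manager's built-in generator: uniform choice from a large
# alphabet using the OS CSPRNG (secrets, never the predictable random module).
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate())   # different every call
```

At 72 symbols that is about 6.2 bits per character, so a 20-character output carries roughly 123 bits of entropy — far beyond any brute-force attack, and trivially regenerated per site.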

How to move off of reuse

You don’t have to fix every account at once. Do this in order:

  1. Install a password manager. Today. Bitwarden is free, takes 5 minutes.
  2. Set a strong unique master password. 14+ characters. Write it down on paper in a safe place.
  3. Enable MFA on the password manager itself. Especially for cloud-synced ones.
  4. Pick 3 accounts to fix first: your email, your primary bank, your password manager itself. These are the crown jewels.
  5. Generate a unique password for each using the manager’s built-in generator. Or use our Password Generator.
  6. Continue for the next 20 accounts over the next week. Focus on anything that touches money, identity, or work.
  7. Check your existing passwords against breach data using our Password Leak Checker. Anything that’s been leaked needs to be changed.
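Step 7 doesn't require sending your password anywhere. Have I Been Pwned's public Pwned Passwords range API uses k-anonymity: you transmit only the first five characters of the password's SHA-1 hash and match the suffix locally. A minimal stdlib sketch:

```python
import hashlib
import urllib.request

def hash_parts(password: str):
    """SHA-1 the password; the API ever sees only the 5-char prefix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def times_seen(suffix: str, response_text: str) -> int:
    """Parse the range response ('SUFFIX:COUNT' per line) for our suffix."""
    for line in response_text.splitlines():
        sfx, _, count = line.partition(":")
        if sfx == suffix:
            return int(count)
    return 0

def leaked_count(password: str) -> int:
    """How many times the password appears in known breaches (0 = not found)."""
    prefix, suffix = hash_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return times_seen(suffix, resp.read().decode("utf-8"))
```

Calling leaked_count("password") returns a count in the millions; a freshly generated random password returns 0. The full hash never leaves your machine.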

Audit what you currently have

Most password managers export a CSV of everything they know. Run it through our Password Audit Tool — it’s 100% browser-local (we never see your passwords) and flags weak, reused, and short passwords so you know what to fix first. Then walk through the Password Reuse Checker as a self-audit across the accounts you use most.

FAQ

Isn’t using a password manager a single point of failure?

Yes, which is why you protect it with a strong master password plus MFA. Even in the rare event a password manager is breached (LastPass 2022), the vaults are encrypted with your master password — attackers get encrypted blobs, not cleartext passwords. Password managers are not a perfect defense but they’re dramatically better than reuse.

What if I already used the same password on 50 sites?

Assume that password is compromised and change it everywhere you used it. Prioritize email, bank, and any service with a stored payment method. Work through the list over a week or two — you don’t have to fix them all at once, but the longer you wait the bigger the exposure.

What about passkeys?

Passkeys are better than passwords for sites that support them (Google, Apple, Microsoft, 1Password, and growing). They’re phish-proof and breach-proof. Enable them wherever offered. Until every site supports them, unique passwords-plus-MFA remains the fallback.


]]>
https://safecadence.com/password-reuse-why-it-matters/feed/ 0
The Grandparent Scam: How It Works and How to Stop It https://safecadence.com/grandparent-scam-how-to-stop-it/ https://safecadence.com/grandparent-scam-how-to-stop-it/#respond Sun, 26 Apr 2026 17:38:35 +0000 https://safecadence.com/?p=152 Voice-cloning AI made the grandparent scam 10x more convincing in 2024-2025. Here's the full playbook and a protocol that kills it dead.

The post The Grandparent Scam: How It Works and How to Stop It appeared first on SafeCadence.

]]>
The grandparent scam has existed for decades — a voice on the phone claims to be a grandchild in trouble and begs for money “before mom finds out.” In 2024 it got a lot worse. AI voice cloning means the voice on the phone now sounds exactly like your grandchild, because it was generated from a three-second clip of their TikTok.

How the scam actually works in 2026

The scammer finds a family online. Public Facebook profiles, LinkedIn, Instagram — enough to figure out grandchildren’s names and voices. They use a free voice-cloning tool to copy the grandchild’s voice from any short audio clip. Then they call the grandparent, often at 3am when fear and confusion are high. The voice says: “Grandma, it’s me. I’m in jail. Please don’t tell mom. I need $4,000 for bail.”

A “lawyer” or “bail bondsman” gets on the phone. The lawyer is calm and authoritative. The lawyer sends a courier to pick up cash, or walks the grandparent through a wire transfer, or instructs them to buy gift cards.

The grandparent hangs up. Nothing happens for a few days. By the time they talk to their grandchild in person and realize the scam, the money is gone.

The red flags, in order

  1. The call comes out of the blue, often late at night.
  2. The voice sounds off in the first 10 seconds — even good AI clones have brief pauses, flat affect, or unusual word choices.
  3. The request for secrecy — “please don’t tell mom” — is the defining tell. Every real family discusses crises together.
  4. Urgency with a short deadline — bail in an hour, flight in two hours, surgery tomorrow.
  5. Payment by gift card, wire, or courier-pickup cash — no real legal system accepts these.
  6. A “lawyer” or “officer” takes over the call to apply authority pressure.

The family protocol that kills the scam

One family agreement defeats this entire scam. Talk to your grandparents now — before anyone calls — and agree on two things:

1. A family safe-word

Pick a word or phrase that only the immediate family knows. It should be something that wouldn’t appear on social media — not a pet’s name, not a hometown. A random word, a memorable phrase, anything. If someone calls claiming to be a family member in trouble, the first question your grandparent should ask is: “What’s the family word?”

The AI voice doesn’t know. The real family member does. Scam dies in one question.

2. A call-back rule

Nothing — nothing — happens without hanging up and calling the relative back on their known number. If the “grandchild” says their phone was lost and they’re calling from someone else’s, the rule still applies: hang up, call the known number, and if it doesn’t go through, call another family member who would know.

Scammers prevent this by keeping the grandparent on the line (“the lawyer is on the phone, we need to move now”). Train your grandparent to always hang up anyway. They can call back in 30 seconds. A real emergency waits 30 seconds.

What to do if it’s already happened

If you or a relative fell for this scam, move fast:

  1. Call the bank immediately. If the transfer was a wire and less than 24 hours old, recall is sometimes possible.
  2. File a police report. Local police can coordinate with FBI IC3 for cases over $5,000.
  3. Report to IC3.gov (FBI Internet Crime Complaint Center) and FTC at ReportFraud.ftc.gov.
  4. If gift cards were bought, call the card issuer (Apple, Google, Amazon) immediately. Some cards can be frozen if you call within the hour.
  5. Freeze credit at all three bureaus if any personal details were shared — see our SSN Exposure Safety Guide.

Tools that help

Forward suspicious voicemails through our Is This Voice AI? Detector for basic metadata analysis. Walk through the Elder Scam Protection Checker with your relatives as a conversation starter. And run the AI Scam Detector on any threatening message to score its scam-pattern matches.

FAQ

How much voice does AI need to clone someone?

Commercial voice cloning works with 3-10 seconds of clear audio. TikTok videos, YouTube comments, podcast cameos, voicemail greetings — anywhere the person has been recorded is enough.

Should we delete social media to prevent this?

You don’t have to. The family safe-word and call-back rule defeat voice cloning regardless of how much audio exists. Focus there instead.

What if my grandparent insists the voice was real?

It probably sounded real. Don’t argue the audio. Argue the process: real emergencies don’t require secrecy, don’t take gift cards, and don’t need a decision in the next 60 minutes. If those rules were violated, it was a scam regardless of how real the voice sounded.


]]>
https://safecadence.com/grandparent-scam-how-to-stop-it/feed/ 0
SPF, DKIM, DMARC Explained for Small Business Owners https://safecadence.com/spf-dkim-dmarc-explained/ https://safecadence.com/spf-dkim-dmarc-explained/#respond Sat, 25 Apr 2026 17:38:35 +0000 https://safecadence.com/?p=151 A jargon-free walkthrough of the three email-authentication records your business needs, why most SMBs have them wrong, and how to check yours in 30 seconds.

The post SPF, DKIM, DMARC Explained for Small Business Owners appeared first on SafeCadence.

]]>
If you run a small business with a custom-domain email address, your outgoing mail is being judged by three DNS records you probably haven’t looked at: SPF, DKIM, and DMARC. Together they decide whether your invoices land in inboxes or spam folders, and whether spoofers can impersonate your brand. This article explains each in plain English, then tells you exactly what to check.

Why email needs authentication at all

Email was invented in the 1970s, and the protocol simply trusts whatever sender address a message claims. Today you can send an email that claims to be from ceo@yourdomain.com to anyone in the world with zero authentication. SPF, DKIM, and DMARC exist to close that gap — they let receiving mail servers verify that a message claiming to be from your domain was actually sent by someone you authorized.

SPF: who is allowed to send from your domain

SPF (Sender Policy Framework) is a text record in your domain’s DNS that lists the IP addresses and services allowed to send mail using your domain name. A typical SPF record looks like:

v=spf1 include:_spf.google.com include:mailgun.org -all

This record says: Google Workspace and Mailgun are allowed to send mail for this domain. Anyone else is forbidden (-all = hard fail). When a mail server receives a message claiming to be from your domain, it checks your SPF record and either accepts or rejects based on the sending IP.

Check your SPF record with our free SPF Checker. It flags the three mistakes we see 90% of the time: missing -all, too many include: statements (SPF allows only 10 DNS lookups), and records that allow far too wide a sending range.
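To make those three mistakes concrete, here's a rough validator for a single SPF record string. It is a heuristic sketch, not a full parser: it does not resolve include: chains, so only top-level lookup terms are counted:

```python
# Lookup-counting mechanisms per the SPF spec; qualifiers (+ - ~ ?) may prefix any.
LOOKUP_PREFIXES = ("include:", "exists:", "redirect=", "a:", "mx:", "ptr:")

def is_lookup_term(term: str) -> bool:
    t = term.lstrip("+-~?")                 # strip the qualifier, if any
    return t in ("a", "mx", "ptr") or t.startswith(LOOKUP_PREFIXES)

def audit_spf(record: str) -> list[str]:
    terms = record.split()
    if not terms or terms[0] != "v=spf1":
        return ["not an SPF record"]
    problems = []
    if terms[-1] not in ("-all", "~all"):
        problems.append("missing -all or ~all: spoofed mail is not rejected")
    lookups = sum(1 for t in terms if is_lookup_term(t))
    if lookups > 10:
        problems.append(f"{lookups} lookup terms (SPF allows only 10)")
    if any(t in ("+all", "all") for t in terms[1:]):
        problems.append("record permits any sender via +all")
    return problems

print(audit_spf("v=spf1 include:_spf.google.com include:mailgun.org -all"))  # []
```

Fetch the record itself with any DNS TXT lookup (for example, `dig +short TXT yourdomain.com`), then feed it through the function.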

DKIM: proving the message wasn’t tampered with

DKIM (DomainKeys Identified Mail) adds a cryptographic signature to every outgoing email. The receiving server checks the signature against a public key you publish in DNS. If the signature checks out, the message wasn’t modified in transit and was signed by someone with your private key.

Google, Microsoft 365, and most major email services set up DKIM for you automatically — you just need to enable it in the admin console. The record lives at selector._domainkey.yourdomain.com where “selector” is a label your mail service picks (often google, k1, or s1). Check yours with our DKIM Checker.

DMARC: the policy that ties SPF and DKIM together

DMARC sits on top of SPF and DKIM and says: “If SPF and DKIM fail, here’s what to do with the message.” A DMARC record looks like:

v=DMARC1; p=reject; rua=mailto:dmarc@yourdomain.com

p=reject tells receiving servers to reject mail that fails both SPF and DKIM. rua= tells them where to send daily aggregate reports so you can see how much fake mail claims to come from your domain.

Three policy levels exist:

  • p=none — monitor-only. Receivers still deliver everything but send you reports. Good for your first month to discover all legitimate senders.
  • p=quarantine — unauthenticated mail goes to spam. Good intermediate step.
  • p=reject — unauthenticated mail is rejected at the gateway. Your production target.

Check yours with our DMARC Checker — it surfaces the most common mistakes and tells you what to change.
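Because DMARC records are plain tag=value pairs separated by semicolons, the two things to verify (a reject policy and a rua= address) are easy to check mechanically. A minimal sketch:

```python
def audit_dmarc(record: str) -> list[str]:
    """Check a DMARC record string for policy level and a reporting address."""
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    if tags.get("v") != "DMARC1":
        return ["not a DMARC record"]
    problems = []
    policy = tags.get("p", "none")
    if policy != "reject":
        problems.append(f"policy is p={policy} (production target is p=reject)")
    if "rua" not in tags:
        problems.append("no rua= address, so you get no aggregate reports")
    return problems

print(audit_dmarc("v=DMARC1; p=reject; rua=mailto:dmarc@yourdomain.com"))   # []
```

The record to check lives in a TXT entry at _dmarc.yourdomain.com; a `p=none` result plus a missing rua= is the most common finding, and both show up in the output list.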

The 5-minute audit

Right now, for any domain you use for business email:

  1. Run our SPF Checker — verify you have a record and it ends in -all or ~all.
  2. Run our DKIM Checker — verify your mail provider’s selector is published and the key is at least 1024 bits (2048 recommended).
  3. Run our DMARC Checker — verify you have at minimum p=none with a rua= reporting address.
  4. If your email is via Google Workspace or Microsoft 365, enable DKIM in their admin console if it’s not on.
  5. Commit to moving DMARC from p=none to p=quarantine to p=reject over the next 90 days.

Why this matters for your business

Without DMARC at p=reject, anyone can send mail that looks like it’s from billing@yourcompany.com. That is how wire-fraud attacks start: a fake invoice to your customer telling them to update the bank details. Our Business Email Compromise Checker covers the attack patterns in detail. DMARC at reject kills the entire category of attack before the phish reaches anyone.

FAQ

I use Gmail for business — do I still need these?

If you use Gmail at a Gmail.com address, SPF/DKIM/DMARC are handled entirely by Google. If you use Google Workspace with a custom domain (you@yourcompany.com), you do need to set these up yourself. Google’s admin console walks you through each one.

What’s the single most important thing to fix first?

Publish a DMARC record at p=none with an rua reporting address. That gets you daily aggregate reports showing exactly who is sending mail using your domain — both real senders you forgot about and spoofers. From there, you can tighten to quarantine and then reject.

Will tightening DMARC break legitimate mail?

It can, if you have senders you forgot about (a CRM, a support tool, a newsletter service). That’s why you start at p=none and review reports for 30-60 days before tightening. Every legitimate sender needs to be added to SPF or configured to sign with DKIM.


]]>
https://safecadence.com/spf-dkim-dmarc-explained/feed/ 0