# CyberFurl Full AI Index

Canonical: https://cyberfurl.com/llms-full.txt
Short index: https://cyberfurl.com/llms.txt

## Core Routes
- https://cyberfurl.com/security-report
- https://cyberfurl.com/features
- https://cyberfurl.com/pricing
- https://cyberfurl.com/dns-tools

## Feature Pages
- [DNS Posture](https://cyberfurl.com/features/dns-posture)
- [Email Authentication](https://cyberfurl.com/features/email-authentication)
- [Web Security Headers](https://cyberfurl.com/features/web-security-headers)
- [SSL / TLS](https://cyberfurl.com/features/ssl-tls)
- [Vulnerability Surface](https://cyberfurl.com/features/vulnerability-surface)
- [Breach Exposure](https://cyberfurl.com/features/breach-exposure)
- [Malware Intelligence](https://cyberfurl.com/features/malware-intelligence)
- [DNS Hijacking & Drift](https://cyberfurl.com/features/dns-hijacking)
- [Subdomain Discovery](https://cyberfurl.com/features/subdomain-discovery)
- [Continuous Monitoring](https://cyberfurl.com/features/continuous-monitoring)

## Vertical Pages
- [MSPs](https://cyberfurl.com/for/msps)
- [Ecommerce](https://cyberfurl.com/for/ecommerce)
- [SaaS](https://cyberfurl.com/for/saas)
- [Real Estate](https://cyberfurl.com/for/real-estate)
- [Insurance](https://cyberfurl.com/for/insurance)
- [Government](https://cyberfurl.com/for/government)
- [Healthcare](https://cyberfurl.com/for/healthcare)
- [Finance](https://cyberfurl.com/for/finance)
- [Agencies](https://cyberfurl.com/for/agencies)

## Learn Pages
- [DMARC](https://cyberfurl.com/learn/dmarc)
- [SPF](https://cyberfurl.com/learn/spf)
- [DKIM](https://cyberfurl.com/learn/dkim)
- [BIMI](https://cyberfurl.com/learn/bimi)
- [MTA-STS](https://cyberfurl.com/learn/mta-sts)
- [TLS-RPT](https://cyberfurl.com/learn/tls-rpt)
- [DANE](https://cyberfurl.com/learn/dane)
- [ARC](https://cyberfurl.com/learn/arc)
- [Email Spoofing](https://cyberfurl.com/learn/email-spoofing)
- [Phishing](https://cyberfurl.com/learn/phishing)
- [DNSSEC](https://cyberfurl.com/learn/dnssec)
- [Zone Walking](https://cyberfurl.com/learn/zone-walking)
- [Cache Poisoning](https://cyberfurl.com/learn/cache-poisoning)
- [DNS Hijacking](https://cyberfurl.com/learn/dns-hijacking)
- [NS Drift](https://cyberfurl.com/learn/ns-drift)
- [Dangling CNAME](https://cyberfurl.com/learn/dangling-cname)
- [DNS Tunneling](https://cyberfurl.com/learn/dns-tunneling)
- [CAA Records](https://cyberfurl.com/learn/caa-records)
- [Content Security Policy](https://cyberfurl.com/learn/csp)
- [HSTS](https://cyberfurl.com/learn/hsts)
- [X-Frame-Options](https://cyberfurl.com/learn/x-frame-options)
- [Referrer-Policy](https://cyberfurl.com/learn/referrer-policy)
- [Permissions-Policy](https://cyberfurl.com/learn/permissions-policy)
- [SSL / TLS](https://cyberfurl.com/learn/ssl-tls)
- [Certificate Transparency](https://cyberfurl.com/learn/certificate-transparency)
- [Subdomain Takeover](https://cyberfurl.com/learn/subdomain-takeover)
- [Credential Stuffing](https://cyberfurl.com/learn/credential-stuffing)
- [Typosquatting](https://cyberfurl.com/learn/typosquatting)
- [Data Breach](https://cyberfurl.com/learn/data-breach)
- [Attack Surface Management](https://cyberfurl.com/learn/attack-surface-management)

---
## DNS Posture
Source: https://cyberfurl.com/features/dns-posture.md

# DNS Security Monitoring and Posture Checks

Track live records, delegation, DNSSEC, and drift on the domains that carry your web, mail, and brand traffic.

## What this DNS posture page actually gives a team

- Authoritative DNS is public infrastructure. Attackers, vendors, customers, and search engines all see the same exposed state.
- Small record drift often lands before anyone opens a ticket: broken mail routing, stale nameservers, partial cutovers, and missing trust controls.
- Teams need one place to review records, nameservers, DNSSEC, consistency, and uptime instead of stitching together resolver output by hand.

## What this page covers

The DNS posture page is not a decorative summary. It is where a team can verify the live record set, see whether DNSSEC is actually present, inspect authoritative nameservers, and confirm that the public DNS layer matches what production is supposed to look like.
That matters during migrations, registrar changes, mail-routing work, and incident review. Instead of pasting together `dig` output, DNS tools, and screenshots, the page keeps the public DNS evidence in one place and makes it usable for both engineering and security owners.

## Key stats

- **A/AAAA** Record visibility: Track the IPs and routes buyers and attackers can already see.
- **DNSSEC** Trust control: Verify whether zone signing and validation are actually present.
- **NS drift** Delegation risk: Catch nameserver movement before it becomes an outage or takeover issue.
- **24/7** Monitoring path: Move from one-off inspection into recurring DNS posture checks.

## Coverage areas

### Record and delegation coverage

Review the authoritative layer that defines how the domain is reached.

- A, AAAA, CNAME, MX, NS, SOA, and TXT records
- Nameserver delegation visibility
- Resolver consistency and propagation checks

### Integrity and trust controls

Validate the controls that reduce silent DNS manipulation risk.

- DNSSEC validation
- Zone transfer exposure checks
- Wildcard and amplification detection

### Operational workflow

Use the same posture view for baselining, handoff, and monitoring.

- Readable posture summaries
- Useful evidence for remediation tickets
- Monitoring-ready checks for recurring drift
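
The recurring-drift idea above can be sketched as a simple snapshot diff: save the authoritative record set as a baseline, then compare each fresh lookup against it. This is a minimal illustration, not CyberFurl's implementation; the record names and values are invented, and a real workflow would populate both dicts from resolver output.

```python
# Hypothetical sketch: detect DNS record drift by diffing a saved baseline
# snapshot against a fresh lookup result. All names/values are illustrative.

def diff_records(baseline: dict, current: dict) -> dict:
    """Return per-record-type added/removed values where the sets differ."""
    drift = {}
    for rtype in sorted(set(baseline) | set(current)):
        old = set(baseline.get(rtype, []))
        new = set(current.get(rtype, []))
        if old != new:
            drift[rtype] = {"added": sorted(new - old),
                            "removed": sorted(old - new)}
    return drift

baseline = {"NS": ["ns1.example.net.", "ns2.example.net."],
            "MX": ["10 mail.example.com."]}
current = {"NS": ["ns1.example.net.", "ns9.other-host.net."],
           "MX": ["10 mail.example.com."]}

# Unchanged types (MX here) are omitted; only the NS movement surfaces.
print(diff_records(baseline, current))
```

The useful property for monitoring is that an empty result means "no drift," so the same check can run on a schedule and only alert on change.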

## Common use cases

- Pre-launch DNS review before migrations or registrar changes
- Recurring DNS security monitoring for production domains
- Executive-ready DNS posture snapshots for clients or internal owners

## Research findings

### External attack-surface guidance now treats DNS as a first-class risk layer

OWASP ASM Top 10 explicitly calls out insecure DNS configurations, dangling records, domain hijacking, and forgotten subdomains as external attack-surface risks that can become entry points even when the main application is otherwise well maintained.

Action: Teams usually buy DNS monitoring when they need record inventory, takeover-prone record detection, and baseline drift alerts in one place rather than scattered lookups.

Source: [OWASP Attack Surface Management Top 10](https://owasp.org/www-project-attack-surface-management-top-10/)

### CISA frames DNS failures around integrity, availability, and implementation errors

CISA’s DNS risk assessment treats DNS as critical infrastructure where record integrity, service availability, and implementation mistakes can all create real operational and security failures.

Action: The operational move is to baseline authoritative NS, SOA, DNSSEC, MX, and TXT state before migrations, then keep change history so drift is explainable when incidents happen.

Source: [CISA DNS Risk Assessment](https://www.cisa.gov/sites/default/files/publications/DNS_Risk_Assessment.pdf)

### Abuse feeds are useful, but they are not a verdict on their own

ICANN’s DAAR project uses high-confidence feeds for phishing, malware, spam, and botnet activity, but it also notes that the data does not distinguish maliciously registered domains from compromised ones and is not a mitigation-speed measure.

Action: That is why buyers should want DNS evidence, mail evidence, and abuse signals on the same page so a reputation flag can be tested against real public configuration instead of treated as automatic proof.

Source: [ICANN Domain Abuse Activity Reporting](https://www.icann.org/octo-ssr/daar)

## FAQ

### What does DNS security monitoring include?

CyberFurl groups record visibility, nameserver delegation, DNSSEC validation, consistency checks, propagation, uptime, and other externally visible DNS controls into one workflow.

### Why is DNS posture different from a basic DNS lookup?

A lookup shows isolated records. DNS posture monitoring connects those records to trust controls, delegation state, drift risk, and repeatable monitoring so teams can act on what changed.

### Who needs a DNS posture page most?

Security teams, infrastructure owners, MSPs, and technical buyers use DNS posture pages to understand whether a domain has weak trust controls, unstable delegation, or public routing issues before those problems become incidents.

### Can CyberFurl help monitor DNS changes after an audit?

Yes. The same DNS posture workflow can feed into recurring monitoring so record drift, nameserver changes, and trust-control regressions stay visible over time.

## Related links

- [DNS Intelligence workspace](https://cyberfurl.com/dns-intelligence)
- [DNS records lookup](https://cyberfurl.com/dns-tools/dns-records)
- [DNSSEC validation](https://cyberfurl.com/dns-tools/dnssec)
- [DNS monitoring](https://cyberfurl.com/monitoring)


---
## Email Authentication
Source: https://cyberfurl.com/features/email-authentication.md

# SPF, DKIM, and DMARC Checker for Email Authentication

Audit SPF, DKIM, DMARC, MX, MTA-STS, TLS-RPT, and BIMI in one place so spoofing resistance and deliverability are easier to own.

## What an email-authentication page needs to show

- Mail authentication failures hurt deliverability, brand trust, and phishing resilience at the same time.
- Most teams need more than a green or red badge. They need the full SPF record, selector coverage, DMARC policy, and transport controls in one view.
- Email controls change quietly during provider migrations, vendor additions, and DNS cleanups. Scheduled checks keep those regressions visible.

## What this page covers

This page should answer the questions teams actually ask during mail incidents: Is SPF too broad? Which DKIM selectors are present? Is DMARC enforcing or just reporting? What MX hosts are live? Are MTA-STS and TLS-RPT configured or missing?
That is why the page groups authentication, routing, and transport controls together. It gives security, IT, and deliverability owners one place to validate sender trust instead of juggling separate SPF, DKIM, DMARC, and MX utilities.

## Key stats

- **SPF** Sender policy: Inspect the full SPF record and lookup count pressure.
- **DKIM** Signing coverage: Review selectors, discovered keys, and signing hygiene.
- **DMARC** Policy visibility: Parse enforcement, subdomain policy, reporting, and coverage.
- **Mail stack** Supporting controls: Include MX, BIMI, MTA-STS, TLS-RPT, and DANE context.

## Coverage areas

### Authentication controls

Review the records that determine whether sender identity is trusted.

- SPF validation and flattening pressure
- DKIM selector discovery and key context
- DMARC policy parsing and enforcement guidance
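
The "flattening pressure" bullet refers to SPF's evaluation limit: RFC 7208 caps DNS-querying terms (`include`, `a`, `mx`, `ptr`, `exists`, and the `redirect` modifier) at 10 per check. A minimal sketch of counting that pressure, with an invented example record:

```python
# Sketch of SPF lookup-count pressure per RFC 7208: include, a, mx, ptr,
# exists, and redirect each cost a DNS lookup, capped at 10 total.
# The record string below is an invented example.

def spf_lookup_count(record: str) -> int:
    count = 0
    for term in record.split():
        term = term.lstrip("+-~?")  # strip qualifier prefixes
        if term.startswith(("include:", "exists:", "redirect=")):
            count += 1
        elif term in ("a", "mx", "ptr") or term.startswith(
                ("a:", "mx:", "ptr:", "a/", "mx/")):
            count += 1
    return count

record = ("v=spf1 include:_spf.google.com include:sendgrid.net "
          "include:mailgun.org a mx ip4:203.0.113.0/24 ~all")
print(spf_lookup_count(record), "of 10 lookups used")  # three includes + a + mx
```

Note that `ip4:`/`ip6:` mechanisms cost nothing, which is why flattening vendors rewrite `include` chains into IP lists when a record nears the cap.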

### Transport and brand trust

Surface the controls that support secure delivery beyond basic authentication.

- MX routing visibility
- MTA-STS and TLS-RPT checks
- BIMI and DANE support checks

### Monitoring and operations

Keep email trust posture visible after migrations, vendor changes, and DNS edits.

- Monitoring-ready email controls
- Evidence for deliverability debugging
- Clear remediation narrative for DNS owners

## Common use cases

- Diagnosing phishing resilience gaps for customer-facing domains
- Validating email posture during Google Workspace or Microsoft 365 changes
- Monitoring DMARC and DKIM regressions over time

## Research findings

### Google now treats email authentication as a delivery requirement, not a nice-to-have

Google’s sender guidelines require all senders to publish SPF or DKIM, and bulk senders to publish SPF, DKIM, and DMARC. Messages that miss those controls are more likely to be rejected or sent to spam.

Action: That turns missing DMARC, broken DKIM, or incomplete SPF coverage into a production issue for any brand that depends on email trust or revenue email.

Source: [Google Email Sender Guidelines](https://support.google.com/a/answer/81126?hl=en)

### Key length and domain alignment are becoming practical buying criteria

Google says email sent to personal Gmail accounts needs a DKIM key of at least 1024 bits and recommends 2048-bit keys, and its sender FAQ says full DMARC alignment is likely to become a stronger requirement over time.

Action: The useful product requirement is selector discovery, key-length visibility, and clear SPF-versus-DKIM alignment output rather than a single generic pass badge.

Source: [Google Sender Guidelines FAQ](https://support.google.com/a/answer/14229414?hl=en)

### Google’s own DMARC rollout guidance is staged and measurable

Google recommends setting up SPF and DKIM at least 48 hours before DMARC, running p=none for about a week, then moving to quarantine in small percentages such as 1% for large senders or 10% for small organizations while reviewing reports daily.

Action: Teams buy email posture tooling when they need the page to show p=, sp=, pct=, rua=, ruf=, alignment issues, and rollout-ready next steps instead of just “DMARC present.”

Source: [Google Recommended DMARC Rollout](https://knowledge.workspace.google.com/admin/security/recommended-dmarc-rollout?hl=en)
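
The rollout tags named above (`p=`, `sp=`, `pct=`, `rua=`) live in a single `_dmarc.<domain>` TXT record as semicolon-separated pairs, so parsing them is straightforward. A minimal sketch with an invented record value; a real check would fetch the TXT record first:

```python
# Minimal sketch of extracting DMARC policy tags from a TXT record value.
# The record string is an invented example.

def parse_dmarc(txt: str) -> dict:
    """Split 'tag=value; tag=value' pairs into a dict."""
    tags = {}
    for pair in txt.split(";"):
        pair = pair.strip()
        if "=" in pair:
            key, _, value = pair.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; sp=none; pct=10; rua=mailto:dmarc@example.com"
tags = parse_dmarc(record)

# pct defaults to 100 when absent, per the DMARC spec.
print(f"policy={tags.get('p')} subdomains={tags.get('sp')} "
      f"pct={tags.get('pct', '100')} reports={tags.get('rua')}")
```

This example is the staged-rollout state Google describes: quarantine at 10% with aggregate reporting on, but subdomains still at `none`.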

## FAQ

### What is the best way to check SPF, DKIM, and DMARC together?

Use a workflow that inspects SPF, DKIM, DMARC, MX, and supporting transport controls together so you can see whether the mail stack is actually coherent, not just whether one record exists.

### Does CyberFurl cover more than DMARC?

Yes. It includes SPF, DKIM, MX, BIMI, MTA-STS, TLS-RPT, DANE, PTR, and related evidence so the email security narrative is complete.

### Why do SPF, DKIM, and DMARC need to be reviewed together?

Because sender trust depends on how those controls work together. A DMARC record alone is not enough if SPF is weak, DKIM coverage is inconsistent, or mail routing is misconfigured.

### Can this page help with email deliverability problems?

Yes. It is useful for both security and deliverability work because it shows authentication posture, policy gaps, and transport signals that often explain why mail trust is breaking.

## Related links

- [Email Intelligence workspace](https://cyberfurl.com/email-intelligence)
- [SPF checker](https://cyberfurl.com/email-tools/spf)
- [DKIM checker](https://cyberfurl.com/email-tools/dkim)
- [DMARC checker](https://cyberfurl.com/email-tools/dmarc)


---
## Web Security Headers
Source: https://cyberfurl.com/features/web-security-headers.md

# Security Headers Scanner for Public Web Security Reviews

Inspect the exact headers your public edge serves, including CSP, HSTS, framing, and referrer policy, and catch regressions after releases or CDN changes.

## What a useful security-headers page should answer

- HTTP response headers often reveal whether a public surface has basic browser-side defenses in place.
- Teams want one report that ties header gaps to the live target, TLS state, and broader public posture instead of isolated scanner output.
- Header checks are especially useful during releases, CDN changes, and platform migrations where defaults shift underneath the app.

## What this page covers

A real headers page should show whether the production edge is serving the policies the team expects, not just whether a scanner can print header names. Teams need to know if HSTS is live, if CSP exists, whether framing protections are present, and whether recent releases changed that posture.
This page is meant for release review, security review, and regression checking. It turns public response headers into something engineers can act on quickly instead of leaving them as raw scanner output.

## Key stats

- **CSP** Execution boundaries: Review script, frame, and source control posture.
- **HSTS** Transport enforcement: Check whether browsers are pinned to HTTPS correctly.
- **Headers** Public web checks: Track the controls exposed on customer-facing targets.
- **Evidence** Readable output: Explain what is missing and why it matters.

## Coverage areas

### Header visibility

Scan the policy controls that browsers consume directly.

- Content-Security-Policy
- Strict-Transport-Security
- X-Frame-Options and Referrer-Policy
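
A presence check over those headers is the simplest form of this review. The sketch below evaluates a hard-coded response dict for illustration; a real check would issue an HTTPS request to the live target and read the actual headers.

```python
# Hypothetical sketch of a header-presence check. The response headers here
# are hard-coded; a live check would fetch them from the production edge.

EXPECTED = ["Content-Security-Policy", "Strict-Transport-Security",
            "X-Frame-Options", "Referrer-Policy"]

def missing_headers(response_headers: dict) -> list:
    """Return expected security headers absent from the response."""
    present = {name.lower() for name in response_headers}  # case-insensitive
    return [h for h in EXPECTED if h.lower() not in present]

headers = {"Strict-Transport-Security": "max-age=63072000; includeSubDomains",
           "X-Frame-Options": "DENY"}
print("Missing:", missing_headers(headers))
```

Presence is only the first gate; a useful report also evaluates the values (e.g. whether CSP actually restricts script sources), which is where raw scanner output usually stops.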

### Release-safe review

Use the same surface during deployments, CDN changes, and security reviews.

- Live target inspection
- Repeatable regression checks
- Friendly summaries for engineering tickets

### Cross-signal context

Connect header posture to the wider external footprint.

- Pair with SSL/TLS checks
- Review alongside vulnerability surface
- Use within recurring monitoring workflows

## Common use cases

- Checking whether a new frontend release preserved browser defenses
- Auditing customer-facing sites before procurement or security review
- Monitoring header regressions after CDN or proxy changes

## Research findings

### HSTS only protects future visits after the browser has seen it once

MDN notes that HSTS takes effect only after a browser has made a secure connection and received the Strict-Transport-Security header, and that includeSubDomains and preload materially change how broadly that policy is enforced.

Action: Teams usually need a header review that shows whether HSTS is present, whether it applies to subdomains, and whether max-age is strong enough for the way the brand actually serves traffic.

Source: [MDN Strict-Transport-Security](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Strict-Transport-Security)
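
Those three questions (present, subdomain scope, max-age strength) map directly onto the header's directives. A minimal sketch of parsing them, with an invented header value; one year is 31536000 seconds:

```python
# Sketch of parsing Strict-Transport-Security directives to judge max-age
# strength and scope. The header value is an invented example.

def parse_hsts(value: str) -> dict:
    directives = [d.strip().lower() for d in value.split(";")]
    info = {"max_age": 0, "include_subdomains": False, "preload": False}
    for d in directives:
        if d.startswith("max-age="):
            info["max_age"] = int(d.split("=", 1)[1])
        elif d == "includesubdomains":
            info["include_subdomains"] = True
        elif d == "preload":
            info["preload"] = True
    return info

hsts = parse_hsts("max-age=63072000; includeSubDomains")
print(hsts, "meets one-year minimum:", hsts["max_age"] >= 31536000)
```

Directive names are matched case-insensitively because servers vary in capitalization; the one-year threshold matters because preload eligibility requires at least that.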

### CISA’s exposure guidance treats internet-visible configuration as something to reassess routinely

CISA’s Internet Exposure Reduction Guidance tells organizations to identify exposed assets, decide which exposures are actually necessary, harden the ones that must stay public, and then establish routine assessments as the environment changes.

Action: Header posture belongs in that release and edge-review loop, especially after CDN, reverse-proxy, or framework changes that silently alter browser policy.

Source: [CISA Internet Exposure Reduction Guidance](https://www.cisa.gov/resources-tools/resources/exposure-reduction)

### HTTPS posture supports both security trust and search trust

Google has long preferred HTTPS pages by default in indexing and recommends redirecting HTTP to HTTPS and implementing HSTS to make the secure version unambiguous.

Action: That makes header hygiene more than an appsec nicety. It supports browser safety, user confidence, and the public trust signals that surround search-facing pages.

Source: [Google HTTPS by Default](https://developers.google.com/search/blog/2015/12/indexing-https-pages-by-default)

## FAQ

### What does a security headers scanner usually check?

It typically reviews exposed HTTP response headers such as CSP, HSTS, X-Frame-Options, Referrer-Policy, and related browser security controls on public web targets.

### Why should header checks live with other external posture checks?

Headers only tell part of the story. Pairing them with TLS, DNS, and exposure checks helps teams prioritize remediation based on the real public attack surface.

### Which web security headers matter most on public sites?

The highest-signal headers usually include Content-Security-Policy, Strict-Transport-Security, X-Frame-Options, Referrer-Policy, and related browser-facing controls that shape how the site can be used or abused.

### When should teams re-check security headers?

Teams should re-check headers after frontend releases, CDN or reverse-proxy changes, and any infrastructure work that could silently change the public response policy.

## Related links

- [Infrastructure tools](https://cyberfurl.com/infrastructure/port-scan)
- [Security report index](https://cyberfurl.com/security-report)
- [SSL/TLS page](https://cyberfurl.com/features/ssl-tls)
- [Monitoring](https://cyberfurl.com/monitoring)


---
## SSL / TLS
Source: https://cyberfurl.com/features/ssl-tls.md

# SSL/TLS Scanner for Certificate and Protocol Posture

Monitor certificate trust, expiry, protocol support, and HTTPS enforcement so renewals and edge changes do not become public outages.

## What teams should get from an SSL/TLS page

- Certificate problems are public, user-facing, and reputation-damaging the moment they hit production.
- Teams need issuer, validity window, supported protocols, and surrounding trust signals in one place rather than a raw handshake dump.
- TLS posture changes during renewals, proxy swaps, and edge migrations. Continuous checks keep certificate drift visible.

## What this page covers

The SSL/TLS page is for more than a letter grade. It should tell a team whether the public certificate is trusted, when it expires, what protocols are enabled, and whether HTTPS posture is stable enough for customer-facing traffic.
That makes it useful during renewals, CDN migrations, proxy changes, and vendor review. The page keeps certificate detail, transport posture, and follow-up monitoring in one workflow so the trust story is complete.

## Key stats

- **Issuer** Certificate trust: Review who issued the cert and when it expires.
- **TLS 1.2/1.3** Protocol posture: Track supported versions and remove outdated transport.
- **Expiry** Renewal risk: Watch days remaining before production impact hits.
- **HTTPS** Public trust signal: Keep customer-facing encryption posture visible.

## Coverage areas

### Certificate detail

Review the identity and lifespan of the exposed certificate.

- Issuer and trust chain context
- Valid from and valid to dates
- Days remaining before expiry
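
The days-remaining number can be computed directly from a certificate's `notAfter` field. This sketch uses the date format Python's `ssl.getpeercert()` returns; the timestamp here is invented, and a real check would read it from a live TLS handshake.

```python
import ssl
from datetime import datetime, timezone

# Sketch of an expiry countdown from a certificate's notAfter field, in the
# format ssl.getpeercert() emits. The timestamp below is an invented example.

def days_remaining(not_after: str) -> int:
    expires = ssl.cert_time_to_seconds(not_after)  # parses "Mon DD HH:MM:SS YYYY GMT"
    now = datetime.now(timezone.utc).timestamp()
    return int((expires - now) // 86400)

print(days_remaining("Jun  1 12:00:00 2030 GMT"), "days until expiry")
```

A negative result means the certificate is already expired, which is the condition an expiry-aware monitoring workflow should escalate well before it occurs.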

### Protocol and transport

Check whether the public edge is serving modern TLS correctly.

- Supported TLS versions
- Cipher posture context
- HSTS and related trust hints

### Monitoring path

Keep certificate and transport changes from turning into outages.

- Expiry-aware workflows
- Readable escalation context
- Useful pairing with uptime monitoring

## Common use cases

- Reviewing cert posture before renewals or CDN changes
- Tracking TLS regression after load balancer or proxy work
- Showing HTTPS trust posture inside public security reports

## Research findings

### Unencrypted HTTP is still readable in transit

Let’s Encrypt’s HTTPS guidance is blunt: plain HTTP traffic can be viewed in transit, which means even “non-sensitive” pages can leak sessions, content, and user behavior to any system on the network path.

Action: Teams buy TLS monitoring when they want certificate hygiene and HTTPS trust handled as a baseline customer requirement instead of an occasional maintenance task.

Source: [Let's Encrypt Why All Websites Should Use HTTPS](https://letsencrypt.org/docs/why-all-https/)

### Google’s HTTPS preference makes transport posture visible beyond security teams

Google’s HTTPS-by-default indexing guidance ties redirects and secure preference directly to how the web surface is understood by search systems, not just browsers.

Action: That is why expiry windows, TLS versions, redirect correctness, and HSTS deserve a permanent place in external monitoring for revenue and trust-critical domains.

Source: [Google HTTPS by Default](https://developers.google.com/search/blog/2015/12/indexing-https-pages-by-default)

### HSTS settings change the blast radius of HTTPS mistakes

MDN documents that preload requires a one-year max-age and includeSubDomains, and that once a browser stores HSTS it will not let users bypass certain certificate errors for future visits.

Action: A worthwhile TLS panel therefore needs to show more than expiry. It should explain whether HSTS exists, how broadly it applies, and whether transport policy is strong enough for the brand footprint.

Source: [MDN Strict-Transport-Security](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Strict-Transport-Security)

## FAQ

### What should an SSL/TLS scanner show?

A useful SSL/TLS scanner should show certificate issuer, validity dates, days remaining, supported protocol versions, and the surrounding public trust posture rather than only a raw technical grade.

### Does TLS posture need continuous monitoring?

Yes. Certificate expiry, proxy changes, and edge reconfiguration can break HTTPS unexpectedly, so scheduled checks help catch regressions early.

### Why is certificate expiry monitoring important?

Because HTTPS trust breaks immediately when a certificate expires. Expiry monitoring gives teams time to renew or fix edge configuration before users start seeing browser warnings.

### Can SSL/TLS posture affect customer trust directly?

Yes. Weak protocol support, broken trust chains, or expired certificates are visible to users and browsers right away, so SSL/TLS posture has direct reputational and operational impact.

## Related links

- [Security reports](https://cyberfurl.com/security-report)
- [Monitoring](https://cyberfurl.com/monitoring)
- [Web security headers](https://cyberfurl.com/features/web-security-headers)
- [Pricing](https://cyberfurl.com/pricing)


---
## Vulnerability Surface
Source: https://cyberfurl.com/features/vulnerability-surface.md

# Attack Surface Monitoring for Public Vulnerability Exposure

Review exposed routes, detectable technologies, and weak public web surfaces, then keep that footprint on a schedule as releases and vendors change.

## What this vulnerability-surface page is for

- Exposed paths, admin surfaces, old frameworks, and weak public controls often drift between releases without a dedicated owner noticing.
- Security buyers want to see how public risk changes over time, not just whether a scan found something once.
- The strongest workflow combines exposure checks with DNS, TLS, headers, and monitoring so public surface changes are easier to trust and triage.

## What this page covers

The vulnerability-surface page is where teams review the public web layer that drifted into existence over time: exposed paths, framework clues, weak routes, and other signals that suggest the external surface is broader than expected.
That is useful before customer demos, procurement reviews, platform launches, and routine exposure checks. Instead of a one-time scan result with no follow-up path, the page helps teams baseline public risk and watch for changes later.

## Key stats

- **Exposure** Public paths: Review externally visible routes and weak entry points.
- **Frameworks** Detectable stack: See what public technology signals are exposed.
- **Change** Surface drift: Track whether public risk expanded after releases.
- **Context** Cross-signal review: Tie exposure to DNS, TLS, and web control posture.

## Coverage areas

### Public-facing path review

Focus on the web surfaces that expand risk first.

- Exposed paths and admin surfaces
- Framework and CMS hints
- Publicly detectable weak spots

### Security triage context

Make findings easier to trust and route internally.

- Readable summaries
- Evidence designed for remediation
- Useful alongside score and posture views

### Monitoring-ready design

Treat attack surface monitoring as a living workflow.

- Scheduled re-checks
- Change-awareness over time
- Useful for release and vendor review cycles
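
Change-awareness between scans reduces to a set difference over the exposed surface. A minimal sketch, with invented paths; a real workflow would record each scan's findings and diff consecutive snapshots.

```python
# Sketch of surface-change review between two scans: which publicly visible
# paths appeared since the last baseline. All paths are invented examples.

def new_exposures(previous: set, current: set) -> set:
    """Paths present now but absent from the prior baseline."""
    return current - previous

last_scan = {"/login", "/api/v1/status"}
this_scan = {"/login", "/api/v1/status", "/admin", "/.git/config"}
print(sorted(new_exposures(last_scan, this_scan)))
```

Reviewing only the delta is what makes recurring checks tractable: a release that quietly exposed `/admin` or a repository metadata path stands out immediately instead of being buried in a full scan report.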

## Common use cases

- Comparing public exposure before and after a major release
- Baseline reviews for customer-facing web properties
- Recurring attack surface monitoring for high-visibility domains

## Research findings

### Public-facing application exploitation is climbing, not shrinking

IBM’s 2026 X-Force Threat Intelligence Index says exploitation of public-facing applications was the most common initial access vector in its 2025 incident-response and investigation data, up 44% from the prior year.

Action: That makes internet-facing surface review a recurring operating control, not something teams postpone until the next annual pentest.

Source: [IBM X-Force Threat Intelligence Index 2026](https://www.ibm.com/think/x-force/threat-intelligence-index-2026-securing-identities-ai-detection-risk-management)

### Unauthenticated weaknesses deserve first priority on the public edge

The same IBM research says 56% of disclosed vulnerabilities observed in that threat landscape did not require authentication to exploit successfully.

Action: Buyers should want tooling that helps them inventory and re-check unauthenticated public paths, weak admin surfaces, and exposed frameworks before those routes become the easiest way in.

Source: [IBM X-Force Threat Intelligence Index 2026](https://www.ibm.com/think/x-force/threat-intelligence-index-2026-securing-identities-ai-detection-risk-management)

### Good attack-surface work starts with asset inventory discipline

OWASP’s infrastructure risk guidance says accurate asset inventory and regular audits are crucial because poor documentation makes it hard to enforce security policy, scope incidents, and map affected systems quickly.

Action: That is why the valuable product output is not just “we found a path.” It is an exposure workflow that gives teams a durable inventory, ownership handoff, and repeatable change review.

Source: [OWASP Insufficient Asset Management and Documentation](https://owasp.org/www-project-top-10-infrastructure-security-risks/docs/2024/ISR10_2024-Insufficient_Asset_Management_and_Documentation)

## FAQ

### What is attack surface monitoring?

Attack surface monitoring is the recurring review of exposed public assets, endpoints, technologies, and weak spots so teams can detect surface changes instead of relying on a one-time scan.

### How is CyberFurl different from a one-off vulnerability scan?

CyberFurl ties public exposure checks to posture, reporting, and monitoring workflows so teams can see what is exposed now and what changed later.

### What kinds of public exposures does this page focus on?

It focuses on exposed web paths, detectable frameworks, public-facing weak spots, and other internet-visible signals that help teams understand how their attack surface is changing.

### Why do teams monitor attack surface changes over time?

Because public exposure shifts after releases, migrations, and vendor changes. Monitoring helps teams catch new weak spots that were not present in the last review.

## Related links

- [Threat intelligence](https://cyberfurl.com/threat-intelligence/vulnerability)
- [Security reports](https://cyberfurl.com/security-report)
- [Monitoring](https://cyberfurl.com/monitoring)
- [Web security headers](https://cyberfurl.com/features/web-security-headers)


---
## Breach Exposure
Source: https://cyberfurl.com/features/breach-exposure.md

# Credential Breach Checker for Domain and Exposure Reviews

See whether identities tied to the domain appear in known breach datasets and use that signal to prioritize phishing resilience, MFA, and brand-trust work.

## What a breach-exposure page should clarify

- Credential exposure matters most when it can be tied back to the domain, user trust, and phishing resilience.
- A breach checker becomes more useful when it sits next to email authentication and brand-abuse signals instead of isolated leak counts.
- Teams often use breach exposure data as an executive signal: is the brand appearing in places it should not, and does that map to wider trust issues?

## What this page covers

A breach-exposure page should not pretend every leaked credential means the domain infrastructure was compromised. What it should do is show whether identities tied to the domain appear in known datasets and explain why that matters for phishing, impersonation, and trust.
That makes the page useful for security reviews, client conversations, and executive summaries. It helps teams connect identity exposure to email authentication and brand-abuse concerns instead of treating it as an isolated number.

## Key stats

- **Breach** Exposure signal: Review whether public identities tied to the domain appear in known breach datasets.
- **Context** Identity narrative: Use breach evidence with email and impersonation signals.
- **Trust** Brand impact: Translate raw exposure into user-trust and phishing context.
- **Workflow** Operator output: Turn public exposure into security follow-up, not just a count.

## Coverage areas

### Exposure visibility

Show where breach-related signals are attached to the domain or brand.

- Credential breach lookups
- Exposure-aware summaries
- Useful domain-centered context

### Cross-signal prioritization

Review breach signals next to the controls that reduce abuse.

- Email authentication overlap
- Brand abuse adjacency
- Risk narrative for trust owners

### Reporting and follow-up

Make breach exposure easier to communicate internally.

- Readable findings
- Support for handoff to security and IT owners
- Fits into monitoring and reporting workflows

## Common use cases

- Checking whether a domain’s public identities show up in breach datasets
- Adding breach context to customer-facing security reports
- Prioritizing phishing resilience work for exposed brands
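
Many breach checkers follow the k-anonymity pattern popularized by Have I Been Pwned's Pwned Passwords range API: hash the credential locally, send only a short hash prefix, and match the full suffix on your side so the raw secret never leaves the machine. A minimal sketch of the local half of that flow (the range endpoint and `SUFFIX:COUNT` response format are assumptions based on HIBP's public API, not CyberFurl's implementation):

```python
import hashlib

def hibp_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the SHA-1 of a credential into the 5-char prefix that is
    sent to the range API and the suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(range_response: str, suffix: str) -> int:
    """Parse a 'SUFFIX:COUNT' range-response body and return how many
    times our suffix appears in known breach corpora (0 if absent)."""
    for line in range_response.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

The actual network call (e.g. fetching `/range/<prefix>`) is left out so the sketch stays self-contained.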

## Research findings

### The average breach cost is still moving in the wrong direction

IBM’s 2024 Cost of a Data Breach findings put the average global breach cost at USD 4.88 million, a material jump from the prior year.

Action: That cost context makes credential and identity exposure worth surfacing to buyers early, especially when the domain is customer-facing or tied to high-trust communications.

Source: [IBM Cost of a Data Breach 2024](https://www.ibm.com/think/insights/whats-new-2024-cost-of-a-data-breach-report)

### Organizations that detect issues themselves tend to contain cost better

IBM’s 2024 summary says 42% of organizations identified breaches with their own teams and tools, and those organizations saw nearly USD 1 million lower average breach costs than breaches first identified by the attacker.

Action: That is the buying case for continuous external visibility: domain-linked identity exposure should be detected by your team, correlated with mail controls, and escalated before an attacker turns it into leverage.

Source: [IBM Cost of a Data Breach 2024](https://www.ibm.com/think/insights/whats-new-2024-cost-of-a-data-breach-report)

### Credential exposure now reaches beyond classic enterprise apps

IBM’s 2026 X-Force Index reports more than 300,000 AI chatbot credentials advertised for sale on the dark web, highlighting how credential theft spills across consumer, workforce, and new SaaS surfaces.

Action: Useful breach-exposure tooling should therefore connect exposed identities back to the domain, email trust, and downstream reset or MFA workflows instead of stopping at a leak count.

Source: [IBM X-Force Threat Intelligence Index 2026](https://www.ibm.com/think/x-force/threat-intelligence-index-2026-securing-identities-ai-detection-risk-management)

## FAQ

### What does a credential breach checker tell you?

It helps show whether public identities tied to a domain appear in known breach datasets, which can support phishing risk analysis and identity-exposure review.

### Why pair breach exposure with email security?

Because exposed identities and weak email authentication increase phishing and impersonation risk together. Looking at both signals improves prioritization.

### Does breach exposure always mean the domain itself was hacked?

No. Breach exposure usually means identities, accounts, or credentials connected to the domain appeared in known datasets. That is an important signal, but it is not the same as proving a direct compromise of the domain infrastructure.

### Who benefits from a breach exposure page?

Security teams, MSPs, sales engineers, and risk reviewers use breach exposure pages to explain identity risk, phishing exposure, and trust implications in a way non-specialists can understand.

## Related links

- [Security report](https://cyberfurl.com/security-report)
- [Email authentication](https://cyberfurl.com/features/email-authentication)
- [Malware intelligence](https://cyberfurl.com/features/malware-intelligence)
- [Pricing](https://cyberfurl.com/pricing)


---
## Malware Intelligence
Source: https://cyberfurl.com/features/malware-intelligence.md

# Domain Malware Checker and Threat Intelligence Signals

Check abuse and malware feeds for the domain, see how strong the evidence really is, and compare those signals against DNS, web, and email posture before escalating.

## What this malware-intelligence page needs to do

- Malware and blacklist intelligence can create false positives if adjacent signals are presented as confirmed malicious evidence.
- Teams need a domain-centered view that distinguishes direct threat feeds, limited-detail flags, and unavailable evidence clearly.
- The strongest workflow pairs threat intelligence with DNS, web, email, and infrastructure posture so a single signal does not distort the whole story.

## What this page covers

The malware-intelligence page should help a team answer a simple question: what threat signals are actually attached to this domain, and how much confidence should we place in them? That means direct findings should be kept separate from weak feed correlations and limited-detail reputation noise.
It is valuable during customer trust reviews, false-positive triage, and threat-intelligence handoff. Instead of vague scare copy, the page should explain what evidence exists and how it lines up with the rest of the public posture.

## Key stats

- **Signals** Threat feeds: Review domain-level abuse or malware intelligence with clear context.
- **Truthful** Risk framing: Separate direct findings from limited or unavailable evidence.
- **Public** Externally visible: Keep the analysis grounded in the signals visible from outside the org.
- **Cross-check** Broader posture: Tie threat signals back to DNS, web, and infrastructure behavior.

## Coverage areas

### Threat-intelligence context

Review malware-adjacent domain signals without inflating them.

- Domain malware intelligence
- Abuse and reputation context
- Clear distinction between direct and limited-detail evidence

### Operator-safe output

Reduce false positives by showing what is known, inferred, or unavailable.

- Explicit evidence framing
- Readable findings for reports
- Works with trust-first public preview logic

### Connected posture review

See whether the rest of the public footprint supports or weakens the signal.

- Pair with DNS and email posture
- Use alongside vulnerability surface checks
- Track changes over time with monitoring

## Common use cases

- Reviewing domain risk without overstating weak threat-feed evidence
- Adding truthful malware intelligence to public-facing reports
- Comparing reputation signals against the rest of a domain’s public posture
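
The direct/limited/unavailable framing above can be reduced to a small classifier. Everything here (the `FeedResult` shape and its field names) is hypothetical, sketching one way to keep evidence tiers explicit instead of collapsing every flag into "malicious":

```python
from dataclasses import dataclass

@dataclass
class FeedResult:
    feed: str
    flagged: bool           # did the feed flag the domain at all?
    has_detail: bool        # did it return evidence (URLs, samples, dates)?
    reachable: bool = True  # could the feed be queried?

def evidence_tier(result: FeedResult) -> str:
    """Map one feed result onto an explicit confidence tier."""
    if not result.reachable:
        return "unavailable"
    if not result.flagged:
        return "clean"
    return "direct" if result.has_detail else "limited"
```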

## Research findings

### Abuse datasets are valuable only when their limits are made explicit

ICANN’s DAAR methodology uses high-confidence threat feeds for phishing, malware, spam, and botnet command-and-control, but it also says the data does not itself distinguish malicious registrations from compromised domains.

Action: That is why a credible malware page should show evidence confidence and feed scope clearly instead of collapsing every flag into “malicious.”

Source: [ICANN Domain Abuse Activity Reporting](https://www.icann.org/octo-ssr/daar)

### The attacker market is getting more fragmented and more active

IBM’s 2026 X-Force research reports a 49% increase in active ransomware groups versus the prior year and describes an ecosystem with lower barriers to entry and more opportunistic operators.

Action: Buyers should prefer continuously refreshed threat context and change history over static blacklist snapshots that age out the moment the landscape shifts.

Source: [IBM X-Force Threat Intelligence Index 2026](https://www.ibm.com/think/x-force/threat-intelligence-index-2026-securing-identities-ai-detection-risk-management)

### Public-facing exploitation is a stronger risk signal than isolated reputation noise

IBM also says exploitation of public-facing applications rose 44% year over year, which means domain risk should be cross-checked against exposed apps, weak web controls, and public infrastructure behavior.

Action: The practical buying signal is a platform that pairs malware intelligence with DNS, web, and email posture so teams can validate whether a threat flag matches the rest of the external footprint.

Source: [IBM X-Force Threat Intelligence Index 2026](https://www.ibm.com/think/x-force/threat-intelligence-index-2026-securing-identities-ai-detection-risk-management)

## FAQ

### What should a domain malware checker include?

It should include domain-level abuse or malware intelligence, clearly label evidence quality, and connect threat signals to the rest of the public posture so teams can judge whether the finding is trustworthy.

### How do you reduce malware false positives?

Use direct evidence where available, avoid deriving malicious verdicts from weak adjacent signals alone, and show limited-detail or unavailable states honestly.

### Why can malware intelligence be misleading without context?

Because some feeds provide limited detail or weak correlation. Without context, teams can mistake a low-confidence signal for proof of malicious activity.

### What makes a malware intelligence page trustworthy?

A trustworthy page separates direct evidence from inferred or limited-detail signals, explains what is actually known, and connects the result to the wider public posture.

## Related links

- [Threat intelligence workspace](https://cyberfurl.com/threat-intelligence/malware)
- [Security reports](https://cyberfurl.com/security-report)
- [Breach exposure](https://cyberfurl.com/features/breach-exposure)
- [Vulnerability surface](https://cyberfurl.com/features/vulnerability-surface)


---
## DNS Hijacking & Drift
Source: https://cyberfurl.com/features/dns-hijacking.md

# NS Drift Detection and DNS Hijacking Monitoring

Watch nameserver delegation, registrar-adjacent context, and DNS drift so silent ownership or routing changes show up before they break mail or web traffic.

## What this DNS hijacking and drift page should show

- Nameserver and delegation changes are easy to miss until email breaks, web routes fail, or ownership questions escalate.
- DNS hijacking and drift review needs more than a raw NS lookup. Teams need context about expected posture, registrar state, and change detection.
- For high-value domains, nameserver stability is one of the highest-leverage recurring checks in the external footprint.

## What this page covers

The DNS hijacking and drift page is about change visibility. Teams need to know when nameservers move, when delegation stops matching the expected baseline, and when registrar-side context suggests the change deserves investigation.
That makes the page useful during incident review, high-value domain monitoring, and registrar change control. It gives operators concrete signals to review instead of a generic warning that hijacking might be possible.

## Key stats

- **NS** Delegation state: Track the nameservers currently authoritative for the domain.
- **Drift** Change visibility: Spot movement away from the expected DNS baseline.
- **Registrar** Ownership context: Use registrar-adjacent signals when nameserver state changes.
- **Alerting** Monitoring path: Move high-value domains into recurring drift detection.

## Coverage areas

### Delegation monitoring

Review whether the nameserver layer is stable and expected.

- Current nameserver visibility
- Delegation inconsistency checks
- Change-aware posture review

### Takeover and outage risk

Catch the signals that often precede more visible failures.

- Nameserver drift
- Registrar and ownership context
- Useful DNSSEC and DNS posture adjacency

### Operator workflow

Use drift detection as a recurring control instead of a manual lookup.

- Monitoring-ready checks
- Evidence for DNS owners
- Designed for critical domain workflows

## Common use cases

- Watching critical domains for unexpected nameserver movement
- Adding DNS hijacking monitoring to premium monitoring workflows
- Investigating whether a domain’s delegation changed during an incident
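
The baseline comparison at the heart of drift detection is small enough to sketch. The observed set would come from a live NS lookup (for example `dns.resolver.resolve(domain, "NS")` with dnspython); the function below is an illustration under that assumption, not CyberFurl's implementation:

```python
def ns_drift(expected: set[str], observed: set[str]) -> dict:
    """Compare observed NS delegation against an expected baseline.
    Hostnames are normalized case-insensitively and without trailing
    dots, since DNS names compare that way."""
    norm = lambda hosts: {h.rstrip(".").lower() for h in hosts}
    exp, obs = norm(expected), norm(observed)
    return {
        "added": sorted(obs - exp),    # nameservers that appeared
        "removed": sorted(exp - obs),  # nameservers that vanished
        "drifted": exp != obs,
    }
```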

## Research findings

### OWASP now puts insecure DNS and hijackable records on the attack-surface map

OWASP ASM Top 10 explicitly lists insecure DNS configurations and domain hijacking risk, including dangling DNS records and misconfigured MX or NS records that can lead to takeovers.

Action: The product value is continuous monitoring of delegation, nameserver movement, and hijack-prone DNS state rather than occasional manual NS lookups.

Source: [OWASP Attack Surface Management Top 10](https://owasp.org/www-project-attack-surface-management-top-10/)

### CISA treats DNS change control as an integrity and availability problem

CISA’s DNS risk material frames DNS issues around data integrity, availability, and implementation error, which means unexplained NS drift is both a security and an operations signal.

Action: Teams buying drift detection usually need baseline comparison, registrar-adjacent context, and change history that makes a suspicious DNS move immediately reviewable.

Source: [CISA DNS Risk Assessment](https://www.cisa.gov/sites/default/files/publications/DNS_Risk_Assessment.pdf)

### Hijacking risk is amplified when monitoring is weak across adjacent surfaces

OWASP’s ASM project also highlights fake domains, impersonation attacks, and lack of continuous monitoring as core external risks, which is why DNS drift rarely exists in isolation.

Action: The useful workflow ties nameserver changes to related subdomains, mail-routing changes, and brand-abuse signals so defenders can tell whether drift is operational or adversarial.

Source: [OWASP Attack Surface Management Top 10](https://owasp.org/www-project-attack-surface-management-top-10/)

## FAQ

### What is NS drift detection?

NS drift detection is the monitoring of nameserver changes and delegation movement so teams can identify unexpected DNS transitions before they impact trust, routing, or ownership confidence.

### Can nameserver drift indicate DNS hijacking risk?

Yes. Unexpected nameserver movement can be an early sign of misconfiguration, unauthorized change, or takeover-adjacent risk, especially when it does not match the expected baseline.

### Why do DNS hijacking and drift checks matter for business-critical domains?

Because quiet delegation changes can disrupt websites, email, and customer trust before teams realize anything changed. High-value domains need nameserver stability and change visibility.

### What should teams investigate after unexpected nameserver movement?

Teams should review registrar activity, expected DNS baselines, related DNSSEC posture, and whether the change matches an authorized migration or an unexplained drift event.

## Related links

- [DNS posture page](https://cyberfurl.com/features/dns-posture)
- [Nameserver analysis](https://cyberfurl.com/dns-intelligence/nameservers)
- [Monitoring](https://cyberfurl.com/monitoring)
- [WHOIS lookup](https://cyberfurl.com/domain-recon/whois)


---
## Subdomain Discovery
Source: https://cyberfurl.com/features/subdomain-discovery.md

# Subdomain Finder for Public Asset Discovery

Discover public hosts around the domain, connect them to DNS and infrastructure context, and turn those findings into a real monitoring scope.

## What makes this subdomain-discovery page useful

- Teams often know the apex domain but miss the broader host footprint that supports the brand.
- Subdomain discovery is more useful when it connects directly to DNS posture, certificates, web exposure, and monitoring instead of staying a raw host list.
- Even a small recovered footprint helps security teams prioritize what deserves deeper validation next.

## What this page covers

A good subdomain-discovery page should help teams expand scope immediately. It should show what hosts are known, what those hosts imply about the surrounding infrastructure, and which ones deserve deeper DNS, TLS, or web validation next.
That is valuable before audits, during recon, and while building a monitoring scope. The page turns discovered hosts into actionable follow-up instead of leaving the user with a dead-end hostname list.

## Key stats

- **Hosts** Recovered footprint: Discover the public hostnames connected to a domain.
- **Recon** Asset context: Use host findings alongside DNS, TLS, and infrastructure checks.
- **Prioritization** What to scan next: Turn discovery into follow-up instead of a dead-end list.
- **Monitoring** Ongoing visibility: Track important hosts inside broader monitoring workflows.

## Coverage areas

### Discovery surface

Find more than just the apex domain.

- Subdomain enumeration
- Supporting host clues from MX and NS data
- Infrastructure-aware footprint recovery

### Security follow-up

Use discovered hosts as the starting point for deeper review.

- Route into DNS and TLS checks
- Support web exposure validation
- Useful for scope expansion during audits

### Reporting value

Make public asset discovery readable for non-specialists too.

- Readable host tables
- Useful evidence in public reports
- Works inside recurring monitoring stories

## Common use cases

- Mapping the public footprint around a customer-facing brand
- Finding additional hosts to validate during DNS or web reviews
- Prioritizing which discovered subdomains deserve recurring checks
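
Turning raw discovery output into scope is mostly normalization. A hedged sketch, assuming candidate hostnames arrive messy from sources such as certificate transparency logs (wildcard entries, mixed case, trailing dots):

```python
def scope_hosts(apex: str, candidates: list[str]) -> list[str]:
    """Normalize candidate hostnames and keep the unique hosts that
    actually sit under the apex domain."""
    apex = apex.rstrip(".").lower()
    in_scope = set()
    for host in candidates:
        h = host.strip().rstrip(".").lower().removeprefix("*.")
        if h == apex or h.endswith("." + apex):
            in_scope.add(h)
    return sorted(in_scope)
```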

## Research findings

### Unknown and unmanaged assets remain a core external-risk category

OWASP ASM Top 10 specifically identifies unmanaged and unknown external assets, forgotten subdomains, and untracked cloud resources as attack-surface expansion points.

Action: That makes subdomain discovery valuable when it feeds a living asset inventory instead of ending as a one-time recon list.

Source: [OWASP Attack Surface Management Top 10](https://owasp.org/www-project-attack-surface-management-top-10/)

### Asset discovery is only useful if it improves inventory quality

OWASP’s asset-management guidance says complete inventory and regular audits are crucial because weak documentation makes it harder to enforce security policy, detect weaknesses, and respond to incidents accurately.

Action: Teams buy subdomain coverage when the output can move straight into owner mapping, review queues, and monitoring scope rather than living in a spreadsheet.

Source: [OWASP Insufficient Asset Management and Documentation](https://owasp.org/www-project-top-10-infrastructure-security-risks/docs/2024/ISR10_2024-Insufficient_Asset_Management_and_Documentation)

### CISA recommends deciding whether each exposed asset needs to exist at all

CISA’s exposure-reduction guidance starts by identifying internet-accessible assets and then evaluating whether each one truly needs to remain exposed.

Action: The strongest subdomain workflow therefore ends with keep, harden, or retire decisions for every discovered host instead of treating discovery as a purely informational step.

Source: [CISA Internet Exposure Reduction Guidance](https://www.cisa.gov/resources-tools/resources/exposure-reduction)

## FAQ

### What does a subdomain finder help with?

A subdomain finder helps teams discover public hostnames connected to a domain so they can expand audit scope, validate exposure, and understand the broader external footprint.

### Why should subdomain discovery connect to DNS and infrastructure scans?

Because a host list alone is not enough. Teams usually need to know what those hosts resolve to, what services they expose, and whether they should be monitored.

### Why is subdomain discovery useful before a security review?

It helps teams expand scope before the review starts, so important public hosts are not missed just because the audit began with only the apex domain.

### Can discovered subdomains feed into deeper validation?

Yes. Subdomain discovery is often the starting point for DNS checks, TLS reviews, web exposure validation, and continuous monitoring of the most important recovered hosts.

## Related links

- [Subdomain enumeration](https://cyberfurl.com/domain-recon/subdomains)
- [DNS posture](https://cyberfurl.com/features/dns-posture)
- [Vulnerability surface](https://cyberfurl.com/features/vulnerability-surface)
- [Monitoring](https://cyberfurl.com/monitoring)


---
## Continuous Monitoring
Source: https://cyberfurl.com/features/continuous-monitoring.md

# Continuous Domain Monitoring for External Security Controls

Put DNS, email, TLS, uptime, and drift checks on a schedule with history and context so external posture changes are caught before customers do.

## What teams should expect from continuous monitoring

- External posture changes after launches, renewals, provider changes, and DNS edits, not just during formal security reviews.
- The highest-value recurring checks are usually DNS, email authentication, TLS, nameserver drift, and uptime.
- Teams need alerting, history, and actionability, not another passive dashboard.

## What this page covers

The continuous-monitoring page should explain which external controls can be re-checked on a schedule and why that matters operationally. Teams care about DNS drift, email trust regressions, certificate expiry, uptime, and public change events that do not wait for quarterly review cycles.
This page is the bridge from audit into operations. It shows how the same posture checks used in public reports can become recurring controls with history and alerting instead of being forgotten after a single scan.

## Key stats

- **DNS** Drift controls: Watch records, nameservers, and trust posture over time.
- **Email** Trust controls: Monitor SPF, DKIM, DMARC, and mail-routing changes.
- **TLS** Expiry and transport: Track certificate posture and protocol changes.
- **Alerts** Operator workflow: Move from passive reporting into recurring action.

## Coverage areas

### High-value recurring checks

Focus monitoring on the controls that drift first.

- DNS posture and nameserver movement
- Email authentication and routing controls
- TLS expiry and uptime visibility

### Operational output

Use schedules and history to reduce guesswork.

- Alert-ready workflows
- Historical context
- Useful for incident response and routine review

### Conversion path

Move from public audit into ongoing protection.

- Natural follow-up from report pages
- Good fit for premium monitoring workflows
- Connects platform features into one story

## Common use cases

- Recurring monitoring for high-visibility production domains
- Post-migration checks after DNS, CDN, or mail-provider changes
- Executive monitoring views for customer-facing trust controls
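
History and alerting both reduce to diffing the latest check results against a stored snapshot. A minimal sketch of that change-event step (the control names and snapshot shape are illustrative, not the platform's schema):

```python
from datetime import datetime, timezone

def change_events(previous: dict, current: dict) -> list[dict]:
    """Emit one event per control whose value moved between runs,
    stamped so events can accumulate into a reviewable history."""
    now = datetime.now(timezone.utc).isoformat()
    events = []
    for control in sorted(set(previous) | set(current)):
        before, after = previous.get(control), current.get(control)
        if before != after:
            events.append({"control": control, "before": before,
                           "after": after, "observed_at": now})
    return events
```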

## Research findings

### CISA explicitly recommends routine reassessment of public exposure

CISA’s Internet Exposure Reduction Guidance says organizations should establish routine assessments because environments evolve and new internet-facing exposure appears over time.

Action: That is the practical reason continuous monitoring sells: the public footprint changes after launches, renewals, CDN swaps, and provider changes even when nobody schedules a formal audit.

Source: [CISA Internet Exposure Reduction Guidance](https://www.cisa.gov/resources-tools/resources/exposure-reduction)

### Early self-detection is materially cheaper than attacker-led discovery

IBM’s 2024 breach-cost summary says organizations that identified breaches with their own teams and tools saw nearly USD 1 million lower average breach costs than cases first identified by the attacker.

Action: Buyers should read that as a budget argument for scheduled DNS, email, TLS, and exposure checks that tell internal teams something changed before an adversary or customer does.

Source: [IBM Cost of a Data Breach 2024](https://www.ibm.com/think/insights/whats-new-2024-cost-of-a-data-breach-report)

### Lack of continuous attack-surface monitoring is now itself called out as a risk

OWASP ASM Top 10 lists lack of continuous attack-surface monitoring alongside unknown assets, exposed APIs, and insecure DNS as one of the core problems modern security teams must address.

Action: The actionable product requirement is unified change tracking across DNS, email authentication, TLS, headers, subdomains, and drift so teams can keep the external surface stable between audits.

Source: [OWASP Attack Surface Management Top 10](https://owasp.org/www-project-attack-surface-management-top-10/)

## FAQ

### What is continuous domain monitoring?

Continuous domain monitoring is the scheduled re-checking of externally visible controls such as DNS, email authentication, TLS, uptime, and registrar-adjacent changes so posture drift is caught early.

### Which checks are most useful to monitor continuously?

For most teams, DNS posture, nameserver drift, SPF, DKIM, DMARC, TLS health, and uptime produce the strongest operational signal.

### Why is continuous monitoring better than one-off audits for production domains?

Because production domains change constantly. Continuous monitoring catches drift after launches, renewals, provider changes, and infrastructure edits that a one-time audit will miss.

### Who should use continuous domain monitoring?

Teams responsible for customer-facing trust, uptime, email security, and external posture benefit most because they need early warning when visible controls regress.

## Related links

- [Monitoring page](https://cyberfurl.com/monitoring)
- [Security report index](https://cyberfurl.com/security-report)
- [DNS posture](https://cyberfurl.com/features/dns-posture)
- [Email authentication](https://cyberfurl.com/features/email-authentication)


---
## MSPs
Source: https://cyberfurl.com/for/msps.md

# CyberFurl for MSPs

## One compromised RMM can turn into every client calling at once.

MSPs do not lose on a single phishing email. They lose when exposed admin services, weak email trust, and silent subdomain drift give attackers one path into many client relationships. CyberFurl runs 50+ external checks across six suites, then keeps 24/7 watch on DNS, SPF, DKIM, DMARC, MX, and subdomains.

## Three numbers that matter

- **30%** of SMB breaches involved a third party ([Verizon DBIR 2025 SMB Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-smb-snapshot.pdf)).
- **20%** of SMB breaches started with vulnerability exploitation ([Verizon DBIR 2025 SMB Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-smb-snapshot.pdf)).
- **88%** of basic web app breaches involved stolen credentials ([Verizon DBIR 2025](https://www.verizon.com/business/resources/reports/2025-dbir-data-breach-investigations-report.pdf)).

## Why generic scanners fail for MSPs

### Single-tenant scanners miss client blast radius.

MSPs need to spot the same DNS, mail, and exposed-service weakness repeating across many customer domains. A one-domain scanner does not show which clients drifted after a migration or which subdomain suddenly appeared on a shared vendor.

### Helpdesk and RMM abuse starts outside the firewall.

Attackers probe public login panels, backup paths, mail spoofing gaps, and exposed staging hosts before they ever touch an endpoint. If your tool only looks for CVEs on one IP, it misses the trust chain that lets a fake technician email become access.

### Most tools stop watching after the first report.

MSPs need ongoing visibility into DNS, SPF, DKIM, DMARC, MX, and subdomains because those are the parts clients change constantly. Everything else should be easy to rescan on a schedule without pretending it is live telemetry.

## Eight ranked controls

1. **DNS Intelligence**: Inventory A, AAAA, CNAME, MX, NS, SOA, and TXT records before a client cutover leaves drift behind.
2. **Email Intelligence**: Validate SPF and flatten lookup-heavy records before forwarded client mail turns into spoofing cover.
3. **Email Intelligence**: Check DKIM selectors and rotation so client mail keeps signed trust during provider changes.
4. **Email Intelligence**: Review DMARC policy and reporting alignment so fake client-domain mail stops slipping through.
5. **Domain Recon**: Enumerate passive and active subdomains to catch forgotten portals, old agent hosts, and reseller leftovers.
6. **Infrastructure**: Run port scans, service detection, header checks, and admin-path discovery on exposed MSP and client surfaces.
7. **Threat Intelligence**: Watch HIBP breach exposure and leaked credentials before reused passwords become shared-tenant entry points.
8. **Monitoring**: Keep 24/7 monitoring on DNS, SPF, DKIM, DMARC, MX, and subdomains; use scheduled rescans for the rest.
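
Control 2's "lookup-heavy records" point is concrete: RFC 7208 caps DNS-lookup-causing SPF terms (`include`, `a`, `mx`, `ptr`, `exists`, `redirect`) at 10 per evaluation, and forwarded client mail often blows past that limit. A rough counter, sketched here as an illustration rather than a full SPF parser:

```python
def spf_lookup_count(record: str) -> int:
    """Count SPF terms that cost a DNS lookup; RFC 7208 allows at
    most 10 before evaluation fails with permerror."""
    count = 0
    for term in record.split():
        name = (term.lstrip("+-~?")     # strip qualifier, if any
                    .split(":", 1)[0]   # drop ':domain' arguments
                    .split("/", 1)[0]   # drop CIDR suffixes
                    .split("=", 1)[0])  # drop modifier values
        if name in ("include", "a", "mx", "ptr", "exists", "redirect"):
            count += 1
    return count
```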

## Real-world case study

### Kaseya VSA, 2021

The Kaseya VSA incident showed what makes MSP attacks brutal: one exposed management path can cascade into many downstream customers at once.

**Root cause:** attackers exploited an internet-facing management product and then used the provider relationship to spread impact across customer environments.

**Where CyberFurl maps cleanly:**

- Infrastructure scans surface exposed admin services, weak HTTP headers, and sensitive paths that should not be public.
- Domain Recon catches forgotten subdomains and old support portals that stay reachable long after teams think they are gone.
- Email Intelligence closes the spoofing gaps attackers use during follow-on client communications and fake support escalations.

## Three-step workflow

1. **Scan**: Run the domain through CyberFurl and collect the DNS, email, threat, recon, infrastructure, and monitoring findings in one place.
2. **Review report**: Use the ranked findings to explain what attackers can see right now: spoofing gaps, exposed services, lookalike domain variants, and subdomain drift.
3. **Schedule monitoring**: Keep 24/7 monitoring on DNS, SPF, DKIM, DMARC, MX, and subdomains. Use scheduled rescans for infrastructure, threat, and variant reviews.

## FAQ

### What can CyberFurl show an MSP that a client-facing vulnerability scan usually misses?

It shows the public trust layer around the client estate: DNS drift, mail authentication gaps, exposed admin paths, typosquat risk, subdomain growth, and breach exposure that attackers can see without logging in.

### Can I use one workspace for many customer domains?

Yes. The point is to rank shared patterns fast so you can see which customers have weak SPF, broken DKIM, exposed services, or newly discovered subdomains first.

### Which checks stay under 24/7 monitoring today?

DNS, SPF, DKIM, DMARC, MX, and subdomains are the live monitoring scope today. Infrastructure, threat intel, and the rest of domain recon should be rescanned on a schedule.

### Does CyberFurl replace my RMM or PSA?

No. It gives you an external posture layer you can use before tickets pile up, especially during migrations, mail changes, new client onboarding, and incident review.

### Can I hand a customer a shareable report without a long explanation?

Yes. The report is useful because it names the exposed signal directly: weak DMARC, too many SPF lookups, an exposed admin path, a newly found subdomain, or a breach-exposed identity.

### Why does breach exposure matter for MSPs if the breach happened somewhere else?

Because attackers reuse leaked usernames and passwords against portals, helpdesks, and remote-access pages. If a client-domain identity is already in a dump, you want to know before the spray starts.

## Lead magnet

**MSP Multi-Tenant Attack Surface Audit**

## Useful links

- Features: [/features/continuous-monitoring](/features/continuous-monitoring), [/features/subdomain-discovery](/features/subdomain-discovery)
- Learn: [/learn/dmarc](/learn/dmarc), [/learn/ns-drift](/learn/ns-drift)
- Tool: [/tools/dns-benchmark](/tools/dns-benchmark)
- Pricing: [/pricing](/pricing)
- Suites: [/dns-intelligence](/dns-intelligence), [/email-intelligence](/email-intelligence), [/uptime-monitoring](/uptime-monitoring)

---
## Ecommerce
Source: https://cyberfurl.com/for/ecommerce.md

# CyberFurl for Ecommerce

## A checkout skimmer does not need permission. It just needs one weak edge.

Ecommerce teams get punished in public: broken DNS, weak CSP, exposed backup paths, malicious redirects, and abandoned subdomains all show up where customers pay. CyberFurl helps you see the public attack surface around checkout before card theft, redirects, and trust loss become tomorrow’s screenshots.

## Three numbers that matter

- **837** retail incidents were investigated in Verizon snapshot data ([Verizon DBIR 2025 Retail Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-retail-snapshot.pdf)).
- **93%** of retail breaches fell into system intrusion, social engineering, and basic web apps ([Verizon DBIR 2025 Retail Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-retail-snapshot.pdf)).
- **46%** of compromised systems with corporate logins were non-managed devices ([Verizon DBIR 2025 Retail Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-retail-snapshot.pdf)).

## Why generic scanners fail for ecommerce

### Checkout trust breaks across DNS, scripts, and headers at the same time.

Retail attackers move between nameserver changes, malicious redirects, stolen scripts, and stale subdomains that keep looking harmless until payment traffic starts flowing through them.

### One-time scans miss the domains attackers register after launch day.

Skimmer crews and redirect operators rotate infrastructure constantly. You need recurring subdomain discovery, certificate transparency, and threat-intel checks, not a static report from the week the storefront shipped.

### Mail spoofing matters even when the breach story starts at checkout.

Refund fraud, fake support mail, and account-reset abuse often follow retail incidents. If SPF, DKIM, or DMARC is weak, the same brand trust attackers abuse in payment flows can be reused against customer communications.

## Eight ranked controls

1. **DNS Intelligence**: Audit DNS records, nameserver delegation, and propagation before a registrar or CDN change silently reroutes traffic.
2. **DNS Intelligence**: Validate DNSSEC and inspect cache-poisoning and zone-transfer exposure around high-traffic storefront domains.
3. **Infrastructure**: Check CSP, HSTS, X-Frame-Options, and sensitive paths so checkout pages do not advertise avoidable browser trust gaps.
4. **Threat Intelligence**: Run malicious redirect, script/skimmer, Safe Browsing, VirusTotal, URLhaus, and OpenPhish checks on public endpoints.
5. **Domain Recon**: Enumerate subdomains and certificate transparency results to catch forgotten staging hosts and abandoned campaign domains.
6. **Email Intelligence**: Review MX redundancy, STARTTLS, PTR, and DNSBL status before brand-spoof mail rides customer support workflows.
7. **Threat Intelligence**: Use HIBP breach exposure and credential-leak checks to spot identities attackers can reuse against admin and support paths.
8. **Monitoring**: Keep 24/7 watch on DNS, SPF, DKIM, DMARC, MX, and subdomains; use scheduled rescans for storefront headers and threat checks.
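The header review in control 3 boils down to a presence check against a short list of browser-trust headers. A hedged sketch (the header names are real; the response dict is a stand-in for an actual HTTP fetch of a checkout page):

```python
# Flag missing browser-trust headers on a checkout response.
# `headers` stands in for a real HTTP response's header map.
REQUIRED = {
    "content-security-policy": "controls which scripts may run",
    "strict-transport-security": "forces HTTPS on return visits",
    "x-frame-options": "blocks clickjacking via framing",
}

def missing_headers(headers: dict) -> list:
    present = {k.lower() for k in headers}  # header names are case-insensitive
    return sorted(name for name in REQUIRED if name not in present)

resp_headers = {"Strict-Transport-Security": "max-age=31536000"}
print(missing_headers(resp_headers))
# → ['content-security-policy', 'x-frame-options']
```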

## Real-world case study

### British Airways Magecart, 2018

The British Airways breach is still the simplest retail lesson: a public checkout flow only needs one compromised edge before card data starts moving somewhere it should not.

**Root cause:** attackers injected a skimming script into the payment flow and harvested customer data during normal checkout sessions.

**Where CyberFurl maps cleanly:**

- Threat Intelligence checks for script/skimmer activity, malicious redirects, and external reputation signals tied to public payment URLs.
- Infrastructure checks surface weak CSP and missing browser protections that make checkout script trust harder to control.
- Domain Recon finds side domains and forgotten hosts attackers can use for collection, staging, or redirect chains.

## Three-step workflow

1. **Scan**: Run the storefront domain through CyberFurl and collect the public DNS, email, threat, recon, and infrastructure findings in one place.
2. **Review report**: Use the ranked findings to explain where attackers can route traffic, hide skimmers, spoof support mail, or exploit stale assets.
3. **Schedule monitoring**: Keep 24/7 monitoring on DNS, SPF, DKIM, DMARC, MX, and subdomains. Use scheduled rescans for headers, exposed services, and threat sweeps.

## FAQ

### Can CyberFurl tell me whether my checkout is exposed from the outside?

Yes. It focuses on the public layer: DNS routing, script and redirect reputation, exposed paths, browser trust headers, mail trust, and the extra domains that keep appearing around storefront operations.

### Does this replace a browser-side script integrity review?

No. It complements it by showing the internet-facing trust problems around the store that attackers often touch first or abuse next.

### Why include mail checks on an ecommerce page?

Because fake refund mail, fake delivery updates, and support impersonation get easier when SPF, DKIM, and DMARC are weak after a retail incident.

### Which parts can stay under 24/7 monitoring today?

DNS, SPF, DKIM, DMARC, MX, and subdomains are the live monitoring set. Threat-intel, headers, and infrastructure checks should run on demand or on a schedule.

### What kind of hidden assets does CyberFurl usually surface for retail teams?

Old campaign subdomains, unused checkout experiments, backup paths, preview stores, and mail records that never got cleaned up after a provider change.

### How should I use the lead magnet with my team?

Use it as a weekly review checklist: public DNS, mail trust, exposed services, subdomain drift, and known-malicious reputation around the pages that handle revenue.

## Lead magnet

**Ecommerce Checkout Skimmer & Script Integrity Checklist**

## Useful links

- Features: [/features/web-security-headers](/features/web-security-headers), [/features/malware-intelligence](/features/malware-intelligence)
- Learn: [/learn/csp](/learn/csp), [/learn/dns-hijacking](/learn/dns-hijacking)
- Tool: [/tools/dns-speed-test](/tools/dns-speed-test)
- Pricing: [/pricing](/pricing)
- Suites: [/threat-intelligence/malware](/threat-intelligence/malware), [/infrastructure/port-scan](/infrastructure/port-scan), [/dns-intelligence](/dns-intelligence)

---
## SaaS
Source: https://cyberfurl.com/for/saas.md

# CyberFurl for SaaS

## Your next deal does not die in the questionnaire. It dies in the exposed basics.

Buyers ask the same public-surface questions again and again: do you enforce DMARC, do you expose stale subdomains, do you have obvious admin paths, did your identities show up in breach dumps, and can your domain be spoofed? CyberFurl gives revenue teams a concrete external answer instead of a hand-wavy promise.

## Three numbers that matter

- **88%** of basic web app breaches involved stolen credentials ([Verizon DBIR 2025](https://www.verizon.com/business/resources/reports/2025-dbir-data-breach-investigations-report.pdf)).
- **30%** of SMB breaches involved a third party ([Verizon DBIR 2025 SMB Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-smb-snapshot.pdf)).
- **$16.6B** in cybercrime losses were reported in the United States ([FBI IC3 2024](https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf)).

## Why generic scanners fail for SaaS

### Buyers care about what they can verify from the outside.

Questionnaires ask about process, but deals stall when the public evidence is messy: missing DMARC, leaking identities, stale subdomains, bad headers, or obvious admin paths.

### The problem is posture drift, not a one-day screenshot.

SaaS teams constantly add vendors, staging hosts, marketing domains, support mail flows, and customer-facing apps. If the scan does not revisit DNS, mail trust, and subdomains, the answer buyers saw last quarter stops being true.

### Exposure spans more than the main app URL.

Attackers and technical buyers both inspect the whole footprint: MX, SPF, DKIM, DMARC, CT logs, forgotten portals, exposed services, and leaked identities tied to the brand.

## Eight ranked controls

1. **Email Intelligence**: Validate SPF, DKIM, and DMARC so buyer security teams see that your domain cannot be trivially spoofed.
2. **Email Intelligence**: Check MX redundancy, MTA-STS, TLS-RPT, STARTTLS, and PTR so mail trust survives provider changes.
3. **DNS Intelligence**: Inventory public DNS records and nameserver delegation before launch leftovers confuse buyers and attackers alike.
4. **Domain Recon**: Enumerate subdomains, certificate transparency entries, and variants to expose product sprawl and lookalike risk.
5. **Infrastructure**: Run service detection, header checks, admin-panel discovery, and sensitive-path checks on internet-facing apps.
6. **Threat Intelligence**: Check HIBP and credential-leak exposure before reused identities become the story buyers find first.
7. **Threat Intelligence**: Use Safe Browsing, VirusTotal, malicious redirect, and exposed-path checks to spot public trust damage around your brand.
8. **Monitoring**: Monitor DNS, SPF, DKIM, DMARC, MX, and subdomains around releases; schedule rescans for headers, exposed services, and threat checks.
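The spoofability question in control 1 comes down to the `p=` tag of the domain's `_dmarc` TXT record: `none` only observes, while `quarantine` and `reject` actually enforce. A minimal parsing sketch using RFC 7489's semicolon-separated tag syntax (the example record and reporting address are hypothetical):

```python
# Parse a _dmarc TXT record and flag a non-enforcing policy.
# DMARC records are semicolon-separated tag=value pairs (RFC 7489).
def parse_dmarc(record: str) -> dict:
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

def is_enforcing(record: str) -> bool:
    return parse_dmarc(record).get("p") in ("quarantine", "reject")

print(is_enforcing("v=DMARC1; p=none; rua=mailto:reports@example.com"))  # → False
print(is_enforcing("v=DMARC1; p=reject"))  # → True
```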

## Real-world case study

### Okta source-code breach, 2022

The Okta source-code incident reminded every SaaS buyer that trust can erode fast when exposed suppliers, identities, and public reassurances do not line up cleanly.

**Root cause:** attackers compromised a third-party support environment and used that access to reach sensitive internal material tied to a widely trusted SaaS provider.

**Where CyberFurl maps cleanly:**

- Threat Intelligence helps surface breach exposure and compromised identities tied to the brand before buyer trust conversations start.
- Email Intelligence shows whether the public domain can be spoofed during customer-notification windows.
- Domain Recon and Infrastructure make it easier to prove what is actually exposed on the internet instead of hand-waving through a questionnaire.

## Three-step workflow

1. **Scan**: Run the domain through CyberFurl and collect the DNS, email, threat, recon, infrastructure, and monitoring findings in one place.
2. **Review report**: Use the ranked findings to explain what buyers can verify right now: spoofing gaps, exposed hosts, variants, and breach-linked identities.
3. **Schedule monitoring**: Keep 24/7 monitoring on DNS, SPF, DKIM, DMARC, MX, and subdomains. Use scheduled rescans for infrastructure and threat sweeps.

## FAQ

### How is this different from a filled-out buyer questionnaire?

A questionnaire is a claim. This page shows the public evidence behind it: DNS posture, mail trust, exposed hosts, known-malicious signals, and breach exposure tied to the brand.

### Can I share a CyberFurl report with a prospect during security review?

Yes. That is the point of this vertical page: give revenue teams a tight external answer they can hand to security buyers without inventing features that do not exist.

### Does CyberFurl inspect my internal codebase?

No. It is an external posture platform. It shows what buyers and attackers can observe from your domains, mail stack, internet-facing services, variants, and threat exposure.

### Which checks stay under 24/7 monitoring?

DNS, SPF, DKIM, DMARC, MX, and subdomains. Everything else can be rescanned on a schedule so the public story stays fresh before a renewal or procurement review.

### Why include breach exposure on a sales-focused page?

Because buyers ask about leaked identities, brand abuse, and public trust damage. If your organization already appears in breach data, that should be handled before the prospect discovers it.

### What does the one-pager usually help a SaaS team answer fastest?

Whether the domain is spoofable, whether staging or abandoned subdomains exist, whether the app exposes obvious trust gaps, and whether public intelligence already points to compromise or abuse.

## Lead magnet

**SaaS External Attack Surface One-Pager for Buyers**

## Useful links

- Features: [/features/email-authentication](/features/email-authentication), [/features/subdomain-discovery](/features/subdomain-discovery)
- Learn: [/learn/attack-surface-management](/learn/attack-surface-management), [/learn/credential-stuffing](/learn/credential-stuffing)
- Tool: [/tools/dns-caching](/tools/dns-caching)
- Pricing: [/pricing](/pricing)
- Suites: [/email-intelligence](/email-intelligence), [/domain-recon/whois](/domain-recon/whois), [/threat-intelligence/malware](/threat-intelligence/malware)

---
## Real Estate
Source: https://cyberfurl.com/for/real-estate.md

# CyberFurl for Real Estate

## Wire-fraud mail hits hardest when your domain still looks easy to spoof.

Closings move fast, money moves faster, and attackers only need one believable message thread. CyberFurl helps brokerages, title teams, and lenders lock down the public signals behind closing mail: SPF, DKIM, DMARC, lookalike domains, exposed portals, and the DNS changes that make fraud hard to spot until funds are gone.

## Three numbers that matter

- **21,442** business email compromise complaints reached IC3 ([FBI IC3 2024](https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf)).
- **$2.77B** in reported losses were tied to business email compromise ([FBI IC3 2024](https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf)).
- **339** real-estate incidents were investigated in Verizon data ([Verizon DBIR 2025](https://www.verizon.com/business/resources/reports/2025-dbir-executive-summary.pdf)).

## Why generic scanners fail for real estate

### Wire fraud looks like ordinary mail until trust controls fail.

Closing fraud usually rides real message threads, spoofed brands, or lookalike domains. A scanner that never checks SPF, DKIM, DMARC, typosquatting, and MX routing misses the exact public layer the attacker abuses.

### Fraudsters exploit short windows and new domains.

A lookalike domain registered on Monday can be used in a closing conversation on Tuesday. If you only run a one-time scan, you miss the domain changes and subdomain additions that matter most during active deals.

### Agents, title teams, and lenders all inherit the same public trust problem.

Real-estate transactions span multiple brands and inboxes. Weak mail trust or a spoofable domain on any side of the chain can make a fake wire update look credible enough to act on.

## Eight ranked controls

1. **Email Intelligence**: Validate SPF so only approved sending services can represent your brokerage or title brand.
2. **Email Intelligence**: Review DKIM selectors and signing gaps before forwarded closing mail loses authenticity.
3. **Email Intelligence**: Check DMARC policy and reporting so spoofed closing messages are rejected instead of merely observed.
4. **Email Intelligence**: Inspect MX redundancy, PTR, DNSBL status, and STARTTLS around critical mail routing.
5. **Domain Recon**: Find typosquat variants, registered lookalikes, and risky domain spellings attackers can use in closings.
6. **DNS Intelligence**: Audit DNS records and nameserver delegation so registrar or forwarding changes do not create blind spots.
7. **Infrastructure**: Scan exposed portals, admin paths, and HTTP security headers on transaction and document-delivery sites.
8. **Monitoring**: Keep 24/7 watch on DNS, SPF, DKIM, DMARC, MX, and subdomains during live transactions and partner changes.
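The typosquat discovery in control 5 starts from classic keyboard-error transforms: dropping a character, swapping neighbors, doubling a character. A simplified sketch of variant generation (single-label domains only; real tooling also covers homoglyphs, alternate TLDs, and multi-label names like `.co.uk`):

```python
# Generate common typosquat variants of a closing-mail domain:
# character omission, adjacent swap, and character doubling.
def typo_variants(domain: str) -> set:
    name, _, tld = domain.partition(".")  # naive split; assumes one label + TLD
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])                 # omit one char
        variants.add(name[:i] + name[i] * 2 + name[i + 1:])   # double a char
        if i < len(name) - 1:                                 # swap neighbors
            variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    variants.discard(name)  # the legitimate spelling is not a variant
    return {v + "." + tld for v in variants if v}

print(sorted(typo_variants("title.com"))[:5])
```

Each generated name would then be checked against WHOIS and DNS to see whether an attacker has already registered it.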

## Real-world case study

### FBI IC3 closing-fraud trend

The real-estate version of business email compromise is brutally simple: spoof a trusted party in an active transaction, swap the wire instructions, and make the victim act before anyone speaks live.

**Root cause:** weak mail authentication, lookalike domains, and transaction pressure make fraudulent wire-instruction mail believable at exactly the worst moment.

**Where CyberFurl maps cleanly:**

- Email Intelligence shows whether SPF, DKIM, and DMARC actually stop spoofed closing mail.
- Domain Recon surfaces lookalike and variant domains that can be weaponized in transaction threads.
- Monitoring keeps DNS, mail-auth, and subdomain drift visible while active closings are underway.

## Three-step workflow

1. **Scan**: Run the domain through CyberFurl and collect the DNS, email, recon, infrastructure, and monitoring findings in one place.
2. **Review report**: Use the findings to explain whether the brand is spoofable, whether lookalikes exist, and whether public portals expose avoidable risk.
3. **Schedule monitoring**: Keep 24/7 monitoring on DNS, SPF, DKIM, DMARC, MX, and subdomains. Use scheduled rescans for the rest.

## FAQ

### Why focus so heavily on mail controls for real estate?

Because the biggest money-moving attacks in this vertical are still message-based: fake wire instructions, fake account updates, fake escrow notices, and fake title coordination.

### Can CyberFurl tell me if my domain is easy to spoof today?

Yes. That is exactly what SPF, DKIM, and DMARC checks are for, along with MX, PTR, and DNSBL visibility around the sending environment.

### What does typosquat monitoring add beyond DMARC?

DMARC protects your exact domain. Typosquat discovery shows the lookalike domains attackers may register to fool buyers, sellers, or agents who are moving quickly.

### Which checks are live monitored right now?

DNS, SPF, DKIM, DMARC, MX, and subdomains. Other internet-facing checks should run as scheduled rescans around active transaction periods.

### Can a brokerage use this across offices and brands?

Yes. That is usually where the value shows up first because regional brands and acquired domains often keep old mail and DNS setups longer than anyone realizes.

### What should a team do first if the report shows weak DMARC?

Fix SPF and DKIM alignment, then move DMARC toward enforcement instead of staying in observation mode while attackers keep trying your brand.

## Lead magnet

**Real Estate Closing Email Security Checklist**

## Useful links

- Features: [/features/email-authentication](/features/email-authentication), [/features/dns-posture](/features/dns-posture)
- Learn: [/learn/dmarc](/learn/dmarc), [/learn/typosquatting](/learn/typosquatting)
- Tool: [/tools/edns-support](/tools/edns-support)
- Pricing: [/pricing](/pricing)
- Suites: [/email-intelligence](/email-intelligence), [/domain-recon/whois](/domain-recon/whois), [/uptime-monitoring](/uptime-monitoring)

---
## Insurance
Source: https://cyberfurl.com/for/insurance.md

# CyberFurl for Insurance

## Identity-driven ransomware starts long before the ransom note.

Carriers, broker portals, and partner logins attract attackers because the public surface is crowded and time-sensitive. CyberFurl helps insurance teams see the public weakness stack that often comes first: breach-exposed identities, spoofable mail, exposed portals, weak headers, stale subdomains, and DNS drift around core brands.

## Three numbers that matter

- **927** finance breaches were logged in Verizon snapshot data ([Verizon DBIR 2025 Finance Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-finance-snapshot.pdf)).
- **74%** of finance breaches landed in system intrusion, social engineering, and basic web apps ([Verizon DBIR 2025 Finance Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-finance-snapshot.pdf)).
- **15%** of employees in finance were seen accessing gen-AI systems on corporate devices ([Verizon DBIR 2025 Finance Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-finance-snapshot.pdf)).

## Why generic scanners fail for insurance

### Insurance attack paths span brand, mail, portals, and partners.

A carrier can look fine at the homepage while exposing old broker portals, weak mail alignment, and subdomains left behind after product or MGA changes.

### Credential abuse is usually visible before ransomware is.

Leaked identities, spoofable domains, and admin-path exposure create the opening attackers need. If your tool only labels hosts by port and never checks breach exposure or email trust, you miss the earliest public warning signs.

### The live part is small but critical.

Insurance teams should keep 24/7 watch on DNS, SPF, DKIM, DMARC, MX, and subdomains because broker ecosystems change constantly. The rest of the surface still matters, but it should be rescanned deliberately rather than presented as if it were monitored live.

## Eight ranked controls

1. **Threat Intelligence**: Check HIBP breach exposure and leaked credentials tied to insurance brands, brokers, and service accounts.
2. **Email Intelligence**: Validate SPF, DKIM, and DMARC before claims, billing, or renewal mail becomes spoofing bait.
3. **Email Intelligence**: Inspect MX routing, DNSBL status, PTR, STARTTLS, and banner signals around production mail.
4. **Domain Recon**: Enumerate subdomains and CT results to find broker portals, sandbox environments, and abandoned quote systems.
5. **Infrastructure**: Run port scans, service detection, admin-panel checks, and sensitive-path checks on public broker and carrier portals.
6. **DNS Intelligence**: Audit DNS records, nameserver delegation, and propagation to catch silent routing drift during vendor or portal changes.
7. **Threat Intelligence**: Use Safe Browsing, VirusTotal, malicious redirect, and exposed-path checks to spot trust damage on quote and service domains.
8. **Monitoring**: Monitor DNS, SPF, DKIM, DMARC, MX, and subdomains continuously; schedule rescans for infrastructure and threat sweeps.
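The DNSBL status mentioned in control 3 uses a standard mechanism: reverse the IPv4 octets, append the blocklist zone, and query the resulting name; an A-record answer means the IP is listed. A sketch of the query-name construction only (no lookup performed; `zen.spamhaus.org` is one well-known zone, and the IP is from the documentation range):

```python
# Build the reverse-octet query name used for a DNSBL listing check.
# Example: 203.0.113.7 against zone Z → 7.113.0.203.Z
def dnsbl_query_name(ip: str, zone: str) -> str:
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("203.0.113.7", "zen.spamhaus.org"))
# → 7.113.0.203.zen.spamhaus.org
```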

## Real-world case study

### MGM and Caesars identity-driven attacks, 2023

The 2023 attacks on MGM and Caesars were not insurance incidents, but they are exactly the kind of identity-first playbook carriers should care about: social engineering plus exposed public pathways that let one foothold become a business outage.

**Root cause:** attackers used social engineering and identity compromise to reach privileged systems, then turned that access into broad operational disruption.

**Where CyberFurl maps cleanly:**

- Threat Intelligence highlights leaked identities and compromised-brand exposure that often precede credential abuse.
- Email Intelligence reduces spoofing room during helpdesk and account-recovery workflows.
- Infrastructure and Domain Recon show which public portals and subdomains are still available for probing and impersonation.

## Three-step workflow

1. **Scan**: Run the public insurance-facing domains through CyberFurl and collect the mail, DNS, threat, recon, and infrastructure findings in one place.
2. **Review report**: Prioritize the exposed portal, mail-trust, and breach-linked identity issues that attackers can touch first.
3. **Schedule monitoring**: Keep 24/7 monitoring on DNS, SPF, DKIM, DMARC, MX, and subdomains. Use scheduled rescans for infrastructure and threat reviews.

## FAQ

### Why use a finance data source on an insurance page?

Because insurance lives inside the same public trust problems: identity abuse, public portals, wire and billing communications, and customer-facing systems that attract the same attacker patterns.

### Can CyberFurl see inside my policy platform?

No. It is built for the external layer around it: the domains, mail posture, public services, subdomains, and reputation signals attackers can inspect before they ever authenticate.

### What should insurance teams monitor live right now?

DNS, SPF, DKIM, DMARC, MX, and subdomains. The rest should be rescanned on a schedule that matches portal launches, broker onboarding, and product changes.

### How does breach exposure help a carrier?

It tells you whether identities tied to the brand are already circulating in public breach data, which is often the starting point for password spraying and account takeover.

### Does this help with broker and partner ecosystems too?

Yes. That is usually where stale subdomains, old portals, and mail-routing drift stay alive longest, which makes the external scan more useful than a narrow core-domain check.

### What is the fastest win an insurance team usually gets from the report?

Finding a public portal or mail-trust gap nobody thought was still reachable, then fixing it before it turns into a broker support incident or a ransomware entry point.

## Lead magnet

**Insurance Carrier External Exposure Checklist**

## Useful links

- Features: [/features/breach-exposure](/features/breach-exposure), [/features/vulnerability-surface](/features/vulnerability-surface)
- Learn: [/learn/credential-stuffing](/learn/credential-stuffing), [/learn/subdomain-takeover](/learn/subdomain-takeover)
- Tool: [/tools/dns-benchmark](/tools/dns-benchmark)
- Pricing: [/pricing](/pricing)
- Suites: [/threat-intelligence/malware](/threat-intelligence/malware), [/infrastructure/port-scan](/infrastructure/port-scan), [/uptime-monitoring](/uptime-monitoring)

---
## Government
Source: https://cyberfurl.com/for/government.md

# CyberFurl for Government

## One weak vendor domain can still turn into a public-sector incident.

Government teams inherit risk from contractors, citizen-facing portals, and legacy domains that stay online far longer than anyone planned. CyberFurl helps teams verify the public layer around those relationships: DNS integrity, email trust, subdomains, exposed services, and reputation signals that make vendor and agency surfaces easier to triage.

## Three numbers that matter

- **132** public-sector breaches were logged in Verizon snapshot data ([Verizon DBIR 2025 Public Sector Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-public-sector-snapshot.pdf)).
- **50%** of public-sector breaches involved a third party ([Verizon DBIR 2025 Public Sector Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-public-sector-snapshot.pdf)).
- **28%** of public-sector breaches involved espionage motives ([Verizon DBIR 2025 Public Sector Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-public-sector-snapshot.pdf)).

## Why generic scanners fail for government

### Government exposure lives across agency and vendor boundaries.

A public-sector incident rarely belongs to one hostname. It moves through contractor domains, legacy portals, mail-routing gaps, and old subdomains that are still reachable because nobody owns the cleanup end to end.

### Procurement paperwork does not show internet truth.

Agencies still need to see the live DNS, mail, and exposed-service state that attackers can enumerate. A scanner that never checks domain variants, CT logs, or nameserver drift gives too little context when a vendor changes hands or infrastructure.

### The wrong monitoring promise is worse than no promise.

The live monitoring scope today is DNS, SPF, DKIM, DMARC, MX, and subdomains. Public-sector teams still benefit from rescanning other suites, but pretending every signal is live only makes incident triage noisier later.

## Eight ranked controls

1. **DNS Intelligence**: Audit DNS records, DNSSEC, nameserver delegation, and propagation on citizen-facing and contractor domains.
2. **Email Intelligence**: Validate SPF, DKIM, DMARC, MX, and transport controls across official outbound mail domains.
3. **Domain Recon**: Enumerate subdomains, CT entries, and WHOIS details to find forgotten portals and unmanaged vendor-hosted assets.
4. **Infrastructure**: Run port scans, header checks, admin-path discovery, and availability checks on public web properties.
5. **Threat Intelligence**: Check Safe Browsing, VirusTotal, malicious redirects, and exposed paths on high-trust agency domains.
6. **Domain Recon**: Look for typosquat variants and registered lookalikes that can be used against citizens, vendors, or staff.
7. **Threat Intelligence**: Track HIBP exposure and leaked credentials tied to official domains before sprays and impersonation attempts ramp up.
8. **Monitoring**: Keep 24/7 watch on DNS, SPF, DKIM, DMARC, MX, and subdomains for agencies and vendors; rescan the rest on a schedule.
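The CT enumeration in control 3 works because every publicly issued certificate lands in transparency logs, including certificates for portals nobody remembers. A sketch of the deduplication step (the `entries` list is a stand-in for names pulled from a CT log search; wildcard entries and multi-name entries are normalized):

```python
# Deduplicate subdomains from certificate-transparency name entries.
# Each entry may hold several newline-separated DNS names, and
# wildcard names like *.vendor.example are reduced to their base.
def ct_subdomains(entries: list, apex: str) -> set:
    found = set()
    for entry in entries:
        for name in entry.lower().split("\n"):
            name = name.strip().lstrip("*.")  # drop wildcard prefix
            if name.endswith("." + apex):
                found.add(name)
    return found

entries = ["portal.agency.gov\nold-portal.agency.gov", "*.vendor.agency.gov"]
print(sorted(ct_subdomains(entries, "agency.gov")))
# → ['old-portal.agency.gov', 'portal.agency.gov', 'vendor.agency.gov']
```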

## Real-world case study

### SolarWinds, 2020

SolarWinds is still the clearest reminder that one vendor relationship can become a public-sector security event with national consequences.

**Root cause:** a supply-chain compromise let attackers ride trusted software relationships into downstream government environments.

**Where CyberFurl maps cleanly:**

- Domain Recon helps teams keep vendor domains, subdomains, and variants visible instead of assuming the supplier footprint is small.
- DNS Intelligence and Monitoring make nameserver and mail-auth drift on official and contractor domains much easier to catch.
- Infrastructure scans help agencies rescan exposed portals and public services after major vendor or release events.

## Three-step workflow

1. **Scan**: Run agency and contractor domains through CyberFurl and collect DNS, mail, recon, infrastructure, and threat findings in one place.
2. **Review report**: Prioritize the supplier and citizen-facing exposures that would be hardest to explain after an incident.
3. **Schedule monitoring**: Keep 24/7 monitoring on DNS, SPF, DKIM, DMARC, MX, and subdomains. Use scheduled rescans for public-service and threat reviews.

## FAQ

### Can CyberFurl be used on contractor and supplier domains too?

Yes. That is one of the strongest uses for this page because third-party domains often carry the exact public drift agencies do not see until after an incident.

### Why is DNS so central on a government page?

Because DNS, mail routing, and subdomain ownership are the public trust layer citizens, vendors, and attackers all rely on first.

### Does this replace internal agency security monitoring?

No. It covers the outside view: domains, mail trust, public services, variants, and reputation signals that exist before any internal sensor sees an event.

### Which checks are in the live monitoring scope right now?

DNS, SPF, DKIM, DMARC, MX, and subdomains. Infrastructure, domain variants, and threat sweeps should be rescanned on a schedule.

### How should an agency use the hardening checklist?

Use it as a recurring review across official domains and high-trust contractors so ownership, DNS changes, and internet-facing services stay visible between procurements and incident surges.

### What is the most common public-sector surprise this surface uncovers?

Usually a forgotten subdomain, stale DNS delegation, or a contractor-hosted service that still looks official but no longer has active ownership.

## Lead magnet

**Public Sector External Surface Hardening Checklist**

## Useful links

- Features: [/features/dns-posture](/features/dns-posture), [/features/continuous-monitoring](/features/continuous-monitoring)
- Learn: [/learn/dnssec](/learn/dnssec), [/learn/certificate-transparency](/learn/certificate-transparency)
- Tool: [/tools/dot-support](/tools/dot-support)
- Pricing: [/pricing](/pricing)
- Suites: [/dns-intelligence](/dns-intelligence), [/domain-recon/whois](/domain-recon/whois), [/uptime-monitoring](/uptime-monitoring)

---
## Healthcare
Source: https://cyberfurl.com/for/healthcare.md

# CyberFurl for Healthcare

## A healthcare outage starts with the public doors attackers can already see.

When patient access, claims, and pharmacy flows rely on public domains, weak mail trust and exposed portals become operational risk fast. CyberFurl helps healthcare teams verify the outside layer around those systems: DNS, email authentication, public services, subdomains, and breach exposure that attackers can probe before they trigger a shutdown.

## Three numbers that matter

- **190M** people were estimated to have been impacted by the Change Healthcare breach ([UnitedHealth Group 2025](https://www.unitedhealthgroup.com/newsroom/2025/2025-01-24-chc-cyberattack-update.html)).
- **$22M** in ransom was paid after the Change Healthcare attack ([Senate Finance Committee 2024](https://www.finance.senate.gov/hearings/examining-the-cybersecurity-vulnerabilities-within-the-united-states-health-care-system)).
- **77%** of healthcare breaches involved system intrusion or social engineering ([Verizon DBIR 2025 Healthcare Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-healthcare-snapshot.pdf)).

## Why generic scanners fail for healthcare

### Healthcare outages often start outside the clinical system.

Attackers begin with public portals, weak mail trust, exposed support paths, and breach-exposed identities because those are easier to probe than a core application stack.

### Claims, pharmacy, and patient domains drift separately.

Healthcare organizations run many brands, acquisitions, and partner-hosted services. Subdomains, MX routes, and nameserver changes drift quietly, which is exactly how hidden exposure survives long enough to matter.

### Ransomware playbooks love weak identity trust.

Mail spoofing, leaked credentials, and exposed admin panels give attackers the footholds they need before any encryption or outage starts.

## Eight ranked controls

1. **Email Intelligence**: Validate SPF, DKIM, and DMARC across patient, claims, and pharmacy mail domains.
2. **Email Intelligence**: Inspect MX, PTR, STARTTLS, TLS-RPT, and MTA-STS on high-trust healthcare mail routes.
3. **Threat Intelligence**: Run breach-exposure and leaked-credential checks tied to healthcare brands and identities.
4. **Domain Recon**: Enumerate subdomains and CT entries to find patient or partner portals that still look live from the outside.
5. **Infrastructure**: Scan exposed services, admin paths, headers, availability, and response-time signals on public healthcare systems.
6. **DNS Intelligence**: Audit DNS records, DNSSEC, nameservers, and propagation during acquisitions and vendor changes.
7. **Threat Intelligence**: Check Safe Browsing, VirusTotal, malicious redirects, and exposed paths on trusted patient-facing domains.
8. **Monitoring**: Monitor DNS, SPF, DKIM, DMARC, MX, and subdomains continuously; schedule rescans for infrastructure and threat sweeps.

## Real-world case study

### Change Healthcare, 2024

Change Healthcare showed how a healthcare cyber event becomes a national operations problem when public trust, identity, and internet-facing dependencies fail at the same time.

**Root cause:** the attackers used compromised credentials and weak identity protections to reach a core platform that many providers depended on.

**Where CyberFurl maps cleanly:**

- Threat Intelligence surfaces leaked identities and credential exposure tied to healthcare brands.
- Email Intelligence shows whether critical domains can still be spoofed during outage and recovery communications.
- Infrastructure and Domain Recon help teams find exposed public portals and stale partner-facing systems before attackers do.

## Three-step workflow

1. **Scan**: Run the public healthcare domains through CyberFurl and collect DNS, mail, threat, recon, and infrastructure findings in one place.
2. **Review report**: Prioritize the mail-trust gaps, leak exposure, and public services that would make an outage worse.
3. **Schedule monitoring**: Keep 24/7 monitoring on DNS, SPF, DKIM, DMARC, MX, and subdomains. Use scheduled rescans for infrastructure and threat reviews.

## FAQ

### Why anchor this page on DNS and email when healthcare incidents feel identity-driven?

Because identity-driven attacks still rely on public trust signals around domains, mail, and reachable portals. Those are the surfaces attackers inspect and exploit before they ever move deeper.

### Can CyberFurl help across acquired brands and affiliate domains?

Yes. That is one of the highest-value use cases because old mail records, forgotten subdomains, and legacy portals often stay visible long after ownership changes.

### Which checks stay under live monitoring today?

DNS, SPF, DKIM, DMARC, MX, and subdomains. Infrastructure and threat checks should be rescanned on a schedule, especially before major launches or vendor cutovers.

### What is the quickest win for a healthcare security team?

Usually cleaning up mail trust and forgotten public assets first, because those changes reduce spoofing room and eliminate exposure nobody is actively using.

### Does the report only help security teams?

No. IT operations, messaging owners, vendor-management teams, and patient-portal owners can all act on the findings because they are phrased in plain public-surface terms.

### Why include a checklist instead of another generic guide?

Because healthcare teams need an operational hardening pass across domains, mail, and public systems, not another abstract article about ransomware.

## Lead magnet

**Healthcare Domain & Email Hardening Checklist**

## Useful links

- Features: [/features/breach-exposure](/features/breach-exposure), [/features/email-authentication](/features/email-authentication)
- Learn: [/learn/email-spoofing](/learn/email-spoofing), [/learn/data-breach](/learn/data-breach)
- Tool: [/tools/dns-speed-test](/tools/dns-speed-test)
- Pricing: [/pricing](/pricing)
- Suites: [/email-intelligence](/email-intelligence), [/threat-intelligence/malware](/threat-intelligence/malware), [/uptime-monitoring](/uptime-monitoring)

---
## Finance
Source: https://cyberfurl.com/for/finance.md

# CyberFurl for Finance

## Four business days is not much time if your public surface is already a mess.

Banks and fintechs do not get to discover their public posture during a crisis. CyberFurl helps teams keep the outside view ready before disclosure pressure lands: DNS integrity, mail trust, exposed internet services, leak exposure, variants, and the subdomains nobody remembers until they matter.

## Three numbers that matter

- **4 days** is the SEC timeline for material cyber incident disclosure ([SEC 2023](https://www.sec.gov/newsroom/press-releases/2023-139)).
- **927** finance breaches were logged in Verizon snapshot data ([Verizon DBIR 2025 Finance Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-finance-snapshot.pdf)).
- **$2.77B** in reported losses were tied to business email compromise ([FBI IC3 2024](https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf)).

## Why generic scanners fail for finance

### Finance teams need internet truth before legal pressure starts.

When the question becomes what was exposed, when it changed, and whether the domain or mail stack could be abused, a generic scanner is too shallow.

### Fraud and disclosure risk share the same public weak spots.

Spoofable mail, exposed admin paths, old subdomains, weak headers, and leaked credentials create both customer fraud problems and ugly executive-response problems.

### The monitored layer has to stay narrow and believable.

Finance teams should keep 24/7 visibility on DNS, SPF, DKIM, DMARC, MX, and subdomains while rescanning other suites around launches, incident response, and high-risk change windows.

## Eight ranked controls

1. **Email Intelligence**: Validate SPF, DKIM, and DMARC on customer-facing and transactional mail domains before fraudsters spoof them.
2. **Email Intelligence**: Inspect MX, PTR, MTA-STS, TLS-RPT, STARTTLS, and DNSBL signals around high-trust payment and alert mail.
3. **Threat Intelligence**: Run breach-exposure and leaked-credential checks against finance-associated identities and domains.
4. **DNS Intelligence**: Audit DNS records, nameserver delegation, DNSSEC, and propagation across core and campaign domains.
5. **Infrastructure**: Scan ports, headers, admin panels, sensitive paths, uptime, and response times on public banking surfaces.
6. **Domain Recon**: Enumerate subdomains, CT entries, and registered variants to catch shadow launches and lookalike risk.
7. **Threat Intelligence**: Check Safe Browsing, VirusTotal, OpenPhish, malicious redirects, and exposed paths on high-trust finance brands.
8. **Monitoring**: Keep 24/7 watch on DNS, SPF, DKIM, DMARC, MX, and subdomains; schedule rescans for infra and threat sweeps.

## Real-world case study

### Capital One, 2019

Capital One is the reminder that a well-known financial brand can still end up in headlines when exposed public-facing controls and cloud trust assumptions break together, because customers do not distinguish between those two failures.

**Root cause:** a cloud-facing web application weakness allowed the attacker to reach sensitive data through a publicly exposed path and misconfigured controls.

**Where CyberFurl maps cleanly:**

- Infrastructure checks help teams find exposed services, weak headers, and sensitive paths that deserve immediate review.
- Domain Recon keeps shadow assets and stale subdomains from becoming forgotten internet entry points.
- Email Intelligence and Threat Intelligence reduce the brand-spoofing and customer-fraud fallout that usually follows a headline incident.

## Three-step workflow

1. **Scan**: Run the public finance domains through CyberFurl and collect DNS, mail, threat, recon, and infrastructure findings in one place.
2. **Review report**: Prioritize the exposed service, mail-trust, variant, and breach-linked identity issues that would be hardest to explain later.
3. **Schedule monitoring**: Keep 24/7 monitoring on DNS, SPF, DKIM, DMARC, MX, and subdomains. Use scheduled rescans for the rest.

## FAQ

### Why emphasize external posture on a finance page?

Because that is the layer customers, fraud operators, journalists, and attackers can all inspect without permission. It is the fastest way to understand what needs executive attention first.

### Can this help before a board or disclosure conversation?

Yes. The report is most useful before the crisis because it shows whether the public story around DNS, mail trust, exposed services, and leak exposure is already clean or obviously weak.

### Does CyberFurl perform fraud monitoring on transactions?

No. It is an external posture platform. It helps by reducing spoofing room, public service exposure, and identity leak visibility that fraud rings often abuse first.

### Which checks remain under live monitoring?

DNS, SPF, DKIM, DMARC, MX, and subdomains. The rest of the suites are still valuable, but they should run as on-demand or scheduled rescans.

### What should a fintech do first if the report shows many stale subdomains?

Confirm ownership, remove what is dead, and rescan the rest. Forgotten subdomains become support headaches, shadow-launch evidence, or takeover candidates faster than teams expect.

### Why include BEC numbers on a finance page?

Because finance brands are prime targets for payment-redirection and account-alert spoofing, and that risk gets worse the moment mail trust is weak.

## Lead magnet

**Bank & Fintech External Exposure One-Pager**

## Useful links

- Features: [/features/vulnerability-surface](/features/vulnerability-surface), [/features/email-authentication](/features/email-authentication)
- Learn: [/learn/credential-stuffing](/learn/credential-stuffing), [/learn/hsts](/learn/hsts)
- Tool: [/tools/dns-leak](/tools/dns-leak)
- Pricing: [/pricing](/pricing)
- Suites: [/infrastructure/port-scan](/infrastructure/port-scan), [/email-intelligence](/email-intelligence), [/threat-intelligence/malware](/threat-intelligence/malware)

---
## Agencies
Source: https://cyberfurl.com/for/agencies.md

# CyberFurl for Agencies

## A deepfake call becomes expensive when your domain still does the rest of the lying.

Agencies already manage trust for clients; attackers know that. CyberFurl helps agencies reduce the public gaps that make impersonation campaigns believable: spoofable mail, lookalike domains, forgotten microsites, exposed admin paths, and brand-linked breach exposure that can turn one fake executive message into a client incident.

## Three numbers that matter

- **216** professional-services breaches were logged in Verizon snapshot data ([Verizon DBIR 2025 Professional Services Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-professional-services-snapshot.pdf)).
- **67%** of professional-services breaches involved human actions ([Verizon DBIR 2025 Professional Services Snapshot](https://www.verizon.com/business/resources/infographics/2025-dbir-professional-services-snapshot.pdf)).
- **2x** growth in synthetically generated text inside malicious emails was recorded over two years ([Verizon DBIR 2025](https://www.verizon.com/business/resources/reports/2025-dbir-data-breach-investigations-report.pdf)).

## Why generic scanners fail for agencies

### Brand abuse spreads across many campaign and client-owned domains.

Campaign microsites, client handoff domains, old landing pages, and preview hosts all create room for spoofing, confusion, and abuse if nobody keeps inventory.

### Impersonation risk is not just an inbox problem.

Deepfake-led scams often land the emotional blow elsewhere, then finish the job with spoofed email, lookalike domains, or a fake portal. If you do not review DNS, variants, and mail trust together, you are only solving half the attack.

### Client trust erodes on public evidence.

An agency can sound security-conscious and still expose forgotten subdomains, weak DMARC, or obvious admin panels. Clients notice that fast when a campaign domain breaks or a fake mail thread starts circulating.

## Eight ranked controls

1. **Email Intelligence**: Validate SPF, DKIM, and DMARC across agency and client-facing mail domains before spoofing campaigns land.
2. **Email Intelligence**: Review BIMI, MX, PTR, DNSBL, and STARTTLS around high-visibility brand mail.
3. **Domain Recon**: Find registered variants, typosquats, and CT-discovered domains that can impersonate your agency or clients.
4. **Domain Recon**: Enumerate subdomains to catch stale campaign hosts, preview environments, and forgotten handoff assets.
5. **Infrastructure**: Scan exposed services, headers, admin paths, and backup files on public campaign and CMS surfaces.
6. **DNS Intelligence**: Audit DNS records, nameserver delegation, and propagation across agency and client-owned launch domains.
7. **Threat Intelligence**: Use Safe Browsing, VirusTotal, OpenPhish, malicious redirect, and skimmer checks on public brand domains.
8. **Monitoring**: Keep 24/7 watch on DNS, SPF, DKIM, DMARC, MX, and subdomains around launches and client-domain changes.

## Real-world case study

### WPP deepfake attempt

The attempted WPP deepfake scam mattered because it showed how fast executive impersonation can move from a convincing voice to a convincing follow-up message or fake domain.

**Root cause:** attackers used executive impersonation and urgency to create a believable request path aimed at money and trust.

**Where CyberFurl maps cleanly:**

- Email Intelligence closes the spoofing gaps that make a fake follow-up domain or sender harder to spot.
- Domain Recon finds lookalike and campaign-adjacent domains before attackers can lean on them.
- Infrastructure and Threat Intelligence help agencies keep public brand assets clean before client trust takes the hit.

## Three-step workflow

1. **Scan**: Run the agency and client-owned domains through CyberFurl and collect DNS, mail, recon, infrastructure, and threat findings in one place.
2. **Review report**: Prioritize the spoofing, lookalike, stale-asset, and exposed-service issues that could embarrass the agency or the client next.
3. **Schedule monitoring**: Keep 24/7 monitoring on DNS, SPF, DKIM, DMARC, MX, and subdomains. Use scheduled rescans for the rest.

## FAQ

### Why is this vertical aimed at agencies and not just internal security teams?

Because agencies inherit trust for many brands at once, and that makes spoofing, fake domains, and stale campaign infrastructure much more dangerous operationally and commercially.

### Can CyberFurl help on client-owned domains too?

Yes. That is often the best use case because client launch domains, redirected microsites, and preview environments drift quickly and are easy to forget after delivery.

### What part of the platform stays under live monitoring?

DNS, SPF, DKIM, DMARC, MX, and subdomains. The rest of the public surface should be rescanned whenever campaigns launch, client ownership changes, or major site updates go out.

### Why include BIMI and DMARC on an agency page?

Because agencies are reputation businesses. If the mail trust stack is weak, attackers can borrow your client relationships and brand familiarity in ways that cost both trust and revenue.

### Does this help with deepfake voice scams directly?

Not the audio itself. It helps with the public trust layer attackers often use immediately after the voice call: spoofed mail, lookalike domains, fake portals, and stale brand assets.

### What is the fastest client-facing win from the checklist?

Showing a client which domains, mail records, and public assets are still reachable today, then cleaning up the obvious trust gaps before the next launch cycle.

## Lead magnet

**Agency Brand & Client-Domain Protection Checklist**

## Useful links

- Features: [/features/subdomain-discovery](/features/subdomain-discovery), [/features/email-authentication](/features/email-authentication)
- Learn: [/learn/typosquatting](/learn/typosquatting), [/learn/phishing](/learn/phishing)
- Tool: [/tools/dns-caching](/tools/dns-caching)
- Pricing: [/pricing](/pricing)
- Suites: [/domain-recon/whois](/domain-recon/whois), [/email-intelligence](/email-intelligence), [/uptime-monitoring](/uptime-monitoring)

---
## DMARC
Source: https://cyberfurl.com/learn/dmarc.md

## What is DMARC?

DMARC is the email standard that helps stop spoofing and phishing by telling receivers what to do with mail that fails authentication. DMARC sits in the part of the mail flow where identity, sender reputation, and enforcement meet. The details matter because one weak link can undo the work done by the other controls.

If you are already working through [SPF](/learn/spf) and [DKIM](/learn/dkim), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl DMARC check](/email-tools/dmarc) and then use the [email authentication feature](/features/email-authentication) page to see where it fits in the wider CyberFurl workflow.

## How DMARC works (5-step flow with diagram)

A working DMARC flow starts with the visible `From` domain, not with the envelope sender. The receiver checks whether SPF or DKIM passed, then asks a second question: does that authenticated identity align with the domain the user actually sees? Only aligned results count toward DMARC.

If alignment passes, the message can satisfy DMARC even if only one mechanism succeeded. If alignment fails, the receiver applies the domain's published policy and reporting tags. That is why a domain can have SPF and DKIM in place and still see DMARC failures in the wild.
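The decision above can be sketched in a few lines of Python. This is a simplified model, not a full evaluator: the domain names are illustrative, and the organizational-domain heuristic here just takes the last two labels, where real receivers consult the Public Suffix List.

```python
def org_domain(domain: str) -> str:
    # Naive last-two-labels heuristic; real receivers use the Public Suffix List.
    return ".".join(domain.lower().split(".")[-2:])

def dmarc_passes(from_domain, spf_pass, spf_domain, dkim_pass, dkim_domain):
    # DMARC passes only when a mechanism passed AND its authenticated
    # identity aligns (relaxed mode shown) with the visible From domain.
    fd = org_domain(from_domain)
    spf_aligned = spf_pass and org_domain(spf_domain) == fd
    dkim_aligned = dkim_pass and org_domain(dkim_domain) == fd
    return spf_aligned or dkim_aligned

# SPF passed, but for an unrelated bounce domain: DMARC still fails.
print(dmarc_passes("example.com", True, "mailer.example.net", False, ""))   # False
# DKIM signed by an aligned subdomain: DMARC passes on DKIM alone.
print(dmarc_passes("example.com", False, "", True, "mail.example.com"))     # True
```

The second case is exactly why a domain can "have SPF" and still fail DMARC: the passing identity has to line up with what the user sees.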

## The three DMARC policies: none, quarantine, reject

The difference between these three values is the difference between observation and enforcement. `p=none` collects data but asks receivers not to change delivery. `p=quarantine` asks receivers to treat failing mail as suspicious, typically by routing it to spam or quarantine. `p=reject` is the strongest setting and tells receivers that failing mail should not be accepted as normal inbox traffic.

Teams usually move through them in stages because the hard part is not publishing the tag, but making sure every legitimate sender aligns before enforcement gets stricter.
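A staged rollout might publish records like these in sequence. The domain, reporting address, and `pct` value are illustrative, and only one DMARC record is live at any given time:

```dns
; Stage 1: observe only, collect aggregate reports
_dmarc.example.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"

; Stage 2: partial enforcement while remaining senders are aligned
_dmarc.example.com.  IN TXT  "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@example.com"

; Stage 3: full enforcement
_dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

The `pct` tag lets a team apply the stricter policy to a fraction of failing mail before committing fully.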

## DMARC alignment (SPF and DKIM alignment modes)

Alignment is where DMARC becomes more than a reporting layer. SPF only helps DMARC when the domain that passed SPF aligns with the visible `From` domain. DKIM only helps when the signing domain in `d=` aligns with that same visible identity.

Relaxed alignment allows subdomain relationships; strict alignment expects an exact domain match. The rollout question is not which mode sounds safer in theory, but whether your real senders can satisfy it consistently.
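The relaxed-versus-strict distinction is small enough to show directly. A minimal sketch, again using a naive organizational-domain heuristic instead of the Public Suffix List:

```python
def org_domain(d: str) -> str:
    # Naive heuristic; production code should consult the Public Suffix List.
    return ".".join(d.lower().split(".")[-2:])

def aligned(from_domain: str, auth_domain: str, mode: str = "relaxed") -> bool:
    # Strict mode requires an exact match; relaxed mode accepts any domain
    # sharing the same organizational domain as the visible From domain.
    if mode == "strict":
        return from_domain.lower() == auth_domain.lower()
    return org_domain(from_domain) == org_domain(auth_domain)

# A subdomain signer satisfies relaxed alignment but not strict.
print(aligned("example.com", "news.example.com"))            # True
print(aligned("example.com", "news.example.com", "strict"))  # False
```

If your newsletter platform signs as `news.example.com`, switching to strict alignment (`aspf=s` or `adkim=s`) would break it unless the signer changes.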

## DMARC reports: aggregate (RUA) vs forensic (RUF)

Aggregate reports, usually sent to the `rua` address, tell you who is sending on behalf of the domain at scale: source IPs, pass/fail patterns, and alignment outcomes over time. Forensic or failure reports, tied to `ruf`, are far less universally delivered today and can raise privacy concerns, but they can still help in narrow debugging cases.

For most teams, aggregate reporting is the backbone of the rollout. It gives the inventory and trend data needed to move from monitoring to quarantine or reject without guessing.
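Aggregate reports arrive as XML attachments. As a rough sketch of what processing one looks like, here is a trimmed, hypothetical fragment modeled on the RFC 7489 report shape, parsed with the standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily trimmed aggregate-report fragment (illustrative values).
SAMPLE = """<feedback>
  <record>
    <row>
      <source_ip>203.0.113.10</source_ip>
      <count>42</count>
      <policy_evaluated><dkim>pass</dkim><spf>fail</spf></policy_evaluated>
    </row>
  </record>
</feedback>"""

root = ET.fromstring(SAMPLE)
for row in root.iter("row"):
    ip = row.findtext("source_ip")
    count = int(row.findtext("count"))
    dkim = row.findtext("policy_evaluated/dkim")
    spf = row.findtext("policy_evaluated/spf")
    # One row per sending source: this is the inventory view of who sends as you.
    print(f"{ip}: {count} messages, dkim={dkim}, spf={spf}")
```

Real reports carry more fields (report metadata, policy published, auth details per mechanism), but the rows-per-source structure is what drives the rollout inventory.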

## Why DMARC matters: spoofing, BEC, brand protection

DMARC matters because attackers and misconfigurations both exploit the same blind spot: the gap between what a team thinks is configured and what the public internet can actually see. When DMARC coverage is weak, the impact usually appears as trust failure, data exposure, delivery problems, or unnecessary incident noise.

That is also why good coverage here pays off beyond a single scan. It gives engineering, security, and operations a shared explanation for whether the domain is ready for enforcement, safe to migrate, or still carrying hidden debt.

## How to set up DMARC (HowTo, 6 steps)

DMARC only becomes clear when you follow the full path from configuration to observed behavior. The DNS record, header, or protocol setting is not the outcome by itself. The outcome is what the receiving system, browser, or resolver actually does after it sees that signal.

The details come down to the six steps below. They decide whether DMARC is merely present on paper or reliable enough to trust in production. That is why the best review pairs the raw configuration with live evidence from the [CyberFurl DMARC check](/email-tools/dmarc) or the surrounding [email authentication feature](/features/email-authentication) workflow.

<HowToSteps />

## Common DMARC mistakes

Most failures around DMARC are less about the standard and more about operations: copied examples, stale providers, undocumented exceptions, or rollout steps that were never verified from the outside.

These issues are easiest to catch when the review is evidence-led. Look at what the domain is really publishing or sending, then ask where the trust chain can be altered, bypassed, or silently downgraded.

- Missing ownership: nobody can clearly name which team or provider owns the live DMARC behavior.
- Drift after change: a migration, proxy, vendor switch, or DNS edit quietly changed the result.
- Weak enforcement: the control exists, but the chosen value is too permissive to change risk meaningfully.
- No live verification: the rollout was declared done without checking what the public internet now sees.

## DMARC vs SPF vs DKIM (comparison table)

SPF answers “was this server allowed to send?” DKIM answers “does the signed content still match what the signer sent?” DMARC answers “does either authenticated identity line up with the visible `From` domain, and what should receivers do if it does not?”

That distinction matters because each control catches a different failure. Strong email posture comes from using them together, not from treating DMARC as a replacement for the other two.

## Tools to check your DMARC

Use the [CyberFurl DMARC check](/email-tools/dmarc) when you want to see the live signal on a real domain, and then step back to the [email authentication feature](/features/email-authentication) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl DMARC check](/email-tools/dmarc)
- [Email authentication feature](/features/email-authentication)
- [SPF](/learn/spf)
- [DKIM](/learn/dkim)
- [BIMI](/learn/bimi)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 7489: DMARC](https://www.rfc-editor.org/rfc/rfc7489)
- [Google bulk sender requirements](https://support.google.com/a/answer/81126)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## SPF
Source: https://cyberfurl.com/learn/spf.md

## What is SPF?

SPF tells the world which servers can send email for your domain. SPF sits in the part of the mail flow where identity, sender reputation, and enforcement meet. The details matter because one weak link can undo the work done by the other controls.

If you are already working through [DMARC](/learn/dmarc) and [DKIM](/learn/dkim), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl SPF lookup](/email-tools/spf) and then use the [email authentication feature](/features/email-authentication) page to see where it fits in the wider CyberFurl workflow.

## Anatomy of an SPF record (v=spf1, mechanisms, qualifiers)

Every SPF record begins with `v=spf1`, which tells receivers they are looking at an SPF policy. After that come mechanisms and qualifiers that describe who is allowed to send and how strictly failures should be interpreted.

The important operational point is that SPF is evaluated left to right. A record that is technically valid can still be messy, over-broad, or fragile if it grew through years of vendor additions without cleanup.
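A typical record shows all three parts in order (illustrative domain, network, and provider):

```dns
; version tag first, then mechanisms left to right, ending qualifier last
example.com.  IN TXT  "v=spf1 ip4:192.0.2.0/24 include:_spf.mailprovider.example ~all"
```

Here `ip4:` trusts a network, `include:` pulls in a provider's policy, and `~all` soft-fails everything else.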

## SPF mechanisms: include, a, mx, ip4, ip6, ~all, -all

These mechanisms are how a domain expresses trust. `include` says “pull in another sender's policy.” `a` and `mx` trust the domain's own web or mail hosts. `ip4` and `ip6` trust specific networks. The ending qualifiers such as `~all` and `-all` decide whether unauthorized senders are treated as soft failures or hard failures.

The biggest mistake is not syntax. It is leaving old providers in place long after they stopped sending mail, which turns SPF into a permissive allowlist nobody fully understands.
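The term structure is regular enough to tokenize mechanically. A minimal sketch (record contents are illustrative; this parser handles the common qualifier-mechanism-value shape, not every corner of RFC 7208 syntax such as macros or `redirect=` modifiers):

```python
RECORD = "v=spf1 a mx ip4:192.0.2.0/24 include:_spf.provider.example -all"

def parse_spf(record):
    # Split an SPF record into (qualifier, mechanism, value) terms,
    # evaluated left to right. The default qualifier is '+'.
    version, *terms = record.split()
    assert version == "v=spf1", "not an SPF record"
    parsed = []
    for term in terms:
        qualifier = "+"
        if term[0] in "+-~?":
            qualifier, term = term[0], term[1:]
        mech, _, value = term.partition(":")
        parsed.append((qualifier, mech, value))
    return parsed

for q, m, v in parse_spf(RECORD):
    print(q, m, v or "-")
```

Walking the terms this way makes the "permissive allowlist" problem visible: every `include` and network range is an explicit grant of sending authority, and anything nobody can explain is a cleanup candidate.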

## The 10-DNS-lookup limit (and how to fix it)

SPF processing has a hard limit on the number of DNS lookups a receiver is expected to perform. Long `include` chains, nested providers, and overuse of `a` or `mx` mechanisms can push a record over that limit.

When that happens, the fix is usually structural: remove stale senders, flatten only where you can maintain it safely, and stop treating vendor convenience as a free pass to publish endless includes.
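A quick way to reason about the limit is to count the lookup-triggering terms. This sketch counts only the top level; real evaluation also counts lookups inside each `include` target, so the true total for a nested record is higher:

```python
# Mechanisms that cost a DNS lookup under RFC 7208 section 4.6.4.
LOOKUP_MECHS = {"include", "a", "mx", "ptr", "exists"}

def count_lookups(record):
    # Lower bound: counts top-level lookup-triggering terms only.
    count = 0
    for term in record.split()[1:]:
        term = term.lstrip("+-~?")
        mech = term.split(":", 1)[0].split("/", 1)[0]
        if mech in LOOKUP_MECHS or term.startswith("redirect="):
            count += 1
    return count

record = ("v=spf1 a mx include:_spf.a.example "
          "include:_spf.b.example include:_spf.c.example -all")
print(count_lookups(record))  # 5 top-level lookups, before any nested includes
```

Five of ten slots spent at the top level leaves little headroom once each provider's own `include` chain is counted, which is how records fail in practice.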

## SPF flattening explained

Flattening replaces recursive includes with a more direct list of IPs so the receiver has fewer lookups to perform. It solves one class of delivery failure, but it creates another operational burden: once you flatten a provider's SPF, you own the refresh cycle.

That means flattening is useful when done intentionally, not blindly. If you choose it, you also need a process to keep the flattened data current.
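The mechanics of flattening are simple; the maintenance burden is the hard part. In this sketch the resolved IPs come from a hardcoded map for illustration, where a real flattener would query DNS for each `include` target and rerun on a schedule:

```python
# Hypothetical resolved data; a real flattener resolves each include via DNS
# and must refresh whenever the provider's published ranges change.
INCLUDE_IPS = {
    "_spf.provider.example": ["192.0.2.10", "198.51.100.0/24"],
}

def flatten(record):
    # Replace each include with the ip4 terms it currently resolves to.
    out = ["v=spf1"]
    for term in record.split()[1:]:
        if term.startswith("include:"):
            target = term.split(":", 1)[1]
            out += [f"ip4:{ip}" for ip in INCLUDE_IPS.get(target, [])]
        else:
            out.append(term)
    return " ".join(out)

print(flatten("v=spf1 include:_spf.provider.example -all"))
# v=spf1 ip4:192.0.2.10 ip4:198.51.100.0/24 -all
```

The output trades one recursive lookup for a static list, which is exactly why stale flattened data silently breaks delivery when the provider moves networks.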

## Common SPF errors and how to debug

The classic SPF failures are multiple SPF records, too many lookups, stale includes, and expectations that forwarding will behave like direct delivery. Debugging starts by reading the live TXT record and tracing which part of the mail path the receiver actually evaluated.

If the message is real, check the sending IP, the envelope sender, and the exact authentication result returned by the receiver. That evidence tells you much more than a green “SPF exists” badge ever will.

## SPF vs DMARC alignment

SPF can pass and still contribute nothing to DMARC if the authenticated domain does not align with the visible `From` domain. That is why teams often think SPF is healthy while DMARC reports still show failure.

In practice, SPF should be reviewed as one identity layer inside a larger email-authentication chain, not as a standalone checkbox.

## How to publish your SPF record (HowTo)

A good implementation plan for SPF starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl SPF lookup](/email-tools/spf).

<HowToSteps />

## Best practices for SPF in 2026

This is usually where teams discover whether SPF is genuinely working or just looks reasonable on paper. The durable practices are the ones this page already circles: keep exactly one SPF record, stay under the 10-lookup limit, remove providers that no longer send, and end the record with `~all` or `-all` rather than leaving it permissive.

If you are using CyberFurl for the investigation, confirm the external evidence first, compare it with the intended posture, and then decide whether the next move is cleanup, tighter enforcement, or ongoing monitoring through [CyberFurl SPF lookup](/email-tools/spf).

## Tools to check your SPF

Use the [CyberFurl SPF lookup](/email-tools/spf) when you want to see the live signal on a real domain, and then step back to the [email authentication feature](/features/email-authentication) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl SPF lookup](/email-tools/spf)
- [Email authentication feature](/features/email-authentication)
- [DMARC](/learn/dmarc)
- [DKIM](/learn/dkim)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 7208: SPF](https://www.rfc-editor.org/rfc/rfc7208)
- [Google bulk sender requirements](https://support.google.com/a/answer/81126)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## DKIM
Source: https://cyberfurl.com/learn/dkim.md

## What is DKIM?

DKIM is the part of email authentication that proves a message was signed by a domain that controls a private key. It does not encrypt the message, and on its own it does not tell you whether the sender is honest. What it does is give the receiver a way to verify that important headers and the signed body were not altered after the message left the signing system.

That is why DKIM matters so much in the same conversations as [SPF](/learn/spf) and [DMARC](/learn/dmarc). SPF tells receivers whether an IP was allowed to send. DKIM tells them whether the signed content still matches what the signer sent. DMARC then decides whether one of those authenticated identities aligns with the visible `From` domain. If you want to inspect that chain on a live domain, the quickest route is the [CyberFurl DKIM lookup](/email-tools/dkim).

## How DKIM works (sign → publish → verify)

A real DKIM flow is simpler than it first looks. The sending platform chooses which headers to sign, computes hashes, signs those values with its private key, and adds a `DKIM-Signature` header to the message. That header includes the signing domain in `d=` and the selector in `s=`.

When the message reaches the receiving server, the receiver builds the same canonicalized view of the signed content, fetches the public key from `selector._domainkey.example.com`, and verifies the signature. If the header list changed, if the body hash no longer matches, or if the DNS key is missing or malformed, validation fails. That is why a domain can have a DKIM record in DNS and still fail DKIM in real delivery.
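The body-hash part of that check (the `bh=` tag) can be sketched with the standard library. This is a simplified take on "simple" body canonicalization from RFC 6376, not a full signer or verifier, and the message text is illustrative:

```python
import base64
import hashlib

def body_hash(body: str) -> str:
    # Approximate 'simple' body canonicalization: CRLF line endings,
    # trailing empty lines reduced to a single CRLF, then SHA-256 + base64.
    canonical = body.replace("\n", "\r\n").rstrip("\r\n") + "\r\n"
    return base64.b64encode(hashlib.sha256(canonical.encode()).digest()).decode()

original = "Quarterly statement is attached.\n"
# Unmodified body reproduces the signed hash.
print(body_hash(original) == body_hash("Quarterly statement is attached.\n"))  # True
# One altered character changes the hash, so the signature no longer verifies.
print(body_hash(original) == body_hash("Quarterly statement is attached!\n"))  # False
```

This is why forwarders that rewrite content break DKIM: the receiver recomputes the hash over what arrived, not what was originally signed.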

## DKIM selectors explained

Selectors exist so one domain can publish more than one DKIM public key at the same time. That matters when different providers sign different mail streams, when you are rotating keys, or when you want to retire one vendor without breaking another.

Operationally, a selector is just the left-hand label that sits before `._domainkey`. A message signed with `s=mailgun` will cause the receiver to look up `mailgun._domainkey.example.com`. Good selector naming is less about aesthetics and more about safe change management: it should be obvious which provider or mail stream owns the key.
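The lookup the receiver performs is mechanical (domain and selector names are illustrative):

```python
def dkim_record_name(selector: str, domain: str) -> str:
    # The DNS name a receiver queries for the DKIM public key,
    # built from the s= and d= values in the DKIM-Signature header.
    return f"{selector}._domainkey.{domain}"

print(dkim_record_name("mailgun", "example.com"))
# mailgun._domainkey.example.com
```

Because the selector is just a DNS label, nothing stops `mailgun._domainkey` and `crm._domainkey` from coexisting under one domain, each owned by a different provider.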

## Key strength: 1024-bit vs 2048-bit

This choice is no longer just academic. Older deployments often still carry 1024-bit RSA keys because they were easier to fit into DNS or because a provider never forced an upgrade. Modern guidance is moving toward 2048-bit keys where providers and DNS infrastructure support them reliably.

The practical rule is simple: use 2048-bit keys for new setups unless your platform has a documented limitation, and verify that your DNS provider publishes the record cleanly without truncation or formatting errors. If a legacy provider still insists on 1024-bit keys, treat that as technical debt and track when it can be removed.

## DKIM rotation best practices

Key rotation is where mature DKIM programs separate from checkbox deployments. The safe pattern is to publish a new selector, switch the sender to sign with that new selector, confirm that receivers are validating it, and only then retire the old key.

Teams get into trouble when they delete the old record too early, forget which provider owns which selector, or rotate keys without checking real signed mail afterward. Rotation should be routine enough that an employee change, vendor offboarding, or suspected compromise does not turn into a high-risk one-off project.

## Common DKIM failures (signature, body hash, expired)

When DKIM fails, the reason is usually mundane rather than mysterious. A provider may be signing with the wrong domain, the public key may not match the private key in use, the body may have been modified in transit, or the selector may have been removed during a rushed cleanup.

Forwarders and mailing lists are classic failure points because they often rewrite content after the original sender signed it. Receivers also fail validation when DNS is stale, split across providers, or published with formatting issues. The fix is almost always to look at a real signed message, inspect the exact `DKIM-Signature` header, and then compare that evidence with the live DNS record instead of guessing.
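That body-modification failure can be made concrete. The sketch below computes a `bh=`-style body hash under "simple" canonicalization (assumed here for brevity; "relaxed" adds whitespace folding), which shows why any content change after signing invalidates the signature:

```python
import base64
import hashlib

def simple_body_hash(body: str) -> str:
    """Compute a DKIM bh= value using 'simple' body canonicalization:
    normalize line endings to CRLF and reduce trailing empty lines
    to a single CRLF (RFC 6376 section 3.4.3)."""
    lines = body.replace("\r\n", "\n").split("\n")
    while lines and lines[-1] == "":
        lines.pop()
    canon = "\r\n".join(lines) + "\r\n"
    digest = hashlib.sha256(canon.encode()).digest()
    return base64.b64encode(digest).decode()

# Trailing blank lines canonicalize away, so these match -- but a
# forwarder that appends a footer changes bh= and breaks validation.
print(simple_body_hash("Hello\n") == simple_body_hash("Hello\n\n\n"))  # True
print(simple_body_hash("Hello\n") == simple_body_hash("Hello world\n"))  # False
```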

## How to publish a DKIM record (HowTo)

Publishing DKIM safely starts with knowing which system is doing the signing. Before you add anything to DNS, confirm the signing domain, the selector name, and the key size your provider expects. Then publish the TXT record exactly where the provider specifies, validate the record publicly, and only after that send test mail through the real production path.

The checklist below is the part most teams skip: verify actual signed messages after the DNS record is live. A record that exists is not the same thing as a sender that signs correctly.
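As a sketch of the record-validation step, here is a minimal sanity check for a DKIM TXT record before go-live. The record value is hypothetical and this is deliberately not a complete RFC 6376 validator:

```python
def check_dkim_record(txt: str) -> list[str]:
    """Flag common problems in a DKIM TXT record value.
    A pre-flight sanity check, not a full RFC 6376 validator."""
    tags = dict(
        part.strip().split("=", 1)
        for part in txt.split(";") if "=" in part
    )
    problems = []
    if tags.get("v", "DKIM1") != "DKIM1":  # v= is optional, but must be DKIM1 if present
        problems.append("v= must be DKIM1")
    if not tags.get("p"):
        problems.append("p= (public key) empty or missing -- an empty p= revokes the key")
    if tags.get("k", "rsa") not in ("rsa", "ed25519"):
        problems.append(f"unexpected key type k={tags.get('k')}")
    return problems

# Hypothetical record as a DNS provider would publish it:
record = "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."
print(check_dkim_record(record))  # []
```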

<HowToSteps />

## DKIM and DMARC alignment

DKIM by itself proves that a domain signed the message, but DMARC decides whether that signing domain aligns closely enough with the visible `From` domain to count as authenticated for policy. That distinction matters because a message can pass DKIM and still fail DMARC if the signer is on a different domain or subdomain than the one the user sees.

This is why [DKIM](/learn/dkim) and [DMARC](/learn/dmarc) should always be reviewed together. If the domain is trying to move from `p=none` to enforcement, alignment mistakes are often the reason a rollout stalls. CyberFurl is most useful here when you review the DKIM result, the DMARC policy, and the visible sender identity in the same workflow.
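The alignment check itself is small enough to sketch. This version uses a naive last-two-labels heuristic for the organizational domain; real DMARC implementations consult the Public Suffix List, which handles cases like `.co.uk` correctly:

```python
def org_domain(domain: str) -> str:
    """Naive organizational-domain heuristic (last two labels).
    Real implementations use the Public Suffix List."""
    return ".".join(domain.lower().split(".")[-2:])

def dkim_aligned(from_domain: str, dkim_d: str, strict: bool = False) -> bool:
    """DMARC DKIM alignment: strict requires an exact domain match,
    relaxed accepts a shared organizational domain."""
    if strict:
        return from_domain.lower() == dkim_d.lower()
    return org_domain(from_domain) == org_domain(dkim_d)

# Signed by a subdomain: passes relaxed alignment, fails strict.
print(dkim_aligned("example.com", "mail.example.com"))               # True
print(dkim_aligned("example.com", "mail.example.com", strict=True))  # False
print(dkim_aligned("example.com", "attacker.net"))                   # False
```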

## Tools to check your DKIM

The fastest way to turn this from theory into evidence is to run the [CyberFurl DKIM lookup](/email-tools/dkim) and compare that result with the domain's broader [email authentication posture](/features/email-authentication). If you are debugging a live sender, pair it with [SPF](/learn/spf), [DMARC](/learn/dmarc), and the public [security report](/security-report) so you can see whether the problem is the signature itself or the surrounding trust chain.

## Further reading inside CyberFurl

- [CyberFurl DKIM lookup](/email-tools/dkim)
- [See the email authentication feature](/features/email-authentication)
- [DMARC](/learn/dmarc)
- [SPF](/learn/spf)
- [ARC](/learn/arc)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 6376: DKIM](https://www.rfc-editor.org/rfc/rfc6376)
- [RFC 8301: DKIM key sizes](https://www.rfc-editor.org/rfc/rfc8301)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## BIMI
Source: https://cyberfurl.com/learn/bimi.md

## What is BIMI

BIMI puts your verified brand logo next to authenticated emails in Gmail and Apple Mail. BIMI sits in the part of the mail flow where identity, sender reputation, and enforcement meet. The details matter because one weak link can undo the work done by the other controls.

If you are already working through [DMARC](/learn/dmarc), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl BIMI validation](/email-tools/bimi) and then use the [email authentication feature](/features/email-authentication) page to see where it fits in the wider CyberFurl workflow.


## BIMI requirements (DMARC quarantine/reject + VMC)

BIMI only works after the sender has already done the harder trust work. In practice that means a strong DMARC posture, usually at `quarantine` or `reject`, and in many mailbox ecosystems a verified mark certificate or equivalent trust proof for the logo.

That is why BIMI is not an entry-level mail control. It sits on top of authentication maturity and turns that maturity into a user-visible brand signal.
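For reference, the assertion record itself is a small TXT record, conventionally published at `default._bimi.<domain>`. A minimal parser, using a hypothetical record value:

```python
def parse_bimi(txt: str) -> dict:
    """Parse a BIMI assertion record into its tags. l= points at the
    SVG Tiny PS logo, a= at the evidence document (e.g. a VMC);
    both are expected to be HTTPS URLs."""
    tags = {}
    for part in txt.split(";"):
        if "=" in part:
            key, value = part.strip().split("=", 1)
            tags[key] = value
    return tags

# Hypothetical record published at default._bimi.example.com:
record = "v=BIMI1; l=https://example.com/brand/logo.svg; a=https://example.com/brand/vmc.pem"
tags = parse_bimi(record)
print(tags["v"], tags["l"].startswith("https://"))  # BIMI1 True
```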

## SVG Tiny PS format

Mailbox providers are strict about the logo format because they need a predictable, safe rendering standard. BIMI logos are typically published as SVG Tiny PS files, not arbitrary marketing assets exported from a design tool.

The operational lesson is simple: branding and mail engineering have to work together here. A visually correct logo can still fail BIMI if the technical file format is wrong.

## VMC vs CMC certificates

A VMC, or Verified Mark Certificate, is the better-known path for proving trademark ownership behind the displayed logo. Some ecosystems also discuss broader certificate approaches, but the core question is the same: who is attesting that the sender has the right to show this mark in the inbox?

For buyers and operators, the certificate step is what makes BIMI a governance project as much as a DNS project.

## BIMI in Gmail, Apple Mail, Yahoo

Support is not uniform across mailbox providers, which is why BIMI should be treated as a compatibility matrix, not a universal inbox feature. Gmail, Apple Mail, and Yahoo have all influenced adoption, but each client has its own enforcement details and display behavior.

That means the right implementation question is not just “did we publish BIMI?” but “where do our users actually see it, and under what conditions?”

## How to publish a BIMI record (HowTo)

A good implementation plan for BIMI starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl BIMI validation](/email-tools/bimi).

<HowToSteps />

## Common BIMI failures

Most BIMI failures trace back to prerequisites, not the BIMI TXT record itself. Weak DMARC posture, bad logo formatting, certificate issues, or provider-specific expectations are more common than a typo in the DNS label.

The safest workflow is to validate prerequisites first, publish second, and then test display behavior in the mailbox ecosystems your users actually rely on.

## ROI of BIMI

BIMI is not mainly about deliverability in the narrow sense. Its strongest value is trust signaling: helping recipients recognize legitimate mail faster and giving security teams another reason to keep authentication controls clean.

For some brands that visual recognition is worth the operational effort. For others, it is only worth doing after DMARC, DKIM, and sender inventory are already under control.

## Tools to check your BIMI

Use the [CyberFurl BIMI validation](/email-tools/bimi) when you want to see the live signal on a real domain, and then step back to the [email authentication feature](/features/email-authentication) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl BIMI validation](/email-tools/bimi)
- [See the email authentication feature](/features/email-authentication)
- [DMARC](/learn/dmarc)
- [CyberFurl public security report](/security-report)

## Standards and references

- [BIMI Group implementation guide](https://bimigroup.org/implementation-guide/)
- [Google BIMI sender requirements](https://support.google.com/a/answer/10911320)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## MTA-STS
Source: https://cyberfurl.com/learn/mta-sts.md

## What is MTA-STS

MTA-STS forces sending mail servers to use TLS when delivering to your domain — stopping downgrade attacks. MTA-STS sits in the part of the mail flow where identity, sender reputation, and enforcement meet. The details matter because one weak link can undo the work done by the other controls.

If you are already working through [DANE](/learn/dane) and [TLS-RPT](/learn/tls-rpt), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl MTA-STS check](/email-tools/mta-sts) and then use the [email authentication feature](/features/email-authentication) page to see where it fits in the wider CyberFurl workflow.

## Downgrade attacks (and why MTA-STS exists)

Without MTA-STS, SMTP transport has historically been vulnerable to downgrade behavior where an attacker or broken path can prevent TLS from being negotiated cleanly. If the sending system accepts that downgrade, mail can still move, but it moves with weaker guarantees than the receiving domain expected.

MTA-STS exists to let the receiving domain publish a stricter policy: if you deliver mail here, you should require TLS and validate the MX hosts you are connecting to.

## The 3 modes: none, testing, enforce

The policy modes are designed for staged rollout. `none` effectively disables enforcement. `testing` publishes the policy while giving operators room to observe failures. `enforce` is the point where senders are expected to treat policy violations as delivery-blocking problems rather than best-effort warnings.

As with DMARC, the operational challenge is not switching the mode. It is making sure the real MX and certificate posture can support the stricter mode first.

## Required DNS + HTTPS policy file

MTA-STS depends on two public signals working together: a DNS record that tells senders a policy exists, and an HTTPS-hosted policy file under `.well-known` that describes the expected mode, MX hosts, and version.

That split is easy to overlook. A valid DNS record without a reachable, correct policy file is not a complete deployment.
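The policy file itself is a short `key: value` document served from `https://mta-sts.<domain>/.well-known/mta-sts.txt`. A minimal parser, shown against a hypothetical policy for `example.com`:

```python
def parse_mta_sts_policy(text: str) -> dict:
    """Parse an MTA-STS policy file (RFC 8461). The mx: key may
    repeat, so collect it into a list."""
    policy = {"mx": []}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, value = line.split(":", 1)
        key, value = key.strip(), value.strip()
        if key == "mx":
            policy["mx"].append(value)
        else:
            policy[key] = value
    return policy

# Hypothetical policy file contents:
policy = parse_mta_sts_policy(
    "version: STSv1\nmode: enforce\nmx: mail1.example.com\nmx: *.example.net\nmax_age: 604800\n"
)
print(policy["mode"], policy["mx"])  # enforce ['mail1.example.com', '*.example.net']
```

A deployment check has to confirm both halves: the DNS TXT record at `_mta-sts.<domain>` and a policy file like this one that actually matches the live MX hosts.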

## How to deploy MTA-STS (HowTo, 5 steps)

A good implementation plan for MTA-STS starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl MTA-STS check](/email-tools/mta-sts).

<HowToSteps />

## MTA-STS vs DANE

MTA-STS and DANE both try to make mail transport harder to downgrade or impersonate, but they do it differently. MTA-STS relies on DNS plus HTTPS-hosted policy. DANE relies on DNSSEC and TLSA records.

Which one is more realistic depends on the infrastructure the domain actually controls. Many teams find MTA-STS easier to adopt first; DANE often demands stronger DNSSEC maturity.

## Common errors

The usual failures are stale policy files, mismatched MX host expectations, certificate issues, or switching to enforcement before the real mail path is clean. The fix is to test the live path exactly as a sender would experience it, not just to confirm that the DNS record exists.

## Tools to check your MTA-STS

Use the [CyberFurl MTA-STS check](/email-tools/mta-sts) when you want to see the live signal on a real domain, and then step back to the [email authentication feature](/features/email-authentication) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl MTA-STS check](/email-tools/mta-sts)
- [See the email authentication feature](/features/email-authentication)
- [DANE](/learn/dane)
- [TLS-RPT](/learn/tls-rpt)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 8461: MTA-STS](https://www.rfc-editor.org/rfc/rfc8461)
- [RFC 8460: SMTP TLS Reporting](https://www.rfc-editor.org/rfc/rfc8460)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## TLS-RPT
Source: https://cyberfurl.com/learn/tls-rpt.md

## What is TLS-RPT

TLS-RPT delivers daily reports about TLS failures on your inbound email — helping you debug MTA-STS and DANE issues. TLS-RPT sits in the part of the mail flow where identity, sender reputation, and enforcement meet. The details matter because one weak link can undo the work done by the other controls.

If you are already working through [MTA-STS](/learn/mta-sts) and [DANE](/learn/dane), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl TLS-RPT check](/email-tools/tls-rpt) and then use the [email authentication feature](/features/email-authentication) page to see where it fits in the wider CyberFurl workflow.

## What's in a TLS-RPT report

A TLS-RPT report summarizes transport failures that senders encountered when trying to deliver mail with expected TLS protections. Depending on the reporter, it can show counts, failure reasons, affected MX hosts, and whether problems were tied to MTA-STS or DANE expectations.

That makes it useful less as a daily inbox item and more as an operational feedback loop for whether stricter transport policy is breaking or being bypassed in the wild.
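Aggregate reports arrive as JSON (RFC 8460). The sketch below tallies the success and failure counts from a trimmed sample shaped like a real report; the numbers and reporter fields are hypothetical:

```python
import json

def summarize_tlsrpt(report_json: str) -> dict:
    """Tally success/failure session counts and failure reasons
    across all policies in an RFC 8460 aggregate report."""
    report = json.loads(report_json)
    ok = fail = 0
    reasons = set()
    for entry in report.get("policies", []):
        summary = entry.get("summary", {})
        ok += summary.get("total-successful-session-count", 0)
        fail += summary.get("total-failure-session-count", 0)
        for detail in entry.get("failure-details", []):
            reasons.add(detail.get("result-type"))
    return {"ok": ok, "fail": fail, "reasons": sorted(reasons)}

# Trimmed sample shaped like an RFC 8460 report:
sample = """{
  "policies": [{
    "summary": {"total-successful-session-count": 511, "total-failure-session-count": 4},
    "failure-details": [{"result-type": "certificate-expired"}]
  }]
}"""
print(summarize_tlsrpt(sample))  # {'ok': 511, 'fail': 4, 'reasons': ['certificate-expired']}
```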

## How TLS-RPT pairs with MTA-STS / DANE

TLS-RPT is the visibility layer that makes stricter transport policies manageable. MTA-STS and DANE tell senders what the receiving domain expects; TLS-RPT tells the receiving domain when those expectations were not met.

Without that feedback loop, teams can publish transport controls and still miss handshake failures, policy mismatches, or certificate problems affecting real senders.

## Setting up the DNS record (HowTo)

TLS-RPT is published as a DNS TXT record that tells reporters where they can send aggregate transport-failure data. The operational part is not the syntax itself, but making sure the reporting destination is monitored by someone who can tie failures back to mail infrastructure changes.
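The record lives at `_smtp._tls.<domain>` and its only operationally important tag is `rua=`, the reporting destination. A minimal extractor, with a hypothetical mailbox address:

```python
def tlsrpt_destinations(txt: str) -> list[str]:
    """Extract reporting destinations from a TLS-RPT TXT record
    published at _smtp._tls.<domain>. rua= may hold a
    comma-separated list of mailto: or https: URIs."""
    for part in txt.split(";"):
        part = part.strip()
        if part.startswith("rua="):
            return [uri.strip() for uri in part[4:].split(",")]
    return []

record = "v=TLSRPTv1; rua=mailto:tls-reports@example.com"
print(tlsrpt_destinations(record))  # ['mailto:tls-reports@example.com']
```

Whatever that destination is, it needs an owner: an unmonitored `rua=` mailbox is the most common way this control silently stops delivering value.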

## Common report patterns

Most reports cluster around predictable issues: certificate mismatch, unreachable policy files, MX host changes, or senders that still cannot satisfy the published policy. Patterns over time matter more than one noisy report because they show whether the issue is isolated or systemic.

## Tools that parse TLS-RPT

Raw reports are useful, but they become far more valuable when teams can aggregate and interpret them alongside MTA-STS, MX, and certificate posture. That is the gap tooling should close: from raw transport telemetry to an actionable explanation of what broke.

## How to fix or implement TLS-RPT

A good implementation plan for TLS-RPT starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl TLS-RPT check](/email-tools/tls-rpt).

<HowToSteps />

## Tools to check your TLS-RPT

Use the [CyberFurl TLS-RPT check](/email-tools/tls-rpt) when you want to see the live signal on a real domain, and then step back to the [email authentication feature](/features/email-authentication) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl TLS-RPT check](/email-tools/tls-rpt)
- [See the email authentication feature](/features/email-authentication)
- [MTA-STS](/learn/mta-sts)
- [DANE](/learn/dane)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 8460: SMTP TLS Reporting](https://www.rfc-editor.org/rfc/rfc8460)
- [RFC 8461: MTA-STS](https://www.rfc-editor.org/rfc/rfc8461)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## DANE
Source: https://cyberfurl.com/learn/dane.md

## What is DANE

DANE uses DNSSEC to pin TLS certificates so attackers can't substitute fake ones. DANE sits in the part of the mail flow where identity, sender reputation, and enforcement meet. The details matter because one weak link can undo the work done by the other controls.

If you are already working through [DNSSEC](/learn/dnssec) and [MTA-STS](/learn/mta-sts), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl DANE lookup](/email-tools/dane) and then use the [email authentication feature](/features/email-authentication) page to see where it fits in the wider CyberFurl workflow.

## TLSA record anatomy

A TLSA record binds a service to expected certificate material through fields that describe usage, selector, and matching type. That gives the receiver a much more explicit trust statement than a general “this certificate chains to a public CA.”

The record only becomes meaningful when the domain also runs DNSSEC correctly, because DANE depends on signed DNS answers rather than unsigned trust hints.

## Usage, Selector, Matching Type fields

These three fields tell a receiver what exactly to compare and how strict the comparison should be. Usage decides the trust model, selector decides which part of the certificate is referenced, and matching type decides whether the comparison uses the full object or a digest.

DANE becomes manageable once teams stop treating these as abstract numbers and start mapping them to the certificate deployment they actually run.
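A small lookup table makes that mapping concrete. This sketch only translates the RFC 6698 field values into words; it does not validate any certificate material:

```python
# Human-readable meanings of the three TLSA fields (RFC 6698).
USAGE = {0: "PKIX-TA", 1: "PKIX-EE", 2: "DANE-TA", 3: "DANE-EE"}
SELECTOR = {0: "full certificate", 1: "SubjectPublicKeyInfo"}
MATCHING = {0: "exact match", 1: "SHA-256 digest", 2: "SHA-512 digest"}

def describe_tlsa(usage: int, selector: int, matching: int) -> str:
    return f"{USAGE[usage]}, compare {SELECTOR[selector]} via {MATCHING[matching]}"

# "3 1 1" is the combination most SMTP deployments publish:
print(describe_tlsa(3, 1, 1))  # DANE-EE, compare SubjectPublicKeyInfo via SHA-256 digest
```

The popularity of `3 1 1` is practical: pinning the public key digest survives certificate renewal as long as the key pair stays the same.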

## DANE for SMTP

SMTP is where DANE has the clearest operational story for many teams. It lets the receiving domain publish TLS expectations for MX delivery without leaning entirely on public CA trust the way web PKI does.

The benefit is tighter control. The cost is that DNSSEC quality and certificate lifecycle discipline now matter much more.

## DANE vs MTA-STS (comparison table)

The comparison only becomes useful when you look at what each side actually changes in the trust chain. Similar names can hide very different enforcement points, and that is usually where implementation mistakes start.

| Topic | What it mainly does | What you should verify |
| --- | --- | --- |
| DANE | Pins expected certificate material in DNSSEC-signed TLSA records | DNSSEC chain health, and whether the TLSA values match the certificates your MX hosts actually serve |
| MTA-STS | Publishes a TLS policy through a DNS TXT record plus an HTTPS-hosted policy file | That the policy file is reachable, the mode is intentional, and the MX list matches reality |
| CyberFurl workflow | Puts both views in one investigation path | Use [CyberFurl DANE lookup](/email-tools/dane) plus the [email authentication feature](/features/email-authentication) to compare them in context |

## Why DANE needs DNSSEC

Without DNSSEC, a TLSA record is just another unsigned DNS answer that an attacker could tamper with. DNSSEC is what gives the receiver confidence that the TLSA data was really published by the domain's authority chain and not rewritten in transit.

## Deployment risks

DANE is powerful, but it is not forgiving. Certificate changes, DNSSEC issues, or wrong TLSA values can create hard delivery problems if the domain publishes strict expectations it cannot maintain cleanly.

## How to fix or implement DANE

A good implementation plan for DANE starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl DANE lookup](/email-tools/dane).

<HowToSteps />

## Tools to check your DANE

Use the [CyberFurl DANE lookup](/email-tools/dane) when you want to see the live signal on a real domain, and then step back to the [email authentication feature](/features/email-authentication) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl DANE lookup](/email-tools/dane)
- [See the email authentication feature](/features/email-authentication)
- [DNSSEC](/learn/dnssec)
- [MTA-STS](/learn/mta-sts)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 6698: DANE TLSA](https://www.rfc-editor.org/rfc/rfc6698)
- [RFC 7672: DANE for SMTP](https://www.rfc-editor.org/rfc/rfc7672)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## ARC
Source: https://cyberfurl.com/learn/arc.md

## Why DMARC breaks on forwarding

Forwarding breaks the assumptions behind SPF, and mailing-list or gateway changes can also invalidate DKIM. By the time the message reaches the next receiver, the authentication results the original sender depended on may no longer hold.

That is why DMARC can fail on legitimate forwarded mail even when the original sender was configured correctly.

## What is ARC

ARC preserves authentication results when email passes through forwarders, mailing lists, and security gateways. ARC sits in the part of the mail flow where identity, sender reputation, and enforcement meet. The details matter because one weak link can undo the work done by the other controls.

If you are already working through [DMARC](/learn/dmarc) and [DKIM](/learn/dkim), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl email authentication audit](/email-tools/email-audit) and then use the [email authentication feature](/features/email-authentication) page to see where it fits in the wider CyberFurl workflow.

## The 3 ARC headers

ARC works by carrying forward an authenticated story about what earlier hops saw. The three headers are `ARC-Authentication-Results`, which records what a system observed, `ARC-Message-Signature`, which signs the relevant content, and `ARC-Seal`, which seals the chain state for the next hop.

Together they do not “fix” forwarding by magic. They make forwarding behavior explainable and assessable to downstream receivers.

## How ARC chains work

Each trusted intermediary adds its own observation and seal, building a chain that later receivers can inspect. A receiver that trusts the intermediary can decide that even if SPF no longer passes at the final hop, the earlier authenticated state is still meaningful enough to consider.
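One structural property is easy to check mechanically: the `i=` instance numbers on the seals must form an unbroken 1..N sequence. The header values below are hypothetical, and this is a sanity check only, not cryptographic validation of the seals:

```python
def arc_chain_ok(seal_headers: list[str]) -> bool:
    """Check that ARC-Seal instance numbers (i=) form an unbroken
    1..N sequence -- a structural sanity check, not a cryptographic
    validation of the seals themselves."""
    instances = []
    for value in seal_headers:
        for part in value.split(";"):
            part = part.strip()
            if part.startswith("i="):
                instances.append(int(part[2:]))
    return sorted(instances) == list(range(1, len(instances) + 1))

# Two hops, each adding its own seal:
seals = ["i=1; a=rsa-sha256; cv=none; ...", "i=2; a=rsa-sha256; cv=pass; ..."]
print(arc_chain_ok(seals))                      # True
print(arc_chain_ok(["i=1; ...", "i=3; ..."]))   # False
```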

## Who actually validates ARC

ARC is only as useful as the receivers and intermediaries that choose to honor it. Large mailbox providers and gateways may inspect ARC, but support and trust decisions are not universal.

That means ARC should be seen as a compatibility and trust-preservation layer, not a guarantee that every forwarding path will be accepted.

## Limitations

ARC preserves trust context; it does not create trust where none existed. If the intermediaries are weak, if the chain is broken, or if receivers do not trust the chain, ARC cannot rescue the message on its own.

## How to fix or implement ARC

A good implementation plan for ARC starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl email authentication audit](/email-tools/email-audit).

<HowToSteps />

## Tools to check your ARC

Use the [CyberFurl email authentication audit](/email-tools/email-audit) when you want to see the live signal on a real domain, and then step back to the [email authentication feature](/features/email-authentication) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl email authentication audit](/email-tools/email-audit)
- [See the email authentication feature](/features/email-authentication)
- [DMARC](/learn/dmarc)
- [DKIM](/learn/dkim)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 8617: ARC](https://www.rfc-editor.org/rfc/rfc8617)
- [Google ARC overview](https://support.google.com/a/answer/10234742)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Email Spoofing
Source: https://cyberfurl.com/learn/email-spoofing.md

## What is email spoofing

Email spoofing forges the `From` address to impersonate trusted senders. Spoofing sits in the part of the mail flow where identity, sender reputation, and enforcement meet. The details matter because one weak link can undo the work done by the other controls.

If you are already working through [DMARC](/learn/dmarc) and [SPF](/learn/spf), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl public security report](/security-report) and then use the [email authentication feature](/features/email-authentication) page to see where it fits in the wider CyberFurl workflow.

## Common spoofing techniques (envelope vs header spoofing, lookalikes)

Spoofing usually starts with one of two tricks: lying about the sending identity inside the message itself, or using a deceptive but different domain that looks close enough to fool a human reader. Header spoofing, envelope spoofing, and lookalike domains each exploit a different layer of trust.

That is why defenses have to combine domain controls with user awareness. The attack surface is not only the protocol, but also the way people visually interpret sender identity.
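Lookalike detection is one place tooling can help. The sketch below folds a handful of common character substitutions into a comparable "skeleton"; real detectors use the full Unicode confusables tables plus edit-distance checks, and the domain names here are illustrative:

```python
# A tiny confusable-substitution map; real detectors use much larger
# Unicode confusables tables and edit-distance checks.
CONFUSABLES = {"0": "o", "1": "l", "3": "e", "5": "s", "rn": "m", "vv": "w"}

def skeleton(domain: str) -> str:
    """Reduce a domain to a comparable skeleton by folding
    common lookalike substitutions."""
    s = domain.lower()
    for fake, real in CONFUSABLES.items():
        s = s.replace(fake, real)
    return s

def looks_like(candidate: str, trusted: str) -> bool:
    """True when a different domain collapses to the same skeleton."""
    return candidate != trusted and skeleton(candidate) == skeleton(trusted)

print(looks_like("examp1e.com", "example.com"))  # True
print(looks_like("partner.org", "example.com"))  # False
```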

## Real-world cases (Twitter, Crypto firms)

Well-known spoofing incidents show the same pattern repeatedly: a trusted name, a familiar-looking sender, and a workflow that depends on speed rather than careful inspection. Whether the lure is a brand, an executive, or a crypto platform, the damage usually comes from abusing existing trust rather than inventing a new exploit chain.

## BEC and CEO fraud

Business email compromise is the commercial form of spoofing that keeps working because the messages are simple, credible, and timed around routine requests. A spoofed finance escalation or executive request often succeeds not because the attacker beat a sophisticated filter, but because the message looked normal enough to get human compliance.

## How SPF/DKIM/DMARC stop spoofing

These controls do not remove deception from email, but they make unauthenticated impersonation harder. SPF restricts which infrastructure can send, DKIM protects message integrity, and DMARC tells receivers whether the authenticated identity matches the visible sender. Together they raise the cost of direct domain spoofing.

## What end users should look for

Users still need cues beyond the display name. Suspicious domain spelling, unexpected urgency, payment or credential requests, odd reply-to behavior, and messages that bypass normal process are still some of the strongest signals that a spoofed message made it through.

## How to defend against email spoofing

A good anti-spoofing plan starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl public security report](/security-report).

<HowToSteps />

## Tools to check your spoofing exposure

Use the [CyberFurl public security report](/security-report) when you want to see the live signal on a real domain, and then step back to the [email authentication feature](/features/email-authentication) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl public security report](/security-report)
- [See the email authentication feature](/features/email-authentication)
- [DMARC](/learn/dmarc)
- [SPF](/learn/spf)
- [DKIM](/learn/dkim)
- [Phishing](/learn/phishing)

## Standards and references

- [CISA: Email security best practices](https://www.cisa.gov/secure-our-world/use-strong-passwords)
- [RFC 7489: DMARC](https://www.rfc-editor.org/rfc/rfc7489)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Phishing
Source: https://cyberfurl.com/learn/phishing.md

## What is phishing

Phishing tricks people into giving up credentials or money via fake emails, links, or sites. Phishing sits in the part of the mail flow where identity, sender reputation, and enforcement meet. The details matter because one weak link can undo the work done by the other controls.

If you are already working through [Email Spoofing](/learn/email-spoofing) and [DMARC](/learn/dmarc), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl public security report](/security-report) and then use the [email authentication feature](/features/email-authentication) page to see where it fits in the wider CyberFurl workflow.

## Types: spear phishing, whaling, smishing, vishing, clone phishing, business email compromise

Phishing is not one technique. Spear phishing is targeted, whaling focuses on senior leaders, smishing moves the lure to SMS, vishing uses voice, clone phishing copies a legitimate message pattern, and business email compromise turns trust in routine workflows into payment or credential theft.

The common thread is not the delivery channel. It is the attacker’s attempt to borrow legitimacy from a brand, a colleague, or a process the victim already trusts.

## How phishing attacks unfold (kill chain)

Most phishing campaigns follow a predictable sequence: reconnaissance, lure creation, delivery, interaction, credential capture or malware execution, and then post-compromise actions such as mailbox access, MFA fatigue, or internal escalation. The email is only the front door.

That is why strong response playbooks look past the clicked link itself and ask what access the attacker gained next.

## Real examples

The useful lesson from real phishing cases is rarely the brand name alone. It is usually the operational weakness exposed by the campaign: weak sender controls, over-trusted identity flows, poor user reporting, or gaps in post-click containment.

## Red flags to spot

Unexpected urgency, credential prompts, unusual payment requests, domain lookalikes, mismatched reply-to addresses, and links that do not fit the visible context remain some of the most reliable warning signs. A polished design is not evidence of legitimacy.
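To make the lookalike signal concrete, here is a minimal sketch of how a reviewer might normalize common character swaps before comparing a sender domain against trusted names. The brand list and substitution table are invented for illustration, not a CyberFurl API or a complete confusables list:

```python
TRUSTED = {"example.com", "examplebank.com"}  # hypothetical brand domains

def normalize(domain: str) -> str:
    # Collapse visual substitutions attackers use in lookalike registrations.
    d = domain.lower()
    for bad, good in (("rn", "m"), ("0", "o"), ("1", "l"), ("3", "e"), ("5", "s")):
        d = d.replace(bad, good)
    return d

def is_lookalike(sender_domain: str) -> bool:
    # Lookalike = normalizes to a trusted brand but is not the brand itself.
    d = normalize(sender_domain)
    return d in {normalize(t) for t in TRUSTED} and sender_domain.lower() not in TRUSTED

print(is_lookalike("examp1e.com"))   # → True  (digit "1" standing in for "l")
print(is_lookalike("example.com"))   # → False (the real domain)
```

Real filters layer this kind of heuristic with sender authentication results rather than relying on string tricks alone.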

## 12 anti-phishing controls (technical + human)

The strongest anti-phishing posture combines mail authentication, secure email filtering, MFA, browser isolation or link protections where appropriate, user reporting paths, incident drills, and fast post-click response. No single layer carries the whole burden because phishing succeeds by looking normal enough to slip past the layer you relied on most.

## What to do if you clicked a phishing link

Treat the click as the start of the incident, not the whole incident. Reset credentials if needed, review active sessions, investigate mailbox or endpoint activity, check whether MFA was challenged, and preserve enough evidence to see whether the attacker went further than the initial lure.

## How to reduce your phishing risk

A good anti-phishing plan starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, and DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from the [CyberFurl public security report](/security-report).

<HowToSteps />

## Tools to check your phishing exposure

Use the [CyberFurl public security report](/security-report) when you want to see the live signal on a real domain, and then step back to the [email authentication feature](/features/email-authentication) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl public security report](/security-report)
- [Email authentication feature](/features/email-authentication)
- [Email Spoofing](/learn/email-spoofing)
- [DMARC](/learn/dmarc)

## Standards and references

- [CISA phishing guidance](https://www.cisa.gov/secure-our-world/recognize-and-report-phishing)
- [NIST phishing-resistant MFA overview](https://pages.nist.gov/800-63-4/sp800-63b.html)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## DNSSEC
Source: https://cyberfurl.com/learn/dnssec.md

## What is DNSSEC

DNSSEC stops attackers from forging DNS answers with cryptographic signatures. DNSSEC sits close to the public DNS layer that resolvers, browsers, inbox providers, and attackers all see. That makes configuration quality and change control just as important as the underlying standard itself.

If you are already working through [Zone Walking](/learn/zone-walking) and [Cache Poisoning](/learn/cache-poisoning), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl DNSSEC check](/dns-tools/dnssec) and then use the [DNS posture feature](/features/dns-posture) page to see where it fits in the wider CyberFurl workflow.

## Why plain DNS is insecure (cache poisoning)

Plain DNS was designed for speed and reach, not for proving authenticity. If a resolver accepts a forged answer or caches bad data, users can be redirected without realizing anything is wrong. That is the class of problem DNSSEC was built to reduce.

## DNSSEC trust chain

DNSSEC works by chaining trust from the DNS root downward through signed delegations. A validating resolver checks signatures and delegation records to confirm that the answer it received was published by the right authority chain and was not altered in transit.

## Record types: DNSKEY, DS, RRSIG, NSEC/NSEC3

Each record type serves a different role in that chain. `DNSKEY` publishes the zone keys, `DS` links a child zone to its parent, `RRSIG` carries signatures, and `NSEC` or `NSEC3` proves non-existence. DNSSEC only makes sense once you understand how those records work together rather than as isolated labels in a zone file.
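As a rough sketch of how those records connect: the digest in a parent's `DS` record is a hash over the child's owner name in DNS wire format followed by the `DNSKEY` RDATA, and the key tag is a checksum over the same RDATA (both per RFC 4034). The key bytes below are toy data, not a real key:

```python
import hashlib
import struct

def name_to_wire(name: str) -> bytes:
    # DNS wire format: length-prefixed labels, terminated by a zero byte.
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.lower().encode("ascii")
    return out + b"\x00"

def ds_digest(owner: str, flags: int, protocol: int, algorithm: int, pubkey: bytes) -> str:
    # DS digest type 2: SHA-256(owner name wire format || DNSKEY RDATA).
    rdata = struct.pack("!HBB", flags, protocol, algorithm) + pubkey
    return hashlib.sha256(name_to_wire(owner) + rdata).hexdigest().upper()

def key_tag(flags: int, protocol: int, algorithm: int, pubkey: bytes) -> int:
    # RFC 4034 Appendix B: sum the RDATA as 16-bit big-endian words.
    rdata = struct.pack("!HBB", flags, protocol, algorithm) + pubkey
    acc = 0
    for i, b in enumerate(rdata):
        acc += b << 8 if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

toy_key = bytes(range(32))                       # placeholder key material
print(key_tag(257, 3, 13, toy_key))              # 257 = KSK flags, 13 = ECDSAP256SHA256
print(ds_digest("example.com", 257, 3, 13, toy_key)[:16], "...")
```

The operational takeaway: if the published `DS` digest and the live `DNSKEY` stop matching (for example after a key roll), validation breaks for the whole zone.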

## KSK vs ZSK

The key-signing key and zone-signing key split exists to make long-term trust and routine signing operationally manageable. The KSK anchors trust more carefully and changes less often; the ZSK handles everyday zone signing and rotates more readily.

That separation reduces risk, but it also means key-roll workflows have to be documented and tested instead of improvised.

## How to enable DNSSEC (HowTo per registrar)

Enabling DNSSEC safely is usually a registrar-plus-authoritative-DNS workflow, not a one-click magic feature. The domain needs signed zone data, the parent needs the correct DS record, and the resolver path has to validate without hitting stale or mismatched data.

<HowToSteps />

## Common DNSSEC mistakes

The most damaging DNSSEC mistakes are mismatched DS records, failed key rollovers, partial provider changes, and assuming “signed” automatically means “healthy.” A broken DNSSEC deployment can create harder outages than an unsigned zone if nobody is watching validation results.

## DNSSEC validation in browsers/resolvers

Browsers usually rely on validating resolvers rather than validating every DNS answer themselves. That means the practical place to watch DNSSEC health is often in the resolver and public lookup path, not in the browser UI alone.

## Tools to check your DNSSEC

Use the [CyberFurl DNSSEC check](/dns-tools/dnssec) when you want to see the live signal on a real domain, and then step back to the [DNS posture feature](/features/dns-posture) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl DNSSEC check](/dns-tools/dnssec)
- [DNS posture feature](/features/dns-posture)
- [Zone Walking](/learn/zone-walking)
- [Cache Poisoning](/learn/cache-poisoning)
- [DANE](/learn/dane)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 9364: DNS Security Extensions (DNSSEC)](https://www.rfc-editor.org/rfc/rfc9364)
- [Cloudflare DNSSEC primer](https://www.cloudflare.com/learning/dns/dnssec/)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Zone Walking
Source: https://cyberfurl.com/learn/zone-walking.md

## What is zone walking

Zone walking abuses DNSSEC's NSEC records to enumerate every subdomain in a zone. It sits close to the public DNS layer that resolvers, browsers, inbox providers, and attackers all see. That makes configuration quality and change control just as important as the underlying standard itself.

If you are already working through [DNSSEC](/learn/dnssec) and [Subdomain Takeover](/learn/subdomain-takeover), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl DNS exposure checks](/dns-tools/zone-transfer) and then use the [DNS posture feature](/features/dns-posture) page to see where it fits in the wider CyberFurl workflow.

## How NSEC enables it

Classic NSEC records prove that a name does not exist by pointing to the next valid name in canonical order. That is operationally elegant, but it also leaks structure: if you can keep following those links, you can often learn which labels really exist in the zone.
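A toy illustration with invented zone data shows why: each NSEC answer points at the next existing name in canonical order, so following the pointers until the chain wraps back to the apex enumerates the zone:

```python
# Hypothetical NSEC "next name" pointers for a signed zone (invented data).
NSEC_NEXT = {
    "example.com": "api.example.com",
    "api.example.com": "staging.example.com",
    "staging.example.com": "vpn.example.com",
    "vpn.example.com": "example.com",   # last record wraps back to the apex
}

def walk(apex: str) -> list[str]:
    # Follow the chain of "next name" pointers until it returns to the start.
    names, current = [], apex
    while True:
        names.append(current)
        current = NSEC_NEXT[current]
        if current == apex:
            return names

print(walk("example.com"))
# → ['example.com', 'api.example.com', 'staging.example.com', 'vpn.example.com']
```

In a real attack the walker learns each pointer by querying for names it knows do not exist and reading the NSEC record in the denial response.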

## NSEC3 with hashing and salt

NSEC3 was introduced to make this harder by hashing names before publishing denial-of-existence records. The salt and iterations raise the cost of simple walking, but they do not remove the risk when names are predictable or when attackers can do useful offline guesses.
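For intuition, here is a sketch of the RFC 5155 hashing scheme: iterated SHA-1 over the wire-format name plus salt, then Base32hex encoding. The salt and iteration counts below are arbitrary examples:

```python
import base64
import hashlib

# Translate Python's standard Base32 alphabet to the Base32hex alphabet NSEC3 uses.
B32_TO_B32HEX = str.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
                              "0123456789ABCDEFGHIJKLMNOPQRSTUV")

def name_to_wire(name: str) -> bytes:
    # Canonical (lowercased) DNS wire format for the hashed owner name.
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.lower().encode("ascii")
    return out + b"\x00"

def nsec3_hash(name: str, salt: bytes, iterations: int) -> str:
    # RFC 5155: H(x) = SHA-1(x || salt), applied "iterations" extra times.
    digest = hashlib.sha1(name_to_wire(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).decode().translate(B32_TO_B32HEX)

# Higher iteration counts change the hash and raise the attacker's cost per guess.
print(nsec3_hash("example.com", bytes.fromhex("aabb"), 0))
print(nsec3_hash("example.com", bytes.fromhex("aabb"), 10))
```

Note the asymmetry: the defender pays the iteration cost on every response, while an attacker with a good wordlist pays it only per candidate name, which is why iterations alone cannot protect predictable naming schemes.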

## NSEC3 walking attacks

Even with NSEC3, attackers can still recover meaningful inventory from a zone if the naming scheme is guessable enough. That is why teams should think of NSEC3 as a mitigation that raises the cost of enumeration, not as a guarantee that inventory cannot leak.

## Defenses: NSEC3 opt-out, white lies (RFC 4470, 7129)

NSEC3 opt-out and related defensive settings try to reduce how much structured information the zone reveals while still supporting signed responses. The right choice depends on how much operational complexity the team can absorb and how much inventory secrecy they actually need.

## Tools attackers use

Attackers and researchers use standard DNS tooling, custom walkers, and offline analysis to turn NSEC or NSEC3 responses into zone insight. That is why this topic belongs next to [DNSSEC](/learn/dnssec) and [Subdomain Takeover](/learn/subdomain-takeover), not in a purely theoretical DNS discussion.

## How to defend against zone walking

A good zone-walking defense plan starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, and DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from the [CyberFurl DNS exposure checks](/dns-tools/zone-transfer).

<HowToSteps />

## Tools to check your zone-walking exposure

Use the [CyberFurl DNS exposure checks](/dns-tools/zone-transfer) when you want to see the live signal on a real domain, and then step back to the [DNS posture feature](/features/dns-posture) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl DNS exposure checks](/dns-tools/zone-transfer)
- [DNS posture feature](/features/dns-posture)
- [DNSSEC](/learn/dnssec)
- [Subdomain Takeover](/learn/subdomain-takeover)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 5155: NSEC3](https://www.rfc-editor.org/rfc/rfc5155)
- [RFC 4470: Minimally Covering NSEC Records](https://www.rfc-editor.org/rfc/rfc4470)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Cache Poisoning
Source: https://cyberfurl.com/learn/cache-poisoning.md

## What is cache poisoning

DNS cache poisoning injects fake records into resolver caches, sending users to attacker-controlled sites. It sits close to the public DNS layer that resolvers, browsers, inbox providers, and attackers all see. That makes configuration quality and change control just as important as the underlying standard itself.

If you are already working through [DNSSEC](/learn/dnssec) and [DNS Hijacking](/learn/dns-hijacking), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl DNS caching checks](/dns-tools/dns-caching) and then use the [DNS posture feature](/features/dns-posture) page to see where it fits in the wider CyberFurl workflow.

## The Kaminsky attack (2008)

The Kaminsky attack made resolver poisoning impossible to dismiss as an academic edge case. By forcing many concurrent guesses against outstanding resolver queries, it showed how weak transaction randomness could let attackers inject bad records into cache at meaningful scale.

## SAD DNS (2020)

SAD DNS revived the conversation by showing that even after older hardening steps, side channels could still erode source-port protections. The broader lesson was that resolver hardening is not a single patch; it is a moving target that depends on both protocol design and implementation details.

## How transaction IDs and source ports help

Resolvers defend themselves partly by making queries harder to predict. Randomized transaction IDs and randomized source ports raise the guess space attackers have to hit before a forged response is accepted as legitimate.

## 0x20 randomization

0x20 encoding adds another small unpredictability layer by varying the case of letters in the query name and expecting the response to preserve it. On its own it is not enough, but layered with ID and port randomization it increases attacker cost.
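A back-of-the-envelope sketch shows why layering matters: each randomized field multiplies the space a blind spoofer has to guess before a forged response is accepted. The query name below is hypothetical and the port figure is an upper bound, since real ephemeral ranges are smaller:

```python
# Rough guess-space arithmetic for a blind off-path spoofing attempt.
txid_space = 2 ** 16          # 16-bit DNS transaction ID
port_space = 2 ** 16          # randomized source port (upper bound)

qname = "mail.example.com"    # hypothetical query name
# 0x20 randomization: each alphabetic character can be upper- or lowercase,
# and the response must echo the exact casing back.
case_space = 2 ** sum(c.isalpha() for c in qname)

total = txid_space * port_space * case_space
print(f"{total:.3e} combinations")   # 2^46 for this 14-letter name
```

None of these fields is cryptographic on its own; together they only make blind forgery expensive, which is why the section below still points back to DNSSEC for actual authenticity.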

## Why DNSSEC is the long-term fix

Randomness helps, but DNSSEC addresses the core authenticity problem by making answers verifiable instead of merely hard to guess. That is why cache-poisoning discussions eventually lead back to whether the zone and the resolver path support signed validation correctly.

## DoT/DoH role

DoT and DoH protect DNS traffic in transit from some on-path observation and interference risks, but they do not replace DNSSEC's authenticity model. They solve a different part of the trust story.

## How to defend against cache poisoning

A good cache-poisoning defense plan starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, and DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from the [CyberFurl DNS caching checks](/dns-tools/dns-caching).

<HowToSteps />

## Tools to check your cache-poisoning exposure

Use the [CyberFurl DNS caching checks](/dns-tools/dns-caching) when you want to see the live signal on a real domain, and then step back to the [DNS posture feature](/features/dns-posture) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl DNS caching checks](/dns-tools/dns-caching)
- [DNS posture feature](/features/dns-posture)
- [DNSSEC](/learn/dnssec)
- [DNS Hijacking](/learn/dns-hijacking)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 5452: Measures for Making DNS More Resilient against Forged Answers](https://www.rfc-editor.org/rfc/rfc5452)
- [SAD DNS research summary](https://saddns.net/)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## DNS Hijacking
Source: https://cyberfurl.com/learn/dns-hijacking.md

## What is DNS hijacking

DNS hijacking redirects domain traffic to attacker-controlled servers via registrar takeover, router malware, or rogue resolvers. It sits close to the public DNS layer that resolvers, browsers, inbox providers, and attackers all see. That makes configuration quality and change control just as important as the underlying standard itself.

If you are already working through [DNSSEC](/learn/dnssec) and [Cache Poisoning](/learn/cache-poisoning), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl public security report](/security-report) and then use the [DNS posture feature](/features/dns-posture) page to see where it fits in the wider CyberFurl workflow.

## 4 types: registrar, local, router, ISP-level

DNS hijacking is not one path. It can start at the registrar, on the endpoint, inside the router, or at the provider level that answers DNS queries for the user. Each layer changes who the attacker has to compromise and how visible the damage is to defenders.

## Real cases: Sea Turtle, DNSpionage, MyEtherWallet

The best-known hijacking cases matter because they show how damaging DNS manipulation is when it hits the right brand or infrastructure target. Whether the objective is credential theft, surveillance, or cryptocurrency theft, the technique works because users still trust the name they typed.

## How to detect

Detection starts with comparing what the domain should be publishing to what the public internet actually sees: nameserver changes, registrar events, certificate surprises, unusual redirects, and user-path inconsistency across resolvers.

<HowToSteps />

## Defenses: registrar lock, 2FA, DNSSEC, NS monitoring

The strongest defenses sit at different layers. Registrar lock and MFA protect the control plane. DNSSEC hardens authenticity. Nameserver and record monitoring catch drift quickly. Good posture comes from using those controls together, not from assuming one of them is enough.

## Tools to check for DNS hijacking

Use the [CyberFurl public security report](/security-report) when you want to see the live signal on a real domain, and then step back to the [DNS posture feature](/features/dns-posture) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl public security report](/security-report)
- [DNS posture feature](/features/dns-posture)
- [DNSSEC](/learn/dnssec)
- [Cache Poisoning](/learn/cache-poisoning)
- [NS Drift](/learn/ns-drift)

## Standards and references

- [CISA DNS security guidance](https://www.cisa.gov/news-events/news/domain-name-system-dns-security-enterprise)
- [ICANN registrar security basics](https://www.icann.org/resources/pages/dns-security-2012-02-25-en)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## NS Drift
Source: https://cyberfurl.com/learn/ns-drift.md

## What is NS drift

NS drift is when your domain's authoritative nameservers change unexpectedly, a strong signal of compromise or misconfiguration. It sits close to the public DNS layer that resolvers, browsers, inbox providers, and attackers all see. That makes configuration quality and change control just as important as the underlying standard itself.

If you are already working through [DNS Hijacking](/learn/dns-hijacking) and [DNSSEC](/learn/dnssec), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with [CyberFurl monitoring](/monitoring) and then use the [DNS posture feature](/features/dns-posture) page to see where it fits in the wider CyberFurl workflow.

## Why NS changes are dangerous

Authoritative nameservers are not just another record. They define who gets to answer for the domain at all. That is why unexpected NS changes are such a strong signal: they can mean migration, misconfiguration, or a real compromise of the DNS control plane.

## Common causes (legitimate vs malicious)

Some NS changes are legitimate, especially during registrar moves, DNS provider changes, or multi-vendor consolidation. The problem is that malicious changes can look operational at first glance, which is why change history and ownership context matter so much.

## Real cases

The useful lesson from NS-drift incidents is not only that nameserver changes can be abused, but that teams often notice them too late because nobody was watching delegation state continuously.

## How to monitor NS records continuously

NS monitoring should be treated as a baseline control, not a luxury. The question is not whether nameservers ever change, but whether the change was expected, documented, and validated before customers or attackers discovered it first.
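A minimal sketch of that baseline check, assuming a hypothetical expected delegation set (the nameserver hostnames are invented examples):

```python
# The delegation you believe you published; in practice this comes from your
# change-management records, not from memory.
EXPECTED_NS = {"ns1.dnsprovider.example", "ns2.dnsprovider.example"}

def ns_drift(observed: set[str]) -> dict[str, set[str]]:
    # Compare the live delegation against the documented baseline.
    return {
        "unexpected": observed - EXPECTED_NS,  # possible hijack or unplanned move
        "missing": EXPECTED_NS - observed,     # partial migration or lame delegation
    }

print(ns_drift({"ns1.dnsprovider.example", "ns1.attacker.example"}))
```

The interesting engineering work is not the set comparison but where `observed` comes from: it should be queried from multiple public vantage points, since a hijack can look different depending on which resolver you ask.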

<HowToSteps />

## Registrar lock + 2FA

These controls reduce the chance that an attacker can make unauthorized control-plane changes. They do not replace monitoring, but they raise the bar enough that surprise NS movement becomes less likely and more suspicious when it happens anyway.

## Tools to check your NS drift

Use [CyberFurl monitoring](/monitoring) when you want to see the live signal on a real domain, and then step back to the [DNS posture feature](/features/dns-posture) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl monitoring](/monitoring)
- [DNS posture feature](/features/dns-posture)
- [DNS Hijacking](/learn/dns-hijacking)
- [DNSSEC](/learn/dnssec)
- [CyberFurl public security report](/security-report)

## Standards and references

- [ICANN nameserver basics](https://www.icann.org/resources/pages/what-is-a-domain-name-2017-07-28-en)
- [CISA DNS security guidance](https://www.cisa.gov/news-events/news/domain-name-system-dns-security-enterprise)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Dangling CNAME
Source: https://cyberfurl.com/learn/dangling-cname.md

## What is a dangling CNAME

A dangling CNAME points to a service you no longer control, letting attackers claim it and hijack your subdomain. It sits close to the public DNS layer that resolvers, browsers, inbox providers, and attackers all see. That makes configuration quality and change control just as important as the underlying standard itself.

If you are already working through [Subdomain Takeover](/learn/subdomain-takeover), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl subdomain review](/domain-scan/subdomains) and then use the [DNS posture feature](/features/dns-posture) page to see where it fits in the wider CyberFurl workflow.

## How attackers exploit it

A dangling CNAME points at a service endpoint the domain no longer controls. If that external service namespace can be re-claimed, an attacker can stand up content under the old target and effectively take over the subdomain without touching the main registrar account.

## Affected providers (S3, Azure, Heroku, GitHub Pages, Shopify, etc.)

This risk shows up anywhere a third-party platform lets customers bind subdomains and later release them. Cloud storage, app platforms, page hosting, commerce tooling, and similar services have all produced real takeover cases over the years.

## Real takeover cases

The reason this issue keeps paying bug bounties is simple: the subdomain still carries the brand's trust. When an abandoned mapping is reclaimed, the attacker inherits a legitimate-looking hostname without having to spoof it.

## How to detect dangling CNAMEs

Detection starts with inventory and verification. You need to know which subdomains point to external platforms and whether those platforms still recognize the binding. Static DNS review alone is not enough if the application side has already been deprovisioned.
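As a sketch of the triage step, the fingerprint list and CNAME inventory below are invented examples, not a complete provider matrix (the can-i-take-over-xyz project linked below tracks the real one):

```python
# Hypothetical suffixes for platforms where released names can be re-claimed.
RECLAIMABLE_SUFFIXES = (".s3.amazonaws.com", ".herokuapp.com", ".github.io")

# Hypothetical inventory: subdomain -> CNAME target.
CNAMES = {
    "shop.example.com": "old-store.herokuapp.com",
    "www.example.com": "cdn.fastly.net",
}

def candidates(cnames: dict[str, str]) -> list[str]:
    # A target on a reclaimable platform is only a *candidate*; you still have
    # to confirm the platform no longer recognizes the binding.
    return [sub for sub, target in cnames.items()
            if target.endswith(RECLAIMABLE_SUFFIXES)]

print(candidates(CNAMES))   # → ['shop.example.com']
```

The second half of detection, checking whether the platform still serves the binding or returns its "unclaimed" error page, is provider-specific and is where static DNS review stops being enough.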

<HowToSteps />

## Remediation playbook

The cleanest fixes are to remove the record, re-claim the service intentionally, or replace the target with a provider you still control. The dangerous habit is leaving “temporary” CNAMEs in place after the owning team has moved on.

<HowToSteps />

## Tools to check for dangling CNAMEs

Use the [CyberFurl subdomain review](/domain-scan/subdomains) when you want to see the live signal on a real domain, and then step back to the [DNS posture feature](/features/dns-posture) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl subdomain review](/domain-scan/subdomains)
- [DNS posture feature](/features/dns-posture)
- [Subdomain Takeover](/learn/subdomain-takeover)
- [CyberFurl public security report](/security-report)

## Standards and references

- [OWASP subdomain takeover overview](https://owasp.org/www-community/attacks/Subdomain_takeover)
- [can-i-take-over-xyz reference matrix](https://github.com/EdOverflow/can-i-take-over-xyz)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## DNS Tunneling
Source: https://cyberfurl.com/learn/dns-tunneling.md

## What is DNS tunneling

DNS tunneling encodes data inside DNS queries to bypass firewalls, a channel used for C2 traffic and exfiltration. It sits close to the public DNS layer that resolvers, browsers, inbox providers, and attackers all see. That makes configuration quality and change control just as important as the underlying standard itself.

If you are already working through [Cache Poisoning](/learn/cache-poisoning), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl public security report](/security-report) and then use the [DNS posture feature](/features/dns-posture) page to see where it fits in the wider CyberFurl workflow.

## Why DNS is abused (rarely blocked)

DNS is attractive to attackers because it is one of the few protocols many environments still allow almost everywhere. If defenders are not watching query patterns closely, DNS can become a covert path for command traffic or data movement without looking like classic malware egress.

## Common tools (iodine, dnscat2, DNSExfiltrator)

Well-known tunneling tools show how mature the technique already is. They encode data into labels, queries, or response patterns and rely on the fact that many networks still treat DNS as trusted background traffic.

## Real-world cases (DarkHydrus, OilRig)

Campaigns linked to espionage and long-dwell intrusion sets keep returning to DNS tunneling because the channel is flexible and blends into necessary infrastructure. The pattern matters more than the campaign names: attackers use whatever trusted channel defenders monitor the least.

## Detection: query rate, entropy, length, NXDOMAIN spikes

Useful detection starts with behavior rather than signatures alone. Unusually long labels, high-entropy subdomains, odd request volume, repetitive TXT use, or NXDOMAIN-heavy patterns can all point to abuse when they do not fit the domain's normal profile.
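Those behavioral signals are easy to prototype. The sketch below flags long, high-entropy leftmost labels; the thresholds are illustrative, not tuned, and a real detector would baseline them per environment:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    # Bits per character; encoded payloads score higher than dictionary words.
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_tunneled(qname: str, max_label: int = 30, max_entropy: float = 3.8) -> bool:
    # Tunnels typically pack payload into the leftmost label.
    label = qname.split(".")[0]
    return len(label) > max_label and shannon_entropy(label) > max_entropy

print(looks_tunneled("www.example.com"))                                    # → False
print(looks_tunneled("abcdefghijklmnopqrstuvwxyz0123456789.t.example.com"))  # → True
```

Per-query checks like this are only one input; query rate, NXDOMAIN ratio, and TXT-record volume per source host usually separate tunnels from noisy-but-legitimate services more reliably.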

## Defenses

Good defenses combine resolver logging, egress controls, anomaly detection, and a willingness to question whether every endpoint really needs unrestricted external DNS resolution. If teams only look at destination reputation, they will miss many tunneling patterns.

## How to defend against DNS tunneling

A good DNS-tunneling defense plan starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, and DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from the [CyberFurl public security report](/security-report).

<HowToSteps />

## Tools to check for DNS tunneling

Use the [CyberFurl public security report](/security-report) when you want to see the live signal on a real domain, and then step back to the [DNS posture feature](/features/dns-posture) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl public security report](/security-report)
- [DNS posture feature](/features/dns-posture)
- [Cache Poisoning](/learn/cache-poisoning)

## Standards and references

- [MITRE ATT&CK T1071.004 (Application Layer Protocol: DNS)](https://attack.mitre.org/techniques/T1071/004/)
- [CISA DNS exfiltration guidance](https://www.cisa.gov/news-events/news/securing-domain-name-system-dns)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## CAA Records
Source: https://cyberfurl.com/learn/caa-records.md

## What is a CAA record

CAA DNS records tell certificate authorities which CAs are allowed to issue TLS certificates for your domain, blocking rogue issuance. They sit close to the public DNS layer that resolvers, browsers, inbox providers, and attackers all see. That makes configuration quality and change control just as important as the underlying standard itself.

If you are already working through [SSL / TLS](/learn/ssl-tls) and [Certificate Transparency](/learn/certificate-transparency), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl SSL and certificate checks](/infrastructure/ssltest) and then use the [DNS posture feature](/features/dns-posture) page to see where it fits in the wider CyberFurl workflow.

## Syntax (flags, tag, value)

CAA syntax looks simple because it is simple: a flag field, a tag such as `issue` or `issuewild`, and a value naming which CA is allowed. The danger is assuming simplicity means there is nothing operational to get wrong.

## Tags: issue, issuewild, iodef

`issue` controls ordinary issuance, `issuewild` narrows or expands wildcard behavior, and `iodef` gives CAs a place to send incident or policy feedback. Teams should treat those tags as issuance-governance controls, not as decorative DNS extras.
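A simplified sketch of how a CA evaluates that policy. It ignores the tree-climbing to parent zones that real CAA processing performs, and the records are example values, not a recommendation for which CA to allow:

```python
# Example CAA record set as (flags, tag, value) tuples.
CAA = [
    (0, "issue", "letsencrypt.org"),
    (0, "issuewild", ";"),                       # ";" forbids wildcard issuance
    (0, "iodef", "mailto:security@example.com"),
]

def ca_allowed(ca: str, wildcard: bool) -> bool:
    tag = "issuewild" if wildcard else "issue"
    relevant = [v for _, t, v in CAA if t == tag]
    if not relevant and wildcard:
        # RFC 8659: with no issuewild records, wildcard requests fall back
        # to the issue set.
        relevant = [v for _, t, v in CAA if t == "issue"]
    return ca in relevant

print(ca_allowed("letsencrypt.org", wildcard=False))  # → True
print(ca_allowed("digicert.com", wildcard=False))     # → False
print(ca_allowed("letsencrypt.org", wildcard=True))   # → False: ";" blocks wildcards
```

The `issuewild ";"` line is the kind of detail worth reviewing deliberately: it silently blocks wildcard certificates even for the CA your `issue` records allow.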

## Real CA scope (Let's Encrypt, DigiCert, Sectigo)

CAA only matters if it matches the certificate authorities the organization actually uses. The live issuance path should be intentional. If a CA no one remembers is still implicitly allowed, the domain has governance debt even if no abuse has happened yet.

## How to add CAA (HowTo)

Adding CAA safely starts with inventory: which teams, vendors, and automation systems can issue certificates today? Once that is clear, the DNS policy can narrow issuance to the CAs that really belong there.

<HowToSteps />

## Common mistakes

The common mistakes are forgetting wildcard behavior, publishing rules that do not match existing automation, or assuming CAA prevents all certificate surprises by itself. It is a useful control, but it still needs CT monitoring and lifecycle discipline around real certificates.

## Tools to check your CAA records

Use the [CyberFurl SSL and certificate checks](/infrastructure/ssltest) when you want to see the live signal on a real domain, and then step back to the [DNS posture feature](/features/dns-posture) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl SSL and certificate checks](/infrastructure/ssltest)
- [DNS posture feature](/features/dns-posture)
- [SSL / TLS](/learn/ssl-tls)
- [Certificate Transparency](/learn/certificate-transparency)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 8659: CAA](https://www.rfc-editor.org/rfc/rfc8659)
- [Let’s Encrypt CAA guide](https://letsencrypt.org/docs/caa/)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Content Security Policy
Source: https://cyberfurl.com/learn/csp.md

## What is CSP

Content Security Policy (CSP) is the HTTP response header that restricts which scripts, styles, frames, and other resources the browser may load, mitigating XSS, clickjacking, and unauthorized script execution. CSP is part of the browser-facing trust boundary. It shapes what the client is allowed to reveal, load, or trust before any backend incident response even starts.

If you are already working through [HSTS](/learn/hsts) and [X-Frame-Options](/learn/x-frame-options), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl security headers scan](/infra-scan/http-headers) and then use the [web security headers feature](/features/web-security-headers) page to see where it fits in the wider CyberFurl workflow.

## Anatomy of a CSP header

A CSP header is a collection of directives that tells the browser which origins and execution paths are allowed for different resource types. In practice the policy is only as strong as its most permissive directive, which is why teams need to read the whole header and not just confirm that one exists.
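
That whole-header review habit can be sketched in a few lines. This is a minimal illustration, not a complete audit: the policy string and the "risky" source list are examples chosen for the sketch.

```python
# Minimal sketch of a whole-header review: split a CSP string into
# directives, then surface the permissive sources. The policy value
# and the risky-source list below are illustrative.
def parse_csp(header):
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        name, *sources = part.split()
        directives[name.lower()] = sources
    return directives

policy = "default-src 'self'; script-src 'self' https://cdn.example.com; object-src 'none'"
parsed = parse_csp(policy)

# A policy is only as strong as its loosest directive, so flag those first
risky = {name: srcs for name, srcs in parsed.items()
         if any(s in ("*", "'unsafe-inline'", "'unsafe-eval'") for s in srcs)}
```

Running something like this against your own header quickly shows whether a single permissive directive is undercutting the rest of the policy.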

## Key directives (default-src, script-src, style-src, etc.)

`default-src` sets the fallback, while specific directives such as `script-src`, `style-src`, `img-src`, and `frame-ancestors` tighten behavior for the resource types most likely to matter. The useful review question is not “how many directives do we have?” but “which ones are carrying the real risk on this application?”

## nonce vs hash vs 'unsafe-inline'

Nonces and hashes let teams keep inline behaviors while still proving that the browser should trust a specific piece of code. `unsafe-inline` is the opposite: it tells the browser to stop making that distinction. That is why moving away from `unsafe-inline` is such a common milestone in a serious CSP rollout.
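
A minimal nonce-based sketch looks like this; the nonce value is illustrative and must be freshly generated for every response:

```text
Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3'

<script nonce="R4nd0mV4lu3">/* executes: nonce matches */</script>
<script>/* blocked: no nonce */</script>
```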

## strict-dynamic

`strict-dynamic` changes how trust propagates once a nonce- or hash-trusted script starts loading other scripts. Used well, it can simplify modern script-heavy applications. Used carelessly, it can make a policy harder to reason about if the team does not understand which scripts are becoming trust anchors.
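
A sketch of the pattern, again with an illustrative nonce:

```text
Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3' 'strict-dynamic'
```

Here the nonce-trusted script may load further scripts, and those inherit trust; browsers that support `'strict-dynamic'` ignore host allowlists in the same directive, which is exactly the propagation behavior the team needs to understand.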

## Reporting: report-uri, report-to

Reporting endpoints are what turn CSP from a static policy into a learning loop. They tell you which blocked loads and execution attempts the browser actually saw, which is essential during rollout. The mistake is treating reports as noise instead of triage input.
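
A modern reporting setup pairs a named endpoint with the `report-to` directive; the endpoint URL below is hypothetical, and many deployments still add the older `report-uri` directive as a fallback for older browsers:

```text
Reporting-Endpoints: csp-endpoint="https://example.com/csp-reports"
Content-Security-Policy: default-src 'self'; report-to csp-endpoint
```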

## Rollout strategy: report-only → enforce

The safest CSP rollouts start in report-only mode, learn from real blocked behaviors, and then tighten toward enforcement with deliberate exceptions. Teams that jump straight to enforcement usually discover their real dependency map the hard way.
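
The two phases differ only in the header name; the policy shown is a placeholder:

```text
# Phase 1: observe; violations are reported but nothing is blocked
Content-Security-Policy-Report-Only: default-src 'self'; report-to csp-endpoint

# Phase 2: enforce the same policy once reports are clean
Content-Security-Policy: default-src 'self'; report-to csp-endpoint
```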

<HowToSteps />

## Common pitfalls

Most failures around Content Security Policy are less about the standard and more about operations: copied examples, stale providers, undocumented exceptions, or rollout steps that were never verified from the outside.

These issues are easiest to catch when the review is evidence-led. Look at what the domain is really publishing or sending, then ask where the trust chain can be altered, bypassed, or silently downgraded.

- Missing ownership: nobody can clearly name which team or provider owns the live Content Security Policy behavior.
- Drift after change: a migration, proxy, vendor switch, or DNS edit quietly changed the result.
- Weak enforcement: the control exists, but the chosen value is too permissive to change risk meaningfully.
- No live verification: the rollout was declared done without checking what the public internet now sees.

## How to fix or implement Content Security Policy

A good implementation plan for Content Security Policy starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl security headers scan](/infra-scan/http-headers).

<HowToSteps />

## Tools to check your Content Security Policy

Use the [CyberFurl security headers scan](/infra-scan/http-headers) when you want to see the live signal on a real domain, and then step back to the [web security headers feature](/features/web-security-headers) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl security headers scan](/infra-scan/http-headers)
- [Web security headers feature](/features/web-security-headers)
- [HSTS](/learn/hsts)
- [X-Frame-Options](/learn/x-frame-options)
- [CyberFurl public security report](/security-report)

## Standards and references

- [MDN CSP reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP)
- [OWASP CSP Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Content_Security_Policy_Cheat_Sheet.html)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## HSTS
Source: https://cyberfurl.com/learn/hsts.md

## What is HSTS

HSTS tells browsers to refuse plain HTTP for your domain for as long as the policy's `max-age` is remembered. HSTS is part of the browser-facing trust boundary. It shapes what the client is allowed to reveal, load, or trust before any backend incident response even starts.

If you are already working through [Content Security Policy](/learn/csp) and [SSL / TLS](/learn/ssl-tls), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl security headers scan](/infra-scan/http-headers) and then use the [web security headers feature](/features/web-security-headers) page to see where it fits in the wider CyberFurl workflow.

## The header anatomy

HSTS is simple by design: a browser sees `Strict-Transport-Security` over HTTPS and remembers that the site should only be reached over HTTPS for a defined period. The power comes from that memory, not from the syntax alone.
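
The header itself is one line; the `max-age` shown here is a common one-year value, not a universal recommendation:

```text
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

`max-age` is in seconds (31536000 is one year), and the browser refreshes that memory every time it sees the header again over HTTPS.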

## includeSubDomains and the rollback risk

`includeSubDomains` is powerful because it extends the rule to the whole namespace, but that is also what makes rollback harder. If forgotten internal hosts or legacy subdomains still rely on HTTP, the breakage is not theoretical; the browser will enforce the remembered rule anyway.

## HSTS Preload list ([hstspreload.org](https://hstspreload.org))

Preload takes the idea one step further by baking the HTTPS-only expectation into browser lists before the first request even happens. That removes the first-visit downgrade window, but it also raises the cost of mistakes because the browser no longer needs to learn the rule from the site itself.

## Removing yourself from preload (12 months+)

Undoing preload is intentionally slow and procedural because preload is meant to be a durable commitment. Teams should treat preload as something to earn after verifying the full subdomain surface, not as a cosmetic checkbox.

## HowTo: deploy HSTS safely

A safe rollout starts with clean HTTPS everywhere, then a conservative `max-age`, then broader coverage such as `includeSubDomains`, and only then preload consideration. The main job is dependency discovery, not header typing.
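
A staged rollout might move through values like these; the durations are illustrative and should follow your own verification pace:

```text
# Stage 1: five-minute memory while confirming HTTPS coverage
Strict-Transport-Security: max-age=300

# Stage 2: lengthen once nothing breaks
Strict-Transport-Security: max-age=31536000

# Stage 3: widen scope, then consider preload separately
Strict-Transport-Security: max-age=31536000; includeSubDomains
```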

## How to fix or implement HSTS

A good implementation plan for HSTS starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl security headers scan](/infra-scan/http-headers).

<HowToSteps />

## Tools to check your HSTS

Use the [CyberFurl security headers scan](/infra-scan/http-headers) when you want to see the live signal on a real domain, and then step back to the [web security headers feature](/features/web-security-headers) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl security headers scan](/infra-scan/http-headers)
- [Web security headers feature](/features/web-security-headers)
- [Content Security Policy](/learn/csp)
- [SSL / TLS](/learn/ssl-tls)
- [CyberFurl public security report](/security-report)

## Standards and references

- [MDN HSTS reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security)
- [HSTS preload guidance](https://hstspreload.org/)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## X-Frame-Options
Source: https://cyberfurl.com/learn/x-frame-options.md

## What is clickjacking

Clickjacking works by embedding a legitimate page in a deceptive frame so the user interacts with something they do not understand they are authorizing. The technical trick is simple; the value comes from abusing the user's trust in the framed site.

## X-Frame-Options values

`DENY` blocks framing entirely. `SAMEORIGIN` allows framing only from the same origin. The older `ALLOW-FROM` value is obsolete and ignored by modern browsers, so in practice most teams choose between those two depending on whether the application legitimately embeds itself.

## frame-ancestors (the modern CSP3 replacement)

`frame-ancestors` in CSP is more expressive than X-Frame-Options because it can describe specific allowed framing origins instead of the older binary model. That is why many teams now treat X-Frame-Options as baseline compatibility and `frame-ancestors` as the real policy surface.

## Why both?

Using both is still common because X-Frame-Options helps with older compatibility while CSP carries the modern policy. The point is not redundancy for its own sake; it is making sure clickjacking defenses hold across the browser mix the organization still cares about.
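
In practice that usually means shipping both headers together; the pairing below is a common conservative sketch:

```text
X-Frame-Options: SAMEORIGIN
Content-Security-Policy: frame-ancestors 'self'
```

Browsers that understand `frame-ancestors` are specified to prefer it over `X-Frame-Options`, so the older header only matters to clients that never learned the newer directive.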

## Real clickjacking cases

Clickjacking remains relevant anywhere a page can trigger a meaningful action with one or two user interactions. Admin panels, payment confirmations, permission prompts, and embedded business tools are all more interesting targets than brochureware pages.

## How to fix or implement X-Frame-Options

A good implementation plan for X-Frame-Options starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl security headers scan](/infra-scan/http-headers).

<HowToSteps />

## Tools to check your X-Frame-Options

Use the [CyberFurl security headers scan](/infra-scan/http-headers) when you want to see the live signal on a real domain, and then step back to the [web security headers feature](/features/web-security-headers) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl security headers scan](/infra-scan/http-headers)
- [Web security headers feature](/features/web-security-headers)
- [Content Security Policy](/learn/csp)
- [CyberFurl public security report](/security-report)

## Standards and references

- [MDN X-Frame-Options reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options)
- [OWASP clickjacking defense](https://cheatsheetseries.owasp.org/cheatsheets/Clickjacking_Defense_Cheat_Sheet.html)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Referrer-Policy
Source: https://cyberfurl.com/learn/referrer-policy.md

## What is the Referer header

Before Referrer-Policy makes sense, you need to understand the `Referer` header itself. Browsers send it when a user follows a link or when a page loads another resource, and the value can reveal more than many teams expect: not just the origin, but sometimes the full path and query string of the page the user came from.

That can be useful for analytics and debugging, but it can also leak internal paths, campaign parameters, password-reset URLs, search terms, or application state to third parties. Referrer-Policy exists so you can decide how much of that context leaves the page instead of accepting the browser default blindly.

## What Referrer-Policy controls

Referrer-Policy does one job: it decides how much referrer information the browser is allowed to send on each request. You can set it as an HTTP response header, with a `<meta>` tag for the page, or on specific elements and requests when you need tighter control around one link or one fetch.

The practical decision is not “do I want a referrer or no referrer?” It is “what should same-origin requests reveal, what should cross-origin requests reveal, and what should happen when a user moves from HTTPS to HTTP?” Once you frame it that way, the policy values become much easier to choose.
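
The three delivery points look like this (origins and values are illustrative):

```text
# 1. Response header, for the whole document
Referrer-Policy: strict-origin-when-cross-origin

# 2. Meta tag, inside the page
<meta name="referrer" content="strict-origin-when-cross-origin">

# 3. Per element or request
<a href="https://other.example" referrerpolicy="no-referrer">external link</a>
```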

## The 8 directive values

The eight standard values are `no-referrer`, `no-referrer-when-downgrade`, `origin`, `origin-when-cross-origin`, `same-origin`, `strict-origin`, `strict-origin-when-cross-origin`, and `unsafe-url`. In real deployments, most teams spend their time comparing only a few of them: `same-origin`, `strict-origin`, and `strict-origin-when-cross-origin`.

The reason is simple. Those are the values that usually balance privacy with useful attribution. At the other extreme, `unsafe-url` is usually too permissive because it can send full paths and queries to other origins. At the stricter end, `no-referrer` is clear and safe, but it can remove too much context for analytics, support flows, or federated applications that legitimately depend on origin data.

## Which to choose

For most modern public applications, `strict-origin-when-cross-origin` is the default starting point because it preserves the full referrer on same-origin navigation, trims cross-origin requests down to the origin, and drops the referrer on insecure downgrades. That usually gives teams the right privacy boundary without losing normal attribution.

Choose stricter values such as `same-origin` or `no-referrer` when the application handles sensitive internal paths, user identifiers, or query parameters that should never leave the origin. Choose looser values only when you can defend exactly why the downstream system needs the extra detail.
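
A worked example makes the trade-off concrete. Assume a hypothetical page URL that carries sensitive state in the query string (which is itself worth avoiding):

```text
Page: https://app.example.com/reset?token=abc
Policy: strict-origin-when-cross-origin

Same-origin request        → Referer: https://app.example.com/reset?token=abc
Cross-origin HTTPS request → Referer: https://app.example.com/
HTTPS → HTTP downgrade     → (no Referer sent)
```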

## Privacy implications

Referrer-Policy is one of those headers that looks small until you think through the data it can leak. A full referrer can expose search strings, support-ticket IDs, password-reset flows, or internal product routes that were never meant for third parties.

That is why privacy teams and security teams both care about it. The question is not just whether the browser sends a referrer, but whether external analytics tags, CDNs, ad networks, or embedded content providers are learning more about the user's journey than they should. If you already review [CSP](/learn/csp) or other [web security headers](/features/web-security-headers), Referrer-Policy belongs in that same conversation.

## How to fix or implement Referrer-Policy

Start by checking what the live site already sends. Many teams assume the app has no policy when in reality the browser default is doing the work, or a proxy is injecting a value nobody documented. Once you know the current state, choose the policy based on the most sensitive URLs your users can load, not the least sensitive.

The safest rollout is to pick a conservative value, verify that login flows, payment flows, analytics, and third-party integrations still behave as expected, and then make exceptions only where you can justify them. CyberFurl's [security headers scan](/infra-scan/http-headers) is useful here because it lets you review the live header instead of relying on framework defaults.

<HowToSteps />

## Tools to check your Referrer-Policy

Run the [CyberFurl security headers scan](/infra-scan/http-headers) to see the live Referrer-Policy value alongside the rest of the site's browser-facing headers. Then compare it with the broader [web security headers feature page](/features/web-security-headers) and related controls like [CSP](/learn/csp) and [HSTS](/learn/hsts) so the header is reviewed as part of a real browser trust model, not as an isolated checklist item.

## Further reading inside CyberFurl

- [CyberFurl security headers scan](/infra-scan/http-headers)
- [Web security headers feature](/features/web-security-headers)
- [Content Security Policy](/learn/csp)
- [CyberFurl public security report](/security-report)

## Standards and references

- [MDN Referrer-Policy reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy)
- [W3C Referrer Policy](https://www.w3.org/TR/referrer-policy/)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Permissions-Policy
Source: https://cyberfurl.com/learn/permissions-policy.md

## What is Permissions-Policy

Permissions-Policy (formerly Feature-Policy) controls which browser APIs (camera, microphone, geolocation, FLoC) can be used on your site. Permissions-Policy is part of the browser-facing trust boundary. It shapes what the client is allowed to reveal, load, or trust before any backend incident response even starts.

If you are already working through [Content Security Policy](/learn/csp), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl security headers scan](/infra-scan/http-headers) and then use the [web security headers feature](/features/web-security-headers) page to see where it fits in the wider CyberFurl workflow.

## Why it replaced Feature-Policy

The rename from Feature-Policy to Permissions-Policy was not just cosmetic. It reflected a clearer model for deciding which browser features should be available to the document or embedded content at all.

## Common features to lock (camera, microphone, geolocation, payment, USB, FLoC)

The best candidates to disable are the ones the site does not need. Camera, microphone, geolocation, payment handlers, USB, and similar features are valuable when required, but they should not be ambiently available just because the browser supports them.

## Syntax

Permissions-Policy syntax expresses which origins, if any, are allowed to use a given browser capability. The practical goal is to make unnecessary capabilities unavailable by default and then grant them only where the real application needs them.
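
A sketch that disables unneeded capabilities and allows geolocation only for the page's own origin (the feature list here is illustrative):

```text
Permissions-Policy: camera=(), microphone=(), payment=(), geolocation=(self)
```

An empty allowlist `()` disables the feature everywhere, including iframes; `(self)` restricts it to same-origin documents.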

## How to test

Testing is not only about confirming the header exists. It is about checking real browser behavior in pages and iframes that would otherwise request those capabilities. A policy with no behavioral validation is just another string in a response header.

<HowToSteps />

## Tools to check your Permissions-Policy

Use the [CyberFurl security headers scan](/infra-scan/http-headers) when you want to see the live signal on a real domain, and then step back to the [web security headers feature](/features/web-security-headers) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl security headers scan](/infra-scan/http-headers)
- [Web security headers feature](/features/web-security-headers)
- [Content Security Policy](/learn/csp)
- [CyberFurl public security report](/security-report)

## Standards and references

- [MDN Permissions-Policy reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Permissions-Policy)
- [W3C Permissions Policy draft](https://w3c.github.io/webappsec-permissions-policy/)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## SSL / TLS
Source: https://cyberfurl.com/learn/ssl-tls.md

## SSL vs TLS history

People still say “SSL” out of habit, but modern secure transport is TLS. The historical distinction matters because old protocol names often signal old operational assumptions too. Teams should think in terms of present TLS posture, not legacy branding.

## Why TLS 1.0/1.1 are deprecated

Older protocol versions are deprecated because they no longer offer the security margin expected on modern public services. Keeping them around usually means carrying compatibility debt that benefits only the weakest clients while increasing trust and compliance risk.
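
On the client side, refusing the deprecated versions is often one line of configuration. This stdlib sketch shows a Python client context with a TLS 1.2 floor:

```python
import ssl

# Default client context with certificate verification enabled
ctx = ssl.create_default_context()

# Refuse TLS 1.0 and 1.1 outright; 1.3 remains negotiable upward
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Recent Python releases already default to a 1.2 floor, but pinning it makes the policy explicit rather than an inherited default.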

## The TLS handshake (1.2 vs 1.3)

The handshake is where the client and server agree on how to communicate securely, authenticate the server, and derive session keys. TLS 1.3 simplifies and hardens that process compared with 1.2, which is why it often reduces both complexity and attack surface.

## Cipher suites explained

Cipher suites define which cryptographic building blocks the session is allowed to use. For operators the key point is not memorizing names, but knowing whether the server is still offering weak or unnecessary choices that modern clients no longer need.

## Certificate chains and roots

A valid certificate is only part of the story. The browser also needs a chain it trusts back to a root in its trust store. Many “TLS issues” are really certificate chain or intermediate-delivery issues rather than protocol issues.

## OCSP and stapling

OCSP exists so clients can learn whether a certificate has been revoked, but fetching revocation data live has reliability and privacy trade-offs. Stapling improves that by letting the server deliver the status proof directly during the handshake.

## Common errors

Hostname mismatch, expired certificates, wrong intermediates, old protocols, and weak ciphers are still the day-to-day failures teams actually see. A good SSL/TLS review connects those findings to user-visible risk instead of treating them as abstract crypto hygiene.

## How to fix or implement SSL / TLS

A good implementation plan for SSL / TLS starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl SSL and TLS scan](/infrastructure/ssltest).

<HowToSteps />

## Tools to check your SSL / TLS

Use the [CyberFurl SSL and TLS scan](/infrastructure/ssltest) when you want to see the live signal on a real domain, and then step back to the [web security headers feature](/features/web-security-headers) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl SSL and TLS scan](/infrastructure/ssltest)
- [Web security headers feature](/features/web-security-headers)
- [HSTS](/learn/hsts)
- [Certificate Transparency](/learn/certificate-transparency)
- [CAA Records](/learn/caa-records)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 8446: TLS 1.3](https://www.rfc-editor.org/rfc/rfc8446)
- [Mozilla server-side TLS guidelines](https://wiki.mozilla.org/Security/Server_Side_TLS)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Certificate Transparency
Source: https://cyberfurl.com/learn/certificate-transparency.md

## What is CT

Certificate Transparency (CT) is a system of public, append-only logs that record publicly trusted TLS certificates, letting domain owners catch unauthorized certificates within minutes of issuance. Certificate Transparency is part of the browser-facing trust boundary. It shapes what the client is allowed to reveal, load, or trust before any backend incident response even starts.

If you are already working through [SSL / TLS](/learn/ssl-tls) and [CAA Records](/learn/caa-records), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl SSL and TLS scan](/infrastructure/ssltest) and then use the [web security headers feature](/features/web-security-headers) page to see where it fits in the wider CyberFurl workflow.

## Why it exists (DigiNotar, Symantec)

Certificate Transparency exists because the web PKI needed a better answer to unexpected certificate issuance. Incidents involving compromised or poorly governed certificate issuance showed that domains needed visibility into what had been issued for them, not just faith that the CA ecosystem would always behave perfectly.

## SCTs (Signed Certificate Timestamps)

SCTs are the proofs that a certificate was promised to a public CT log. Modern browsers rely on those proofs as part of the issuance ecosystem, which is why CT is not just a monitoring nice-to-have but a real part of web trust at scale.

## Reading CT logs ([crt.sh](https://crt.sh), Censys, Cloudflare Merkle Town)

This part of Certificate Transparency is usually where teams discover whether the control is genuinely working or just looks reasonable on paper. The useful lens is to connect the public signal to a real ownership boundary, user-visible behavior, or failure path on the live system.

If you are using CyberFurl for the investigation, confirm the external evidence first, compare it with the intended posture, and then decide whether the next move is cleanup, tighter enforcement, or ongoing monitoring through [CyberFurl SSL and TLS scan](/infrastructure/ssltest).

## Setting up CT monitoring

CT monitoring should answer one operational question quickly: was this certificate expected? If the answer is no, the organization should be able to connect the issuance to the owning team, the related asset, or the incident queue without starting from scratch.
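
The "was this certificate expected?" check can be prototyped as a simple allowlist comparison. The entry fields and issuer strings below are hypothetical, not a real CT log schema:

```python
# Sketch of expected-issuance triage: compare CT entries against an
# issuer allowlist. Field names and issuer strings are hypothetical.
EXPECTED_ISSUERS = {"Let's Encrypt", "DigiCert Inc"}

def unexpected_certs(entries):
    """Return entries whose issuer is not on the allowlist."""
    return [e for e in entries if e["issuer"] not in EXPECTED_ISSUERS]

entries = [
    {"name": "www.example.com", "issuer": "Let's Encrypt"},
    {"name": "vpn.example.com", "issuer": "Unknown CA Ltd"},
]
alerts = unexpected_certs(entries)  # only the second entry is flagged
```

Real monitoring would also match hostnames against asset inventory and route alerts to the owning team, which is where the "without starting from scratch" goal comes in.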

## CT for subdomain enumeration

Transparency is good for defenders, but attackers can read CT too. Newly logged certificates often reveal hostnames, environments, and acquisition patterns that were never meant to act as a public inventory feed. That is why CT belongs in both the certificate-governance and external-recon conversations.

## How to fix or implement Certificate Transparency

A good implementation plan for Certificate Transparency starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl SSL and TLS scan](/infrastructure/ssltest).

<HowToSteps />

## Tools to check your Certificate Transparency

Use the [CyberFurl SSL and TLS scan](/infrastructure/ssltest) when you want to see the live signal on a real domain, and then step back to the [web security headers feature](/features/web-security-headers) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl SSL and TLS scan](/infrastructure/ssltest)
- [Web security headers feature](/features/web-security-headers)
- [SSL / TLS](/learn/ssl-tls)
- [CAA Records](/learn/caa-records)
- [CyberFurl public security report](/security-report)

## Standards and references

- [RFC 9162: Certificate Transparency](https://www.rfc-editor.org/rfc/rfc9162)
- [Google Certificate Transparency overview](https://certificate.transparency.dev/)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Subdomain Takeover
Source: https://cyberfurl.com/learn/subdomain-takeover.md

## What is subdomain takeover

Subdomain takeover lets attackers claim abandoned cloud services pointed to by your DNS. Subdomain Takeover belongs to the external exposure story: the set of signals attackers, customers, and monitoring systems can observe without logging into your environment.

If you are already working through [Dangling CNAME](/learn/dangling-cname), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl subdomain scan](/domain-scan/subdomains) and then use the [vulnerability surface feature](/features/vulnerability-surface) page to see where it fits in the wider CyberFurl workflow.

## The dangling-CNAME mechanism

Most subdomain takeovers start with stale DNS pointing at an external platform the organization no longer controls. If that platform lets someone else claim the old resource name, the attacker inherits the trust of the subdomain without touching the registrar account.

## Vulnerable services list (S3, GitHub Pages, Heroku, Shopify, Tumblr, Fastly, etc.)

The service list matters because the risk is tied to how each provider handles released bindings. Static-site hosts, app platforms, CDNs, and commerce services have all produced real takeover conditions when DNS outlived the application it used to point at.

## Real bug bounty payouts

This issue pays bug bounties because the impact is not theoretical. A taken-over branded subdomain can host phishing, serve malware, collect credentials, or undermine customer trust immediately, often with less effort than building a spoofed domain lookalike.

## Detection at scale

Detection is not just finding CNAME records. It is checking whether the target resource still exists and whether the provider's current behavior allows re-claiming the hostname. That is why live validation matters more than static DNS inventory alone.
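
The triage logic is separable from the data collection. This sketch assumes you have already gathered CNAME records and a set of targets that still resolve (all hostnames hypothetical); a real check must also test whether each provider allows re-claiming the released name:

```python
# Sketch of takeover triage: flag CNAME targets that no longer
# resolve. Hostnames are hypothetical; re-claim behavior still needs
# a per-provider live check.
def takeover_candidates(records, live_targets):
    """records: {subdomain: cname_target}; live_targets: targets that still exist."""
    return sorted(sub for sub, target in records.items()
                  if target not in live_targets)

records = {
    "shop.example.com": "example.myshopify.com",
    "docs.example.com": "old-org.github.io",
}
live_targets = {"example.myshopify.com"}
candidates = takeover_candidates(records, live_targets)  # ["docs.example.com"]
```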

## Remediation playbook

The clean fix is to remove the stale record or intentionally re-claim the destination before someone else does. The dangerous habit is leaving “temporary” mappings in place after migrations, sunsetting, or vendor exits.

<HowToSteps />

## How to fix or implement Subdomain Takeover

A good implementation plan for Subdomain Takeover starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl subdomain scan](/domain-scan/subdomains).

<HowToSteps />

## Tools to check your Subdomain Takeover

Use the [CyberFurl subdomain scan](/domain-scan/subdomains) when you want to see the live signal on a real domain, and then step back to the [vulnerability surface feature](/features/vulnerability-surface) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading the standard in isolation.

## Further reading inside CyberFurl

- [CyberFurl subdomain scan](/domain-scan/subdomains)
- [See the vulnerability surface feature](/features/vulnerability-surface)
- [Dangling CNAME](/learn/dangling-cname)
- [CyberFurl public security report](/security-report)

## Standards and references

- [OWASP subdomain takeover overview](https://owasp.org/www-community/attacks/Subdomain_takeover)
- [can-i-take-over-xyz reference matrix](https://github.com/EdOverflow/can-i-take-over-xyz)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Credential Stuffing
Source: https://cyberfurl.com/learn/credential-stuffing.md

## What is credential stuffing

Credential stuffing automates login attempts using leaked password lists. It belongs to the external exposure story: the set of signals attackers, customers, and monitoring systems can observe without logging into your environment.

If you are already working through [Data Breach](/learn/data-breach), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl breach exposure view](/threat-intelligence/breach) and then use the [See the vulnerability surface feature](/features/vulnerability-surface) page to see where it fits in the wider CyberFurl workflow.

## Credential stuffing vs password spraying vs brute force

Credential stuffing is different from brute force because the attacker is not inventing passwords; they are replaying real username-password pairs leaked elsewhere. It is different from password spraying because spraying tests a few common passwords across many users, while stuffing tests many known pairs against the same service.

## Where attackers get lists

Attackers pull these lists from breach dumps, combo lists, infostealer logs, and criminal marketplaces that aggregate credential material from many incidents. The value comes from password reuse: one breach becomes leverage against many unrelated services.

## Real cases (Disney+, DoorDash, Spotify)

Major stuffing incidents matter because they show how account takeover can happen even when the attacked service was not the original breach source. The user's reused password is what links the two events together.

## Detection signals

Useful signals include bursts of failed logins, many usernames from a small infrastructure set, success after long failure sequences, impossible geographic mix, and reuse patterns that do not look like normal user behavior. Good detection is behavioral, not only signature-based.
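
One of those behavioral signals, many distinct usernames from a single source in a short window, can be sketched like this (thresholds and names are illustrative, not a recommended configuration):

```python
from collections import defaultdict, deque

class StuffingSignal:
    """Flag source IPs that try many distinct usernames in a short window.

    This is one behavioral signal among several; real deployments combine
    it with other detections and with allow-list context.
    """
    def __init__(self, window_seconds: float = 300, max_distinct_users: int = 20):
        self.window = window_seconds
        self.limit = max_distinct_users
        self.events = defaultdict(deque)  # ip -> deque of (timestamp, username)

    def observe(self, ip: str, username: str, ts: float) -> bool:
        q = self.events[ip]
        q.append((ts, username))
        # Drop events that fell out of the sliding window.
        while q and ts - q[0][0] > self.window:
            q.popleft()
        return len({u for _, u in q}) > self.limit

sig = StuffingSignal(window_seconds=300, max_distinct_users=5)
flagged = any(sig.observe("203.0.113.9", f"user{i}", ts=float(i)) for i in range(10))
# flagged is True: one IP cycling through ten usernames in seconds
```

A single user retrying their own password from one IP never trips this signal, which is exactly the behavioral distinction the paragraph above describes.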

## 10 defenses (MFA, rate-limit, breach-aware passwords, bot detection, etc.)

The strongest defenses are layered: MFA, bot controls, rate limiting, breached-password screening, anomaly detection, session review, and fast lockout or challenge paths. No single one removes the problem because stuffing attacks adapt to whichever control is weakest.
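
As an illustration of the rate-limiting layer, here is a minimal token-bucket sketch (parameters and the key scheme are hypothetical; production systems usually key on username and source together and respond with challenges rather than hard denials):

```python
class TokenBucket:
    """Per-key rate limiter: each login attempt spends one token.

    Tokens refill over time, so steady legitimate traffic passes while
    bursts of automated attempts get cut off.
    """
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.state = {}  # key -> (tokens, last_timestamp)

    def allow(self, key: str, now: float) -> bool:
        tokens, last = self.state.get(key, (self.capacity, now))
        tokens = min(self.capacity, tokens + (now - last) * self.refill)
        if tokens >= 1:
            self.state[key] = (tokens - 1, now)
            return True
        self.state[key] = (tokens, now)
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=0.5)
results = [bucket.allow("login:alice", now=0.0) for _ in range(5)]
# results == [True, True, True, False, False]: the burst exhausts the bucket
```

The layering point still stands: a limiter like this only slows an attacker down, which is why it sits alongside MFA, bot detection, and breached-password screening rather than replacing them.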

## How to defend against credential stuffing

A good defense plan against credential stuffing starts with inventory, not with copying a sample policy. Teams need to know which identity providers, applications, and login paths are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl breach exposure view](/threat-intelligence/breach).

<HowToSteps />

## Tools to check your credential stuffing exposure

Use the [CyberFurl breach exposure view](/threat-intelligence/breach) when you want to see the live signal on a real domain, and then step back to the [See the vulnerability surface feature](/features/vulnerability-surface) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading about the technique in isolation.

## Further reading inside CyberFurl

- [CyberFurl breach exposure view](/threat-intelligence/breach)
- [See the vulnerability surface feature](/features/vulnerability-surface)
- [Data Breach](/learn/data-breach)
- [CyberFurl public security report](/security-report)

## Standards and references

- [OWASP Credential Stuffing Prevention](https://cheatsheetseries.owasp.org/cheatsheets/Credential_Stuffing_Prevention_Cheat_Sheet.html)
- [NIST digital identity guidelines](https://pages.nist.gov/800-63-4/sp800-63b.html)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Typosquatting
Source: https://cyberfurl.com/learn/typosquatting.md

## What is typosquatting

Typosquatting registers misspelled or homograph variants of your domain to harvest traffic, host phishing, or distribute malware. It belongs to the external exposure story: the set of signals attackers, customers, and monitoring systems can observe without logging into your environment.

If you are already working through [Phishing](/learn/phishing), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl typosquatting scan](/domain-recon/typosquatting) and then use the [See the vulnerability surface feature](/features/vulnerability-surface) page to see where it fits in the wider CyberFurl workflow.

## Variants: typo, homoglyph, IDN/Punycode, TLD-swap, bitsquatting

Typosquatting covers several different patterns. Some are plain misspellings, some abuse lookalike Unicode characters, some swap TLDs, and some rely on rarer technical edge cases such as bitsquatting. The common idea is to capture trust intended for the legitimate domain.
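
A minimal generator for some of these variant classes might look like this (the homoglyph map and TLD list are deliberately tiny; real tooling in the dnstwist family generates far larger candidate sets):

```python
def typo_variants(domain: str) -> set[str]:
    """Generate candidate typosquat variants of a domain.

    Covers character omission, adjacent transposition, a tiny homoglyph
    substitution map, and TLD swaps.
    """
    name, _, tld = domain.rpartition(".")
    homoglyphs = {"o": "0", "l": "1", "i": "1"}  # illustrative subset only
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)  # omission
        if i < len(name) - 1:  # adjacent transposition
            variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + "." + tld)
        if name[i] in homoglyphs:  # homoglyph substitution
            variants.add(name[:i] + homoglyphs[name[i]] + name[i + 1:] + "." + tld)
    for alt in ("net", "org", "co"):  # TLD swap
        if alt != tld:
            variants.add(name + "." + alt)
    variants.discard(domain)
    return variants

v = typo_variants("example.com")
# v includes "exmaple.com" (transposition) and "example.co" (TLD swap)
```

Generation is the cheap half of the problem; the expensive half is checking which variants are registered, live, and hosting something risky.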

## Real cases (gooogle.com, npm typo packages, crypto wallets)

Real cases follow one pattern: a near-miss name captures trust intended for the legitimate one, whether that is a typo domain, a typo package name on a registry like npm, or a lookalike crypto wallet site. The useful lens is to connect the public signal to a real ownership boundary, user-visible behavior, or failure path on the live system.

If you are using CyberFurl for the investigation, confirm the external evidence first, compare it with the intended posture, and then decide whether the next move is cleanup, tighter enforcement, or ongoing monitoring through [CyberFurl typosquatting scan](/domain-recon/typosquatting).

## How to monitor at scale

Scale monitoring means watching candidate variants, not just waiting for customer complaints. That usually includes typo generation, homograph analysis, certificate activity, content review, and prioritization based on which variants are actually live and risky.
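
That prioritization step can be sketched as a simple scoring pass over per-variant facts (field names and weights are illustrative, not a CyberFurl schema):

```python
def variant_risk(facts: dict) -> int:
    """Score a candidate variant so live, mail-capable lookalikes rise first."""
    score = 0
    if facts.get("resolves"):
        score += 3  # the variant domain is actually live
    if facts.get("has_mx"):
        score += 3  # it can send or receive lookalike mail
    if facts.get("has_certificate"):
        score += 2  # TLS has been issued for the name
    if facts.get("serves_content"):
        score += 2  # something is hosted there
    return score

queue = sorted(
    [{"name": "examp1e.com", "resolves": True, "has_mx": True},
     {"name": "exampel.com", "resolves": False}],
    key=variant_risk, reverse=True,
)
# queue[0] is the live, mail-capable variant; the unregistered one sinks
```

The design choice here is that scoring consumes pre-gathered facts rather than doing lookups itself, so the same ranking runs over thousands of candidates without re-querying DNS.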

<HowToSteps />

## UDRP and legal options

Legal remedies such as UDRP can help, but they are slower than good detection and do not replace technical mitigations. The operational question is often whether to block, monitor, defend the brand publicly, or pursue formal takedown after evidence is collected.

## Tools to check for typosquatting

Use the [CyberFurl typosquatting scan](/domain-recon/typosquatting) when you want to see the live signal on a real domain, and then step back to the [See the vulnerability surface feature](/features/vulnerability-surface) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading about the technique in isolation.

## Further reading inside CyberFurl

- [CyberFurl typosquatting scan](/domain-recon/typosquatting)
- [See the vulnerability surface feature](/features/vulnerability-surface)
- [Phishing](/learn/phishing)
- [CyberFurl public security report](/security-report)

## Standards and references

- [MITRE ATT&CK T1583.001](https://attack.mitre.org/techniques/T1583/001/)
- [ICANN UDRP overview](https://www.icann.org/resources/pages/help/dndr/udrp-en)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Data Breach
Source: https://cyberfurl.com/learn/data-breach.md

## What is a data breach

A data breach is the unauthorized exposure of confidential data. It belongs to the external exposure story: the set of signals attackers, customers, and monitoring systems can observe without logging into your environment.

If you are already working through [Credential Stuffing](/learn/credential-stuffing) and [Phishing](/learn/phishing), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl breach exposure view](/threat-intelligence/breach) and then use the [See the vulnerability surface feature](/features/vulnerability-surface) page to see where it fits in the wider CyberFurl workflow.

## Types (insider, ransomware, credential, misconfig cloud, supply chain)

“Data breach” is an outcome category, not a single cause. Insider misuse, ransomware, credential theft, exposed cloud storage, vulnerable SaaS integrations, and supplier compromise can all lead to the same end state: data leaving the control boundary without authorization.

## IBM Cost of Breach numbers

The exact annual number changes, but the lesson from cost-of-breach reporting is consistent: containment speed, visibility, and response maturity shape outcome as much as the original intrusion path. The cost compounds when teams discover too late what data was exposed and which systems were in scope.

## Notable breaches

The value of notable breaches is not just in the headline. They show recurring failure patterns: too much standing access, weak identity controls, missing asset visibility, poor logging, and delayed response after the first sign of compromise.

## Notification laws (GDPR 72h, CCPA, India DPDP)

Breach response is not only technical. Different jurisdictions impose different notification and disclosure expectations, which means teams need legal and operational coordination early. Waiting until evidence is perfect is often not an option once statutory clocks begin.

## 12 controls that reduce risk

The controls that matter most are usually boring and foundational: least privilege, strong authentication, asset visibility, encryption, logging, backup discipline, third-party review, and tested incident response. Their value shows up in the breach report after something goes wrong, not in marketing copy beforehand.

## How to reduce data breach risk

A good plan for reducing breach risk starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, data stores, and third parties are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl breach exposure view](/threat-intelligence/breach).

<HowToSteps />

## Tools to check your breach exposure

Use the [CyberFurl breach exposure view](/threat-intelligence/breach) when you want to see the live signal on a real domain, and then step back to the [See the vulnerability surface feature](/features/vulnerability-surface) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading about the topic in isolation.

## Further reading inside CyberFurl

- [CyberFurl breach exposure view](/threat-intelligence/breach)
- [See the vulnerability surface feature](/features/vulnerability-surface)
- [Credential Stuffing](/learn/credential-stuffing)
- [Phishing](/learn/phishing)
- [CyberFurl public security report](/security-report)

## Standards and references

- [IBM Cost of a Data Breach reports](https://www.ibm.com/reports/data-breach)
- [NIST incident response lifecycle](https://csrc.nist.gov/publications/detail/sp/800-61/rev-2/final)

## Frequently asked questions

<Faq />

<RelatedReading />

---
## Attack Surface Management
Source: https://cyberfurl.com/learn/attack-surface-management.md

## What is attack surface

Your attack surface is everything internet-facing that attackers can see and probe, and Attack Surface Management (ASM) is the discipline of discovering and monitoring those assets continuously. It belongs to the external exposure story: the set of signals attackers, customers, and monitoring systems can observe without logging into your environment.

If you are already working through [Subdomain Takeover](/learn/subdomain-takeover) and [Dangling CNAME](/learn/dangling-cname), this topic gives you the missing layer between the raw signal and the decision you have to make. For a live check, start with the [CyberFurl public security report](/security-report) and then use the [See the vulnerability surface feature](/features/vulnerability-surface) page to see where it fits in the wider CyberFurl workflow.

## EASM vs CAASM vs DRP

The acronyms sound interchangeable, but they sit at different vantage points, and that is usually where buying and implementation mistakes start: EASM looks at your estate from the outside the way an attacker would, CAASM correlates what internal systems already know, and digital risk protection (DRP) watches the parts of the internet you do not own.

| Category | What it mainly does | What you should verify |
| --- | --- | --- |
| EASM | Discovers and monitors internet-facing assets from an outside-in, attacker-style view | That discovery surfaces unknown and forgotten assets, not just the ones you already listed |
| CAASM | Aggregates internal inventories (cloud, endpoint, CMDB) through API integrations for an inside-out view | Which integrations are covered and how stale the correlated data can get |
| DRP | Watches for brand abuse, lookalike domains, and leaked data beyond the assets you own | How findings are validated and turned into takedowns or response actions |
| CyberFurl workflow | Puts the outside-in views in one investigation path | Use [CyberFurl public security report](/security-report) plus [See the vulnerability surface feature](/features/vulnerability-surface) to compare them in context |

## What a good ASM platform discovers

A good ASM platform should find the assets the organization forgets first: unknown subdomains, stale services, exposed panels, unexpected certificates, shadow environments, and internet-facing systems no one currently owns well. Discovery quality matters more than dashboard polish because unknown assets are the reason the category exists.

## Continuous vs point-in-time

Point-in-time scans are useful for baselines and audits, but attack surface changes continuously as teams deploy, decommission, migrate, and experiment. That is why serious ASM programs care about drift over time, not just snapshots that looked good on the day they were taken.
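
Drift detection at its core is a snapshot diff; a minimal sketch (bare hostnames stand in for the richer asset records a real inventory would carry):

```python
def snapshot_drift(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    """Diff two asset snapshots so reviewers see change, not just state."""
    return {
        "appeared": current - previous,     # new exposure to triage
        "disappeared": previous - current,  # decommissioned, or takeover-prone leftover DNS?
    }

drift = snapshot_drift(
    previous={"www.example.com", "legacy.example.com"},
    current={"www.example.com", "api.example.com"},
)
# drift["appeared"] holds api.example.com; drift["disappeared"] holds legacy.example.com
```

Note that "disappeared" is not automatically good news: an asset that vanished from discovery while its DNS records remain is exactly the dangling-record condition described earlier in this document.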

## ASM in the SOC workflow

In a mature workflow, ASM feeds the SOC not with vague asset lists but with prioritized exposure changes that can be tied to owners, external risk, and response actions. The goal is to shorten the path from “this exists on the internet” to “someone is accountable for it.”

## Buying considerations

The buying conversation should center on coverage quality, false-positive discipline, asset correlation, change visibility, workflow integration, and whether the platform helps explain risk to engineering owners. A beautiful exposure map is not enough if it cannot drive action.

## How to implement Attack Surface Management

A good implementation plan for Attack Surface Management starts with inventory, not with copying a sample policy. Teams need to know which providers, applications, mail paths, or DNS owners are already in the flow before they tighten anything.

From there the safe pattern is consistent: publish the smallest defensible change, validate the result from the outside, and keep monitoring after rollout so the control does not quietly regress after a vendor or infrastructure change. CyberFurl helps most when that validation is tied back to live evidence from [CyberFurl public security report](/security-report).

<HowToSteps />

## Tools to check your attack surface

Use the [CyberFurl public security report](/security-report) when you want to see the live signal on a real domain, and then step back to the [See the vulnerability surface feature](/features/vulnerability-surface) page when you need the wider workflow around posture, monitoring, or remediation. That combination is usually much more useful than reading about the topic in isolation.

## Further reading inside CyberFurl

- [CyberFurl public security report](/security-report)
- [See the vulnerability surface feature](/features/vulnerability-surface)
- [Subdomain Takeover](/learn/subdomain-takeover)
- [Dangling CNAME](/learn/dangling-cname)

## Standards and references

- [MITRE ATT&CK reconnaissance matrix](https://attack.mitre.org/tactics/TA0043/)
- [CISA external exposure management guidance](https://www.cisa.gov/news-events/news/understanding-your-attack-surface)

## Frequently asked questions

<Faq />

<RelatedReading />
