
Honeypot Indicators

Electrum Observatory uses a structured heuristic framework to identify Electrum servers that may exhibit surveillance characteristics, modified protocol behavior, or honeypot-like traits. These indicators are signals, not definitive proof — but when combined, they can strongly suggest intentional monitoring or centralized control.

1. Certificate-Based Indicators

TLS certificates act as a cryptographic identity for Electrum servers. Because most Electrum communication occurs over SSL/TLS, certificate patterns often reveal the true operator behind a cluster of servers. Several certificate anomalies are strong signs of non-standard or suspicious deployments.

1.1 Certificate Reuse Across Many Hosts

When the same TLS certificate appears on multiple IPs across different countries, unrelated ASNs, and independent hosting providers, it strongly suggests that a single operator controls all of those servers.

Why this is suspicious

Legitimate community servers usually use short-lived certificates issued automatically by Let's Encrypt. Reuse of the same certificate across many servers enables:

  • cross-server correlation of user traffic
  • xpub or address-based re-identification
  • fingerprinting of poll frequency
  • network-wide identity tracking
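The reuse check above can be sketched as a simple clustering pass over scan records. This is a minimal illustration with hypothetical hosts, ASNs, and fingerprint values, not the Observatory's actual pipeline:

```python
from collections import defaultdict

# Hypothetical scan records: (host, asn, cert_sha256_fingerprint).
observations = [
    ("host-a.example", "AS64500", "fp:aa11"),
    ("host-b.example", "AS64501", "fp:aa11"),
    ("host-c.example", "AS64502", "fp:aa11"),
    ("host-d.example", "AS64500", "fp:bb22"),
]

def cert_clusters(records, min_hosts=2):
    """Group hosts by certificate fingerprint and flag fingerprints
    reused across several hosts spanning more than one ASN."""
    by_cert = defaultdict(list)
    for host, asn, fp in records:
        by_cert[fp].append((host, asn))
    flagged = {}
    for fp, entries in by_cert.items():
        asns = {asn for _, asn in entries}
        if len(entries) >= min_hosts and len(asns) > 1:
            flagged[fp] = sorted(host for host, _ in entries)
    return flagged

print(cert_clusters(observations))
# fp:aa11 spans three hosts in three ASNs, so it is flagged.
```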

1.2 Suspicious Issuer or Internal CA

Certificates signed by enterprise interception appliances — such as Palo Alto, Blue Coat, Fortinet, or unknown internal CAs — are highly unusual for public Electrum servers.

Why this matters

Internal enterprise CAs are used for:

  • TLS interception
  • payload inspection
  • traffic logging
  • identity correlation
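A basic issuer screen can be implemented as substring matching against the appliance vendors named above. The marker list here is an illustrative assumption, not an exhaustive blocklist:

```python
# Hypothetical marker list of issuer names associated with
# TLS-interception appliances; a real deployment would maintain
# a curated and regularly updated list.
SUSPICIOUS_ISSUER_MARKERS = ("palo alto", "blue coat", "fortinet", "fortigate")

def issuer_is_suspicious(issuer_cn: str) -> bool:
    """Flag issuers matching known interception-appliance names
    or self-identifying internal enterprise CAs."""
    name = issuer_cn.lower()
    if any(marker in name for marker in SUSPICIOUS_ISSUER_MARKERS):
        return True
    return "internal ca" in name

print(issuer_is_suspicious("FortiGate CA"))       # True
print(issuer_is_suspicious("Let's Encrypt R11"))  # False
```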

1.3 Long-Lifetime Certificates

Certificates valid for 10–30+ years are atypical and often generated by automated honeypots or “set-and-forget” monitoring nodes.

Why this is important

Long-lived certs make long-term tracking trivial:

  • no rotation → consistent fingerprint
  • easier user correlation
  • persistent identity tracking
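The lifetime check is straightforward to express in code. This sketch assumes a 10-year threshold, matching the range described above; Let's Encrypt certificates, by contrast, last roughly 90 days:

```python
from datetime import datetime

def lifetime_years(not_before: datetime, not_after: datetime) -> float:
    """Certificate validity window in (approximate) years."""
    return (not_after - not_before).days / 365.25

def is_long_lived(not_before: datetime, not_after: datetime,
                  threshold_years: float = 10) -> bool:
    """A decade-plus validity window counts as a long-lifetime
    indicator; the 10-year threshold is an illustrative choice."""
    return lifetime_years(not_before, not_after) >= threshold_years

lets_encrypt = (datetime(2024, 1, 1), datetime(2024, 3, 31))   # ~90 days
set_and_forget = (datetime(2020, 1, 1), datetime(2050, 1, 1))  # 30 years
print(is_long_lived(*lets_encrypt))    # False
print(is_long_lived(*set_and_forget))  # True
```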

2. Behavioral Indicators

Modifications in response structure, timing, or protocol consistency can reveal servers designed for fingerprinting, logging, or analytics.

2.1 Modified or Inconsistent JSON Responses

ElectrumX, Electrs, and Fulcrum produce stable, deterministic JSON. When a server deviates — adding fields, altering types, changing key order — it strongly suggests a custom fork or monitoring middleware.
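One way to detect such deviations is to diff a server's response keys against the expected shape. The sketch below uses the result of `blockchain.headers.subscribe`, whose stock form contains exactly the keys `hex` and `height`; the `server_tag` field in the example is a hypothetical injected field:

```python
# Expected top-level keys of a blockchain.headers.subscribe result,
# as produced by stock ElectrumX, Electrs, or Fulcrum.
EXPECTED_KEYS = {"hex", "height"}

def response_deviations(result: dict) -> dict:
    """Report keys a server adds or drops relative to the expected
    deterministic response shape."""
    keys = set(result)
    return {
        "extra": sorted(keys - EXPECTED_KEYS),
        "missing": sorted(EXPECTED_KEYS - keys),
    }

stock = {"hex": "00a0...", "height": 850000}
modified = {"hex": "00a0...", "height": 850000, "server_tag": "node-7"}
print(response_deviations(stock))     # {'extra': [], 'missing': []}
print(response_deviations(modified))  # {'extra': ['server_tag'], 'missing': []}
```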

2.2 Timing Anomalies

If latency spikes only for xpub or history queries, it may indicate logging or fingerprinting triggered by specific request types.
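A minimal detector compares the median latency of history-style queries against a baseline such as `server.ping`. The 5x ratio threshold below is an illustrative assumption, and the latency samples are fabricated for the example:

```python
from statistics import median

def latency_ratio(history_ms, ping_ms):
    """Ratio of median history-query latency to median ping latency."""
    return median(history_ms) / median(ping_ms)

def timing_anomaly(history_ms, ping_ms, threshold=5.0):
    """Flag servers whose history/xpub-style calls are much slower
    than their baseline, which may indicate per-request logging."""
    return latency_ratio(history_ms, ping_ms) >= threshold

ping = [12, 11, 13, 12]            # baseline round-trips (ms)
history = [240, 250, 260, 255]     # spikes only on history queries
print(timing_anomaly(history, ping))  # True
```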

2.3 Selective Rate Limiting

Honeypots and analytics servers often apply rate limits only to scanning-like traffic, not regular wallet usage.

3. Infrastructure Indicators

3.1 Perfect or Near-Perfect Uptime

Community servers naturally reboot or fail occasionally. Surveillance-operated clusters often run with uptime >99.99% on redundant hosts.

3.2 Concentration in ASNs or Hosting Providers

When many servers share the same ASN, subnet, or hostname pattern, it usually points to a single coordinated operator — often an analytics firm or research group.
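Concentration can be measured as the share of servers sitting in the single most common ASN. The sample ASNs below are hypothetical:

```python
from collections import Counter

def asn_concentration(asns):
    """Return the most common ASN and the fraction of servers it
    hosts; values near 1.0 indicate a centrally hosted cluster."""
    counts = Counter(asns)
    top_asn, top_count = counts.most_common(1)[0]
    return top_asn, top_count / len(asns)

sample = ["AS64500"] * 8 + ["AS12345", "AS13335"]
print(asn_concentration(sample))  # ('AS64500', 0.8)
```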

3.3 Tor Exit Node Hosting

Servers hosted on Tor exits with ephemeral IPs are difficult to attribute and commonly used for scanning or surveillance operations.

4. Query-Handling & Fingerprinting Indicators

4.1 Discriminatory Responses by Client Type

Servers that behave differently depending on the client's claimed Electrum version or request ordering are likely performing active client fingerprinting.
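A simple probe for this is to issue identical requests under different claimed client versions and compare the canonicalized responses. The version strings and responses below are hypothetical:

```python
def version_discrimination(responses_by_claimed_version: dict) -> bool:
    """Given {claimed_client_version: canonicalized response text},
    flag servers whose answers vary with the claimed client alone."""
    return len(set(responses_by_claimed_version.values())) > 1

# Hypothetical probe: same query, two claimed client versions.
probe = {
    "electrum/4.5.4": '{"height": 850000}',
    "electrum/3.3.8": '{"height": 850000, "notice": "upgrade"}',
}
print(version_discrimination(probe))  # True
```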

4.2 Address-Type Discrimination

Differentiated behavior for Taproot or SegWit addresses may reveal selective logging or specialized monitoring logic.

4.3 Fingerprinting Behavior

Fingerprinting tests include timing-variation probes, error-message differentiation, and subtle response modifications designed to distinguish scanners from real wallets.

5. Honeypot Suspicion Score (HSS)

Servers receive a cumulative score across four axes. The final score reflects the server's likelihood of operating as a monitoring node or analytics honeypot.

  • TLS Axis: reused certs, self-signed certs, unusual issuers
  • Behavior Axis: timing anomalies, JSON deviations
  • Infrastructure Axis: hosting clusters, TOR behavior, uptime
  • Query Handling Axis: fingerprinting, selective failure

  • 0–30: LOW RISK
  • 31–69: MEDIUM RISK
  • 70–100: HIGH RISK — candidate for deeper forensic analysis
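The scoring and banding can be sketched as follows. Equal 25-point weights per axis are an illustrative assumption; the source defines only the four axes and the 0–100 bands:

```python
def honeypot_suspicion_score(tls, behavior, infra, query):
    """Sum four axis scores (each clamped to 0-25, an assumed
    weighting) and map the 0-100 total to a risk band."""
    score = sum(min(max(axis, 0), 25) for axis in (tls, behavior, infra, query))
    if score <= 30:
        band = "LOW"
    elif score <= 69:
        band = "MEDIUM"
    else:
        band = "HIGH"
    return score, band

print(honeypot_suspicion_score(tls=20, behavior=18, infra=22, query=15))
# (75, 'HIGH')
```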


5.1 Honeypot Score Distribution

This histogram shows the distribution of honeypot scores across all servers.


5.2 Top High-Risk Servers


5.3 Most Common Signals in High-Risk Servers
