- Bloomberg blocks access with a PerimeterX CAPTCHA when it detects “unusual activity,” providing a unique Block Reference ID.
- Common triggers include disabled JavaScript or cookies, VPNs/proxies or shared IPs, and automated or high-rate traffic patterns.
- Access is usually restored by passing the CAPTCHA and normalizing traffic, or by contacting support with the reference ID if the block persists.
- The approach strengthens anti-scraping security but can create friction and false positives for privacy-focused and institutional users.
The HTML snippet shows Bloomberg's bot-detection/CAPTCHA challenge (served via PerimeterX, tag PX8FCGYgk4), which blocks access when its systems detect "unusual activity" from a network. The page displays Block Reference ID 79726f17-ecee-11f0-b2ba-056e5c4a2229 and instructs users to enable JavaScript and cookies, and to contact support if the block persists.
Based on analogous systems, such as Google's CAPTCHA challenge for atypical traffic, the triggers typically include: shared IP address space generating high traffic volumes; use of VPNs or anonymizers; browser extensions or automation scripts; and malformed requests caused by disabled JavaScript or cookies. In Bloomberg's case, the inclusion of a reference ID, the PerimeterX client configuration, and the prompt to enable JavaScript suggest a similar architecture.
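To make the detection side concrete, here is a minimal sketch of how a client could recognize this kind of block page and pull out the reference ID for a support ticket. The sample HTML below is modeled on the elements described in this note (the "px-captcha" div and the Block Reference ID line); the exact markup Bloomberg serves may differ, and `detect_px_block` is a hypothetical helper, not part of any PerimeterX API.

```python
import re

# Fragment modeled on the block page described above; real markup may differ.
BLOCK_PAGE = """
<html><body>
<h1>We've detected unusual activity from your computer network</h1>
<div id="px-captcha"></div>
<p>Block Reference ID: 79726f17-ecee-11f0-b2ba-056e5c4a2229</p>
</body></html>
"""

def detect_px_block(html: str):
    """Return (is_blocked, reference_id) for a suspected PerimeterX block page."""
    blocked = 'id="px-captcha"' in html
    match = re.search(
        r"Block Reference ID:\s*([0-9a-f\-]{36})", html, re.IGNORECASE
    )
    return blocked, (match.group(1) if match else None)

blocked, ref = detect_px_block(BLOCK_PAGE)
```

Capturing the reference ID programmatically matters because, per the block page, it is the one piece of evidence a user can hand to support when self-remediation fails.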
In practical terms, most users will clear the CAPTCHA and be reinstated; blocks usually expire once traffic stabilizes. The risks are repeated false positives, and extended blocks that cut off access to critical content (e.g., market data). Institutional users (firms behind NAT, consumers of scraped data feeds, or operators of automated tools) may be disproportionately affected. These systems are also opaque: if the reason for "unusual activity" is not visible, users cannot self-remedy beyond guessing.
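Since the common remediation advice is to let traffic stabilize and avoid fixed-rate request patterns, a client-side pacer is one way automated or institutional users can reduce the chance of tripping the detector. The sketch below is an assumption about what "normalizing traffic" might look like in code; the interval and jitter values are illustrative, not thresholds Bloomberg has published.

```python
import time
import random

class Throttle:
    """Client-side pacing: enforce a minimum interval between requests,
    with random jitter so traffic looks less like a fixed-rate bot."""

    def __init__(self, min_interval: float = 2.0, jitter: float = 0.5):
        self.min_interval = min_interval  # seconds between requests
        self.jitter = jitter              # extra random delay, in seconds
        self._last = 0.0

    def wait(self) -> float:
        """Sleep until the next request is allowed; return seconds slept."""
        now = time.monotonic()
        delay = self.min_interval + random.uniform(0, self.jitter)
        remaining = (self._last + delay) - now
        if remaining > 0:
            time.sleep(remaining)
        self._last = time.monotonic()
        return max(remaining, 0.0)
```

A caller would invoke `wait()` before each request; the first call returns immediately and later calls are spaced out. This does not guarantee avoiding a block, but it addresses the "high-rate traffic" trigger directly.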
Strategically, as content providers invest in tighter security, bot-detection systems like this will become standard. That raises trade-offs: security vs. UX, how much friction is acceptable, how to accommodate legitimate automation and aggregators, and privacy vs. transparency. For Bloomberg and similar publishers, a high-security posture protects against scraping, DDoS, and abuse, but risks alienating advanced users who rely on automation or privacy tools. Key open questions: What thresholds or signals trigger the block? How can legitimate users challenge false positives? Does the block discriminate against certain network architectures? And what data is retained after blocking (the reference ID suggests logging)?
Supporting Notes
- Bloomberg’s HTML shows the block page with heading “We’ve detected unusual activity from your computer network”, a CAPTCHA div (id “px-captcha”) referencing PerimeterX tags (PX8FCGYgk4), and a unique Block Reference ID: 79726f17-ecee-11f0-b2ba-056e5c4a2229.
- The instructions state that users must have JavaScript and cookies enabled, confirming standard bot-detection prerequisites.
- Analogous messages from Google follow the same structure: identifying network traffic violations, offering a CAPTCHA to regain access, warning about shared/networked IPs or automated tools, and noting the block is temporary and lifts once the offending traffic stops.
- Advice from non-Bloomberg sources recommends turning on JavaScript and cookies, switching off VPNs/proxies, clearing cache, limiting request rates, or changing networks—paralleling Bloomberg’s instruction set for remediation.
- Nothing in the HTML indicates the block is permanent; references to "temporary" or "until traffic stabilizes" in analogous systems, plus Bloomberg's offer to contact support with the block ID, suggest the block is resolvable.
- Potential for false positives is well documented in forums: users sharing IPs or using privacy tools often get flagged incorrectly.
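Given the notes above that blocks appear temporary and expire once traffic quiets down, one reasonable client strategy is to back off exponentially after seeing a block page rather than retrying immediately (which would itself look like automated traffic). This is a sketch under those assumptions; `fetch` and `is_blocked` are caller-supplied placeholders, and no real Bloomberg endpoint or documented retry policy is assumed.

```python
import time

def fetch_with_backoff(fetch, is_blocked, max_retries=4, base_delay=1.0):
    """Retry a fetch with exponential backoff while a block page is
    returned. Delays grow as base_delay * 2**attempt, giving the
    detector time to see traffic stabilize before the next try."""
    for attempt in range(max_retries + 1):
        resp = fetch()
        if not is_blocked(resp):
            return resp
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(
        "Still blocked after retries; contact support with the Block Reference ID"
    )
```

If the block persists after the retries are exhausted, the remaining recourse, per the block page itself, is to contact support with the reference ID.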
