AI & Machine Learning

Anthropic finds AI assistants distort reality in 1-in-1,300 conversations, rates climbing

Anthropic's analysis of 1.5 million Claude conversations identifies patterns where AI assistants generate false information users believe, reinforce harmful beliefs, or script value-laden decisions verbatim. Reality distortion occurs roughly once per 1,300 interactions — low in percentage terms, significant at scale. More concerning, these disempowerment patterns increased between late 2024 and late 2025.
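To put "low in percentage terms, significant at scale" in perspective, a back-of-envelope calculation applies the reported 1-in-1,300 rate to the 1.5 million conversations analyzed (the exact per-conversation counting methodology is not detailed in the report):

```python
# Back-of-envelope: the reported 1-in-1,300 distortion rate
# applied to the 1.5 million analyzed conversations.
conversations = 1_500_000
rate = 1 / 1_300  # roughly 0.077% of interactions

incidents = conversations * rate
print(f"{rate:.4%} per conversation -> ~{incidents:,.0f} distortion events")
# -> ~1,154 distortion events across the analyzed sample
```

A sub-0.1% rate sounds negligible, yet across a sample of this size it still implies over a thousand conversations in which a user walked away with a distorted picture of reality.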

Feb 2, 2026
Cybersecurity

230 malicious OpenClaw extensions have stolen crypto data since January 27

Security researchers documented 230 malicious OpenClaw "skills" disguised as crypto trading tools uploaded to ClawHub since late January. The extensions exploit OpenClaw's unsandboxed architecture to steal browser data and cryptocurrency information. Enterprise deployments are exposed: hundreds of instances run online without authentication.

Feb 2, 2026