What Happened
Two VS Code extensions marketed as AI coding assistants have been caught systematically exfiltrating developer data to servers in China. Combined installs: 1.5 million. Status: both still available in the marketplace.
ChatGPT - 中文版 ("ChatGPT - Chinese Edition"; publisher WhenSunset, 1.35M installs) and ChatMoss/CodeMoss (publisher zhukunpeng, 150K installs) contain identical malicious code. Security firm Koi, which discovered the campaign and dubbed it MaliciousCorgi, reported its findings on January 23. Microsoft has yet to remove either extension.
The extensions actually work - they provide autocomplete, answer coding questions, and explain errors. That's what makes the cover effective. While the legitimate AI features send roughly 20 lines of cursor context (standard practice for such tools), three hidden channels operate simultaneously.
The Exfiltration Mechanics
Channel 1: Real-time file monitoring. Opening any file triggers immediate Base64-encoded transmission to aihao123.cn. Every file, every edit, captured live.
Channel 2: Server-controlled harvesting. Remote commands can trigger bulk collection of up to 50 workspace files - no user interaction required. Excludes only images.
Channel 3: Profiling engine. A hidden iframe loads four Chinese analytics SDKs (Zhuge.io, GrowingIO, TalkingData, Baidu Analytics) inside your editor. The page title: "ChatMoss数据埋点" (ChatMoss Data Tracking). They profile users before selecting exfiltration targets.
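Koi's report describes the Channel 1 transmission as Base64-encoded file content. A minimal sketch of what such a payload might look like - the field names `file` and `data` are assumptions, and the actual POST to aihao123.cn is deliberately omitted:

```python
import base64
from pathlib import Path

def build_exfil_payload(path: str) -> dict:
    """Illustrate the shape of a Channel 1-style payload: the full
    file contents are Base64-encoded before transmission.
    (Illustration only - no network call is made here.)"""
    raw = Path(path).read_bytes()
    return {
        "file": path,                                   # hypothetical field name
        "data": base64.b64encode(raw).decode("ascii"),  # entire file, not 20 lines
    }
```

The point of the sketch: Base64 is encoding, not encryption, so this traffic is trivially decodable by anyone monitoring the wire - and by the receiving server.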
What's at Risk
Typical workspace exposure: .env files with API keys, database credentials in config files, credentials.json for cloud services, SSH keys, proprietary source code. The harvest function targets everything except images.
This follows a separate incident where TigerJack extensions (17K installs) performed similar credential theft plus cryptocurrency mining.
Immediate Actions
Check now: Search installed extensions for whensunset.chatgpt-china and zhukunpeng.chat-moss. If found, uninstall and rotate all credentials - API keys, tokens, passwords.
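The quickest check is the VS Code CLI: `code --list-extensions` prints installed extension IDs. Where the CLI isn't on PATH, scanning the extensions directory works too. A minimal sketch, assuming the default `~/.vscode/extensions` layout of `<publisher>.<name>-<version>` folders:

```python
from pathlib import Path

FLAGGED = ("whensunset.chatgpt-china", "zhukunpeng.chat-moss")

def find_flagged(ext_dir: str) -> list[str]:
    """Return installed extension folders matching the known-bad IDs."""
    hits = []
    for entry in Path(ext_dir).iterdir():
        # Folder names look like publisher.name-1.2.3; match on the ID prefix.
        if entry.is_dir() and entry.name.lower().startswith(FLAGGED):
            hits.append(entry.name)
    return hits
```

Run it against `~/.vscode/extensions` (or the equivalent for VS Code Insiders / remote setups); any hit means uninstall and rotate credentials.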
Audit AI extensions: Marketplace approval doesn't guarantee safety. 1.5M installs and positive reviews didn't prevent this. Review publisher identity, check actual permissions, monitor what data extensions access.
Consider architecture changes: CLI-based AI tools (Claude Code, OpenAI Codex) operate in more controlled environments - they read only explicitly provided files, not entire workspaces via background monitoring.
Separate secrets: Use environment variable managers, secret vaults, or .gitignore'd configs. Don't store credentials in workspace files.
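One way to keep credentials out of harvestable workspace files is to read them from the environment (populated by a secret manager or shell profile) and fail loudly when they're missing. A minimal sketch - the variable name `DB_PASSWORD` is just an example:

```python
import os

def require_secret(name: str) -> str:
    """Fetch a credential from the environment instead of a workspace file.
    Raises immediately if unset, so missing secrets surface at startup
    rather than sitting in a .env file an extension can read."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Secret {name} not set; configure it outside the workspace")
    return value

# Usage: password = require_secret("DB_PASSWORD")
```

This doesn't stop a malicious extension from reading source code, but it keeps long-lived credentials out of the files the harvest function targets.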
Monitor traffic: Tools like Little Snitch or Wireshark reveal what your development environment communicates.
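Beyond GUI tools, the text output of `netstat`, `lsof -i`, or DNS logs can be scanned for the known exfiltration domain. A minimal sketch that flags log lines mentioning the IOC domain, assuming plain-text connection logs as input:

```python
IOC_DOMAINS = ("aihao123.cn",)

def flag_connections(log_text: str) -> list[str]:
    """Return lines from connection or DNS logs that mention a known IOC domain."""
    return [
        line for line in log_text.splitlines()
        if any(domain in line for domain in IOC_DOMAINS)
    ]
```

Any match from a developer workstation warrants the same response as finding the extensions installed: remove, then rotate everything.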
The Pattern
We're seeing supply-chain attacks targeting the AI tooling gold rush. Developers install extensions faster than marketplace vetting can scale. These passed every check - real functionality, genuine user reviews, legitimate features.
They also had real spyware.
The lesson: marketplace approval isn't sufficient due diligence when tools access your entire codebase. The extensions you install today can read everything you build tomorrow.
IOCs: VS Code extensions whensunset.chatgpt-china, zhukunpeng.chat-moss | Domain: aihao123.cn
Source: Koi Security