Weekly Musings Top 10 AI Security Wrapup: Issue 23 | January 23–29, 2026
Fortinet Zero-Days, Moltbot's Shadow IT Crisis, and DeepSeek's Million-Record Leak
Another week, another Fortinet zero-day. At this point, I’m starting to wonder if Fortinet’s security team schedules their patch releases around my newsletter deadlines. But this week brought more than the usual perimeter carnage. 22% of enterprises have employees running Moltbot without IT approval, and the tool’s security architecture is a mess of exposed admin ports and plaintext credentials. North Korean hackers deployed what appears to be the first AI-generated APT malware caught in the wild. And DeepSeek left over a million user conversations sitting in a publicly accessible database. If you’re still wondering whether AI security belongs on your board’s agenda, stop wondering.
The through-line connecting this week’s chaos is execution speed. Attackers are automating faster than defenders can patch. AI coding assistants are spreading faster than security teams can evaluate them. Criminal infrastructure now operates at scales that require coordinated industry response. The gap between vulnerability disclosure and weaponization continues shrinking toward zero.
For CISOs, the implications are clear: your security program’s velocity determines your exposure window. Patch management isn’t a quarterly exercise anymore. Neither is shadow IT discovery. Both are competitive advantages.
1. Fortinet’s 14th Zero-Day in Four Years Proves Perimeter Security is a Leaky Boat
Fortinet disclosed CVE-2026-24858, a critical authentication bypass vulnerability in FortiCloud SSO affecting FortiOS, FortiManager, FortiAnalyzer, FortiProxy, and FortiWeb. The flaw earned a CVSS score of 9.4. Attackers exploited it in the wild before disclosure, creating backdoor admin accounts and exfiltrating device configurations. CISA added it to the Known Exploited Vulnerabilities catalog on January 27 with a February 13 remediation deadline. Arctic Wolf observed attacks executing within seconds of identifying vulnerable targets (BleepingComputer). Coalition Insurance noted that this is Fortinet’s 14th zero-day advisory in less than 4 years (CyberScoop).
Why it matters
Attackers targeted fully patched systems, meaning regular patching alone provided no protection
Automated exploitation windows have collapsed to seconds, not hours or days
Configuration exfiltration enables follow-on attacks even after patching
What to do about it
Apply the January 28 patches immediately if you haven’t already
Audit all Fortinet admin accounts created since January 20 for unauthorized entries
Review VPN configurations for unexpected changes and rotate credentials for affected devices
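One low-tech way to run the admin-account audit is to diff the admin users in a current config backup against a known-good baseline. This is a minimal sketch: the FortiOS-style config snippet and the baseline account names are illustrative assumptions, so adapt the parsing to your actual export format.

```python
import re

def extract_admin_users(config_text: str) -> set:
    """Pull admin usernames from the 'config system admin' block
    of a FortiOS-style text config backup."""
    block = re.search(r"config system admin\n(.*?)^end", config_text,
                      re.S | re.M)
    if not block:
        return set()
    return set(re.findall(r'^\s*edit "([^"]+)"', block.group(1), re.M))

# Known-good baseline captured before the exploitation window
# (hypothetical account names).
BASELINE = {"admin", "netops_ro"}

sample = '''config system admin
    edit "admin"
        set accprofile "super_admin"
    next
    edit "fgt-tech-support"
        set accprofile "super_admin"
    next
end'''

unexpected = extract_admin_users(sample) - BASELINE
print(sorted(unexpected))  # → ['fgt-tech-support']
```

Anything in `unexpected` warrants investigation, especially super_admin profiles created after January 20.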
Rock’s Musings
Fourteen zero-days in four years. Let that sink in. At some point, we need to have an honest conversation about whether network perimeter appliances create more attack surface than they protect. These devices sit at the edge of your network with broad privileges, and every vendor in this space has gotten hammered.
I’m not saying ditch your firewall tomorrow, but if your security strategy depends on the assumption that your perimeter devices are secure, you’re building on sand. The attackers who exploited this flaw created backdoor accounts and exfiltrated configs before Fortinet knew the vulnerability existed. Your zero-trust architecture shouldn’t be a slide deck. It should be your insurance policy for exactly this scenario. For more on building resilient security architecture, visit RockCyber.
2. Moltbot’s Exploding Security Crisis Threatens Every Organization Using AI Coding Assistants
While the fake VS Code extension grabbed headlines, the deeper story involves Moltbot’s own security architecture. Security researchers have found hundreds of exposed Moltbot instances running with unauthenticated admin ports accessible from the internet. Credentials and API keys appear in plaintext in configuration files. Token Security found that 22% of its enterprise customers have employees running Moltbot without IT knowledge or approval (The Register). A proof-of-concept supply chain attack via MoltHub achieved 4,000 downloads in under eight hours. Google VP of Security Engineering Heather Adkins publicly warned, “Don’t run Clawdbot,” referring to the tool by its former name.
Why it matters
Moltbot has direct access to source code, development infrastructure, and secrets
Shadow IT adoption has outpaced security evaluation for these tools
The attack surface includes not just the tool itself but the entire ecosystem of extensions and integrations
What to do about it
Conduct an immediate audit of Moltbot usage across your organization including personal devices
If Moltbot is approved for use, ensure instances run behind authentication and network segmentation
Establish formal evaluation and approval processes for AI coding assistants before developers adopt them independently
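Verifying the segmentation point above can start with a plain TCP reachability check run from an untrusted network position. The admin port number below is a placeholder assumption, and the target addresses are documentation-space stand-ins; substitute your deployment's actual port and hosts.

```python
import socket

ADMIN_PORT = 18789  # placeholder; use your deployment's actual admin port

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds from here."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical hosts to sweep from OUTSIDE the developer segment;
# any True result means the admin port is reachable where it shouldn't be.
hosts = ["203.0.113.10", "203.0.113.11"]
exposed = {h: port_open(h, ADMIN_PORT) for h in hosts}
```

A reachable port isn't proof of compromise, but an unauthenticated admin interface visible outside its segment is exactly the misconfiguration researchers keep finding.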
Rock’s Musings
This is the story that should terrify every CISO, and it’s barely making news outside security circles.
Moltbot and similar AI coding assistants represent a category of tooling that didn’t exist two years ago. They’re now running on developer machines across your organization, often without security review, often with access to your most sensitive assets: source code, API keys, production credentials, deployment pipelines.
The 22% figure from Token Security means nearly a quarter of enterprises have employees running AI coding tools their IT departments don’t know about. These tools ask for broad file system access, network connectivity, and permission to execute code. When one of them is compromised or misconfigured, attackers get everything.
We’ve spent years hardening our software supply chains against malicious packages and compromised libraries. AI coding assistants represent the same class of risk but with broader access and less scrutiny. If you don’t have visibility into what AI tools your developers are running, you don’t understand your attack surface. Fix that before you become this year’s case study.
3. Sandworm Drops DynoWiper on Poland’s Power Grid, Marking Decade Anniversary of Ukraine Blackout
Russia’s GRU-linked Sandworm group deployed a new wiper malware called DynoWiper against Poland’s power grid on December 29-30, 2025. ESET publicly attributed the attack on January 24, 2026. The attack targeted two heat-and-power plants plus renewable energy management systems. Polish authorities thwarted the attack before it caused disruption, but stated it could have affected 500,000 people (ESET). The timing coincided with the 10-year anniversary of Sandworm’s 2015 BlackEnergy attack, which caused Ukraine’s first malware-induced blackout. Despite the successful defense, Dragos reported that some equipment was damaged beyond repair (The Register).
Why it matters
This is the first destructive cyberattack against a NATO member’s critical infrastructure attributed to a nation-state
Sandworm’s operational tempo and capability continue escalating despite years of sanctions and indictments
Even unsuccessful attacks can cause physical equipment damage
What to do about it
Critical infrastructure operators should review network segmentation between IT and OT environments
Implement anomaly detection specifically for wiper malware behaviors, including mass file deletion and MBR overwrites
Coordinate with national CERTs and ISACs for threat intelligence on Sandworm TTPs
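To illustrate the anomaly-detection idea: wiper activity shows up as an abnormal file-deletion rate. The polling sketch below is deliberately minimal and the threshold is a tuning assumption; production detection belongs in your EDR, not a script, and MBR-overwrite detection needs lower-level telemetry than this.

```python
import os
import time

def snapshot(root: str) -> set:
    """All file paths currently under root."""
    return {os.path.join(d, f)
            for d, _, files in os.walk(root) for f in files}

def deletion_rate(before: set, after: set, interval_s: float) -> float:
    """Files deleted per second between two snapshots."""
    return len(before - after) / interval_s

THRESHOLD_PER_SEC = 50.0  # tuning assumption; wipers delete far faster

def looks_like_wiper(root: str, interval_s: float = 5.0) -> bool:
    """Poll once and flag if the deletion rate exceeds the threshold."""
    before = snapshot(root)
    time.sleep(interval_s)
    return deletion_rate(before, snapshot(root), interval_s) > THRESHOLD_PER_SEC
```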
Rock’s Musings
Poland won this fight. That’s worth acknowledging. Their €1 billion cybersecurity budget and daily experience fending off 20-50 attacks created the defensive muscle memory needed to stop Sandworm’s wiper before it caused blackouts. Most countries facing this threat don’t have that operational maturity.
The anniversary timing isn’t a coincidence. It’s psychological warfare. Ten years after proving they could cut off Ukraine’s power, Sandworm wanted to demonstrate they could reach NATO infrastructure. They failed operationally but succeeded at the message: we’re here, we’re capable, and your critical infrastructure isn’t safe. If you run OT environments, assume you’re already a target and plan your defenses accordingly.
4. DeepSeek Leaves Over a Million User Conversations in Publicly Accessible Database
Wiz Research discovered on January 29 that the Chinese AI company DeepSeek had left a ClickHouse database publicly accessible and unauthenticated. The exposed data included over one million records containing chat histories, API keys, backend logs, and system metadata (Wiz). The database was secured after Wiz’s disclosure, but the exposure duration remains unknown. This discovery came amid ongoing regulatory scrutiny of DeepSeek’s data practices, including an Italian investigation into GDPR compliance and government bans in Australia, Taiwan, and South Korea (The Hacker News). Cisco security testing found DeepSeek’s R1 model failed to block any jailbreak attempts, suggesting broader security architecture issues.
Why it matters
Chat histories with an AI assistant can contain sensitive personal, business, and technical information
API key exposure enables unauthorized access to paid services and potential impersonation
The combination of weak application security and aggressive data collection creates compounding risks
What to do about it
Audit your organization for unauthorized DeepSeek usage, particularly among technical staff
If you’ve used DeepSeek, assume any data shared with it may have been exposed, and rotate relevant credentials
Update acceptable use policies to address emerging AI services and their data handling practices
Rock’s Musings
DeepSeek burst onto the scene as the cheap, capable Chinese alternative to OpenAI. Security researchers have been poking at it for weeks, and every examination reveals new problems. Cisco found it fails 100% of jailbreak tests. KELA demonstrated it can be manipulated to produce dangerous outputs. Now, Wiz shows they left the database unlocked.
This isn’t a case of sophisticated attackers finding obscure flaws. This is a basic operational security failure. If DeepSeek can’t keep its own database locked, why would you trust it with your organization’s conversations? I get the appeal of cost-effective AI tools. But cheap becomes expensive fast when your data ends up in an open database. For guidance on evaluating AI tool risks, check out Rock’s Cyber Musings.
5. Google Dismantles IPIDEA Proxy Network Used by 550+ Threat Groups
Google’s Threat Intelligence Group announced on January 29 the disruption of IPIDEA, one of the world’s largest residential proxy networks. In a single week in January 2026, Google observed more than 550 threat groups from China, DPRK, Iran, and Russia using IPIDEA exit nodes (Google Cloud Blog). The operation involved legal action against command-and-control domains, coordination with Cloudflare for DNS disruption, and Google Play Protect blocking infected apps. Google identified 600+ trojanized Android apps and 3,000+ Windows binaries distributing IPIDEA’s proxy software. The disruption reduced IPIDEA’s available device pool by millions (The Register).
Why it matters
Residential proxies allow attackers to hide malicious traffic behind legitimate consumer IP addresses, evading geographic and reputation-based blocking
The scale of 550+ threat groups using a single infrastructure reveals how criminals share operational resources
Users whose devices were enrolled became unwitting participants in attacks and exposed their own networks to compromise
What to do about it
Block known IPIDEA IP ranges at your network perimeter where feasible
Implement network traffic analysis capable of detecting residential proxy behavior patterns
Educate users about the risks of apps promising to “monetize” their bandwidth
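For the perimeter-blocking bullet, CIDR matching is the core primitive. The ranges below are RFC 5737 documentation prefixes standing in for whatever IPIDEA ranges your threat-intel feed actually publishes.

```python
import ipaddress

# Placeholder ranges (documentation space); load real IPIDEA
# prefixes from your threat-intel feed.
BLOCKED_NETS = [ipaddress.ip_network(n)
                for n in ("203.0.113.0/24", "198.51.100.0/25")]

def is_blocked(ip: str) -> bool:
    """True if ip falls inside any blocked prefix."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETS)
```

In practice you would push these prefixes to the firewall rather than filter in application code, but the same function is handy for log enrichment and retro-hunting past authentication events.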
Rock’s Musings
550 threat groups. One proxy network. One week. That’s the scale of the shared infrastructure criminals operate today.
Google deserves credit for coordinating this disruption. Taking down distributed infrastructure requires legal action, technical coordination across multiple companies, and sustained effort. But IPIDEA isn’t unique. It operated 19 residential proxy brands under centralized control. When this one gets degraded, criminals will migrate to competitors.
The deeper problem is the business model: pay app developers to embed proxy SDKs, enroll millions of devices without clear user consent, then sell access to anyone willing to pay. Until we address that economic model, these networks will keep regenerating. Your defense can’t depend on Google’s enforcement efforts. Assume attackers will always have residential IP access and design your detection accordingly.
6. North Korean Konni Group Deploys AI-Generated Malware Against Blockchain Developers
Check Point Research reported on January 23 that the North Korean Konni group (also known as Opal Sleet/TA406) is using AI-generated PowerShell malware to target blockchain developers in Japan, Australia, and India. The malware exhibits clear signs of LLM-assisted development, including structured documentation, modular code layout, and placeholder comments like “# ← your permanent project UUID” that are characteristic of AI-generated code (BleepingComputer). Attacks begin with Discord-hosted phishing links delivering ZIP archives containing malicious LNK shortcuts. This campaign marks a shift from Konni’s traditional focus on South Korean diplomatic targets toward APAC blockchain and cryptocurrency developers (Check Point).
Why it matters
This is among the first documented cases of APT groups using AI to accelerate malware development
AI-assisted malware can iterate faster and customize more easily, challenging signature-based detection
The targeting shift toward blockchain developers indicates North Korea’s continued focus on cryptocurrency theft
What to do about it
Brief blockchain and cryptocurrency development teams on targeted spear-phishing tactics
Block Discord CDN URLs at the perimeter or implement additional inspection for files downloaded from Discord
Update endpoint detection rules to identify PowerShell behaviors associated with AI-generated code patterns
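If outright blocking Discord's CDN isn't feasible for your organization, flagging downloads from it in proxy logs is a reasonable middle ground. The hostnames are Discord's real file-delivery domains; the log line format here is a hypothetical simplification.

```python
import re

# Discord file-delivery hosts seen in phishing delivery chains.
DISCORD_CDN = re.compile(
    r"https?://(?:cdn|media)\.discordapp\.(?:com|net)/\S+", re.I)

def flag_discord_downloads(log_lines):
    """Return proxy-log lines that reference Discord CDN URLs."""
    return [ln for ln in log_lines if DISCORD_CDN.search(ln)]

sample = [
    "GET https://cdn.discordapp.com/attachments/1/2/invoice.zip 200",
    "GET https://example.com/index.html 200",
]
```

Pay particular attention to archive and shortcut file extensions in the flagged URLs, since this campaign delivered ZIPs containing malicious LNK files.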
Rock’s Musings
We’ve been warning about AI-assisted malware as a future threat. The future arrived this week.
What makes this significant isn’t that the malware is dramatically more sophisticated. It’s that the development process accelerated. Clean documentation. Modular structure. Placeholder comments explaining customization. The Konni operators didn’t become better programmers. They got faster.
For defenders, the implication is uncomfortable. Threat actors who previously took weeks to develop custom tooling can now iterate in days or hours. The asymmetry that favored attackers just got worse. Your threat models need updating. Your detection capabilities need to assume a higher adversary velocity. And your developers working on anything cryptocurrency-related need to understand they’re targets, not just builders.
7. CISA Adds VMware vCenter Vulnerability to KEV Catalog After Active Exploitation Confirmed
CISA added CVE-2024-37079, a critical heap overflow vulnerability in VMware vCenter Server, to its Known Exploited Vulnerabilities catalog on January 23, 2026. The vulnerability carries a CVSS score of 9.8 and allows unauthenticated remote code execution via specially crafted network packets (CISA). Broadcom updated its security advisory on January 23 to confirm active exploitation in the wild, nineteen months after patches were first released in June 2024. Federal civilian agencies must remediate by February 13, 2026 (BleepingComputer). The vulnerability affects the DCE/RPC protocol implementation in vCenter Server.
Why it matters
vCenter Server is the management plane for VMware environments, making it a high-value target
The nineteen-month gap between patch availability and confirmed exploitation highlights the cost of delayed patching
Unauthenticated RCE vulnerabilities in the management infrastructure enable complete environment compromise
What to do about it
Verify all vCenter Server instances are patched to versions released after June 2024
Restrict network access to vCenter management interfaces to authorized administrator networks only
Review vCenter audit logs for evidence of exploitation attempts since June 2024
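The network-restriction bullet can be spot-checked against your firewall ruleset. The admin subnet below is a hypothetical assumption; the ports are vCenter's main HTTPS UI/API (443) and the appliance-management interface, VAMI (5480).

```python
import ipaddress

ADMIN_NET = ipaddress.ip_network("10.10.0.0/24")  # hypothetical admin subnet
MGMT_PORTS = {443, 5480}  # vCenter UI/API and appliance management (VAMI)

def rule_violates(src_cidr: str, dst_port: int) -> bool:
    """True if a permit rule would let non-admin sources reach
    vCenter management ports."""
    src = ipaddress.ip_network(src_cidr)
    return dst_port in MGMT_PORTS and not src.subnet_of(ADMIN_NET)
```

Running every permit rule that targets your vCenter hosts through a check like this surfaces the any-any rules that turn an unauthenticated RCE into a full environment compromise.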
Rock’s Musings
This vulnerability was patched nineteen months ago. Let that be your reminder that attackers don’t follow your quarterly patch cycle.
The pattern is familiar: critical vulnerability disclosed, patch released, organizations deprioritize because “we haven’t seen exploitation yet,” then CISA adds it to the KEV catalog confirming it’s being actively exploited. The time to patch was June 2024. The second-best time is right now.
vCenter compromise gives attackers access to your entire virtualized infrastructure. They can access VMs, exfiltrate data, and move laterally without touching the network. If you’ve been treating vCenter patching as lower priority because it’s “just management infrastructure,” you’ve had the risk assessment backwards.
8. Fake AI Coding Assistant Drops Remote Access Malware on VS Code Users
On January 27, Aikido Security detected a malicious Visual Studio Code extension named “ClawdBot Agent - AI Coding Assistant” mimicking the legitimate Moltbot coding tool. The extension installed ConnectWise ScreenConnect, giving attackers remote access to victims’ machines (The Hacker News). Microsoft removed the extension from the VS Code Marketplace after notification. The attack capitalized on the popularity of Moltbot, which has over 85,000 GitHub stars. This incident follows broader concerns about Moltbot’s security posture, including hundreds of exposed instances with unauthenticated admin ports and API keys stored in plaintext (BleepingComputer).
Why it matters
Developer tools represent high-value targets with access to source code, secrets, and infrastructure credentials
The VS Code Marketplace’s extension approval process allowed a malicious extension to reach users
Supply chain attacks targeting developers can compromise entire software ecosystems
What to do about it
Audit VS Code extensions installed across your development environment for suspicious entries
Implement allowlisting for approved extensions rather than relying on marketplace vetting alone
Scan for ConnectWise ScreenConnect installations that weren’t deployed by IT
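The extension audit is scriptable with VS Code's own CLI (`code --list-extensions` is a documented flag). The approved list below is a hypothetical example; populate it from your actual approved-tooling register.

```python
import subprocess

# Hypothetical allowlist; populate from your approved-tooling register.
APPROVED = {"ms-python.python", "dbaeumer.vscode-eslint"}

def unapproved(installed) -> list:
    """Extension IDs present on the machine but not on the allowlist."""
    return sorted(set(installed) - APPROVED)

def installed_extensions() -> list:
    """Ask the VS Code CLI for installed extension IDs."""
    out = subprocess.run(["code", "--list-extensions"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()
```

Run `unapproved(installed_extensions())` on each developer machine (or push it through your MDM), and investigate anything whose name mimics a popular tool.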
Rock’s Musings
Developers downloading coding assistants from official marketplaces expect some baseline vetting. That expectation is increasingly misplaced. The VS Code Marketplace, npm, PyPI, and every other package repository has become an attack surface.
This particular attack got caught quickly because Aikido was watching. How many similar attacks have succeeded without detection? The extension was named to exploit confusion with a popular legitimate tool. The payload was commodity remote access software. This isn’t sophisticated. It’s opportunistic, scalable, and apparently effective enough to keep happening.
If your developers install their own tools without security review, you have supply chain exposure you’re not measuring. The answer isn’t locking everything down and killing productivity. It’s building detection capabilities that catch malicious tooling when it inevitably slips through.
9. WEF Report: 94% of Leaders Expect AI to Dominate Cybersecurity in 2026
The World Economic Forum released its Global Cybersecurity Outlook 2026 report in January, surveying executives and security leaders worldwide. Key findings: 94% expect AI to be the most consequential force shaping cybersecurity this year. 87% reported experiencing rising AI-related vulnerabilities in 2025. Cyber-enabled fraud has overtaken ransomware as CEOs’ top concern. 31% of respondents expressed low confidence in their nation’s ability to respond to critical infrastructure attacks, up from 26% the previous year (World Economic Forum). The report noted that 64% of organizations now factor geopolitically motivated attacks into their risk strategies.
Why it matters
The perception gap between AI as an opportunity and AI as a threat continues widening
CEO concern shifting from ransomware to fraud reflects changing attacker economics
Declining confidence in national cyber preparedness signals growing infrastructure vulnerability
What to do about it
Update board reporting to address AI-specific security risks alongside traditional threat categories
Review fraud detection capabilities given the shifting attacker focus toward BEC and social engineering
Incorporate geopolitical threat intelligence into risk assessments for organizations with international exposure
Rock’s Musings
Survey says: everyone thinks AI will change everything. This tracks with what I hear from every CISO I talk to. The challenge is translating that awareness into operational reality.
What caught my attention is the shift from ransomware to fraud. Ransomware gets the headlines and the policy attention. But fraud, phishing, and BEC attacks now worry CEOs more. Why? Because they’re working. Ransomware payments are declining as organizations improve backups and refuse to pay. Fraud attacks are getting more convincing thanks to deepfakes and AI-generated content.
The 31% expressing low confidence in national cyber preparedness should concern policymakers. If a third of security leaders don’t trust their country’s ability to respond to critical infrastructure attacks, that’s a vote of no confidence in government cyber capabilities. Poland just demonstrated that good national defense is possible. Most countries haven’t made those investments.
10. EU AI Act High-Risk Deadlines Loom as Code of Practice Deadline Passes
The EU AI Office’s first draft Code of Practice on AI-generated content transparency closed for feedback on January 23, 2026. High-risk AI system provisions take effect August 2, 2026, giving organizations roughly six months to achieve compliance. The EU AI Office gains full enforcement authority on that date, with fine authority reaching 3% of global turnover (EU AI Act). The European Commission’s Digital Omnibus proposal aims to streamline compliance requirements across overlapping regulations, including the AI Act, GDPR, and Digital Services Act. Each member state must establish at least one regulatory sandbox by August 2, 2026 (K&L Gates).
Why it matters
Six months to compliance is insufficient for organizations that haven’t started preparation
3% global turnover fines rival GDPR enforcement exposure for major violations
Regulatory sandbox requirements signal an EU intent to enable compliant AI development, not just prohibit violations
What to do about it
Inventory AI systems and classify against EU AI Act risk categories immediately if you haven’t already
Engage legal counsel with EU AI Act expertise to assess compliance gaps for high-risk systems
Monitor member state sandbox programs for compliance pathways relevant to your AI use cases
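A first-pass inventory triage can be as simple as mapping each system's domain onto the Act's risk tiers. The domain list is an abridged reading of Annex III and the logic is illustrative only; final classification of any real system needs legal review.

```python
# Abridged Annex III domains that typically trigger "high-risk"
# classification (verify against the Act's full text).
HIGH_RISK_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def classify(system: dict) -> str:
    """Rough triage of one inventory record into an EU AI Act tier."""
    if system.get("prohibited_practice"):
        return "prohibited"
    if system.get("domain") in HIGH_RISK_DOMAINS:
        return "high-risk"
    if system.get("interacts_with_humans"):
        return "limited-risk"
    return "minimal-risk"

inventory = [
    {"name": "resume-screener", "domain": "employment"},
    {"name": "support-chatbot", "domain": "retail",
     "interacts_with_humans": True},
]
```

Even a crude pass like this tells you where the August 2 clock is actually ticking, which is the point of doing the inventory now rather than in July.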
Rock’s Musings
Six months until the high-risk provisions take effect. If your organization uses AI in healthcare, education, employment, or critical infrastructure, and you’re reading about EU AI Act compliance for the first time, you’re behind.
I’ve talked to organizations that assume they can treat AI compliance like early GDPR: wait for enforcement actions, learn from others’ mistakes, then respond. That worked poorly for GDPR. It will work less well here because the AI Act’s technical requirements require architectural changes, not just policy updates.
The silver lining is regulatory sandboxes. The EU is signaling that it wants to enable compliant AI development, not just punish violations. If you’re building AI systems that might face scrutiny, engaging with sandbox programs early gives you a compliance pathway and a regulator relationship that pure avoidance strategies lack.
The One Thing You Won’t Hear About But You Need To
WhatsApp Launches Lockdown Mode for High-Risk Users
Meta announced WhatsApp’s new “Strict Account Settings” feature on January 27, providing a one-click security mode for journalists, activists, and other high-risk users. The feature blocks media from unknown senders, disables link previews, silences calls from unknown contacts, and enables two-step verification by default (BleepingComputer). The announcement follows similar offerings from Apple (Lockdown Mode, 2022) and Google (Android Advanced Protection Mode, 2025). Citizen Lab researcher John Scott-Railton called the announcement “a very welcome development” (TechRepublic). The rollout comes days after Meta faced a lawsuit alleging false privacy claims, which WhatsApp head Will Cathcart called “a no-merit, headline-seeking lawsuit.”
Why it matters
WhatsApp’s 2+ billion users include many potential targets of state-sponsored surveillance
Zero-click exploits often leverage media processing and link previews as attack vectors
Industry standardization of “lockdown modes” creates consistent expectations for high-risk user protection
What to do about it
Identify employees in high-risk roles and recommend enabling Strict Account Settings
Update security awareness training to cover lockdown features across platforms
Review organizational policies for communication tools used by executives and other potential targets
Rock’s Musings
WhatsApp, Apple, and Google now all offer lockdown modes for high-risk users. The security industry has normalized the idea that some people face threats requiring extreme countermeasures. That’s progress.
The features themselves are common sense: block unknown attachments, disable link previews, mute unknown callers. These are attack vectors that spyware vendors have exploited for years. What took so long? Better late than never, I suppose.
If you have employees who might be targeted by nation-states or sophisticated criminals, talk to them about these features. Executives, journalists, researchers, and anyone handling sensitive information. The threat isn’t theoretical. Pegasus infections happen. These tools aren’t perfect protection, but they meaningfully shrink the attack surface.
If you found this analysis useful, subscribe at rockcybermusings.com for weekly intelligence on AI security developments.
👉 Visit RockCyber.com to learn more about how we can help you in your traditional Cybersecurity and AI Security and Governance Journey
👉 Want to save a quick $100K? Check out our AI Governance Tools at AIGovernanceToolkit.com
👉 Subscribe for more AI and cyber insights with the occasional rant.
References
Aikido Security. (2026, January 27). Malicious VS Code extension analysis. https://www.aikido.dev
BleepingComputer. (2026, January 23). Konni hackers target blockchain engineers with AI-built malware. https://www.bleepingcomputer.com/news/security/konni-hackers-target-blockchain-engineers-with-ai-built-malware/
BleepingComputer. (2026, January 24). CISA adds actively exploited VMware vCenter flaw to KEV catalog. https://www.bleepingcomputer.com/news/security/cisa-warns-of-actively-exploited-vmware-vcenter-flaw/
BleepingComputer. (2026, January 27). Fortinet warns of new zero-day actively exploited in attacks. https://www.bleepingcomputer.com/news/security/fortinet-warns-of-new-zero-day-actively-exploited-in-attacks/
BleepingComputer. (2026, January 28). New WhatsApp lockdown feature protects high-risk users from hackers. https://www.bleepingcomputer.com/news/security/whatsapp-gets-new-lockdown-feature-that-blocks-cyberattacks/
BleepingComputer. (2026, January 29). Google disrupts IPIDEA proxy network abused by criminals. https://www.bleepingcomputer.com/news/security/google-disrupts-ipidea-proxy-network-abused-by-criminals/
Broadcom. (2026, January 23). VMSA-2024-0012.1: VMware vCenter Server security advisory update. https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24453
Check Point Research. (2026, January 23). AI-powered KONNI malware targets developers. https://blog.checkpoint.com/research/ai-powered-north-korean-konni-malware-targets-developers
CISA. (2026, January 23). CISA adds one known exploited vulnerability to catalog. https://www.cisa.gov/news-events/alerts/2026/01/23/cisa-adds-one-known-exploited-vulnerability-catalog
CISA. (2026, January 27). CISA adds Fortinet vulnerability to KEV catalog. https://www.cisa.gov/known-exploited-vulnerabilities-catalog
CyberScoop. (2026, January 28). Coalition Insurance issues 14th Fortinet zero-day advisory. https://cyberscoop.com/fortinet-zero-day-coalition-insurance-advisory/
ESET. (2026, January 24). ESET Research: Sandworm behind cyberattack on Poland’s power grid in late 2025. https://www.welivesecurity.com/en/eset-research/eset-research-sandworm-cyberattack-poland-power-grid-late-2025/
EU AI Act. (2026). High-risk AI system requirements and enforcement timeline. https://artificialintelligenceact.eu/
Fortinet. (2026, January 27). FortiCloud SSO authentication bypass vulnerability advisory. https://www.fortinet.com/psirt
Google Cloud Blog. (2026, January 29). Disrupting IPIDEA: Taking action against residential proxy abuse. https://cloud.google.com/blog/topics/threat-intelligence/disrupting-ipidea-residential-proxy
Help Net Security. (2026, January 26). ESET attributes DynoWiper-powered attack on Poland’s power grid to Russia-aligned Sandworm group. https://www.helpnetsecurity.com/2026/01/26/poland-energy-malware-attack/
K&L Gates. (2026, January). EU AI Act compliance update: High-risk deadlines and regulatory sandboxes. https://www.klgates.com/
TechRepublic. (2026, January 28). WhatsApp adds one-tap security settings for added privacy. https://www.techrepublic.com/article/news-whatsapp-strict-account-settings-lockdown-security-mode/
The Hacker News. (2026, January 26). Konni hackers deploy AI-generated PowerShell backdoor. https://thehackernews.com/2026/01/konni-hackers-deploy-ai-generated.html
The Hacker News. (2026, January 28). Fake Moltbot VS Code extension drops remote access malware. https://thehackernews.com/2026/01/fake-moltbot-vscode-extension.html
The Hacker News. (2026, January 30). Italy blocks DeepSeek over data privacy concerns. https://thehackernews.com/2026/01/italy-blocks-deepseek-privacy.html
The Register. (2026, January 26). ESET: Russia likely behind Poland power grid attack. https://www.theregister.com/2026/01/26/moscow_likely_behind_wiper_attack/
The Register. (2026, January 27). Fake AI coding assistant malware hits VS Code marketplace. https://www.theregister.com/2026/01/27/moltbot_vscode_malware/
The Register. (2026, January 29). Google cripples IPIDEA proxy network abused by crims. https://www.theregister.com/2026/01/29/google_ipidea_crime_network/
Wiz Research. (2026, January 29). DeepSeek database exposure analysis. https://www.wiz.io/blog/deepseek-database-exposed
World Economic Forum. (2026, January 12). Global Cybersecurity Outlook 2026. https://reports.weforum.org/docs/WEF_Global_Cybersecurity_Outlook_2026.pdf
Zetter, K. (2026, January 28). Cyberattack targeting Poland’s energy grid used a wiper. Zero Day. https://www.zetter-zeroday.com/cyberattack-targeting-polands-energy-grid-used-a-wiper/



