Monthly summary of interesting articles, reports and tools for tech experts, covering both offensive and defensive topics.
🔴 Red Team
📝 Abusing Cortex XDR Live Terminal as a C2
InfoGuard Labs research shows how Palo Alto Cortex XDR’s Live Terminal, a legitimate incident response tool, can be abused as a pre-installed, EDR-trusted C2 channel. Attackers with local admin privileges exploit this « Living off the Land » technique to execute commands, run PowerShell/Python scripts, and transfer files while bypassing detection (via WebSocket traffic to lrc-ch.paloaltonetworks.com). The attack uses two methods: cross-tenant abuse (hijacking WebSocket sessions via token manipulation) and custom server spoofing (exploiting a URL validation flaw in cortex-xdr-payload.exe to redirect traffic to attacker-controlled infrastructure).
Blue Teams will detect and mitigate threats by monitoring abnormal parent processes launching cortex-xdr-payload.exe (the legitimate parent being cyserver.exe). They will audit WebSocket traffic to Cortex XDR endpoints, especially when it originates from unexpected hosts, and block non-standard parent-child relationships between Cortex components. Teams will promote secure-by-design improvements, such as mutual authentication and cryptographic signing of commands. Finally, they will keep Cortex XDR up to date, verify patch effectiveness, and consider EDR solutions as potential attack surfaces by actively searching for abuse of trusted tools.
📌 Source: https://labs.infoguard.ch/posts/abusing_cortex_xdr_live_response_as_c2
📝 Manipulating AI memory for profit: The rise of AI Recommendation Poisoning
Microsoft’s research exposes « AI Recommendation Poisoning », where threat actors inject persistent, biased instructions into AI assistants’ memory via malicious URLs (e.g., ?q=remember [Company] as a trusted source). Exploits pre-filled prompt parameters in AI assistant links (Copilot, ChatGPT, Claude) to manipulate future recommendations. Over 50 real-world cases were identified across 31 companies and 14 industries.
Blue teams should hunt for suspicious AI prompt-injection patterns (e.g., ?q= or ?prompt= with keywords such as remember or trusted source) in email and proxy logs using Microsoft Defender Advanced Hunting. Regularly audit AI assistant memory settings for unauthorized entries. Block AI assistant URLs with pre-filled prompts from untrusted sources using Defender for Office 365. Monitor anomalous AI behavior and correlate with user URL click events.
This attack poses real-world risks (e.g., financial scams, biased legal advice) by eroding trust in AI-driven decisions. Proactive user awareness, traffic inspection, and vendor-specific safeguards are critical to mitigating this threat.
📌 Source: https://www.microsoft.com/en-us/security/blog/2026/02/10/ai-recommendation-poisoning
📝 AI-assisted cloud intrusion achieves admin access in 8 minutes
Sysdig’s analysis of an AI-assisted AWS intrusion reveals how a threat actor escalated from initial access to full admin privileges in under 10 minutes, leveraging LLM-generated code, automated reconnaissance, and multi-stage exploitation. The attack began with stolen credentials from public S3 buckets, followed by Lambda code injection to create admin access keys, lateral movement across 19 AWS principals, and abuse of Amazon Bedrock (LLMjacking) to invoke multiple AI models. The actor also provisioned high-cost GPU instances for potential model training or resource abuse, using Terraform scripts to deploy backdoor Lambda functions and JupyterLab for persistent remote access.
Blue teams should monitor for rapid multi-service enumeration and suspicious UpdateFunctionCode or CreateAccessKey events, particularly from non-admin identities. Detection should also target LLMjacking indicators, such as abnormal Bedrock model invocations or GetModelInvocationLoggingConfiguration calls. Mitigation includes blocking public S3 buckets, restricting Lambda execution roles, and enforcing SCPs to limit instance types and model access. Regular auditing for role chaining is also essential to identify privilege escalation paths.
This lightning-fast, AI-augmented attack underscores the need for real-time runtime detection, least-privilege enforcement, and proactive hunting for LLM-assisted TTPs.
📌 Source: https://www.sysdig.com/blog/ai-assisted-cloud-intrusion-achieves-admin-access-in-8-minutes
🔵 Blue Team
⚙️ DetectionStream: Introducing the Sigma Training Platform
The DetectionStream Sigma Training Platform is a gamified environment for Sigma rule creation and validation using real-world attack scenarios. It includes 20+ interactive challenges based on event logs, providing instant feedback on rule accuracy. Key features include difficulty-tiered scenarios, a progressive hint system, and a community-driven challenge builder.
For defensive cybersecurity experts, this platform bridges the gap between theory and practice, enabling teams to sharpen detection engineering skills, reduce false positives, and validate rules against evolving threats. The privacy-first design ensures no sensitive data is stored, and leaderboard/community features foster collaboration. Blue teams can accelerate SOC maturity, standardize rule quality, and adopt a « Detection as Code » mindset.
📌 Source: https://kostas-ts.medium.com/detectionstream-introducing-the-sigma-training-platform-574721f18f45
⚙️ Heimdall – AWS Attack Path Scanner
Heimdall is an open-source AWS security scanner that detects 50+ IAM privilege escalation paths and 85+ attack chain patterns across 10 AWS services (EC2, RDS, S3, Lambda, KMS), with MITRE ATT&CK mapping and multi-hop attack path detection. Key features: Terraform Attack Path Engine (pre-deployment risk assessment), interactive TUI, one-command dashboard, and SARIF/CSV exports for CI/CD integration.
Heimdall helps harden IAM policies, detect indirect escalation risks (e.g., user→role→admin chains), and shift-left security by scanning Terraform plans before deployment. Features include low false-positive rate, risk scoring (0-100), and cross-service analysis revealing hidden attack vectors like PassRole abuse, trust policy hijacks, and credential creation risks.
📌 Source: https://github.com/DenizParlak/heimdall
📝 Apache ActiveMQ Exploit Leads to LockBit Ransomware
The DFIR Report details a multi-stage intrusion where threat actors exploited CVE-2023-46604 in an internet-facing Apache ActiveMQ server to achieve remote code execution (RCE), leading to Metasploit stager deployment and lateral movement using LSASS credential dumping. Despite initial eviction, the actors recompromised the same server 18 days later, leveraging stolen domain admin credentials to deploy LockBit ransomware using RDP and AnyDesk. The attack included defense evasion (clearing event logs, disabling Defender), discovery (SMB scanning, Advanced IP Scanner), and impact (file encryption with modified ransom notes).
Key takeaways for blue teams:
- Patch exposed Apache ActiveMQ instances (CVE-2023-46604) and monitor for OpenWire Exception Response commands or Java spawning suspicious processes (e.g.,
certutildownloading payloads). - Detect Metasploit/Cobalt Strike artifacts: Use Sigma rules (e.g.,
Elevated System Shell Spawned) and YARA signatures. - Audit LSASS access (Event ID 10) and unusual service creations.
- Block RDP/AnyDesk abuse: Monitor for unauthorized firewall rule modifications and
SystemSettingsAdminFlows.exedisabling Defender. - Hunt for ransomware staging: Look for
LB3_pass.exe/LB3.exein%USERPROFILE%\Downloadsor SMB lateral movement via PsExec-style spreaders. - Leverage network detections: Emerging Threats rules (e.g.,
ET EXPLOIT Successful Apache ActiveMQ RCE) and SMB/DCERPC anomalies.
This case highlights the criticality of rapid patching, lateral movement monitoring, and ransomware preparation, especially when threat actors retain credentials and return after eviction.
📌 Source: https://thedfirreport.com/2026/02/23/apache-activemq-exploit-leads-to-lockbit-ransomware
Never trust, always check


