Monthly summary of interesting articles, reports and tools for tech experts, covering both offensive and defensive topics.
🔵 Blue Team
📝 Pattern Detection and Correlation in JSON Logs
RSigma is a Rust-based command-line tool for testing Sigma detection rules against JSON log streams, such as CloudTrail, Okta audit events, or incident exports, without needing a SIEM. It compiles Sigma’s YAML rules (3,800+ community detections covering Windows, Linux, cloud, and network threats) into fast pattern matchers, enabling real-time detection of both single-event matches (e.g., whoami execution) and cross-event correlations (e.g., brute-force login attempts). Its architecture comprises a parser (PEG grammar), an evaluation engine (fast matching without extra memory allocations), a correlation engine (the 8 Sigma-specified correlation types with sliding windows), and a linter (65 validation rules), delivering high throughput and Language Server Protocol (LSP) support for rule authoring.
Defensive teams can pipe logs through CLI commands, apply field-mapping pipelines (e.g., ECS to Windows Security log alignment), and chain correlations for multi-stage attack detection. The tool’s memory-efficient state management, auto-fix linting, and suppression mechanisms reduce false positives. With an open-source MIT license and crates.io availability, RSigma enables fast, scalable, and affordable log analysis with Sigma’s battle-tested rules.
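To make the single-event matching idea concrete, here is a minimal, hedged sketch in Python of what a Sigma-style rule check against JSON log lines looks like conceptually. RSigma itself is a Rust tool with a full PEG-based parser; the rule shape, the `endswith` modifier handling, and the sample events below are simplified illustrations, not RSigma's actual internals.

```python
import json

# Illustrative Sigma-style rule: a set of field conditions applied per event.
# The "|endswith" modifier mirrors Sigma's field modifier syntax.
RULE = {
    "title": "Suspicious whoami execution",
    "detection": {"Image|endswith": "\\whoami.exe"},
}

def matches(event: dict, detection: dict) -> bool:
    """Return True if every condition in the detection block holds."""
    for key, expected in detection.items():
        field, _, modifier = key.partition("|")
        value = str(event.get(field, ""))
        if modifier == "endswith":
            if not value.endswith(expected):
                return False
        elif value != expected:
            return False
    return True

# Two JSON log lines, as they might arrive on stdin from a CLI pipe.
log_lines = [
    '{"Image": "C:\\\\Windows\\\\System32\\\\whoami.exe"}',
    '{"Image": "C:\\\\Windows\\\\System32\\\\notepad.exe"}',
]
hits = [e for e in map(json.loads, log_lines) if matches(e, RULE["detection"])]
print(len(hits))  # → 1
```

A real engine additionally maintains sliding-window state for the cross-event correlation types; this sketch covers only the stateless per-event case.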
📌 Source: https://mostafa.dev/pattern-detection-and-correlation-in-json-logs-fab16334e4ee
📝 Detection Pipeline Maturity Model
A strong detection pipeline must balance accurate alerts with smooth operations, ensuring important signals stand out while supporting both custom analytics and closed-source security tools (e.g., CrowdStrike, SentinelOne, AWS GuardDuty). The pipeline relies on two core data sources: security tools (rules from vendors or your own) and telemetry (data from endpoints, networks, cloud trails, and more), both needing normalization and enrichment for good correlation. Maturity progresses through five levels:
- None: manual work, easy mistakes, jumping between tools.
- Basic: centralized case management with some automation.
- Standard+: all data flows through an analytics platform with a risk engine to rank alerts.
- Advanced: custom, accurate rules validated against simulated attacks.
- Leading: detections built with data science and deception techniques.
Organizations should rely less on unproven closed-source rules, tune alert thresholds for noisy signals, and regularly verify that detections still fire by hunting for threats and adjusting settings, ensuring clear, useful, and strong monitoring everywhere you operate.
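The "Standard+" risk engine can be sketched as follows: rather than paging on every alert, accumulate per-entity risk scores and surface only entities that cross a threshold. This is a hedged illustration of the concept; the `Alert` fields, weights, and threshold are assumptions, not anything prescribed by the article.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    entity: str        # host or user the alert concerns
    severity: int      # 1 (low) .. 5 (critical)
    confidence: float  # 0.0 .. 1.0, how trusted the rule is

def rank_entities(alerts, threshold=5.0):
    """Aggregate severity * confidence per entity; return ranked entities
    whose accumulated risk meets the threshold."""
    scores = {}
    for a in alerts:
        scores[a.entity] = scores.get(a.entity, 0.0) + a.severity * a.confidence
    return sorted(
        (e for e, s in scores.items() if s >= threshold),
        key=lambda e: scores[e],
        reverse=True,
    )

alerts = [
    Alert("host-7", 3, 0.9),  # custom analytic, high confidence
    Alert("host-7", 4, 0.8),  # EDR detection on the same host
    Alert("host-2", 2, 0.3),  # noisy vendor rule, low confidence
]
print(rank_entities(alerts))  # → ['host-7']
```

The design point is that two medium-confidence signals on the same host outrank one noisy signal elsewhere, which is exactly the correlation that normalization and enrichment are meant to enable.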
📌 Source: https://detect.fyi/detection-pipeline-maturity-model-076984779651
📝 Stop Enabling Every AWS Security Service
Securing AWS environments requires a phased approach: identify the threats that matter most by modeling how things could go wrong, understand how your team actually works (IAM practices, resource creation patterns), and prioritize protecting the sensitive data and systems whose failure would hurt the most. Check for overlap between native AWS tools and third-party platforms to avoid redundant alerts.
Compare cost against benefit and use custom automations (Lambda + EventBridge) to control spend. Centralize identity with IAM Identity Center (formerly AWS SSO) and automate monitoring tasks to ensure consistency. A phased, automated, and financially sustainable architecture enables confident operations.
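One way the Lambda + EventBridge idea might look in practice is a small handler that triages findings by account and severity instead of paying for a detection service in every account. This is a hedged sketch: the event shape follows the generic EventBridge envelope, but the account list, severity cutoff, and filter logic are illustrative assumptions.

```python
# Accounts where the paid detection service stays fully enabled (assumption).
ALLOWED_ACCOUNTS = {"111111111111"}

def handler(event, context=None):
    """Lambda-style handler: suppress or forward a finding delivered via an
    EventBridge rule, based on source account and finding severity."""
    detail = event.get("detail", {})
    account = event.get("account", "")
    severity = detail.get("severity", 0)
    if account not in ALLOWED_ACCOUNTS and severity < 7:
        return {"action": "suppress", "reason": "low severity, non-critical account"}
    return {"action": "forward", "severity": severity}

# Sample GuardDuty-style event in the generic EventBridge envelope.
sample = {
    "source": "aws.guardduty",
    "account": "222222222222",
    "detail": {"severity": 5, "type": "Recon:EC2/PortProbeUnprotectedPort"},
}
print(handler(sample)["action"])  # → suppress
```

High-severity findings pass through regardless of account, so the automation trims cost and noise without silencing critical signals.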
📌 Source: https://aws.plainenglish.io/stop-enabling-every-aws-security-service-fb171635a25c
🔴 Red Team
📝 Help on the line: How a Microsoft Teams support call led to compromise
A Microsoft DART investigation uncovered a complex attack that began with an identity-first intrusion via Microsoft Teams voice phishing (vishing). Attackers posed as IT support and tricked employees into granting remote access through Quick Assist. After two failed attempts, they deceived a third employee, directed them to a fake login page, and installed a malicious MSI package that covertly loaded a DLL for command and control (C2). Follow-on activity expanded their access using encrypted loaders, living-off-the-land binaries (LOLBins), and proxy-based C2 to steal credentials and take over user sessions without being noticed.
To protect against this, defenders should restrict external collaboration in Teams to trusted domains and disable or limit remote access tools like Quick Assist unless there is a clear business need. For detection, focus on unusual Quick Assist use, unexpected MSI installations, and outbound connections to unknown domains following Teams calls. Security teams should also watch for covert DLL loading and proxy-based C2 traffic, and train users to recognize high-pressure tactics and verify support requests through a separate, known IT channel.
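The "Quick Assist shortly after a Teams call" detection above can be sketched as a simple temporal correlation. This is a hedged illustration: the event field names, the 30-minute window, and the sample hosts are assumptions, not any vendor's schema.

```python
from datetime import datetime, timedelta

# Flag hosts where Quick Assist starts within a short window after a
# Teams call event on the same host (window length is an assumption).
WINDOW = timedelta(minutes=30)

def suspicious_hosts(events):
    """events: dicts with 'host', 'time' (datetime), and 'kind'
    ('teams_call' or 'quickassist_start'). Returns flagged hosts."""
    last_call = {}
    flagged = set()
    for e in sorted(events, key=lambda e: e["time"]):
        if e["kind"] == "teams_call":
            last_call[e["host"]] = e["time"]
        elif e["kind"] == "quickassist_start":
            call_time = last_call.get(e["host"])
            if call_time and e["time"] - call_time <= WINDOW:
                flagged.add(e["host"])
    return flagged

t0 = datetime(2026, 1, 5, 9, 0)
events = [
    {"host": "ws-12", "time": t0, "kind": "teams_call"},
    {"host": "ws-12", "time": t0 + timedelta(minutes=10), "kind": "quickassist_start"},
    {"host": "ws-44", "time": t0, "kind": "quickassist_start"},  # no prior call
]
print(suspicious_hosts(events))  # → {'ws-12'}
```

In production this logic would live in a SIEM correlation rule rather than a script, but the join key (host) and the ordering constraint are the same.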
📝 Torg Grabber: Anatomy of a New Credential Stealer
The newly identified Torg Grabber is a credential stealer built around ChaCha20 encryption, HMAC-SHA256 authentication, and a modular REST API design. Its exfiltration evolved from Telegram-based channels through a custom TCP protocol to HTTPS, and it ships a 20 KB reflective DLL to bypass Chrome’s Application-Bound Encryption (ABE).
Security teams should watch for environment variable usage (GRABBER_*), PowerShell BITS transfers, and TCP traffic on port 50443. Update detection rules for ChaCha20-encrypted HTTPS traffic to track C2 domains. Use behavior analysis and memory forensics to detect direct NT syscalls and reflective loading. Limit PowerShell execution, block known attacker C2 IPs (e.g., 84.200.125.231), and use YARA rules for operator tags and ABE bypass DLL artifacts.
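A first-pass triage of two of the indicators above (GRABBER_* environment variables, traffic to port 50443 or the listed C2 IP) can be scripted. Only the indicators themselves come from the article; the connection-list format below is a hedged stand-in for whatever your EDR or netflow source actually provides.

```python
import os

C2_PORT = 50443
KNOWN_C2_IPS = {"84.200.125.231"}  # C2 IP reported in the analysis

def check_env(environ=os.environ):
    """Return environment variable names matching the GRABBER_* indicator."""
    return [k for k in environ if k.startswith("GRABBER_")]

def check_connections(conns):
    """conns: iterable of (remote_ip, remote_port) tuples.
    Return connections hitting the C2 port or a known C2 IP."""
    return [c for c in conns if c[1] == C2_PORT or c[0] in KNOWN_C2_IPS]

conns = [("10.0.0.5", 443), ("84.200.125.231", 8080), ("203.0.113.9", 50443)]
for ip, port in check_connections(conns):
    print(f"suspicious connection: {ip}:{port}")
```

Indicator checks like these are brittle on their own; they complement, not replace, the behavioral detections (direct NT syscalls, reflective loading) mentioned above.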
📌 Source: https://www.gendigital.com/blog/insights/research/torg-grabber-credential-stealer-analysis
📝 How a Poisoned Security Scanner Became the Key to Backdooring LiteLLM
Attackers compromised LiteLLM (versions 1.82.7 and 1.82.8) on March 24, 2026. The group TeamPCP carried out a supply chain attack by tampering with a Trivy security scanner in LiteLLM’s CI/CD pipeline to steal PyPI credentials. The attack delivered malware two ways: by injecting code directly into proxy_server.py (1.82.7) and via a .pth file (1.82.8) that runs on Python startup. The attack had three steps: stealing credentials (cloud, Kubernetes, and crypto secrets), sending them encrypted to models.litellm.cloud, and installing a backdoor using a systemd service that checks checkmarx.zone for more payloads. The attack used MITRE ATT&CK techniques T1546.018 (Python Startup Hooks), T1003 (Credential Dumping), and T1610 (Deploy Container), with Kubernetes spread through alpine:latest pods named node-setup-*.
Defenders should immediately check systems for signs of compromise, including the .pth file, sysmon.py backdoor, or tpcp.tar.gz archives, and rotate all credentials, especially cloud, SSH, and Kubernetes tokens. To prevent reinfection, pin LiteLLM to ≤1.82.6, check CI/CD pipelines for unlocked dependencies, and monitor for suspicious pods or network traffic to attacker-controlled domains. This incident shows the risk of .pth-based persistence and the need for live integrity checks that go beyond hash verification, since the malicious packages passed all standard PyPI checks.
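The .pth persistence check above can be partially automated: CPython's site module executes any line in a site-packages .pth file that begins with "import" at interpreter startup, which is the hook the 1.82.8 package abused. This hedged sketch only lists candidates for human review; it does not attempt to classify them as malicious.

```python
import pathlib
import site

def suspicious_pth_lines(directories=None):
    """Scan site-packages directories for .pth lines that execute code
    at interpreter startup (lines starting with 'import')."""
    dirs = directories or site.getsitepackages()
    findings = []
    for d in dirs:
        for pth in pathlib.Path(d).glob("*.pth"):
            for line in pth.read_text(errors="ignore").splitlines():
                if line.startswith("import "):  # executed by site.py at startup
                    findings.append((str(pth), line.strip()))
    return findings

if __name__ == "__main__":
    for path, line in suspicious_pth_lines():
        print(f"{path}: {line}")
```

Note that legitimate packages (e.g., editable installs) also ship executable .pth lines, so every hit needs manual review against the specific artifacts named in the advisory.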
📌 Source: https://snyk.io/fr/blog/poisoned-security-scanner-backdooring-litellm
Never trust, always check


