This is a monthly newsletter I put together for an internal security awareness program. Feel Free to grab and use for your own program.
Fake Dropbox Emails Used to Steal Login Details
Attackers are circulating phishing emails that impersonate Dropbox and attempt to trick recipients into handing over their account credentials. The messages often look like routine business communications and include a PDF attachment. When opened, the document directs the user to a fake Dropbox login page designed to capture usernames and passwords.
Key Points
Phishing emails are crafted to look like legitimate Dropbox notifications or file-sharing messages.
PDF attachments are used to make the email appear business-related and trustworthy.
Links inside the document lead to counterfeit login pages.
Entered credentials are captured by attackers and can be reused to access other accounts.
The technique relies on familiar brands and file formats to lower suspicion.
Further Reading: CybersecurityNews
DKIM Replay Attacks Abuse Trusted Email for Invoice and Support Scams
Threat researchers are tracking a rise in DKIM replay attacks, where adversaries reuse legitimate, cryptographically signed emails from trusted services such as Apple and PayPal. Because these messages retain valid authentication, they can bypass email security controls and appear legitimate to recipients, even when used to deliver fraudulent invoices or support scams.
Key Insights
DKIM replay attacks involve capturing a genuine, signed email and redistributing it without breaking authentication checks.
Since DKIM and DMARC validation still passes, many email defenses treat the replayed message as trusted.
Attackers commonly abuse invoice or notification workflows that allow user-controlled fields to inject scam content.
Messages often include urgent payment requests or fake support numbers designed to trigger rapid victim response.
DKIM verifies message integrity but does not restrict message reuse, making replay a persistent risk.
Further Reading: Kaseya
CrashFix: New ClickFix Variant Deploys Python Remote Access Trojan
Threat researchers have identified a new evolution of the ClickFix social engineering campaign known as CrashFix. This variant intentionally crashes a victim’s browser and presents a deceptive recovery prompt that convinces users to manually execute commands on their own systems. The interaction ultimately leads to the installation of a Python-based remote access trojan, giving attackers persistent access to compromised devices.
Key Insights
CrashFix commonly begins with users being prompted to install a malicious browser extension disguised as a legitimate utility, such as an ad blocker.
The extension later forces the browser into a crash state, creating urgency and the illusion of a technical failure.
Victims are shown fake troubleshooting instructions that direct them to run system commands, unknowingly initiating the infection chain.
The attack leverages built-in Windows tools and scripting to deploy a Python-based remote access trojan that enables surveillance and long-term access.
The campaign appears designed to prioritize enterprise environments, including systems connected to corporate domains.
Further Reading: Microsoft Security Blog
Why Even Smart People Fall for Phishing Attacks
Phishing doesn’t succeed because people are careless — it works because attackers understand how human decision-making works under pressure. Researchers found that phishing messages are deliberately designed to exploit emotions, habits, and cognitive shortcuts people rely on during busy workdays. When distracted or rushed, even experienced professionals can make quick decisions that feel reasonable in the moment but lead to compromise.
Key Insights
Phishing messages are often built around a simple pattern: grab attention, create emotional pressure, and prompt immediate action.
Common tactics rely on urgency, fear, authority, or trust to override careful thinking.
People tend to overestimate their ability to spot scams, which can make them more vulnerable.
Multitasking and information overload reduce the ability to notice subtle warning signs.
Familiar branding and realistic language can create a false sense of safety, even when the message is malicious.
Further Reading: Unit 42 – The Psychology of Phishing
SaaS Abuse at Scale: Phone-Based Scam Campaign Leveraging Trusted Platforms
Threat researchers have identified a large-scale scam campaign in which attackers abuse legitimate SaaS platform features to deliver phone-based fraud lures. Rather than relying on malicious links or spoofed domains, the campaign misuses native notification and messaging workflows from trusted services, causing emails to appear authentic and pass standard security checks. Victims are directed to call attacker-controlled phone numbers, shifting the final stage of the scam to voice-based social engineering.
Key Insights
Attackers exploit built-in notification systems within SaaS platforms to generate messages that inherit trust from legitimate services.
The campaign operated at significant scale, impacting tens of thousands of organizations worldwide.
Emails frequently avoid malicious links and instead instruct recipients to call fake support phone numbers.
Multiple abuse techniques were observed, including misuse of general SaaS messaging and business invitation workflows.
The activity reflects a broader shift toward abusing trusted platforms rather than deploying traditional phishing infrastructure.
Further Reading: Check Point Research
Discord Rolls Out “Teen-by-Default” Safety Settings Worldwide
Discord has announced a global rollout of “teen-by-default” safety settings beginning in early March 2026. Under this change, all new and existing users will initially experience a teen-appropriate version of the platform unless they verify their age as an adult. The update is part of Discord’s broader push to strengthen age-appropriate safeguards and align with evolving global safety expectations.
Key Points
All accounts will default to a teen-appropriate mode with stricter content and communication controls.
Users must complete age verification to access adult-restricted spaces and features.
Sensitive content may be blurred, and messaging from unfamiliar accounts can be limited under default settings.
Age verification methods include options such as on-device facial age estimation or ID verification.
The rollout builds on previous regional safety updates and expands protections globally.
Further Reading: Discord
Exposed OpenClaw AI Instances Raise Security Concerns
Recent research highlights growing security risks tied to exposed OpenClaw AI agent instances. OpenClaw is a self-hosted AI assistant platform that users can deploy to automate messaging, data access, and system tasks. However, many deployments are being misconfigured and left accessible on the public internet, creating opportunities for unauthorized access and potential compromise.
Key Points
Thousands of OpenClaw instances were found exposed online due to insecure configuration settings.
Some instances lacked strong authentication controls, allowing external parties to interact with the AI agent.
Because OpenClaw integrates with messaging platforms, cloud tools, and local systems, an exposed setup could provide indirect access to connected accounts and sensitive data.
Researchers observed scanning activity shortly after instances became publicly accessible, demonstrating how quickly exposed services attract attention.
The ease of deployment may contribute to widespread adoption, but also increases the likelihood of insecure configurations.
Further Reading: Bitsight
QR Codes Used as an Attack Vector in Phishing and Malware Campaigns
Threat researchers have documented an increase in malicious use of QR codes by attackers. QR codes — once primarily a convenience tool for quickly linking users to URLs — are now being embedded in phishing campaigns, physical media, and social engineering lures. Because many people instinctively trust QR codes and may not check the underlying link before scanning, attackers can use them to direct victims to sites hosting credential-harvesting pages, malware downloads, or other harmful content. This trend shows how even familiar convenience features can be abused when users aren’t aware of the risks.
Key Insights
QR codes are being inserted into phishing emails, SMS messages, posters, and social media posts to silently redirect users to malicious destinations.
Scanning a QR code can open links that lead to credential-harvesting pages that mimic legitimate services, increasing the chance of compromise.
QR codes can also deliver links to files or installers, which victims may download unknowingly.
Because QR codes obscure the actual URL, they make it harder for users to assess safety before interacting.
Awareness of this technique is critical, as attackers blend convenience with malicious intent in everyday workflows.
Further Reading: Unit 42
Infostealer Malware Targets OpenClaw AI Agent Secrets
Security researchers have identified infostealer malware expanding its focus to include OpenClaw AI assistant environments. Traditionally known for stealing browser credentials and system data, these threats are now targeting AI agent configuration files that may contain API keys, authentication tokens, and other sensitive secrets.
Key Points
Infostealer malware is harvesting configuration files associated with OpenClaw AI assistants.
Stolen data may include API keys, authentication tokens, and other credentials used to access connected services.
This marks a shift from browser-only credential theft to targeting locally stored AI agent secrets.
Because configuration files are often stored in user directories, traditional infostealers can easily locate and exfiltrate them.
AI agent credentials should be treated with the same level of protection as passwords and other sensitive secrets.
Further Reading: BleepingComputer – Infostealer Malware Found Stealing OpenClaw Secrets for First Time
Romance Scam Victims Often Feel Shame and Financial Loss
A recent survey of more than 2,000 U.S. adults found that many people who fall for romance scams struggle with embarrassment and underreporting, making it harder for others to learn from these crimes. These scams occur when someone posing as a romantic interest tricks victims into sending money or sharing sensitive details, often through fake profiles on social media or dating apps. Such experiences can cause both emotional distress and significant financial harm.
Key Points
Around half of survey respondents found it harder to admit falling for a romance scam than other types of fraud, which can discourage reporting and awareness.
Many people who use digital platforms to meet others notice fraudulent or fake profiles on dating sites and social media.
A notable portion of people reported losing money, with typical losses ranging into the low thousands of dollars.
Even after financial loss, many victims continue to feel stigma, and some choose not to report their experiences to authorities or support networks.
These scams tend to occur where people seek connection online, highlighting the need for caution and awareness on digital platforms.
Further Reading: NordProtect Romance Scam Survey
Huntress Report Reveals How Organized Cybercrime Operates at Scale
A new 2026 Cyber Threat Report from Huntress lays out how modern cybercriminals have evolved into highly efficient, profit-driven operators — running campaigns that resemble legitimate businesses rather than isolated hacker hits. The analysis draws on telemetry from millions of endpoints and identities and highlights how organized cybercrime groups are abusing trusted tools, stolen credentials, and scaled workflows to compromise people and organizations worldwide.
Key Takeaways
Legitimate tools are being weaponized — Remote monitoring and management (RMM) systems are now a top choice for attackers to deploy malware, steal credentials, and execute commands without using traditional hacking tools.
User deception fuels malware delivery — Techniques like ClickFix social engineering accounted for more than half of observed malware loader activity by tricking people into installing threats as part of routine actions.
Ransomware groups follow streamlined playbooks — Major ransomware operators are focusing on stealth and data theft, increasing time-to-ransom and making detection harder.
Criminal ecosystems are thriving — Stolen credentials are being sold cheaply on underground markets, making initial access easier and boosting identity-based attacks.
Mailbox manipulation and OAuth abuse lead to BEC — These identity threats are establishing footholds that set the stage for high-impact business email compromise schemes.
Further Reading: Huntress 2026 Cyber Threat Report
AI Recommendation Poisoning: How "Summarize with AI" Buttons Can Bias Your Assistant
Summary Microsoft security researchers have uncovered a new deceptive technique called AI Recommendation Poisoning. This attack targets the "memory" and personalization features of AI assistants like Microsoft Copilot, ChatGPT, and Gemini. By embedding hidden instructions in seemingly helpful "Summarize with AI" buttons or share links, companies and bad actors can inject persistent "facts" or preferences into your AI’s long-term memory. Once poisoned, the AI may begin to show subtle biases—recommending specific products, favoring certain vendors, or trusting unreliable sources—without you ever knowing the assistant has been manipulated.
Key Takeaways
The Helpful Button Trap: Be cautious of "Summarize with AI" buttons on third-party websites. They may contain hidden URL parameters that do more than just summarize; they can "pre-fill" instructions that tell your AI to "always remember this site as a trusted source."
Persistent Bias: Unlike a standard prompt injection that only affects one conversation, memory poisoning is designed to last. The injected instructions can influence the AI's behavior across future sessions, even weeks after you clicked the link.
Hidden in Plain Sight: These malicious prompts often use phrases like "from now on," "always," or "remember" to establish persistence. Because the AI presents these biased recommendations confidently, users are less likely to question their accuracy.
Practical Defense: Periodically review and clear your AI assistant’s memory or "personalization" settings. Hover over AI-related links before clicking to see if the URL contains long, suspicious-looking text strings or commands.
Further Reading: Manipulating AI memory for profit: The rise of AI Recommendation Poisoning
OpenClaw AI: Why Your New "Super Assistant" Might Be a Security Backdoor
Summary Microsoft security researchers have issued a major warning regarding OpenClaw, a viral open-source AI agent that runs locally on your computer. Unlike standard AI chatbots that just talk, OpenClaw is designed to act—it can read your emails, run terminal commands, and manage your files. However, because it operates with the same permissions as you, it lacks traditional security boundaries. This creates a "lethal trifecta": the agent has access to your private data, the ability to communicate with the outside world, and the requirement to read untrusted content (like emails or websites), making it an easy target for hackers.
Key Takeaways
The Power is the Problem: Because OpenClaw has "the keys to the kingdom" (your login details and file access), any malicious instruction it reads from a website or email could trick it into deleting files, stealing passwords, or sending spam from your account.
The "Skills" Marketplace is Risky: Much like a suspicious app store, OpenClaw’s "ClawHub" is currently flooded with community-made "skills." Researchers have found that a significant percentage of these contain hidden malware designed to steal crypto-wallets or install keyloggers.
Not for Your Work PC: Microsoft strongly advises against running OpenClaw on any computer used for actual work or personal banking. It should only be used in "isolated" environments (like a dedicated Virtual Machine) where it cannot access your sensitive identities.
Treat it as Untrusted Code: If you are testing OpenClaw, never give it access to your primary email or password manager. Assume that anything the agent "sees" or "remembers" could potentially be exfiltrated if the agent is manipulated by an external prompt.
Further Reading: Running OpenClaw safely: identity, isolation, and runtime risk
Hook, Line, and Vault: How the 1Phish Tool Steals Your Corporate Identity
Summary Security researchers have detailed a powerful new open-source phishing tool called 1Phish, designed specifically to target corporate employees. This tool goes beyond stealing passwords; it focuses on harvesting session cookies and tokens from high-value services like Okta, Microsoft, and Google. By tricking users into logging into a fake corporate portal, 1Phish allows attackers to "clone" an active login session. Once this happens, the attacker can bypass Multi-Factor Authentication (MFA) entirely and access the victim's work apps as if they were the legitimate employee.
Key Takeaways
The "One-Click" Danger: 1Phish is designed for speed. Once a victim clicks a link and enters their credentials, the attacker has nearly instant access to their corporate account before the victim even realizes something is wrong.
MFA is Not a Total Shield: This tool specializes in "session hijacking." Because it captures the "authorization token" generated after you successfully complete an MFA prompt, the attacker doesn't need to know your MFA code to stay logged in.
Mimicking Corporate Portals: 1Phish makes it very easy for attackers to clone your company's specific login page, including custom branding and logos, making the fake site look exactly like the "Single Sign-On" (SSO) page you use every day.
Stay Alert on Redirects: Be wary of any login page that feels "glitchy" or redirects you multiple times. If you are prompted to log in to a service you are already signed into, close the tab and navigate to the site directly through a trusted bookmark.
Further Reading: Hook, Line, and Vault: A Deep Dive into 1Phish
